D17079GC10
Edition 1.0
February 2004
D39100
Authors: Ric Van Dyke, Lex de Haan, Christine Jeal, Janet Stern, Jean-Francois Verrier
Publisher: Nita K. Brozowski

Copyright © 2004, Oracle. All rights reserved.

This documentation contains proprietary information of Oracle Corporation. It is provided under a license agreement containing restrictions on use and disclosure and is also protected by copyright law. Reverse engineering of the software is prohibited. If this documentation is delivered to a U.S. Government Agency of the Department of Defense, then it is delivered with Restricted Rights and the following legend is applicable:
Contents
Introduction
Overview I-2
How DBAs Spend Their Time I-3
Oracle Database 10g Manageability Goals I-4
Database Management Challenges I-5
Oracle Database 10g Solution: Self-Managing Database I-6
How Oracle Database 10g DBAs Spend Their Time I-7
Today’s IT Infrastructure I-8
Grid Computing I-9
Oracle Database 10g: The Database for the Grid I-10
Further Information I-11
Suggested Schedule I-12
Student Preface I-13
1 Installation
Objectives 1-2
Installation New Feature Support 1-3
Performance Enhancements to Installation 1-4
Checking the Pre-Install Requirements 1-5
Miscellaneous Installation Enhancements 1-6
CD Pack Contents 1-7
Management Options 1-8
File Storage Options 1-9
Backup and Recovery Options 1-10
Passwords 1-11
Summary 1-12
2 Server Configuration
Objectives 2-2
Database Configuration Assistant (DBCA) Enhancements 2-3
Using Database Control for Management 2-5
SYSAUX and DBCA 2-6
Using Enterprise Manager 2-7
Database Cloning 2-8
MetaLink Integration 2-9
Database Feature Usage: Statistics Collection 2-10
Database Feature Usage: EM Interface 2-11
Database Feature Usage: HWM Page 2-12
Policy Framework 2-13
Policy Framework in EM 2-14
Policy Violations Page 2-15
Managing Policies 2-16
Simplified Initialization Parameters 2-17
Viewing Initialization Parameters 2-18
Irreversible Data File Compatibility 2-19
Response File Improvements 2-20
Summary 2-21
Practice 2 Overview 2-22
4 Automatic Management
Objectives 4-2
Oracle Database 10g Solution: Self-Managing Database 4-3
Automatic Database Diagnostic Monitor: Overview 4-4
ADDM Performance Monitoring 4-5
ADDM Methodology 4-6
Top Performance Issues Detected 4-7
Database Control and ADDM Findings 4-8
ADDM Analysis Results 4-9
ADDM Recommendations 4-10
Database Control and ADDM Task 4-11
Changing ADDM Attributes 4-12
Retrieving ADDM Reports Using SQL 4-13
Automatic Shared Memory Management: Overview 4-14
SGA Tuning Principles 4-15
Benefits of Automatic Shared Memory Management 4-16
Database Control and Automatic Shared Memory Management 4-17
Manual Configuration 4-18
Behavior of Auto-Tuned SGA Parameters 4-19
Behavior of Manually Tuned SGA Parameters 4-20
Using the V$PARAMETER View 4-21
Resizing SGA_TARGET 4-22
Disable Automatic Shared Memory Management 4-23
Manually Resizing Dynamic SGA Parameters 4-24
Automatic Optimizer Statistics Collection: Overview 4-25
GATHER_STATS_JOB 4-26
Changing the GATHER_STATS_JOB Schedule 4-27
Locking Statistics 4-28
Using the DBMS_STATS Package 4-29
Automatic Statistics Collection: Considerations 4-30
History of Optimizer Statistics 4-31
Managing Historical Optimizer Statistics 4-32
Automatic Undo Retention Tuning 4-33
Automatic Checkpoint Tuning 4-34
Summary 4-35
Practice 4 Overview 4-36
5 Manageability Infrastructure
Objectives 5-2
Oracle Database 10g Solution: Self-Managing Database 5-3
Automatic Workload Repository 5-4
Automatic Workload Repository: Overview 5-5
Automatic Workload Repository Data 5-6
Active Session History 5-7
Base Statistics and Metrics 5-8
Workload Repository 5-9
Statistic Levels 5-10
AWR Snapshot Baselines 5-11
AWR Snapshot Purging Policy 5-12
Database Control and AWR 5-13
AWR Reports 5-14
Statspack and AWR 5-15
Server-Generated Alerts 5-16
Server-Generated Alerts: Overview 5-17
Alert Models Architecture 5-18
Server-Generated Alert Types 5-19
Out-of-Box Server-Generated Alerts 5-20
Database Control Usage Model 5-21
Database Control Interface to Alerts 5-22
Setting Alert Thresholds 5-23
Alerts Notification 5-24
Metric Details Severity History 5-25
Metric and Alert Views 5-26
PL/SQL Interface for Threshold Settings 5-27
Alert Consumption: Manual Configuration 5-28
Automatic Routine Administration Tasks 5-29
Job Scheduler Concepts 5-30
DBCA and Automated Tasks 5-31
Adding New Tasks Using EM 5-32
Adding a New Task Using PL/SQL 5-33
Common Manageability Infrastructure: Advisory Framework 5-34
Advisory Framework: Overview 5-35
Typical Advisor Tuning Session 5-36
Database Control and Advisors 5-37
DBMS_ADVISOR Package 5-38
Dictionary Changes 5-39
Using PL/SQL: Example 5-40
Summary 5-41
Practice 5 Overview 5-42
6 Application Tuning
Objectives 6-2
Oracle Database 10g Solution: Self-Managing Database 6-3
Automatic Statistics Gathering 6-4
Enhanced Query Optimization 6-5
Statistics on Dictionary Objects 6-6
Dictionary Statistics: Best Practices 6-7
Miscellaneous Statistics-Related Changes 6-8
DML Table Monitoring Changes 6-9
Rule-Based Optimizer Obsolescence 6-10
Automatic SQL Tuning: Overview 6-11
Application Tuning Challenges 6-12
SQL Tuning Advisor: Overview 6-13
Stale or Missing Object Statistics 6-14
SQL Statement Profiling 6-15
Plan Tuning Flow and SQL Profile Creation 6-16
SQL Tuning Loop 6-17
Access Path Analysis 6-18
SQL Structure Analysis 6-19
SQL Tuning Advisor: Usage Model 6-20
Database Control and SQL Tuning Advisor 6-21
SQL Tuning Advisor: Options and Recommendations 6-22
DBMS_SQLTUNE Package 6-23
DBMS_SQLTUNE: Examples 6-24
Automatic SQL Tuning Categories 6-25
SQL Access Advisor: Overview 6-26
SQL Access Advisor: Usage Model 6-27
Possible Recommendations 6-28
Typical SQL Access Advisor Session 6-29
Recommendation Options 6-30
Review Recommendations 6-31
SQL Access Advisor: Procedure Flow 6-32
Performance Monitoring Solutions 6-33
Performance Management Approach 6-34
Database Home Page 6-35
Database Performance Page 6-36
Concurrency Wait Class: Drill Down 6-37
Top SQL by Waits: Drill Down 6-38
Summary 6-39
Practice 6: Overview 6-40
Integrating Interrow Calculations in SQL 7-11
Partitions, Measures, and Dimensions 7-12
Interrow Calculations: Conceptual Overview 7-13
SQL MODEL: Example 7-16
Materialized Join View (MJV) Enhancements 7-18
REWRITE_OR_ERROR Hint 7-19
REWRITE_TABLE: New Columns 7-20
Partition Maintenance Operations (PMOPs) 7-21
MV Execution Plans 7-22
Tuning Manually Created MVs 7-23
Making MVs Fast-Refreshable 7-24
MV Decomposition Example 7-25
TUNE_MVIEW Usage: Example 7-27
MV Refresh Using Trusted Constraints 7-29
Partition Change Tracking (PCT) 7-30
PCT Using List Partitioning 7-31
PCT Using Join Dependency 7-32
PCT Using TRUNCATE PARTITION 7-33
Forcing PCT-Based Refresh 7-34
Summary 7-35
Practice 7 Overview 7-36
8 System Resource Management
Objectives 8-2
Oracle Database 10g Solution: Self-Managing Database 8-3
Database Resource Manager 8-4
Setting Idle Timeouts 8-5
Switching Back to the Initial Consumer Group at End of Call 8-6
Creating a Mapping Using Database Control 8-7
Creating a Mapping Using DBMS_RESOURCE_MANAGER 8-8
Assigning Priorities Using DBMS_RESOURCE_MANAGER 8-9
Changes to DBMS_RESOURCE_MANAGER Package 8-11
Using the RATIO Allocation Method 8-12
Monitoring the Resource Manager 8-13
Summary 8-14
9 Automating Tasks with the Scheduler
Objectives 9-2
Scheduling Needs 9-3
Scheduler Concepts 9-4
Privileges for Scheduler Components 9-6
Creating a Scheduler Job 9-8
Creating a Scheduler Job: Example 9-9
Setting the Repeat Interval for a Job 9-10
Calendaring Expressions 9-11
Using Scheduler Programs 9-12
Creating a Program Using EM 9-13
Specifying Schedules for a Job 9-14
Creating and Using Schedules 9-15
Using EM to Create Schedules: Schedule 9-16
Advanced Scheduler Concepts 9-17
Creating a Job Class 9-18
Creating a Job Class Using Enterprise Manager 9-19
Job Logging 9-20
Creating a Window 9-21
Prioritizing Jobs Within a Window 9-23
Enabling and Disabling Scheduler Components 9-25
Managing Jobs 9-26
Managing Programs 9-27
Managing Programs with EM 9-28
Managing Schedules 9-29
Managing Windows 9-30
Window Priority 9-32
Managing Attributes of Scheduler Components 9-33
Managing Attributes of the Scheduler 9-35
Viewing Job Execution Details 9-36
Viewing Job Logs 9-37
Purging Job Logs 9-38
Data Dictionary Views 9-40
Summary 9-41
Practice 9: Overview 9-42
10 Space Management
Objectives 10-2
Oracle Database 10g Solution: Self-Managing Database 10-3
Proactive Tablespace Monitoring Overview 10-4
Tablespace Space Usage Monitoring 10-5
Edit Tablespace Space Usage Thresholds 10-6
Edit Tablespace Page 10-7
PL/SQL and Tablespace Space Usage Thresholds 10-8
Proactive Undo Tablespace Monitoring 10-10
Shrinking Segments: Overview 10-11
Shrinking Segments: Considerations 10-12
Shrinking Segments Using SQL 10-13
Segment Shrink: Basic Execution 10-14
Segment Shrink: Execution Considerations 10-15
Database Control and Segment Shrink 10-16
Segment Advisor: Overview 10-17
Segment Advisor 10-18
Growth Trend Report 10-19
Segment Resource Estimation 10-20
Undo Management Page 10-21
Undo Advisor Page 10-22
Fast Ramp-Up 10-23
Sorted Hash Cluster: Overview 10-24
Sorted Hash Cluster: Example 10-25
Sorted Hash Cluster: Basic Architecture 10-26
Sorted Hash Cluster: Considerations 10-27
Summary 10-28
Practice 10: Overview 10-29
11 Improved VLDB Support
Objectives 11-2
Bigfile Tablespaces: Overview 11-3
Bigfile Tablespace Benefits 11-4
Bigfile Tablespace Usage Model 11-5
Creating Bigfile Tablespaces 11-6
SQL Statement Changes and Additions 11-7
BFTs and SQL Statements: Examples 11-8
Data Dictionary Changes and Additions 11-9
Bigfile Tablespaces and DBVERIFY 11-10
Configuration Parameters and BFTs 11-11
DBMS_UTILITY Package and BFTs 11-12
Migration and Bigfile Tablespaces 11-13
Extended ROWID Format and BFTs 11-14
DBMS_ROWID Package Changes 11-15
Temporary Tablespace Group: Overview 11-16
Temporary Tablespace Group: Benefits 11-17
Creating Temporary Tablespace Groups 11-18
Maintaining Temporary Tablespace Groups 11-19
Temporary Tablespace Group SQL: Examples 11-20
Data Dictionary Changes 11-22
Database Control: Creating a Partition 11-23
Database Control: Partition Maintenance 11-24
Partitioned IOT Enhancements 11-25
Local Partitioned Index Enhancements 11-26
Skipping Unusable Indexes 11-27
Hash-Partitioned Global Indexes: Overview 11-28
Contention Scenario 11-29
Hash-Partitioned Global Indexes: Benefits 11-30
Creating Hash-Partitioned Global Indexes 11-31
Adding and Coalescing Partitions 11-32
Range and Hash Global Index Commands 11-33
Operations Not Supported 11-34
Usage Example 11-35
Bitmap Index Storage Enhancements 11-36
Summary 11-37
Practice 11: Overview 11-38
12 Backup and Recovery Enhancements
Objectives 12-2
Oracle Database 10g Solution: Self-Managing Database 12-3
New Backup and Recovery Strategy 12-4
Flash Backup and Recovery 12-5
Defining Flash Recovery Area Using Database Control 12-6
Defining a Flash Recovery Area Using SQL 12-7
Flash Recovery Area Space Management 12-8
Backing Up Data Files to a Flash Recovery Area 12-9
Modifying the Flash Recovery Area 12-10
Backing Up the Flash Recovery Area 12-11
New Flash Recovery Area View 12-12
New Flash Recovery Area Columns 12-13
Best Practices for the Database and Flash Recovery Area 12-14
Changes in SQL Statement Behavior 12-15
Recovering with Incrementally Updated Backups 12-17
Fast Incremental Backup 12-18
Enabling Fast Incremental Backup Using Database Control 12-19
Enabling Fast Incremental Backup Using SQL 12-20
Monitoring Block Change Tracking 12-21
Oracle-Suggested Strategy 12-22
RMAN Command Changes 12-23
Backup Type Enhancements Using Database Control 12-24
Backup Maintenance 12-25
Backing Up the Entire Database 12-26
Backing Up Individual Tablespaces 12-27
Backing Up Data Files and Control Files 12-28
Implementing Fast Recovery 12-29
Automated Instance Creation and TSPITR 12-30
Auxiliary Location in EM 12-31
Creating Compressed Backups 12-32
Monitoring Compressed Backups 12-33
Simplified Recovery Through RESETLOGS 12-34
Recovery Through RESETLOGS: Changes 12-35
Recovering Data Files Not Backed Up 12-36
Dropping a Database 12-37
Automatic Channel Failover 12-38
Enhanced RMAN Scripts 12-39
Setting Duration and Throttling Option 12-40
Placing All Files in Online Backup Mode 12-41
How Does File Status Affect BEGIN BACKUP? 12-42
Changes to the END BACKUP Command 12-44
How Does File Status Affect END BACKUP? 12-45
Summary 12-46
Practice 12 Overview 12-47
13 Flashback Any Error
Objectives 13-2
Flashback Time Navigation 13-3
Flashback Error Correction 13-4
Flashback Database: Overview 13-5
Flashback Database Eliminates Restore Time 13-6
Flashback Database Architecture 13-7
Configuring Flashback Database Using EM 13-8
Flashback Your Database Using EM 13-9
Manually Configuring Flashback Database 13-10
Flashback Database: Examples 13-11
Monitoring Flashback Database 13-12
Excluding Tablespaces from Flashback Database 13-13
Flashback Database Considerations 13-14
Flashback Drop: Overview 13-15
Recycle Bin 13-16
Flash Back Dropped Tables Using EM 13-17
Querying the Recycle Bin 13-18
Restoring Tables from the Recycle Bin 13-19
Recycle Bin Automatic Space Reclamation 13-20
Recycle Bin Manual Space Reclamation 13-21
Bypassing the Recycle Bin 13-22
Querying Dropped Tables 13-23
Flashback Drop Considerations 13-24
Flashback Versions Query: Overview 13-25
Flashback Versions Query Using EM 13-26
Flashback Versions Query Syntax 13-27
Flashback Versions Query: Example 13-28
Flashback Versions Query: Considerations 13-29
Flashback Transaction Query: Overview 13-30
Flashback Transaction Query Using EM 13-31
Querying FLASHBACK_TRANSACTION_QUERY 13-32
Using Flashback Versions Query and Flashback Transaction Query 13-33
Flashback Transaction Query: Considerations 13-34
Flashback Table: Overview 13-35
Using EM to Flash Back Tables 13-36
Flashback Table: Example 13-37
Rolling Back a Flashback Table Operation 13-38
Flashback Table: Considerations 13-39
Guaranteed Undo Retention 13-40
SCN and Time Mapping Enhancements 13-41
Granting Flashback Privileges 13-42
When to Use Flashback Technology 13-43
Flashback Technology: Benefits 13-44
Summary 13-45
Practice 13: Overview 13-46
15 Automatic Storage Management (ASM)
Objectives 15-2
What Is Automatic Storage Management? 15-3
ASM: Key Features and Benefits 15-4
ASM: New Concepts 15-5
ASM: General Architecture 15-6
ASM Administration 15-8
ASM Instance Functionalities 15-9
ASM Instance Creation 15-10
ASM Instance Initialization Parameters 15-11
Accessing an ASM Instance 15-12
Dynamic Performance View Additions 15-13
ASM Home Page 15-14
ASM Performance Page 15-15
Starting Up an ASM Instance 15-17
Shutting Down an ASM Instance 15-18
ASM Administration 15-19
ASM Disk Group 15-20
Failure Group 15-21
Disk Group Mirroring 15-22
Disk Group Dynamic Rebalancing 15-23
ASM Administration Page 15-24
Create DiskGroup Page 15-25
Create or Delete Disk Groups 15-26
Adding Disks to Disk Groups 15-27
Miscellaneous Alter Commands 15-28
Monitoring Long-Running Operations Using V$ASM_OPERATION 15-30
ASM Administration 15-31
ASM Files 15-32
ASM File Names 15-33
ASM File Name Syntax 15-34
ASM File Name Mapping 15-36
ASM File Templates 15-37
Template and Alias Examples 15-38
Retrieving Aliases 15-39
SQL Commands and File Naming 15-40
DBCA and Storage Options 15-41
Database Instance Parameter Changes 15-42
Migrate Your Database to ASM 15-43
Summary 15-44
Practice 15 Overview 15-45
16 Maintaining Software
Objectives 16-2
Oracle Database 10g Upgrade Paths 16-3
Choose an Upgrade Method 16-4
DBUA Advantages 16-5
Manual Upgrade: Advantages and Disadvantages 16-6
New Pre-Upgrade Information Utility 16-7
Oracle Database 10g: Simplified Upgrade 16-8
New Post-Upgrade Status Utility 16-9
Properly Prepared Upgrade 16-10
Creating SYSAUX Tablespace 16-11
Recompiling Invalid Objects 16-12
Backing Up the Database Before Upgrade 16-13
Selecting Database Control 16-14
Specifying a Flash Recovery Area 16-15
Selecting Passwords 16-16
Upgrade Summary 16-17
Upgrade Results 16-18
Performing the Manual Upgrade 16-19
Summary 16-22
17 Security
Objectives 17-2
Virtual Private Database: Overview 17-3
Virtual Private Database: Enhancements 17-4
Column-Level VPD: Example 17-5
Creating a Column-Level Policy 17-6
Policy Types: Overview 17-7
Static Policies 17-8
Context-Sensitive Policies 17-9
Sharing Policy Functions 17-10
Auditing Mechanisms: Overview 17-11
Uniform Audit Trails 17-12
Enhanced Enterprise User Auditing 17-13
Fine-Grained Auditing Enhancements 17-14
Fine-Grained Auditing Policy: Example 17-15
Audited DML Statement Considerations 17-16
Summary 17-17
Practice 17: Overview 17-18
18 Miscellaneous New Features
Objectives 18-2
Transaction Monitoring 18-3
Dynamic Performance View Changes 18-4
V$FAST_START_TRANSACTIONS view 18-5
Session-Based Tracing 18-6
End-to-End Tracing 18-7
New Statistic Aggregation Dimensions 18-8
Using Enterprise Manager to Enable Statistics Aggregation 18-9
Using DBMS_MONITOR to Enable Statistics Aggregation 18-10
Generalized Trace Enabling 18-11
Using Enterprise Manager to Enable and View SQL Tracing 18-12
Enabling and Disabling Tracing 18-13
Configurationless Client Connect 18-14
Simplified Shared Server Configuration 18-16
Viewing the Dispatcher Configuration 18-18
Resumable Space Allocation Enhancements 18-19
Flushing the Buffer Cache 18-20
MAXTRANS and Maximum Concurrency 18-21
Large Object (LOB) Data Type Changes 18-22
Implicit Conversion Between CLOB and NCLOB 18-23
Regular Expression Support 18-24
Matching Mechanism 18-25
Syntax: Example 18-26
Using REGEXP_LIKE in SQL 18-27
Case- and Accent-Insensitive Query and Sort 18-28
Changes in Configuration Parameters 18-29
Support in SQL and Functions 18-30
Quote Operator q 18-31
UTL_MAIL Package 18-32
UTL_MAIL Examples 18-33
UTL_COMPRESS Package 18-34
LogMiner Enhancements 18-35
Summary 18-36
Practice 18 Overview 18-37
A Practices
B Solutions
Automatic Storage Management (ASM)
[Diagram: ASM concepts in the storage stack: operating system, database, disk group, file system file or raw device, extent, allocation unit, Oracle block, physical block]
[Diagram: ASM general architecture. Database instances (SID=sales, SID=test) use ASMB, RBAL, and DBW0 processes and foreground (FG) connections to communicate with the ASM instances (SID=ant, SID=bee), which run RBAL and rebalance slaves ARB0 through ARBA against the shared ASM disks]
[Roadmap graphic: ASM instance, disk groups, files]

ASM Instance Initialization Parameters:
INSTANCE_TYPE = ASM
DB_UNIQUE_NAME = +ASM
ASM_POWER_LIMIT = 1
ASM_DISKSTRING = '/dev/rdsk/*s2', '/dev/rdsk/c1*'
ASM_DISKGROUPS = dgroupA, dgroupB
LARGE_POOL_SIZE = 8MB
[Diagram: accessing an ASM instance AS SYSDBA or AS SYSOPER to manage the storage system]
[Diagram: new dynamic performance views describing the storage system: V$ASM_TEMPLATE, V$ASM_CLIENT, V$ASM_DISKGROUP, V$ASM_FILE, V$ASM_ALIAS, V$ASM_DISK, V$ASM_OPERATION]
$ sqlplus /nolog
SQL> CONNECT / AS sysdba
Connected to an idle instance.
SQL> STARTUP;
ASM instance started
Total System Global Area 147936196 bytes
Fixed Size 324548 bytes
Variable Size 96468992 bytes
Database Buffers 50331648 bytes
Redo Buffers 811008 bytes
ASM diskgroups mounted
To shut down an ASM instance, use SHUTDOWN NORMAL.
[Roadmap graphic: ASM instance, disk groups, files]

[Diagram: disk group A, with each file's allocation units striped evenly across all disks in the group]
Failure Group
A failure group is a set of disks, inside one particular disk group, sharing a common resource
whose failure needs to be tolerated. An example of a failure group is a string of SCSI disks
connected to a common SCSI controller. A failure of the controller leads to all of the disks
on its SCSI bus becoming unavailable, although each of the individual disks is still
functional.
What constitutes a failure group is site-specific. It is largely based upon failure modes that a
site is willing to tolerate. By default, ASM assigns each disk to its own failure group. When
creating a disk group or adding a disk to a disk group, administrators may specify their own
grouping of disks into failure groups. After failure groups are identified, ASM can optimize
file layout to reduce the unavailability of data due to the failure of a shared resource.
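As an illustration, a disk group with two administrator-defined failure groups, one per controller, might be created from the ASM instance as follows. This is a minimal sketch: the disk group name and device paths are examples only.

CREATE DISKGROUP dgroupA NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/dev/rdsk/c1t0d0s2', '/dev/rdsk/c1t1d0s2'
  FAILGROUP controller2 DISK '/dev/rdsk/c2t0d0s2', '/dev/rdsk/c2t1d0s2';

With NORMAL redundancy, ASM keeps the mirrored copies of an extent in different failure groups, so the loss of one controller does not lose data.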
Disk Group Dynamic Rebalancing
• Automatic online rebalance whenever the storage configuration changes
• Only data proportional to the storage added is moved
• No need for manual I/O tuning
• Online migration to new storage
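For example, adding a disk to a disk group triggers an automatic online rebalance; the power limit can optionally be raised for that one operation. A sketch, with a made-up device path:

ALTER DISKGROUP dgroupA
  ADD DISK '/dev/rdsk/c3t0d0s2'
  REBALANCE POWER 5;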
[Roadmap graphic: ASM instance, disk groups, files]

[Diagram: automatic ASM file creation: a database file created through RMAN is automatically striped (1, 2, 3, 4) across the disks of the disk group]
ASM Files
ASM files are Oracle database files stored in ASM disk groups. When a file is created,
certain file attributes are permanently set. Among these are its protection policy and its
striping policy.
ASM files are Oracle-managed files. Any file that is created by ASM is automatically
deleted when it is no longer needed. However, ASM files that are created by specifying a
user alias are not considered Oracle-managed files. These files are not automatically deleted.
All circumstances where a database must create a new file allow for the specification of a
disk group for automatically generating a unique file name.
With ASM, file operations are specified in terms of database objects. Administration of
databases never requires knowing the name of a file, though the name of the file is exposed
through some data dictionary views or the ALTER DATABASE BACKUP CONTROLFILE
TO TRACE command.
Because each file in a disk group is physically spread across all disks in the disk group, a
backup of a single disk is not useful. Database backups of ASM files must be made with
RMAN.
Note: ASM does not manage binaries, alert logs, trace files, or password files.
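For example, naming only a disk group is enough for ASM to generate and manage the underlying file. A minimal sketch, assuming a disk group called dgroupA already exists:

CREATE TABLESPACE sample DATAFILE '+dgroupA' SIZE 100M;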
ASM File Names
An ASM file name can be used to reference an existing file, for single-file creation, or for multiple-file creation. The available forms are: fully qualified, numeric, alias, alias with template, incomplete, and incomplete with template:
+<group>/<dbname>/<file_type>/<tag>.<file#>.<incarnation#>
+<group>.<file#>.<incarnation#>
+<group>/<directory1>/…/<directoryn>/<file_name>
+<group>/<directory1>/…/<directoryn>/<file_name>(<temp>)
+<group>
+<group>(<temp>)
SELECT name
FROM V$ASM_ALIAS
WHERE parent_index = :alias_id;
Retrieving Aliases
Suppose that you want to retrieve all aliases that are defined inside the previously defined
directory +dgroupA/mydir. You can traverse the directory tree, as shown in the
example.
The REFERENCE_INDEX number is usable only for entries that are directory entries in the
alias directory. For nondirectory entries, the reference index is set to zero. The example
retrieves REFERENCE_INDEX numbers for each subdirectory and uses the last
REFERENCE_INDEX as the PARENT_INDEX of needed aliases.
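A minimal sketch of that traversal written as a self-join. It assumes a single disk group and a single directory level; a complete version would also filter on GROUP_NUMBER and walk each intermediate directory:

SELECT a.name
FROM   v$asm_alias a, v$asm_alias d
WHERE  d.name = 'mydir'
AND    a.parent_index = d.reference_index;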
…
INSTANCE_TYPE = RDBMS
LOG_ARCHIVE_FORMAT
DB_BLOCK_SIZE
DB_CREATE_ONLINE_LOG_DEST_n
DB_CREATE_FILE_DEST
DB_RECOVERY_FILE_DEST
CONTROL_FILES
LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST
STANDBY_ARCHIVE_DEST
LARGE_POOL_SIZE = 8MB
…
DBUA Advantages
Your upgrade process is automated by DBUA, which performs all of the tasks you would normally perform manually. Before the upgrade can begin, DBUA performs the following pre-upgrade steps:
• Checks for any invalid user accounts or roles
• Checks for any invalid data types
• Checks for any desupported character sets
• Checks for adequate resources, including rollback segments, tablespaces, and free disk space
• Optionally backs up all necessary files
• Disables archiving during the upgrade phase
DBUA automatically modifies or creates newly required tablespaces, invokes the appropriate
upgrade scripts, archives the redo logs, and disables archiving during the upgrade phase.
While the upgrade is running, DBUA shows the upgrade progress for each component, writes
detailed trace and log files, and produces a complete HTML report for later reference. To
enhance security, DBUA automatically locks new user accounts in the upgraded database,
then proceeds to create new configuration files (parameter and listener files) in the new
Oracle home. In a RAC environment, DBUA upgrades all the database and configuration
files on all nodes in the cluster. DBUA supports a silent mode of operation where no user
interface is presented to the user.
Manual Upgrade: Advantages and Disadvantages
Advantages:
• The DBA controls every step of the upgrade process.
Disadvantages:
• More work:
– Must perform a manual space check for the SYSTEM tablespace
– Must manually adjust all obsolete or deprecated initialization parameters
– Must perform a user-driven backup of the database
• Subject to errors
Selecting Passwords
You can set a single password that is applied to each of these Enterprise Manager user
accounts, or you can provide unique passwords for each.
Upgrade Summary
You can review your upgrade selections from this page before committing to upgrade the
database. You should use this page to verify the following upgrade details:
• Database name
• Source Oracle home
• Source database version
• Target Oracle home
• Target database version
• Upgrade time
DBUA additionally lists the database components to be upgraded and the initialization
parameters that are changed during the upgrade.
Scroll down to see the upgrade time. The upgrade time is an estimate of how long the
upgrade of the database will take. It does not include the recompilation time of the invalid
PL/SQL modules.
Note: After you click Finish and start the upgrade, you cannot go back to previous screens.
You can, however, click Stop to stop the upgrade operation. If you click Stop, Oracle
Corporation recommends that you remove the database that you are upgrading and restore
the backup database.
Upgrade Results
You can use this screen to:
• Examine the results of the upgrade
• Manage the passwords in the upgraded database
• Restore the original database settings (if necessary)
If you are not satisfied with the upgrade results, you can click Restore. Depending on the
method you used to back up your database, the Restore option performs one of two tasks:
• If you used DBUA to back up your database, then clicking Restore restores the original
database and the original database settings from the backup.
• If you used your own backup procedure to back up the database, clicking Restore only
restores the original database settings. You must perform a database restore manually
with your own backup utilities.
BEGIN
  dbms_rls.add_policy(object_schema     => 'hr',
                      object_name       => 'employees',
                      policy_name       => 'hr_policy',
                      function_schema   => 'hr',
                      policy_function   => 'hrsec',
                      statement_types   => 'select,insert',
                      sec_relevant_cols => 'salary,commission_pct');
END;
/
Static Policies
When you use static policies, VPD always enforces the same predicate for access control.
Regardless of which user accesses the objects, everyone gets the same predicate.
The Oracle database only needs to execute the policy function once. The returned predicate
is cached in SGA for all static policies with the same policy function. This makes static
policies very fast since the database does not reexecute the policy function for each query.
You use a static policy when every query needs the same policy predicate.
For the static category, shared static policies allow you to share the same policy function
with multiple objects. The caching behavior in this case is exactly the same except that the
Oracle database first looks for cached predicates generated by the same policy function of
the same policy type.
You enable static or shared static policies by setting the POLICY_TYPE parameter of the
DBMS_RLS.ADD_POLICY procedure to DBMS_RLS.STATIC or
DBMS_RLS.SHARED_STATIC, respectively.
In this example, the business policy is that a manager can access EMPLOYEES sensitive
information only for his employees.
Note: Although the policy predicate is the same for every rewritten statement, each
execution of the same rewritten statement could produce a different row set because the
predicate may filter the data differently based on context attributes or functions like
SYSDATE.
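A minimal sketch of enabling the static policy type, reusing the hrsec policy function from the earlier example (the policy name here is hypothetical):

BEGIN
  dbms_rls.add_policy(object_schema   => 'hr',
                      object_name     => 'employees',
                      policy_name     => 'hr_static_policy',
                      function_schema => 'hr',
                      policy_function => 'hrsec',
                      policy_type     => DBMS_RLS.STATIC);
END;
/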
Context-Sensitive Policies
There are cases where policy predicates should be static for a particular user session, though
different users may be subjected to different predicates. There are also cases where the
policy predicate can change when certain context attributes are changed within a user
session. So a context-sensitive policy assumes that the policy predicate may be changed
after statement parsing for a particular database session, and that such change can occur only
if there are some session context changes. Therefore the server reevaluates the policy
function at statement execution time if it detects context changes since the last use of the
cursor. The policy predicate is cached in the session memory.
When a context-sensitive policy shares its policy function, the caching behavior is similar
except that the server first looks for cached policy predicate generated by the same policy
function for the same policy type within the same database session.
You use a context-sensitive policy when a predicate need not change for a user’s session, but
the policy must enforce two or more different predicates for different users. You enable
context-sensitive or shared context-sensitive policies by setting the POLICY_TYPE
parameter of the DBMS_RLS.ADD_POLICY procedure to
DBMS_RLS.CONTEXT_SENSITIVE or DBMS_RLS.SHARED_CONTEXT_SENSITIVE,
respectively.
In this example, the business policy is that a manager can access EMPLOYEES2 sensitive
information only for his employees, and employees who are not managers can access only
their own sensitive information.
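A comparable sketch for the context-sensitive policy type, against the EMPLOYEES2 table mentioned above (the policy name and policy function are hypothetical):

BEGIN
  dbms_rls.add_policy(object_schema   => 'hr',
                      object_name     => 'employees2',
                      policy_name     => 'hr_ctx_policy',
                      function_schema => 'hr',
                      policy_function => 'hrsec2',
                      policy_type     => DBMS_RLS.CONTEXT_SENSITIVE);
END;
/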
Sharing Policy Functions
[Diagram: the same policy function protecting the departments, countries, employees, and emp_v objects]
Uniform Audit Trails
[Diagram: with AUDIT_TRAIL=DB_EXTENDED, the standard audit trail (DBA_AUDIT_TRAIL) and the fine-grained audit trail (DBA_FGA_AUDIT_TRAIL) record a common set of new attributes: STATEMENTID, ENTRYID, EXTENDED_TIMESTAMP, PROXY_SESSIONID, GLOBAL_UID, INSTANCE_NUMBER, OS_PROCESS, TRANSACTIONID, SCN, SQL_BIND, and SQL_TEXT. The DBA_COMMON_AUDIT_TRAIL view combines both trails, mapping USERNAME and DB_USER as well as GLOBAL_UID across them]
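A hedged sketch of querying the combined view (the user filter and column selection are examples only):

SELECT db_user, extended_timestamp, scn, sql_text
FROM   dba_common_audit_trail
WHERE  db_user = 'HR';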
BEGIN
  dbms_fga.add_policy(
    object_schema     => 'HR',
    object_name       => 'EMPLOYEES',
    policy_name       => 'my_policy',
    audit_condition   => NULL,
    audit_column      => 'SALARY,COMMISSION_PCT',
    audit_column_opts => DBMS_FGA.ALL_COLUMNS,
    audit_trail       => DBMS_FGA.DB,
    statement_types   => 'INSERT,UPDATE');
END;
/
UPDATE hr.employees
SET salary = 10
WHERE commission_pct = 90;
UPDATE hr.employees
SET salary = 10
WHERE employee_id = 111;
Transaction Monitoring
In previous releases, you could monitor parallel transaction recovery with two views:
V$FAST_START_SERVERS and V$FAST_START_TRANSACTIONS. However, you could
not monitor normal transaction rollback or transactions recovered by SMON.
Through enhancements to transaction rollback monitoring, you can now monitor (in real-time)
normal transaction rollback and transaction recovery by SMON. In addition, you can view
historical information about transaction recovery and transaction rollback. Given historical
information about transaction recovery, you can calculate average rollback duration. When
you have the current state of the recovery, you can determine how much work has been done
and how much work remains. Using these two pieces of information, you can better estimate
transaction recovery time and set the FAST_START_PARALLEL_ROLLBACK initialization
parameter more appropriately to optimize system performance.
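For instance, once the average rollback duration is known, the parameter can be raised dynamically (HIGH is one of its valid values, alongside FALSE and LOW):

ALTER SYSTEM SET fast_start_parallel_rollback = HIGH;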
[Diagram: V$FAST_START_TRANSACTIONS joins to V$FAST_START_SERVERS through the XID/PXID and RCVSERVERS columns]
SELECT state, undoblocksdone, undoblockstotal, cputime
FROM v$fast_start_transactions;
V$FAST_START_TRANSACTIONS View
This statement can be used to track transaction recovery after instance startup. As you can
see, once the transaction is recovered, its statistics remain inside the
V$FAST_START_TRANSACTIONS view, but its STATE is set to RECOVERED. Historical
information is kept in V$FAST_START_TRANSACTIONS until the next instance
shutdown.
More operations are added into V$SESSION_LONGOPS. This allows you to monitor
ROLLBACK and ROLLBACK TO operations longer than six seconds.
SQL> SELECT message FROM v$session_longops;
MESSAGE
---------------------------------------------
Transaction Rollback: xid:0x0001.00a.00000812 : 4600 out
of 4600 Blocks done
Transaction Rollback: xid:0x0001.007.00000812 : 4601 out
of 4601 Blocks done
2 rows selected.
SQL>
[Diagram: trace files from multiple shared server dispatchers consolidated with trcsess]

$ trcsess output=<user.trc> clientid=<user_name> *.trc
Session-Based Tracing
In a shared server environment there are many trace files that may be associated with a
given session, which makes tracing the life of a session very difficult.
You can consolidate the information from these many trace files into a single output using
the trcsess command line tool.
This output is then directed to a file or to your window. You supply the name of the trace
files to be consolidated, and these file names can contain wildcard characters.
The output is raw consolidated information from the trace files. To evaluate the information,
you need to format it with the tkprof utility.
End-to-End Tracing
End-to-End Tracing facilitates the following tasks:
• Debugging of performance problems in multitier environments: In multitier
environments, a request from an end-client is routed to different database sessions by
the middle tier. Previously, there was no easy way to keep track of a client across these
different database sessions. End-to-End Tracing makes this possible by introducing a
new attribute, CLIENT_IDENTIFIER, which uniquely identifies a given end-client
and is carried through all tiers to the database server. Enabling tracing based on the
CLIENT_IDENTIFIER solves the problem of debugging performance problems in
multitier environments. The client identifier is visible in the CLIENT_IDENTIFIER
column of V$SESSION and also visible through the system context, as shown in the
following query:
SQL> SELECT SYS_CONTEXT ('USERENV', 'CLIENT_IDENTIFIER')
  2  FROM dual;
• Efficient management and accounting of workload: For applications using services that
have been instrumented with MODULE and ACTION name annotation, End-to-End
Tracing provides a means to set apart important transactions in an application.
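An application typically tags its sessions with a client identifier through the DBMS_SESSION package. A minimal sketch (the identifier value is hypothetical):

BEGIN
  dbms_session.set_identifier('janet_13');
END;
/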
Tracing can be enabled for a specific client identifier, or for a combination of service name, module name, and action name:

begin
  dbms_monitor.client_id_trace_enable(client_id => '<user_name>');
end;
/

$ trcsess output=<user.trc> clientid=<user_name> *.trc
CLIENT_ID_TRACE_[ENABLE|DISABLE](<client_id>, waits, binds)

SERV_MOD_ACT_TRACE_[ENABLE|DISABLE](<service_name>, <module_name>, <action_name>, waits, binds, <instance_name>)

SESSION_TRACE_[ENABLE|DISABLE](<session_id>, <serial_num>, waits, binds)
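A hedged usage sketch (the service, module, and action names are examples; waits and binds default to TRUE and FALSE, respectively):

BEGIN
  dbms_monitor.serv_mod_act_trace_enable(
    service_name => 'sales',
    module_name  => 'orders',
    action_name  => 'insert_item',
    waits        => TRUE,
    binds        => FALSE);
END;
/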
physical_attributes_clause::=
  PCTFREE  integer
  PCTUSED  integer
  INITRANS integer
  storage_clause
The steps below trace a match attempt against the string aabcd. The caret in the original illustration marks the current position; after a failed attempt the matcher resets and advances one character:
1. Look for 'a' and succeed (Match)
2. Look for 'b' and fail (No match)
3. Look for 'c' and fail; reset and advance (No match)
4. Look for 'a' and succeed (Match)
5. Look for 'b' and succeed; remember 'c' as an alternative (Match)
6. Look for 'd' and fail (No match)
7. Look for 'c' as the last remembered alternative and fail; reset and advance (No match)
8. Look for 'a' and fail; reset and advance (No match)
9. Look for 'a' and fail; reset and advance (No match)
10. Look for 'a' and fail; reset and advance (No match)
Matching Mechanism
A match is attempted at the beginning of the string at the first character. If the character does
not match, the process starts again at the next character in the string. If the character
matches, the next character to be matched is examined and any alternatives that would also
have resulted in a match are remembered. This process continues until a character fails. The
latest remembered alternative is then attempted at the position for which it was valid.
If the regular expression has no more alternatives to try, the match fails to produce a positive
result.
Note: Oracle Database 10g follows the exact syntax and matching semantics for these
operators as defined in the POSIX standard for matching ASCII data.
Syntax: Example
In the syntax, the REGEXP_LIKE condition evaluates to TRUE if the search value
(srcstr) matches the regular expression (pattern), optionally using the matching option
(match_option).
The search value is a character expression and can be any of the following data types:
CHAR, VARCHAR2, NCHAR, NVARCHAR2, CLOB, or NCLOB. The regular expression is
usually a text literal and can be a CHAR, VARCHAR2, NCHAR, or NVARCHAR2 data type. If
the data type of the regular expression is different from the data type of the search value, the
regular expression is implicitly converted to the data type of the search value. The matching
option modifies the default matching behavior. You can, for example, choose case-
insensitive matching or treat a search value as a multiple line string. If either the search
value or the regular expression is NULL, the result is unknown.
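A short illustrative query (the table and pattern are examples; the 'i' match option requests case-insensitive matching):

SELECT first_name
FROM   hr.employees
WHERE  REGEXP_LIKE(first_name, '^Ste(v|ph)en$', 'i');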
• Examples:
NLS_SORT = FRENCH_M_AI
NLS_SORT = XGERMAN_CI
CUST_LAST_NAME
--------------------
de Niro
De Niro
dë Nirõ
You typically use the NLSSORT function in an ORDER BY or WHERE clause when the
linguistic setting of the session parameter NLS_SORT is different from the linguistic setting
in the SQL statement. The example in the illustration searches for all occurrences of “De
Niro” regardless of the case and accent. You can achieve the same outcome as shown in the
illustration by setting the NLS_COMP parameter:
ALTER SESSION SET NLS_SORT=generic_m_ai;
ALTER SESSION SET NLS_COMP=ansi;
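A minimal sketch of the NLSSORT form of the search described above, assuming the SH sample schema:

SELECT cust_last_name
FROM   sh.customers
WHERE  NLSSORT(cust_last_name, 'NLS_SORT=GENERIC_M_AI') =
       NLSSORT('De Niro', 'NLS_SORT=GENERIC_M_AI');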
Quote Operator q
You can eliminate previous additional quotation strings in character string literals by
choosing your own quotation mark delimiter. It supports both CHAR and NCHAR literals.
You choose any convenient delimiter, single or multibyte, or any of the [ ], { }, ( ), or < >
character pairs. The delimiter can even be a single quotation mark. However, if the delimiter
appears in the text literal itself, ensure that it is not immediately followed by a single
quotation mark. In the first example, X is used as the quotation mark delimiter. You do not
need to prefix the single quotation mark inside ‘John’s Bait Shop’ with another single
quotation mark. The second example shows PL/SQL using the paired brackets [ ] as the
delimiter. Hence you do not need to use two single quotation marks inside the character
string used to initialize the v_string1 variable. This makes your SQL text much more
readable. The new quote operator can also be used with the new SQL_TUNE function, which
takes a SQL statement as its argument. For example:
EXECUTE SQL_TUNE -
('select * from emp where name LIKE ''%DBMS_%''',
...)
With the new quote operator, you can rewrite this statement as:
EXECUTE SQL_TUNE -
(q'!select * from emp where name LIKE '%DBMS_%'!')
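For instance, with X or a bracket pair as the delimiter, the literal from the first example can be written without any doubled quotation marks (shown here in simple queries against DUAL):

SELECT q'XJohn's Bait ShopX' AS shop_name FROM dual;
SELECT q'[John's Bait Shop]' AS shop_name FROM dual;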
UTL_MAIL Package
The supplied UTL_MAIL PL/SQL package is a user-friendly utility for managing e-mail. It
allows you to use well-defined and commonly used e-mail features such as attachments, Cc,
and Bcc. Using UTL_MAIL, you can send e-mail in a single PL/SQL call and use
parameters to pass commonly used e-mail elements such as the sender, receivers, Cc, Bcc,
subject, and message body.
As an example, if you have a bug system and you want an easy interface to send e-mail with
a small attachment to a recipient, UTL_MAIL provides a good interface for this task.
For sending large attachments (larger than 32 KB) to hundreds of recipients, use the
UTL_SMTP package. The UTL_MAIL package will be enhanced in the future to include
greater functionality.
Note: To install the UTL_MAIL package, you need to run the utlmail.sql and
prvtmail.plb scripts located in the $ORACLE_HOME/rdbms/admin directory.
For more information on the UTL_MAIL package, please see the PL/SQL Packages and
Types Reference.
execute utl_mail.SEND -
(sender     => 'the.instructor@oracle.com' -
,recipients => 'some valid email address' -
,subject    => '10gNF, UTL_MAIL Demo' -
,message    => 'Note: 1st message, no attachment');

execute utl_mail.SEND_ATTACH_VARCHAR2 -
( ... -
, message    => '2nd message, attachment in line' -
, attachment => 'The 2nd demo attachment text' -
, att_inline => TRUE );

execute utl_mail.SEND_ATTACH_VARCHAR2 -
( ... -
, att_inline   => FALSE -
, att_filename => 'message.doc' );
UTL_MAIL Examples
The first example shows how you can use the SEND procedure to send an e-mail message
without attachments. The second example shows you how to use the
SEND_ATTACH_VARCHAR2 procedure to send an e-mail with an in-line attachment. The
ATT_INLINE argument is set to TRUE.
The third example shows how you can send e-mail messages with out-of-line attachments.
Note: The ATT_FILENAME argument is the suggested file name if recipients are to save
the attachment as a file. To send a file as an out-of-line attachment, you need to copy it into
a BLOB and then send it as an attachment.
UTL_COMPRESS Package
You use the UTL_COMPRESS package to compress data then restore the data to its original
uncompressed format. Compressed data is less expensive to store in terms of disk or
database space. The compression process reduces the size of a piece of data in such a way
that it can later be expanded to its original form. A file is scanned for repeated patterns of
bytes that can be expressed in terms of bytes and numbers in the compressed file. Some file
types, such as those that tend to contain repeated word patterns, benefit more from
compression. Compression efficiency and speed are tunable parameters. The greater the
speed of compression, the less the overall efficiency (and vice versa). The optional
quality argument allows you to choose between speed and compression quality,
meaning the percentage of reduction in size. A faster speed results in less compression of the
data, and a slower speed results in more compression of the data. Valid values are 1–9, with
1=fastest and 9=slowest.
For this release of the UTL_COMPRESS utility, compression must perform at the rate of at
least 10 KB per minute.
Note: The output of the UTL_COMPRESS compressed data is compatible with gzip (with
-n option) or gunzip on a single file.
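A minimal round-trip sketch that compresses a RAW value and restores it, printing both sizes (the sample string is arbitrary; quality 9 favors compression over speed):

SET SERVEROUTPUT ON
DECLARE
  l_src  RAW(100) := utl_raw.cast_to_raw('aaaaaaaaaaaaaaaaaaaabbbbb');
  l_zip  RAW(200);
  l_back RAW(200);
BEGIN
  l_zip  := utl_compress.lz_compress(src => l_src, quality => 9);
  l_back := utl_compress.lz_uncompress(src => l_zip);
  dbms_output.put_line('original:   ' || utl_raw.length(l_src) || ' bytes');
  dbms_output.put_line('compressed: ' || utl_raw.length(l_zip) || ' bytes');
END;
/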
LogMiner Enhancements
If you are using LogMiner against the same database that generated the redo log files,
LogMiner scans the control file and determines the redo log files that are needed to satisfy
your requested time or system change number (SCN) range. LogMiner adds the redo logs
from the mining database by default. You no longer need to map your time frame to an
explicit set of redo log files. You must use the CONTINUOUS_MINE option and specify a
STARTSCN or STARTTIME. If you are not sure of the SCN or time, you can query
V$ARCHIVED_LOG to obtain that information. You can also use this view to determine
what redo log files are available for automatic detection by LogMiner. You can use the
NO_ROWID_IN_STMT option to disable the generation of physical row identifiers in the
reconstructed SQL statements. With supplemental logging, the redo stream contains
logically unique identifiers for the modified rows, so the physical row identifiers are not
needed.
In previous releases, you used the REMOVEFILE option with ADD_LOGFILE to remove
redo log files. That option has been deprecated; now you can remove redo log files with the
REMOVE_LOGFILE procedure.
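A hedged sketch of starting a continuous-mine session from an SCN obtained from V$ARCHIVED_LOG, and of removing a redo log file with the new procedure (the SCN value and file name are placeholders):

BEGIN
  dbms_logmnr.start_logmnr(
    startscn => 123456,
    options  => dbms_logmnr.continuous_mine +
                dbms_logmnr.dict_from_online_catalog +
                dbms_logmnr.no_rowid_in_stmt);
END;
/

EXECUTE dbms_logmnr.remove_logfile(logfilename => '/arch/log_25.arc')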
For most of these practice exercises you will be using Enterprise Manager Database Control.
In this exercise you will take a full offline backup of your database, and start your database using
Database Control. You will then perform some navigational exercises to familiarize yourself
with Enterprise Manager Database Control.
2. Determine your host IP address from your PC from the /etc/hosts file. Open your
browser and enter the URL: http://<IP address>:5500/em to initiate Enterprise Manager
Database Control. On the status page, click Startup.
3. Enter the host credentials (oracle/oracle) and database credentials (sys/oracle) and click OK.
Click Yes to confirm startup database in open mode.
Enter username as SYS with a password of oracle and connect as SYSDBA.
For this practice, you should log in as system, with a password of oracle, either through
Database Control or SQL*Plus.
1. Using Database Control, create a new user called DP identified by DP. The user DP should have
EXAMPLE as its default tablespace, and TEMP as its temporary tablespace. Make sure that you
grant the CONNECT, RESOURCE, and DBA roles to user DP.
2. You decide that you need to export the SALES, PRODUCTS, and COSTS tables from the SH
schema. However, you want to know which other tables the above three are depending on.
Determine the complete list of tables that need to be exported.
3. Connected as user DP, use the Data Pump Export wizard to export the following SH tables:
SALES, PRODUCTS, COSTS, CHANNELS, PROMOTIONS, TIMES, CUSTOMERS, and
COUNTRIES. Ensure the following:
• Set 1 as the maximum number of threads in the export job.
• The Oracle directory DPDIR1 is used to store both the log file and the dump file set.
DPDIR1 should point directly to your $HOME OS directory. Make sure you specify the
complete path without using environment variables.
• Do not include the row that corresponds to CHANNEL_ID 5 in the CHANNELS table.
• Do not submit the export job.
• Write down the Data Pump job name.
5. You want to see and change some of the Data Pump job characteristics. How can you do this?
6. Now that the Data Pump job execution is suspended, connect as user DP through SQL*Plus.
What table has been created?
7. Connect as user oracle from your terminal emulator window and determine the list of
processes associated with your instance. What is your conclusion?
Hint: Look at all the new processes spawned in Oracle Database 10g.
8. How can you see the amount of work performed so far by your Data Pump job?
9. Connect as the oracle user from your terminal emulator window and attach to the existing
export Data Pump job. Connect as the oracle user from a second terminal emulator window
and determine the list of processes associated with your instance. What is your conclusion?
11. Remove the Database Control job run from the repository.
12. Now that you successfully exported your tables from the SH user, import only the SALES and
PRODUCTS tables back into the DP schema using the Data Pump Import Wizard.
13. To clean up your environment, you should delete your Database Control job run, and drop the
user DP from your database.
1. Use Database Control to create a new tablespace called TBSADDM. This tablespace should have
only one 50 MB file and must be locally managed. Also, make sure that TBSADDM does not use
automatic segment space management.
2. Using Database Control, create a new user called ADDM identified by ADDM. Make sure that the
ADDM user has TBSADDM as its default tablespace, and TEMP as its temporary tablespace. When
done, grant the following roles to the ADDM user: CONNECT, RESOURCE, DBA.
4. Connect as user oracle from your terminal emulator and execute the lab_04_01_04.sh
script from your labs directory.
5. From the Database Control home page go to the Performance page. If this is the first time you
go to the Performance page, you need to click Accept in the Adobe license agreement pop-up
screen. On the Performance page, make sure that the View Data field is set to Real Time: 15
Seconds Refresh. After a while, you should see a spike on the Sessions: Waiting and Working
graph. After the spike is finished, execute the lab_04_01_05.sql script. This script forces
the creation of a new snapshot. Looking at the graph, you can already determine that this instance
is suffering concurrency problems.
6. Return to the Database Control home page. Because the ADDM data is not refreshed too
frequently on the console, you may not see the latest ADDM result in the Diagnostic Summary
region. Retrieve the latest ADDM findings, and determine the cause of the problem.
7. To fix the problem, create a new tablespace called TBSADDM2, and execute the
lab_04_01_07.sql script from your labs directory. This script drops the ADDM table, and re-
creates it in the new tablespace. This script also gathers statistics on the table and takes a new
snapshot.
8. Connect as user oracle in your terminal emulator and execute again the lab_04_01_04.sh
script from your labs directory.
9. From the Database Control home page, go to the Performance page. On the Performance
page, make sure that the View Data field is set to Real Time: 15 Seconds Refresh. After a
while, you should see a spike on the Sessions: Waiting and Working graph. When the spike is
finished, execute the lab_04_01_05.sql script. This script forces the creation of a new
snapshot. Looking at the graph, you can already determine that this instance is no longer
suffering from concurrency problems.
1. Use Database Control to shut down your instance, and start it up again using the
init_sgalab.ora initialization parameter file located in your labs directory. Before doing
this, make sure that the init_sgalab.ora parameter file can be used to start up your
instance.
2. Connect as SYSDBA through SQL*Plus and execute the lab_04_02_02.sql script. This
script creates a new tablespace and a new table, and populates the table.
3. Use Database Control to check the size of the various SGA buffers of your instance.
4. Connect as SYSDBA through SQL*Plus and execute the lab_04_02_04.sql script. This
script executes a parallel query on the previously created table. What happens and why?
5. Using Database Control only, how can you fix this problem? Implement your solution.
6. Connect as SYSDBA through SQL*Plus and determine the effects of the previous step on the
memory buffers. What are your conclusions?
7. Connect as SYSDBA through SQL*Plus and execute the lab_04_02_04.sql script again.
This script executes a parallel query on the previously created table. Using Database Control, and
while the script is running, verify that your solution is working. What happens and why?
8. If you use SQL*Plus instead of Database Control, what commands do you execute to enable the
Automatic Shared Memory Management feature after you started your instance using the
init_sgalab.ora file? Explain your decision. It is assumed that you want to have a
maximum of 256MB of SGA memory allocated.
9. Connect as SYSDBA through SQL*Plus and execute the lab_04_02_09.sql script to clean
up your environment.
1. Connect as SYSDBA through SQL*Plus and execute the lab_05_01_01.sql script. This
script adds the new subscriber ALERT_USR1 to the internal ALERT_QUE queue. It then grants
user SYSTEM the right to dequeue from the ALERT_QUE. Then, the script creates a special
procedure that is used by user SYSTEM to dequeue alert information from the ALERT_QUE.
2. Connect as SYSDBA through SQL*Plus, check that you do not have any outstanding alerts
for the User Commits Per Sec metric, and look at your alert history. Then, set the User
Commits Per Sec metric with a warning threshold set to 3, and a critical threshold set to 6. Make
sure that the observation period is set to one minute, and that the number of consecutive
occurrences is set to 2. When done, check that the metrics thresholds have been set correctly.
Again, look at your outstanding alerts and alert history. What are your conclusions?
3. Execute the lab_05_01_03.sql script. This script creates a new table and inserts one row in
it.
4. Connect as SYSDBA through Database Control and look at the corresponding metrics graphic
rate. Then, execute the lab_05_01_04.sql script. This script generates a commit rate
between three and six commits per second for one minute on your system. While the script is
executing, observe the metrics graph using Database Control. After a minute or two, through
SQL*Plus, look at your outstanding alerts and alert history. What are your conclusions?
5. While connected as SYSDBA through Database Control, look at the corresponding metrics
graphic rate, and execute the lab_05_01_05.sql script. This script generates a commit rate
of five commits per second for three minutes on your system. While the script is executing,
observe the metrics graph using Database Control. After the script finishes its execution, examine
your outstanding alerts and alert history using both SQL*Plus and another Database Control
session. What are your conclusions?
6. Wait three more minutes, and view your outstanding alerts and the alert history again. What are
your conclusions?
7. While connected as SYSDBA through Database Control, look at the corresponding metrics
graphic rate, and execute the lab_05_01_07.sql script. This script generates a commit rate
of eight commits per second for three minutes on your system. While the script is executing,
observe the metrics graph using Database Control. After the script finishes its execution, look at
your outstanding alerts and alert history using both SQL*Plus and Database Control. What are
your conclusions?
8. Wait three more minutes, and look at your outstanding alerts and the alert history again. What are
your conclusions?
9. Connect as user SYSTEM through SQL*Plus and execute the SYS.SA_DEQUEUE procedure
multiple times. This procedure was created during the first step. Before executing the procedure,
execute the SET SERVEROUTPUT ON command. What are your conclusions?
1. Connect as SYSDBA through Database Control and navigate to the Performance tab of the
Database Control home page. On the Performance page, make sure that the View Data field is
set to Real Time: 15 Seconds Refresh. When done, open a terminal emulator window connected
as user oracle and change your current directory to your labs directory: cd $HOME/labs.
Then, enter the following command from the OS prompt:
. ./setup_perflab.sh
2. When the setup_perflab.sh script completes, in approximately five minutes, observe the
Performance page for six minutes. What are your conclusions?
5. After you fix the problem, how can you quickly verify that the problem was solved?
7. Assume that someone else has executed the previous steps some time ago when you were out for
vacation. Back in the office, you want to see what happened while you were away. How can you
do this?
Returning to the previous example, retrieve the history of what happened to your system.
8. Return to the Performance page. During the period of time where the workload was running,
determine the most important wait category from the Sessions: Waiting and Working graph,
and find the history of what was done to fix the problem.
9. To clean up your environment, execute the following command from your command-line
window: . ./cleanup_perflab.sh.
1. Connected as SYSDBA through SQL*Plus, flush the shared pool and execute the following four
scripts, in this order:
a. lab_06_02_01a.sql
b. lab_06_02_01b.sql: (Star query)
c. lab_06_02_01c.sql: (Star query)
d. lab_06_02_01d.sql: (Order by)
2. Connected as SYSDBA through SQL*Plus, execute the lab_06_02_02.sql script. This script
creates a new SQL tuning set called MY_STS_WORKLOAD, which captures the SQL statements
that you ran in step one.
3. Connected as SYSDBA through Database Control, use the SQL Access Advisor to generate
recommendations for the MY_STS_WORKLOAD SQL tuning set.
4. Looking at the Recommendations page for your SQL Access Advisor task, what are your
conclusions?
5. Implement the SQL Access Advisor recommendation that has the most benefit on your workload.
Then, redo Steps 3 and 4. What are your conclusions?
In this practice you will create two small tables, based on the SH schema. Using these two tables, you
investigate the difference between a partitioned outer join and a regular outer join. Unless specified
otherwise, you should be logging in as SH either through SQL*Plus or iSQL*Plus.
1. Connect to the SH schema, and alter the session so that the NLS_DATE_FORMAT is set to 'DD-
MON-YYYY'. Confirm that the two tables T1 and S1 you create in the next step do not presently
exist. You can use the script lab_07_01_01.sql.
3. Execute the lab_07_01_03.sql script to create the table T1. Query the contents of T1.
4. Define a break on PROD_ID (to enhance output readability), and execute the right outer join
query in the lab_07_01_04.sql script. What do you notice about the returned rows?
5. Now, execute the partitioned outer join query in the lab_07_01_05.sql script.
6. Compare the results of the two queries executed in steps 4 and 5. What is the difference?
7. You will need the S1 table in the next practice; you can drop the T1 table now.
You can use the S1 table you created in the previous practice to experiment with the new MODEL
clause to perform inter-row calculations.
1. Connect as the SH schema and query all rows of the S1 table to see the table contents.
TIP: Remember to clear the format break set in the previous practice.
3. Change the above query to suppress the original rows from the S1 table by adding the RETURN
UPDATED ROWS clause after the RULES keyword. You can execute the lab_07_02_03.sql
script.
1. Connect to the SH schema, and run the lab_07_03_01.sql script to ensure the MY_MV
materialized view does not exist.
2. Execute the lab_07_03_02.sql script to create a materialized view called MY_MV, and
execute the dbms_stats.gather_table_stats(USER, 'MY_MV') procedure to
gather statistics against MY_MV.
4. Fix the error: Change QUANTITY_SOLD to AMOUNT_SOLD on line 3, and repeat the test.
5. Run the lab_07_03_05.sql script to execute EXPLAIN PLAN against the query in the
previous step and query the PLAN_TABLE table, to see the improved execution plan readability.
6. Before you can use the DBMS_MVIEW.EXPLAIN_REWRITE procedure, you must create the
REWRITE_TABLE table with the utlxrw.sql script available in the
$ORACLE_HOME/rdbms/admin directory. Run the
$ORACLE_HOME/rdbms/admin/utlxrw.sql script now.
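For reference, a sketch of the subsequent EXPLAIN_REWRITE call; the query string and statement ID
below are hypothetical placeholders for the step 5 query:
BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query        => 'SELECT /* the step 5 query goes here */ ...',
    mv           => 'MY_MV',
    statement_id => 'ID1');
END;
/
SELECT message FROM rewrite_table WHERE statement_id = 'ID1';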
In this practice, you use the Database Control application to define and monitor the Scheduler and
automate tasks. Unless specified otherwise, you should be logging in as SYSDBA either through
Database Control or SQL*Plus.
1. Log in to EM Database Control as the SYSTEM user and grant the following roles to the HR user:
• CONNECT role
• RESOURCE role
• DBA role
Because you are going to use user HR to administer jobs through Database Control, you need to
make sure that HR is registered as a possible Administrator.
2. Log in to Database Control as the HR user. From the Administration tab, click the Jobs link in
the Scheduler region, at the bottom right corner of the page. Are there any existing jobs?
3. Are there any existing programs? (Hint: Use the browser Back button).
5. Are there any existing windows? What resource plan is associated with each window?
6. Are there any existing job classes? If so, what resource consumer group is associated with each
job class?
In this practice, you will use Database Control to create Scheduler objects and automate tasks. Unless
specified otherwise, you should be logging in as SYSDBA either through
Database Control or SQL*Plus.
1. While logged in to the database as the HR user in Database Control, click the Administration
tab. Under the heading Scheduler, click Jobs. Click the Create button to open the Create Job
window.
Create a simple job that runs a SQL script:
• General:
Name: CREATE_LOG_TABLE_JOB
Owner: HR
Description: Create the SESSION_HISTORY table for the next part of this practice
Logging level: RUNS
Command type: In-line Program: Executable
Executable: /home/oracle/labs/lab_09_02_01.sh
• Schedule:
Repeating: Do not Repeat
Start: Immediately
• Options:
No special options.
3. If the job does not appear on the Scheduler Jobs page, click the Refresh button. Then click the
Run History tab and verify that the job ran successfully.
4. Create a program called LOG_SESS_COUNT_PRGM that logs the current number of database
sessions into a table. Use the following code, or use the lab_09_02_04.sql script:
DECLARE
sess_count NUMBER;
BEGIN
SELECT COUNT(*) INTO sess_count FROM V$SESSION;
INSERT INTO session_history VALUES (systimestamp, sess_count);
COMMIT;
END;
6. Return to Database Control, and verify that the schedule was created.
Hint: You may have to refresh the page for the Schedule to appear.
7. Using Database Control, create a job named LOG_SESSIONS_JOB that uses the
LOG_SESS_COUNT_PRGM program and the SESS_UPDATE_SCHED schedule. Make sure the
job uses FULL logging.
9. Use Database Control to alter the SESS_UPDATE_SCHED schedule from every three seconds
to every three minutes.
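The same change can be made from SQL*Plus; a sketch:
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SESS_UPDATE_SCHED',
    attribute => 'repeat_interval',
    value     => 'FREQ=MINUTELY;INTERVAL=3');
END;
/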
10. Connect as HR schema, and query the SESSION_HISTORY table to verify that the rows are
being added every three minutes now, instead of every three seconds.
12. Alter the LOG_SESS_COUNT_PRGM program to log new information into the logging table.
Modify the code to look like the following text, or use the lab_09_02_12.sql script:
DECLARE
sess_count NUMBER;
back_count NUMBER;
BEGIN
SELECT COUNT(*) INTO sess_count FROM V$SESSION;
SELECT COUNT(*) INTO back_count
FROM V$SESSION
WHERE type = 'BACKGROUND';
INSERT INTO session_history VALUES (systimestamp, sess_count,
back_count);
COMMIT;
END;
13. Run the LOG_SESSIONS_JOB job immediately, and verify that the new information was added
to the HR.SESSION_HISTORY table.
14. Drop the LOG_SESSIONS_JOB job, the LOG_SESS_COUNT_PRGM program, and the schedule
SESS_UPDATE_SCHED. Note: Make sure you do not delete the wrong schedule.
2. Check the database-wide threshold values for the Tablespace Space Usage metric by using the
following command:
SELECT warning_value, critical_value
FROM dba_thresholds
WHERE metrics_name='Tablespace Space Usage'
AND object_name IS NULL;
3. Create a new tablespace called TBSALERT with one 5 MB file called alert1.dbf. Make sure
this tablespace is locally managed and uses Automatic Segment Space Management. Also, do not
make it autoextensible, and do not specify any thresholds for this tablespace. Use Database
Control to create it. If this tablespace already exists in your database, drop it first, including its
files.
4. Using Database Control, change the Tablespace Space Usage thresholds of the TBSALERT
tablespace. Set its warning level to 50 percent, and its critical level to 60 percent.
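Equivalently, the thresholds could be set from SQL*Plus with DBMS_SERVER_ALERT; a sketch:
BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '50',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '60',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'TBSALERT');
END;
/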
6. Select the reason and resolution columns from DBA_ALERT_HISTORY for the
TBSALERT tablespace. How do you explain the result?
8. Check the fullness level of the TBSALERT tablespace using either Database Control or
SQL*Plus. The current level should be around 53%. Wait for approximately 10 minutes, and
check that the warning level is reached for the TBSALERT tablespace.
9. Execute the lab_10_01_09.sql script to add data to TBSALERT. Wait for 10 minutes and
view the critical level in both the database and in Database Control. Verify that TBSALERT
fullness is around 63%.
10. Execute the lab_10_01_10.sql script. This script deletes rows from tables in TBSALERT.
11. Now run the Segment Advisor for the TBSALERT tablespace by using Database Control. Make
sure that you run the Advisor in Comprehensive mode without time limitation. Accept and
implement its recommendations. After the recommendations have been implemented, check that
the fullness level of TBSALERT is below 50%.
12. Wait for approximately 10 more minutes, and check that there are no longer any outstanding
alerts for the TBSALERT tablespace.
14. Reset the thresholds for the TBSALERT tablespace back to the database-wide defaults for the
Tablespace Space Usage metric.
1. Create a database session connected as SYSDBA through SQL*Plus. This session is referred to as
the First session. Using either Database Control or SQL*Plus, create a new undo tablespace
called UT2 with only one 1 MB file.
3. Using a second SQL*Plus session, connect as SYSDBA. This session is referred to as the
Second session. Execute the lab_10_02_03.sql script. If you get an error when executing
the script, switch your undo tablespace back to UNDOTBS1, and start again.
4. In the second session, prepare to execute the lab_10_02_04a.sql script, and in the first
session prepare to execute the lab_10_02_04b.sql script. When done, execute the script in
the second session, and then immediately after, execute the one from the first session. What
happens and why?
5. From the first session, look at the alert history. What do you see? Use Database Control to locate
the warning, and click the corresponding alert link.
6. Use the Undo Advisor to get recommendations to correctly size UT2. Use the recommendation to
correctly size the UT2 tablespace.
8. Switch your undo tablespace back to UNDOTBS1, and drop UT2 including its data files, as well
as TBSALERT.
1. Using Database Control, create a new bigfile tablespace called TBSBF containing one 5 MB file.
2. Using Database Control, try to add a new file to TBSBF. What happens and why?
3. Using Database Control, how can you resize TBSBF to 10 MB? What simplification can you
observe?
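Because a bigfile tablespace contains exactly one data file, it can be resized at the tablespace level
rather than file by file; a sketch:
ALTER TABLESPACE tbsbf RESIZE 10M;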
4. Using Database Control, create table EMP as a copy of HR.EMPLOYEES. Make sure that EMP
resides in the TBSBF tablespace.
5. Explain why the following statement is incorrect. Then fix it, and determine the correct output:
SELECT distinct DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID)
FROM sys.emp;
6. Explain why the following statement is incorrect. Then fix it, and determine the correct output:
SELECT distinct DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID,'BIGFILE')
FROM hr.employees;
8. Execute the following statement with the previously found restricted ROWID. Explain why it is
incorrect, and then fix it:
SELECT first_name
FROM sys.emp
WHERE rowid = (SELECT
DBMS_ROWID.ROWID_TO_EXTENDED('&rid',NULL,NULL,0) FROM dual);
1. In this lab, you will follow Oracle’s best practices for managing database files and recovery-
related files by establishing a database area and a flash recovery area for your database. Use
Database Control to configure OMF to /u01/app/oracle/oradata/orcl. Ensure that
parameter changes are written to the current SPFILE. Turn on ARCHIVELOG mode for your
database. This requires a restart of your instance.
2. Using Database Control, check that a flash recovery area is now automatically in use. Then
make sure that the size of your flash recovery area is set to 3 GB. What happens to Archive
Log Destination 10?
1. Using Database Control, enable fast incremental backups for your database. What is the default
location for the change tracking file? Ensure that your retention policy allows for recovery within
the last 31 days.
2. Query the v$block_change_tracking view to show the status, file name, and size of the
file. You can use the lab_12_02_02.sql script.
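A sketch of such a query (lab_12_02_02.sql is authoritative):
SELECT status, filename, bytes
FROM v$block_change_tracking;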
1. Using Database Control, back up the Oracle database using the Oracle Suggested Strategy. View
the backup logs as they are generated while the backup progresses. The log generation is
dynamic, so refresh your browser to view more output.
1. Run the lab_12_04_01.sql script to create a new user called HR1, using the EXAMPLE
tablespace to store the created tables. Using Database Control, confirm the existence of the
following tables:
• BR_JOB_HISTORY
• BR_EMPLOYEES
• BR_JOBS
• BR_DEPARTMENTS
• BR_LOCATIONS
• BR_COUNTRIES
• BR_REGIONS
2. Run the Oracle Suggested Strategy again by creating a new backup job. Follow the same steps as
in Practice 12-3.
4. After the backup job has completed, run the lab_12_04_04.sql script to view the formatted
output of the number of blocks actually backed up.
2. Run the Oracle Suggested Strategy again by creating a new backup job. Follow the same steps as
in Practice 12-3.
3. View the log of the RMAN backup job. You can see that RMAN merges the previous
incremental backup into the image copies.
2. Using Database Control, reduce the size of the flash recovery area so that a warning is issued on
the next backup.
3. Run the Oracle Suggested Strategy again by creating a new backup job. Follow the same steps as
in Practice 12-3.
4. When there is no more space available in the flash recovery area, the following actions occur:
• The RMAN backup job fails with an error because there is no more space for the backup file.
View the EM job output (if possible).
• An error is written to the alert.log.
• A row is inserted into the DBA_OUTSTANDING_ALERTS view.
Using Database Control, look at the latest entries in the alert.log.
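You can also confirm the alert from SQL*Plus; a sketch:
SELECT reason, suggested_action
FROM dba_outstanding_alerts;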
1. In this exercise, you will simulate a channel failover when using multiple channels and backing
up to tape. From a terminal window, use mkdir to create a temporary directory at
/home/oracle/tape for the tape device to act as the pseudo-SBT device type. Set the RMAN
channel configuration by running the lab_12_07_01.sql script; a sketch of such a
configuration follows.
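The sketch below uses Oracle’s disk-based SBT test library to simulate a tape device; the actual
commands are in lab_12_07_01.sql:
CONFIGURE CHANNEL DEVICE TYPE sbt
  PARMS 'SBT_LIBRARY=oracle.disksbt, ENV=(BACKUP_DIR=/home/oracle/tape)';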
1. Create a new locally managed tablespace called TBSFD containing only one 500 KB file. Also,
TBSFD should use Automatic Segment Space Management. Use either Database Control or
command line to create it. If this tablespace already exists in your database, drop it first,
including its files.
2. Create a new user called FD, identified by FD, having TBSFD as its default tablespace and TEMP
as its temporary tablespace. Make sure that user FD has the following roles granted: CONNECT,
RESOURCE, and DBA. If this user already exists on your system, drop it first.
3. Connect as user FD and execute the lab_13_01_03.sql script through SQL*Plus. This script
creates:
• Table EMP as a copy of HR.EMPLOYEES
• Table DEPT as a copy of HR.DEPARTMENTS
• The NOTHING trigger on EMP
• The EMP primary key
• The DEPT primary key
• The EMPFK constraint on EMP that references the DEPT primary key
• The EMPFKINDX index on EMPFK
• The EMPSALCONS check constraint on EMP
• The EMPIDMGRFK self-referencing constraint on EMP
• A materialized view log on EMP
4. Use Database Control to determine the available free space remaining in the TBSFD tablespace.
Connected as FD in SQL*Plus, list the segments and constraints created by user FD. In the report,
also include the size of each segment.
5. Using Database Control, drop the EMP table, and look at the FD user’s recycle bin. What do you
observe?
6. Connect as user FD through SQL*Plus and determine the size of each free extent in the TBSFD
tablespace. What is your conclusion?
7. Although the EMP table has been dropped, it is still possible to query its content as long as it is
visible from the recycle bin. Query the content of the dropped EMP table using Database Control.
8. Connect as user FD through SQL*Plus, and list all the objects and constraints that belong to user
FD. What are your conclusions?
10. Connect as user FD through SQL*Plus, query the EMP table, and list the available free space in
tablespace TBSFD. What are your conclusions?
12. Using Database Control, drop the DEPT2 table and purge the corresponding entry in the FD
recycle bin.
13. Connected as SYSDBA through SQL*Plus, execute the lab_13_01_13.sql script to clean up
the environment.
Unless specified otherwise, you should be logging in as SYSDBA through either SQL*Plus or
Database Control.
1. Connected as SYSDBA through SQL*Plus, execute the lab_13_02_01.sql script. This script
creates a new user called JFV identified by JFV, and also creates a new tablespace called
JFVTBS.
2. Using SQL*Plus, determine the list of processes associated with your instance. Then check that your
database is in NOARCHIVELOG mode, and that it does not use flashback logging. List the contents
of your flash recovery area.
3. Using Database Control, enable both ARCHIVELOG mode and flashback logging.
4. Using SQL*Plus, determine the list of processes associated with your instance. Then check that
your database is in ARCHIVELOG mode, and that it uses flashback logging. List the contents of
your flash recovery area. What are your conclusions?
5. Connected as user JFV under SQL*Plus, execute the lab_13_02_05.sql script. This script
creates a new table called EMP. It also selects the sum of all the salaries of the EMP table. Then
the script returns the current SCN of your database, and it looks at the contents of
V$UNDOSTAT, V$FLASHBACK_DATABASE_LOG, and V$FLASHBACK_DATABASE_STAT.
Write down the information provided by lab_13_02_05.sql.
6. Connected as user JFV under SQL*Plus, repeat the execution of the lab_13_02_06.sql
script three times. What are your conclusions?
7. Connected as user JFV under SQL*Plus, create a new tablespace called JFVTBS2. This
tablespace should have only one 500 KB data file. When done, disable flashback logging on
JFVTBS2. Then check that flashback logging is not enabled on JFVTBS2.
8. Connected as user JFV under SQL*Plus, execute the lab_13_02_08.sql script. This script
creates a new table called EMP2 inside tablespace JFVTBS2. The script also returns the
flashback statistics and then executes a long running update of the EMP2 table. In the end, the
script shows you again the flashback statistics. What are your conclusions?
9. Connected as user JFV under SQL*Plus, execute the lab_13_02_09.sql script. Write down
the information returned by this script.
10. Connected as SYSDBA under SQL*Plus, try to recover your database to the SCN calculated
during step 9. What happens and why?
11. Using SQL*Plus, fix the problem, and redo step 10. When done, open your database in READ
ONLY mode, and check the result of your flashback database operation. Then shut down your
instance and start it up again in MOUNT mode.
12. Connected as SYSDBA under SQL*Plus, flashback your database to the SCN returned in step 5.
Then open your database in READ WRITE mode, and check your database. What is your
conclusion?
1. Execute the lab_14_01_01.sql script to create a new table that will be used to generate a
workload on your instance.
2. Use Database Control to shut down your instance, and start it up again using the
init_lfszadv.ora initialization parameter file located in your labs directory. Before doing
this, make sure that the init_lfszadv.ora parameter file can be used to start up your
instance.
3. Execute the lab_14_01_03.sql script. This script updates the previously defined
T_LFSZADV table. This is done to generate a workload on your instance.
4. When done, determine the size advice for your redo log groups using Database Control.
5. Implement the recommendation by adding two new redo log groups of 50 MB, and by dropping
the existing redo log groups.
8. To clean up the environment, log out from any session that you created so far and connect as
SYSDBA through SQL*Plus. Then execute the lab_14_01_08.sql script.
1. Use DBCA to create the ASM instance on your machine. During the ASM instance creation,
DBCA asks you whether you want to change the default values for the ASM initialization
parameters. Make sure that the disk discovery string is set to /u02/asmdisks/*. Then DBCA
asks you to create new disk groups. Create one disk group called DGROUP1 that uses the
following four ASM disks:
• /u02/asmdisks/disk0
• /u02/asmdisks/disk1
• /u02/asmdisks/disk2
• /u02/asmdisks/disk3
Make sure to specify that DGROUP1 uses external redundancy; the equivalent SQL is sketched
below. After the ASM instance and the disk group are created, you can exit DBCA. Do not create
a database.
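A sketch of the equivalent CREATE DISKGROUP statement, run against the ASM instance:
CREATE DISKGROUP dgroup1 EXTERNAL REDUNDANCY
  DISK '/u02/asmdisks/disk0', '/u02/asmdisks/disk1',
       '/u02/asmdisks/disk2', '/u02/asmdisks/disk3';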
1. Connected as user oracle in your terminal emulator window, start your ASM instance and list
the processes associated with it. Then determine the characteristics of:
• The mounted disk groups
• The associated ASM disks
• The associated ASM files
2. Connected as SYSDBA under SQL*Plus in another terminal emulator window, determine the list
of disk groups that are visible from your database instance. Then list the processes associated with
your database instance. When done, create a new tablespace called TBSASM that is stored inside
the ASM disk group DGROUP1 and that has only one 200 MB data file. Then determine
the list of processes associated with your database instance again, and list the data files associated
with your database. What do you observe?
3. Back on your ASM instance, list all the ASM files that were created so far. Then, look at the
ASM disk activity and free space. Execute the lab_15_02_03.sql script to simulate the
addition of a new disk to your system. Again, look at the ASM disk activity and free space.
When done, add the new disk /u02/asmdisks/disk4 to DGROUP1. Look at the ongoing
ASM operations until there are no outstanding operations. Then look again at the ASM disk activity and
free space. What are your conclusions?
4. On your database instance, execute the lab_15_02_04.sql script. This script creates and
populates a new table called T, which is stored in TBSASM. When it completes, turn on timing
statistics in your SQL*Plus session (SET TIMING ON) and execute the following query:
SELECT count(distinct -
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID,'SMALLFILE'))
FROM t;
5. From your ASM instance, drop the ASM disk DGROUP1_0004 from DGROUP1.
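A sketch of the command, issued on the ASM instance:
ALTER DISKGROUP dgroup1 DROP DISK dgroup1_0004;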
7. Back in your ASM instance, check the impact on the ASM disk activity and free space. What are
your conclusions?
1. Connected as SYSDBA under SQL*Plus in your database instance, create a new tablespace called
TBSASMMIG. This tablespace should contain only one 10 MB file stored in your file system (not
using ASM). Create a table called T2 stored in TBSASMMIG. Insert one row inside T2.
2. From your database instance, migrate TBSASMMIG to ASM storage. When done, check that the
migration was successful.
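One way to perform such a migration is with RMAN image copies; a hedged sketch (your course
materials may prescribe different steps):
RMAN> SQL "ALTER TABLESPACE tbsasmmig OFFLINE";
RMAN> BACKUP AS COPY TABLESPACE tbsasmmig FORMAT '+DGROUP1';
RMAN> SWITCH TABLESPACE tbsasmmig TO COPY;
RMAN> SQL "ALTER TABLESPACE tbsasmmig ONLINE";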
3. From your ASM instance, check the number of files created in your ASM disks.
4. From your database instance, clean up your environment by dropping tablespace TBSASMMIG,
including its contents and data file. Do the same with tablespace TBSASM. Also, remove the file
system file that was originally created to store TBSASMMIG.
Unless specified otherwise, you should be logging in as SYSDBA either through Database Control or
SQL*Plus.
2. Connect as user VPD through SQL*Plus and create a new package called
APP_SECURITY_CONTEXT. This package should contain only one procedure, called
SET_EMPNO. The goal of the SET_EMPNO procedure is to assign the employee identifier
corresponding to the connected user to the EMPNO attribute of the VPD_CONTEXT context. Use
the DBMS_SESSION.SET_CONTEXT procedure to set the EMPNO attribute, and the
SYS_CONTEXT('USERENV','SESSION_USER') function to determine the name of the
connected user. A minimal sketch follows.
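The sketch below assumes (hypothetically) that the database user name matches the EMAIL column
of VPD.EMPLOYEES; adapt the mapping to your environment:
CREATE OR REPLACE PACKAGE app_security_context AS
  PROCEDURE set_empno;
END app_security_context;
/
CREATE OR REPLACE PACKAGE BODY app_security_context AS
  PROCEDURE set_empno IS
    v_empno vpd.employees.employee_id%TYPE;
  BEGIN
    -- hypothetical mapping: session user name equals the EMAIL value
    SELECT employee_id INTO v_empno
      FROM vpd.employees
     WHERE email = SYS_CONTEXT('USERENV', 'SESSION_USER');
    DBMS_SESSION.SET_CONTEXT('vpd_context', 'empno', v_empno);
  END set_empno;
END app_security_context;
/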
5. Connect as user VPD through SQL*Plus and execute the lab_17_01_05.sql script. This
script creates a new package called VPD_SECURITY. This package contains one function called
EMPNO_SEC. The goal of this function is to return the VPD predicate used by your policy. In this
case the returned predicate is:
employee_id = SYS_CONTEXT('vpd_context', 'empno').
6. Connect as VPD through SQL*Plus and create a new policy called VPD_POLICY. This policy
should have the following characteristics:
• Is attached to the VPD.EMPLOYEES table
• Uses the VPD.VPD_SECURITY.EMPNO_SEC function
• Is applied only for SELECT statements
• Is a dynamic policy
• Specifies the SALARY and COMMISSION_PCT columns as the list of relevant columns
You can use the lab_17_01_06.sql script.
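The script itself is not reproduced here; a sketch of the corresponding DBMS_RLS call:
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema     => 'VPD',
    object_name       => 'EMPLOYEES',
    policy_name       => 'VPD_POLICY',
    function_schema   => 'VPD',
    policy_function   => 'VPD_SECURITY.EMPNO_SEC',
    statement_types   => 'SELECT',
    policy_type       => DBMS_RLS.DYNAMIC,
    sec_relevant_cols => 'SALARY,COMMISSION_PCT');
END;
/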
9. Connect as user VPD and drop the VPD_POLICY policy, and re-create it with the exact same
characteristics except that it should now be a static policy instead of being dynamic. When done,
flush the shared pool of your instance. You can use the lab_17_01_09.sql script.
10. Connect as user JF through SQL*Plus and execute the following statements:
select first_name from vpd.employees;
select first_name from vpd.employees;
select last_name from vpd.employees;
select salary from vpd.employees;
select commission_pct from vpd.employees;
What do you observe, and what are your conclusions?
11. Connect as SYSDBA and determine which statements are using the defined policy on your
instance. What are your conclusions?
1. Connect as SYSDBA and write a query with a single WHERE clause condition (using the
REGEXP_LIKE function) that asks for a search-string and then displays the view
definitions of all views with the name [DBA|USER|ALL]_search-string. Make sure your
query is case-insensitive. You can use lab_18_01_01.sql; a sketch follows.
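One way to write it (the &search_string substitution variable is prompted for at run time):
SELECT view_name, text
  FROM dba_views
 WHERE REGEXP_LIKE(view_name, '^(DBA|USER|ALL)_&search_string$', 'i');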
a. Use the REGEXP_INSTR function to alter this query to return the position of the fifth
word in this banner text. You can use lab_18_01_02a.sql.
b. Use the REGEXP_INSTR function to return the position of the second word starting with
a lowercase or uppercase “e” with a length of at least seven characters. You can use
lab_18_01_02b.sql.
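A sketch for part (b), assuming (hypothetically) that the banner text comes from V$VERSION; the
pattern is a simplification and can also match inside a word:
SELECT banner,
       REGEXP_INSTR(banner, 'e[[:alpha:]]{6,}', 1, 2, 0, 'i') AS pos
  FROM v$version;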
1. Connect to the HR schema, and create a table called NAMES, with first names using the following
statements:
create table names as
select first_name
from employees
where rownum <= 30;
update names
set first_name = lower(first_name)
where rownum <= 15;
3. By default, uppercase characters sort before lowercase characters. Using the ALTER SESSION
command, change NLS_SORT for your session to use case-insensitive binary sorting and repeat
the query from the previous step.
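A sketch (the step 2 query is assumed to order by FIRST_NAME):
ALTER SESSION SET NLS_SORT = BINARY_CI;
SELECT first_name FROM names ORDER BY first_name;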
4. Drop the NAMES table, and reset your session to use default binary sorting.
1. Start two sessions, one connected as SYSDBA and one connected as SH.
2. From the SYSDBA session, determine the session ID (sid) and serial number (serial#) from
v$session for the SH user, and then describe the DBMS_MONITOR package. Then, from the
SYSDBA session, enable tracing using the sid and serial# values for the other session,
including the waits and bind information, with the following command:
execute dbms_monitor.session_trace_enable ( -
session_id => <sid> , -
serial_num => <serial#> , -
waits => true , -
binds => true ) ;
3. From the SH session, execute the lab_18_03_03.sql script, and then exit your session.
4. From the remaining SYSDBA session, determine your user_dump_dest location, locate the
trace file, and view the contents.
For most of these practice exercises you will be using Enterprise Manager Database Control.
In this exercise you will take a full offline backup of your database, and start your database using
Database Control. You will then perform some navigational exercises to familiarize yourself
with Enterprise Manager Database Control.
sqlplus /nolog <<END
set echo on
connect / as sysdba
shutdown immediate;
exit;
END
cp $ORACLE_BASE/oradata/orcl/* $HOME/DONTTOUCH
cp $ORACLE_HOME/dbs/spfile*.ora $HOME/DONTTOUCH
2. Determine your host’s IP address from the /etc/hosts file on your PC. Open your
browser and enter the URL http://<IP address>:5500/em to initiate Enterprise Manager
Database Control. On the status page, click Startup.
more /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
139.185.35.115 edrsr15p1.us.oracle.com edrsr15p1
3. Enter the host credentials (oracle/oracle) and the database credentials (sys/oracle), and click OK.
Click Yes to confirm starting the database in open mode.
Enter the username SYS with a password of oracle, and connect as SYSDBA.
a. On the Startup/Shutdown: Specify Host and Target Database Credentials page, enter oracle / oracle
for the target database machine credentials, and sys / oracle for the target database credentials.
b. Click OK when done.
c. In the Startup/Shutdown window, click Yes to confirm starting the database in open mode.
d. Enter the username as SYS with a password of oracle, connect as SYSDBA, and click Login.
a. From the Database Control home page: Maintenance > Configure Recovery Settings
b. Scroll down to the Flash Recovery Area region.
a. From the Database Control home page: Administration > All Initialization Parameters
b. Use the scroll bar or the Filter field to search on a name or partial name.
a. From the Database Control home page: Administration > Configuration Management > Database
Usage Statistics
b. Use the Previous and Next links to scroll through the features.
For this practice, you should log in as system, with a password of oracle, either through
Database Control or SQL*Plus.
1. Using Database Control, create a new user called DP identified by DP. The user DP should have
EXAMPLE as its default tablespace, and TEMP as its temporary tablespace. Make sure that you
grant the CONNECT, RESOURCE, and DBA roles to user DP.
2. You decide that you need to export the SALES, PRODUCTS, and COSTS tables from the SH
schema. However, you want to know which other tables these three depend on.
Determine the complete list of tables that need to be exported.
a. From the Database Control home page, click the Administration tab.
b. On the Administration page, click the Tables link in the Schema region.
c. On the Tables page, specify SH in the Schema field in the Search region. Then click the Go button.
d. Select the SALES table from the Results region, and select Show Dependencies from the Actions drop-
down list. Then click the Go button in the Results region.
e. On the Show Dependencies page for the SALES table, the Dependencies tab lists the tables on which
SALES depends. From that list you can determine that SALES depends on CHANNELS, CUSTOMERS,
PRODUCTS, PROMOTIONS, and TIMES.
f. Similarly, you can determine that PRODUCTS does not depend on any other table, and that COSTS
depends on CHANNELS, PRODUCTS, PROMOTIONS, TIMES, and COUNTRIES.
g. Now, by using the same procedure, you can determine that CHANNELS, PROMOTIONS, and TIMES do
not depend on other tables. However, CUSTOMERS depends on COUNTRIES.
h. So, the complete list of tables that need to be exported is: SALES, PRODUCTS, COSTS, CHANNELS,
PROMOTIONS, TIMES, CUSTOMERS, and COUNTRIES.
a. From the Database Control home page, click the Maintenance tab.
b. On the Maintenance page, click the Export to Files link in the Utilities region.
c. On the Export: Export Type page, select the Tables option button, and specify the username and
password for your host credentials. You should use the ones corresponding to your Oracle account. Make
sure that the Save as Preferred Credential checkbox is selected. When done, click the Continue button.
d. On the Export: Tables page click the Add button.
e. On the Export: Add Tables page, enter SH in the Schema field, and make sure that the Tables option
button is selected. Then click the Go button.
f. In the Search Results region, select the following tables: SALES, PRODUCTS, COSTS, CHANNELS,
PROMOTIONS, TIMES, CUSTOMERS, and COUNTRIES. (You may have to click the Next link in the
Search Results region to see all of the above tables.)
g. When done, click the Select button.
h. Back on the Export: Tables page, click the Next button.
i. On the Export: Options page, set the Maximum Number of Threads in Export Job field to 1.
j. In the Optional File region, make sure that the Generate Log File option is selected. Also, click the
Create Directory Object button to create the DPDIR1 directory.
k. On the Export: Create Directory page, specify the corresponding Name and Operating System
Directory fields. Then click the OK button.
l. Back on the Export: Options page, select DPDIR1 from the Directory Object drop-down list.
m. When done, click the Show Advanced Options link.
n. In the Query region of the Export: Options page, click the Add button.
o. On the Export Options: Add Query page, enter SH.CHANNELS in the Table Name field.
p. In the Predicate Clause field, enter WHERE channel_id<>5.
q. When done, click the OK button.
r. After you are returned to the Export: Options page, click the Next button.
s. On the Export: Files page, make sure that you select DPDIR1 in the Directory Object drop-down list.
Then click the Next button.
t. On the Export: Schedule page, make sure the Immediately option button is selected. Then click the
Next button.
u. On the Export: Review page, retrieve the Data Pump job name.
5. You want to see and change some of the Data Pump job characteristics. How can you do this?
a. You must suspend the Data Pump job execution. When you are on the Status page, click the View Job
button.
b. This brings you to the Execution page from where you must click the Monitor Data Pump Job button.
c. On the Monitor Data Pump Job page, you can see the objects currently being exported, and you can add
new files to your dump set. You can also change the Data Pump job parallelism degree.
d. On the Monitor Data Pump Job page, click the Change Job State button.
e. This brings you to the Change Data Pump Job State page. On this page, select the Suspend option
button, and click the OK button.
f. You should now see the Change Job Status Success message. Click the OK button to return to the
Execution page.
6. Now that the Data Pump job execution is suspended, connect as user DP through SQL*Plus.
What table has been created?
Answer: There is one table that was created by the Data Pump job. This is the Master Table
associated with the job.
connect dp/dp
select table_name from user_tables;

TABLE_NAME
------------------------------
EXPORT000005

SQL>
Answer: There are currently no Data Pump background processes running. This is because the
current export Data Pump job has been suspended.
8. How can you see the amount of work performed so far by your Data Pump job?
a. Back on the Execution page of your job, click the Export link in the Logs region.
b. This brings you to the Step: Export page where you can see the log output. It should look similar to the
following:
9. Connect as the oracle user from your terminal emulator window and attach to the existing
export Data Pump job. Connect as the oracle user from a second terminal emulator window
and determine the list of processes associated with your instance. What is your conclusion?
Answer: You have a new DM00 process started. This process corresponds to the Master process
of your Data Pump job.
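The attach itself is done with the expdp client; a sketch, using the job name retrieved in step 6:
$ expdp dp/dp ATTACH=EXPORT000005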
Job: EXPORT000005
Owner: DP
Operation: EXPORT
Creator Privs: FALSE
GUID: CDF5ACAB8ADA03C2E030007F0100562C
Start Time: Monday, 08 December, 2003 2:52
Mode: TABLE
Instance: orcl
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name Parameter Value:
DATA_ACCESS_METHOD AUTOMATIC
ESTIMATE BLOCKS
INCLUDE_METADATA 1
LOG_FILE_DIRECTORY DPDIR1
Worker 1 Status:
State: UNDEFINED
Object Schema: SH
Object Name: COSTS
Object Type: TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
Completed Objects: 62
Total Objects: 62
Export>
Answer: Because the Data Pump job is not finished yet, you can still see the Master process
DM00, and also the four worker processes called DW0n.
Export> parallel = 4
Export>
Export> status
Job: EXPORT000062
Operation: EXPORT
Mode: TABLE
State: IDLING
Bytes Processed: 0
Current Parallelism: 4
Job Error Count: 0
Dump File: /home/oracle/EXPDAT%u.DMP
Dump File: /home/oracle/EXPDAT01.DMP
bytes written: 4,096
Worker 1 Status:
State: UNDEFINED
Object Schema: SH
Object Name: COSTS
Object Type: TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
Completed Objects: 62
Export> start_job
Export> continue_client
Job EXPORT000062 has been reopened at Monday, 08 December, 2003 4:22
Restarting "DP"."EXPORT000062":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
. estimated "SH"."CUSTOMERS" 12 MB
. estimated "SH"."SALES":"SALES_Q4_2001" 2 MB
. estimated "SH"."SALES":"SALES_Q1_1999" 1024 KB
. estimated "SH"."SALES":"SALES_Q3_2001" 1024 KB
. estimated "SH"."SALES":"SALES_Q1_2000" 960 KB
.
.output truncated
.
. estimated "SH"."COSTS":"COSTS_Q3_1999" 0 KB
. estimated "SH"."COSTS":"COSTS_Q3_2000" 0 KB
. estimated "SH"."COSTS":"COSTS_Q3_2001" 0 KB
. estimated "SH"."COSTS":"COSTS_Q3_2002" 0 KB
. estimated "SH"."COSTS":"COSTS_Q3_2003" 0 KB
. estimated "SH"."COSTS":"COSTS_Q4_1998" 0 KB
. estimated "SH"."SALES":"SALES_Q4_2002" 0 KB
. estimated "SH"."SALES":"SALES_Q4_2003" 0 KB
. . exported "SH"."CUSTOMERS" 9.850 MB 55500
rows
11. Remove the Database Control job run from the repository.
a. Back on the Execution page of your job, click the Delete Run button.
b. On the Confirmation page, click the Yes button.
12. Now that you successfully exported your tables from the SH user, import only the SALES and
PRODUCTS tables back into the DP schema using the Data Pump Import Wizard.
a. From the Database Control home page, click the Maintenance link.
b. On the Maintenance page, click the Import from Files link.
c. On the Import: Files page, make sure that the Database Version of Files to Import field is set to 10g or
later, and click the Go button.
d. In the Files region, select the DPDIR1 directory from where the Data Pump Import job can retrieve the
previously generated Dump File Set. Also, make sure that the File Name field is set correctly.
e. In the Import Type region, select the Tables option button. Make sure that the host credentials are
correct.
f. When done, click the Continue button.
g. Data Pump starts reading the Dump File set to extract the metadata information.
h. At this stage, you can look at the objects owned by the DP user under your SQL*Plus session. You should
see that the Master Table has been resurrected from the Dump File Set. If you want to do so, execute
step 6 of this lab again.
i. After the metadata is successfully extracted from the Dump File Set, you should see the Import Read
Succeed message on the Import: Tables page.
j. Now, click the Add button on this page.
k. On the Import: Add Tables page, enter SH in the Schema field in the Search region. Then click the Go
button.
l. You should now see the list of tables that you exported previously.
m. Select the SALES and PRODUCTS tables from the Search Results list.
n. When done, click the Select button.
o. Back on the Import: Tables page, click the Next button.
p. On the Import: Re-Mapping page, click the Add Another Row button in the Re-Map Schemas region.
q. When done, make sure that the Source Schema is set to SH and that the Destination Schema is set to
DP.
r. Then click the Next button.
s. On the Import: Options page, make sure that the Directory Object field is set to DPDIR1, and then click
the Next button.
t. On the Import: Schedule page, make sure the Immediately option button is selected, and then click the
Next button.
13. To clean up your environment, you should delete your Database Control job run, and drop the
user DP from your database.
a. From the Execution: orcl page, click the Delete Run button.
b. On the Confirmation page, click the Yes button.
c. Log out from Database Control, and log in again as SYSDBA.
d. On the Database Control home page, click the Administration tab.
e. On the Administration tab, click the Users link.
f. On the Users page, select user DP in the Results table, and then click the Delete button.
g. On the Confirmation page, make sure that you no longer have any DP connections, and then click the
Yes button.
1. Use Database Control to create a new tablespace called TBSADDM. This tablespace should have
only one 50 MB file and must be locally managed. Also, make sure that TBSADDM does not use
automatic segment space management.
2. Using Database Control, create a new user called ADDM identified by ADDM. Make sure that the
ADDM user has TBSADDM as its default tablespace, and TEMP as its temporary tablespace. When
done, grant the following roles to the ADDM user: CONNECT, RESOURCE, DBA.
connect addm/addm
exec DBMS_STATS.GATHER_TABLE_STATS(-
ownname=>'ADDM', tabname=>'ADDM',-
estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE);
exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
$ . ./lab_04_01_04.sh
[1] 14195
[2] 14197
[3] 14201
[4] 14203
[5] 14205
[6] 14210
[7] 14212
[8] 14214
$
5. From the Database Control home page, go to the Performance page. If this is the first time you
have visited the Performance page, you need to click Accept in the Adobe license agreement pop-up
screen. On the Performance page, make sure that the View Data field is set to Real Time: 15
Seconds Refresh. After a while, you should see a spike on the Sessions: Waiting and Working
graph. After the spike has finished, execute the lab_04_01_05.sql script. This script forces
the creation of a new snapshot. Looking at the graph, you can already determine that this instance
is suffering from concurrency problems.
Note: Depending on when you run the workload, you may see differences between your graph and the one
provided in this solution.
connect addm/addm
exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
exec DBMS_STATS.GATHER_TABLE_STATS(-
ownname=>'ADDM', tabname=>'ADDM',-
estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE);
a. From the Database Control home page, click the Advisor Central link under the Related Links section.
b. On the Advisor Central page, select ADDM in the Advisory Type drop-down list, and select Last 24
Hours in the Advisor Runs drop-down list. When done, click the Go button.
c. Select the latest ADDM task completed by the ADDM user. When done, click the View Result button.
d. This brings you to the Automatic Database Diagnostic Monitor (ADDM) page where you can see the
results of the Performance Analysis.
Note: Depending on when you run the workload, you may see differences between your findings and the
ones provided in this solution.
e. Looking at the Performance Analysis region, you can see that the first finding has a 100% impact on the
system, so your first instinct is to look at the corresponding recommendation. Click the SQL statements
consuming significant database time were found link to investigate further. This brings you to the
Performance Finding Details page where ADDM identifies the high-load SQL statement.
f. Click the Run Advisor Now button to tune this statement. When the analysis is done, you are directed to
the Recommendations for SQL ID: dadywybcgph5f page. Unfortunately, there is no possible
recommendation for this INSERT statement.
g. Therefore, the problem lies further down the stack. Return to the Automatic Database Diagnostic
Monitor (ADDM) page to investigate further.
h. The second recommendation indicates a lack of CPU on the system. Because you cannot change this right
now, look at the third recommendation by clicking the Read and write contention on database blocks
was consuming significant database time link. This recommendation is related to the schema category.
7. To fix the problem, create a new tablespace called TBSADDM2, and execute the
lab_04_01_07.sql script from your labs directory. This script drops the ADDM table, and re-
creates it in the new tablespace. This script also gathers statistics on the table and takes a new
snapshot.
a. Therefore, to implement the recommendation, you must re-create the objects. First, you need to create a
new tablespace that uses the Automatic Segment Space Management feature. Return to the Database
Control home page, and click the Administration tab.
b. Click the Tablespaces link, and click the Create button. Specify the name of the new tablespace in the
Name field. You can call this new tablespace TBSADDM2. Click the Add button to add a file to this
tablespace. You can call this file addm2_1.dbf.
c. On the Create Tablespace: Add Datafile page, specify the name of the new file, and make sure that its
size is set to 50MB. When done, click the Continue button.
d. Back on the Create Tablespace page, click the Storage tab, and make sure that Automatic is set in the
Segment Space Management region. Then click the OK button to create the new tablespace.
e. Now, you need to re-create the ADDM table in the new tablespace:
@lab_04_01_07.sql
8. Connect as user oracle in your terminal emulator and execute the lab_04_01_04.sh
script from your labs directory again.
$ . ./lab_04_01_04.sh
[1] 14195
[2] 14197
[3] 14201
[4] 14203
[5] 14205
[6] 14210
[7] 14212
[8] 14214
$
Note: Depending on when you run the workload, you may see differences between your graph and the one
provided in this solution.
connect addm/addm
exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
exec DBMS_STATS.GATHER_TABLE_STATS(-
ownname=>'ADDM', tabname=>'ADDM',-
estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE);
a. Return to the Database Control home page. Because the ADDM data is not refreshed very frequently on
the console, you may not see the latest ADDM result in the Diagnostic Summary region. Retrieve the
latest ADDM findings, and verify that the previously analyzed problem was fixed.
b. From the Database Control home page, click the Advisor Central link under the Related Links section.
c. On the Advisor Central page, select ADDM in the Advisory Type drop-down list, and select Last 24
Hours in the Advisor Runs drop-down list. When done, click the Go button.
d. Then select the latest ADDM task COMPLETED by the ADDM user. When done, click the View Result
button.
e. This brings you to the Automatic Database Diagnostic Monitor (ADDM) page from where you can see
the results of the Performance Analysis.
Note: Depending on when you run the workload, you may see differences between your findings and the
ones provided in this solution.
connect / as sysdba
1. Use Database Control to shut down your instance, and start it up again using the
init_sgalab.ora initialization parameter file located in your labs directory. Before doing
this, make sure that the init_sgalab.ora parameter file can be used to start up your
instance.
a. From the Database Control home page, click the Shutdown button.
b. On the Startup/Shutdown: Specify Host and Target Database Credentials page, specify the needed
credentials and make sure that you save them to disk.
c. Click the OK button.
d. On the Startup/Shutdown: Confirmation page click the Yes button.
e. After a while, click the Refresh button on the Startup/Shutdown: Activity Information page.
f. On the Database: orcl page, click the Startup button.
g. If necessary, specify all the needed credentials on the Startup/Shutdown: Specify Host and Target
Database Credentials page, and then click the OK button.
h. On the Startup/Shutdown: Confirmation page, click the Advanced Options button.
i. On the Startup/Shutdown: Advanced Startup Options page, make sure that you select the Specify
parameter file (pfile) on the database server machine option button, and specify the location and name
of the parameter file you want to use. Then click the OK button.
j. After you are returned to the Startup/Shutdown: Confirmation page, click the Yes button.
k. On the Login to Database: orcl page, specify your SYSDBA credentials, and click the Login button.
2. Connect as SYSDBA through SQL*Plus and execute the lab_04_02_02.sql script. This
script creates a new tablespace and a new table, and populates the table.
connect / as sysdba
begin
for i in 1..100000 loop
insert into sgalab values (i, i);
end loop;
end;
/
commit;
3. Use Database Control to check the size of the various SGA buffers of your instance.
4. Connect as SYSDBA through SQL*Plus and execute the lab_04_02_04.sql script. This
script executes a parallel query on the previously created table. What happens and why?
Answer: Because your large pool buffer is too small, and because Automatic Shared Memory
Management is not enabled, you get an ORA-04031 error.
5. Using Database Control only, how can you fix this problem? Implement your solution.
6. Connect as SYSDBA through SQL*Plus and determine the effects of the previous step on the
memory buffers. What are your conclusions?
Answer: On the server side, the SGA_TARGET initialization parameter was dynamically
changed to a non-zero value to enable the Automatic Shared Memory Management feature. The
sizes of the automatically tuned buffers are still the same, but their corresponding values in the
V$PARAMETER view are modified to their minimum value. This is done automatically by
Database Control to allow those buffers to shrink. You can verify this by looking at the ALTER
SYSTEM commands that were logged in the alert.log file.
select component,current_size,min_size,granule_size
from v$sga_dynamic_components
where component in ('shared pool','large pool',
'java pool','DEFAULT buffer cache');
7. Connect as SYSDBA through SQL*Plus and execute the lab_04_02_04.sql script again.
This script executes a parallel query on the previously created table. Using Database Control, and
while the script is running, verify that your solution is working. What happens and why?
Answer: While the script is running, you can click the Refresh button of the Memory
Parameters page. You should see that more memory has been dynamically allocated to the
large pool to satisfy the parallel query execution. You should not get any error, and the script
should complete. After the query completes, no more memory is allocated to the large pool,
and after a while most of the memory that was allocated to the large pool should automatically
return to the buffer cache memory pool.
connect / as sysdba
8. If you use SQL*Plus instead of Database Control, what commands do you execute to enable the
Automatic Shared Memory Management feature after you started your instance using the
init_sgalab.ora file? Explain your decision. It is assumed that you want to have a
maximum of 256MB of SGA memory allocated.
connect / as sysdba
shutdown immediate;
startup;
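The printed answer is truncated here. As a sketch: because SGA_TARGET is dynamic, enabling
Automatic Shared Memory Management amounts to setting it to a nonzero value (this assumes
SGA_MAX_SIZE already permits 256 MB):
ALTER SYSTEM SET SGA_TARGET = 256M;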
1. Connect as SYSDBA through SQL*Plus and execute the lab_05_01_01.sql script. This
script adds the new subscriber ALERT_USR1 to the internal ALERT_QUE queue. It then grants
user SYSTEM the right to dequeue from the ALERT_QUE. Then, the script creates a special
procedure that is used by user SYSTEM to dequeue alert information from the ALERT_QUE.
connect / as sysdba
exec DBMS_AQADM.ADD_SUBSCRIBER('SYS.ALERT_QUE',-
AQ$_AGENT('ALERT_USR1','',0));
-- exec DBMS_AQADM.CREATE_AQ_AGENT(agent_name=>'ALERT_USR1');
exec DBMS_AQADM.ENABLE_DB_ACCESS(agent_name=>'ALERT_USR1',-
db_username=>'SYSTEM');
exec DBMS_AQADM.GRANT_QUEUE_PRIVILEGE(privilege=>'DEQUEUE',-
queue_name=>'ALERT_QUE',-
grantee=>'SYSTEM',grant_option=>FALSE);
-- DECLARE
-- reginfo aq$_reg_info;
-- reginfolist aq$_reg_info_list;
-- BEGIN
-- reginfo := AQ$_REG_INFO('ALERT_QUE:ALERT_USR1',
-- DBMS_AQ.NAMESPACE_AQ, 'mailto://yourname@yourcompany.com',NULL);
-- -- Create the registration info list
-- reginfolist := AQ$_REG_INFO_LIST(reginfo);
-- -- Register the registration info list
-- DBMS_AQ.REGISTER(reginfolist, 1);
-- END;
-- /
-- BEGIN
-- DBMS_AQELM.SET_MAILHOST('yourmailhost.com');
-- DBMS_AQELM.SET_MAILPORT(25);
-- DBMS_AQELM.SET_SENDFROM('janedoe@yourcompany.com');
-- COMMIT;
-- END;
-- /
2. Connect as SYSDBA through SQL*Plus, check that you do not have any outstanding alerts
for the User Commits Per Sec metric, and look at your alert history. Then, set the User
Commits Per Sec metric with a warning threshold set to 3, and a critical threshold set to 6. Make
sure that the observation period is set to one minute, and that the number of consecutive
occurrences is set to 2. When done, check that the metrics thresholds have been set correctly.
Again, look at your outstanding alerts and alert history. What are your conclusions?
Answer: After you set the metric thresholds, you should see a new row in the alert history that
indicates that thresholds were updated on the User Commits Per Sec metric.
connect / as sysdba
select reason
from dba_alert_history
where upper(reason) like '%COMMIT%' and
to_date(substr(to_char(creation_time),1,18)||
substr(to_char(creation_time),26,3) ,
'DD-MON-YY HH:MI:SS AM') > sysdate-30/1440
order by creation_time desc;
exec DBMS_SERVER_ALERT.set_threshold( -
dbms_server_alert.user_commits_sec, -
select reason
from dba_alert_history
where upper(reason) like '%COMMIT%' and
to_date(substr(to_char(creation_time),1,18)||
substr(to_char(creation_time),26,3) ,
'DD-MON-YY HH:MI:SS AM') > sysdate-30/1440
order by creation_time desc;
3. Execute the lab_05_01_03.sql script. This script creates a new table and inserts one row in
it.
connect / as sysdba
commit;
4. Connect as SYSDBA through Database Control and look at the corresponding metrics graphic
rate. Then, execute the lab_05_01_04.sql script. This script generates a commit rate
between three and six commits per second for one minute on your system. While the script is
executing, observe the metrics graph using Database Control. After a minute or two, through
SQL*Plus, look at your outstanding alerts and alert history. What are your conclusions?
Answer: Although the commit rate goes above the warning level, you do not get any
outstanding alert. This is because an alert is raised only after two consecutive violations
within the observation period, so the rate must stay above the warning level for more than one
minute.
connect / as sysdba
a. From the Database Control home page, click the All Metrics link.
b. On the All Metrics page, expand the Throughput link.
c. On the All Metrics page, under Throughput link, click the User Commits (per second) link.
d. On the User Commits (per second) page, make sure that the View Data field is set to Real Time: 30
Seconds Refresh.
Note: Depending on when you run the workload, you may see differences between your graph and the
one provided in this solution.
connect / as sysdba
select reason
from dba_alert_history
where upper(reason) like '%COMMIT%' and
to_date(substr(to_char(creation_time),1,18)||
substr(to_char(creation_time),26,3) ,
'DD-MON-YY HH:MI:SS AM') > sysdate-30/1440
order by creation_time desc;
5. While connected as SYSDBA through Database Control, look at the corresponding metrics
graphic rate, and execute the lab_05_01_05.sql script. This script generates a commit rate
of five commits per second for three minutes on your system. While the script is executing,
observe the metrics graph using Database Control. After the script finishes its execution, examine
your outstanding alerts and alert history using both SQL*Plus and another Database Control
session. What are your conclusions?
Answer: Because this time the commit rate stays above the warning level (but below the
critical level) for more than two minutes, you should get a warning alert.
a. From the Database Control home page, click the All Metrics link.
Note: Depending on when you run the workload, you may see differences between your graph and the
one provided in this solution.
b. You can see the alert history by changing the View Data field on the User Commits (per second) page.
Change its value to Last 24 hours, and you will see the alert history in the Alert History Last 24 Hours
region of the page:
Note: Depending on when you run the workload, you may see differences between your output and the one
provided in this solution.
connect / as sysdba
REASON
----------------------------------------------------------------------------
Metrics "User Commits Per Sec" is at 5
select reason
from dba_alert_history
where upper(reason) like '%COMMIT%' and
to_date(substr(to_char(creation_time),1,18)||
substr(to_char(creation_time),26,3) ,
'DD-MON-YY HH:MI:SS AM') > sysdate-30/1440
order by creation_time desc;
REASON
----------------------------------------------------------------------------
Threshold is updated on metrics "User Commits Per Sec" for instance "orcl"
6. Wait three more minutes, and view your outstanding alerts and the alert history again. What are
your conclusions?
Answer: Because the commit rate is now close to zero for more than three minutes, the alert is
automatically cleared.
Note: Depending on when you run the workload, you may see differences between your output and the one
provided in this solution.
connect / as sysdba
no rows selected
select reason
from dba_alert_history
where upper(reason) like '%COMMIT%' and
to_date(substr(to_char(creation_time),1,18)||
substr(to_char(creation_time),26,3) ,
'DD-MON-YY HH:MI:SS AM') > sysdate-30/1440
order by creation_time desc;
REASON
----------------------------------------------------------------------------
Metrics "User Commits Per Sec" is at 0
Threshold is updated on metrics "User Commits Per Sec" for instance "orcl"
a. You should see that the alert is cleared from the User Commits (per second) page. If not, try to refresh
the data:
7. While connected as SYSDBA through Database Control, look at the corresponding metrics
graphic rate, and execute the lab_05_01_07.sql script. This script generates a commit rate
of eight commits per second for three minutes on your system. While the script is executing,
observe the metrics graph using Database Control. After the script finishes its execution, look at
your outstanding alerts and alert history using both SQL*Plus and Database Control. What are
your conclusions?
connect / as sysdba
a. On the User Commits (per second) page with View Data set to Real Time: 30 Second Refresh.
Note: Depending on when you run the workload, you may see differences between your graph and the
one provided in this solution.
b. On the User Commits (per second) page with View Data set to Last 24 Hours:
Note: Depending on when you run the workload, you may see differences between your output and the one
provided in this solution.
connect / as sysdba
REASON
----------------------------------------------------------------------------
select reason
from dba_alert_history
where upper(reason) like '%COMMIT%' and
to_date(substr(to_char(creation_time),1,18)||
substr(to_char(creation_time),26,3) ,
'DD-MON-YY HH:MI:SS AM') > sysdate-30/1440
order by creation_time desc;
REASON
----------------------------------------------------------------------------
Metrics "User Commits Per Sec" is at 0
Threshold is updated on metrics "User Commits Per Sec" for instance "orcl"
8. Wait three more minutes, and look at your outstanding alerts and the alert history again. What are
your conclusions?
Answer: Because the commit rate is now close to zero for more than three minutes, the alert is
automatically cleared.
Note: Depending on when you run the workload, you may see differences between your output and the one
provided in this solution.
connect / as sysdba
no rows selected
select reason
from dba_alert_history
where upper(reason) like '%COMMIT%' and
to_date(substr(to_char(creation_time),1,18)||
substr(to_char(creation_time),26,3) ,
'DD-MON-YY HH:MI:SS AM') > sysdate-30/1440
order by creation_time desc;
REASON
----------------------------------------------------------------------------
Metrics "User Commits Per Sec" is at 0
Metrics "User Commits Per Sec" is at 0
Threshold is updated on metrics "User Commits Per Sec" for instance "orcl"
a. You should see that the alert is cleared from the User Commits (per second) page. If not, try to refresh
the data:
Answer: Because the ALERT_QUE is a multiconsumer queue, alerts that were sent to the
ALERT_QUE, and that were consumed by Database Control, are still available for other
consumers such as user SYSTEM. So you must execute the SYS.SA_DEQUEUE procedure
multiple times to retrieve the history of your metric. Do this until an error message is returned, at
which point you no longer have messages to dequeue.
Note: Depending on when you run the workload, you may see differences between your output and the one
provided in this solution.
SQL> connect system/oracle
Connected.
SQL> set serveroutput on
SQL>
SQL> exec sys.sa_dequeue;
Alert message dequeued:
Timestamp: 03-DEC-03 05.45.43.868116 AM -08:00
Organization Id: oracle.com
SQL>
SQL> exec sys.sa_dequeue;
Alert message dequeued:
Timestamp: 03-DEC-03 06.20.57.144219 AM -08:00
Organization Id: oracle.com
SQL>
SQL> exec sys.sa_dequeue;
Alert message dequeued:
Timestamp: 03-DEC-03 06.24.01.890329 AM -08:00
Organization Id: oracle.com
Component Id: SMG
Hosting Client Id:
Message Type: Notification
Message Group: Performance
Message Level: 32
Host id: EDCDR5P1
Host Network Addr: 127.0.0.1
Module Id: SERVER MANAGEABILITY:kelr.c
Process Id: "orcl"."orcl"
Execution Context:
Reason: Metrics "User Commits Per Sec" is at 0
Sequence Id: 146
Reason Id: 36
Object Owner:
Object Name: SYSTEM
Subobject Name:
Object Type: SYSTEM
Instance Name: orcl
Instance Number: 1
Suggested action: Run ADDM to get more performance analysis about your
system.
Error instance id: CD952EFA576F-34C9-E030-007F01000E83-0
Advisor Name: ADDM
Scope: Instance
SQL>
SQL> exec sys.sa_dequeue;
Alert message dequeued:
Timestamp: 03-DEC-03 06.37.20.191643 AM -08:00
Organization Id: oracle.com
Component Id: SMG
Hosting Client Id:
Message Type: Warning
Message Group: Performance
Message Level: 5
Host id: EDCDR5P1
Host Network Addr: 127.0.0.1
Module Id: SERVER MANAGEABILITY:kelr.c
Process Id: "orcl"."orcl"
Execution Context:
Reason: Metrics "User Commits Per Sec" is at 8
SQL>
SQL> exec sys.sa_dequeue;
Alert message dequeued:
Timestamp: 03-DEC-03 06.39.23.060356 AM -08:00
Organization Id: oracle.com
Component Id: SMG
Hosting Client Id:
Message Type: Warning
Message Group: Performance
Message Level: 1
Host id: EDCDR5P1
Host Network Addr: 127.0.0.1
Module Id: SERVER MANAGEABILITY:kelr.c
Process Id: "orcl"."orcl"
Execution Context:
Reason: Metrics "User Commits Per Sec" is at 6
Sequence Id: 147
Reason Id: 36
Object Owner:
Object Name: SYSTEM
Subobject Name:
Object Type: SYSTEM
Instance Name: orcl
Instance Number: 1
Suggested action: Run ADDM to get more performance analysis about your
system.
Error instance id: CD956992746C-D39B-E030-007F01000E83-0
Advisor Name: ADDM
Scope: Instance
SQL>
SQL> exec sys.sa_dequeue;
Alert message dequeued:
Timestamp: 03-DEC-03 06.41.25.860327 AM -08:00
Organization Id: oracle.com
Component Id: SMG
Hosting Client Id:
Message Type: Notification
Message Group: Performance
Message Level: 32
Host id: EDCDR5P1
Host Network Addr: 127.0.0.1
SQL>
SQL> exec sys.sa_dequeue;
BEGIN sys.sa_dequeue; END;
*
ERROR at line 1:
ORA-25228: timeout or end-of-fetch during message dequeue from SYS.ALERT_QUE
ORA-06512: at "SYS.DBMS_AQ", line 333
ORA-06512: at "SYS.SA_DEQUEUE", line 15
ORA-06512: at line 1
SQL>
10. Using Database Control, disable the thresholds check for the User Commits (per second)
metric.
a. From the Database Control home page, click the Manage Metrics link.
b. On the Manage Metrics page, click the Edit Thresholds button.
c. On the Edit Thresholds page, scroll down to the User Commits (per second) entry in the table.
d. Then, remove the values corresponding to the two thresholds, and click the OK button.
connect / as sysdba
exec DBMS_SERVER_ALERT.set_threshold( -
dbms_server_alert.user_commits_sec, -
null,null, -
null,null, -
1, 1, 'orcl', -
dbms_server_alert.object_type_system, null);
exec dbms_aqadm.disable_db_access('ALERT_USR1','SYSTEM');
exec DBMS_AQADM.REMOVE_SUBSCRIBER('SYS.ALERT_QUE',-
AQ$_AGENT('ALERT_USR1','',0));
1. Connect as SYSDBA through Database Control and navigate to the Performance tab of the
Database Control home page. On the Performance page, make sure that the View Data field is
set to Real Time: 15 second Refresh. When done, open a terminal emulator window connected
as user oracle, change your current directory to your labs directory (cd $HOME/labs), and
enter the following command at the OS prompt:
. ./setup_perflab.sh
2. When the setup_perflab.sh script completes (after approximately five minutes), observe the
Performance page for six minutes. What are your conclusions?
Answer: You should see the workload activity going up very quickly. Because the CPU used by
the workload is very close to the maximum CPU available on your system, there must be an issue
with this workload. Because the largest area corresponding to a wait class is the User I/O wait
class, the issue must be associated with that class. Note that the snapshot interval is now
around two minutes.
Note: Depending on when you run the workload, you may see differences between your graph and the one
provided in this solution.
Answer: First, you must identify the problem itself. The fastest way to do so is to look at an
ADDM analysis that ran during the problematic period. By following its analysis, ADDM should
guide you through the process of fixing the problem.
a. From the Database Control home page, there are basically two different ways to identify the correct
ADDM analysis task:
• If the problematic time period corresponds to the latest ADDM run detected by Database
Control, you should find the link to the correct performance analysis directly in the
Diagnostic Summary region of the Database Control home page.
• If not, you should go to the Advisor Central page and search for the correct ADDM task. This is
how you can retrieve the task from the Advisor Central page:
• From the Database Control home page, click the Advisor Central link.
• On the Advisor Central page, select ADDM in the Advisory Type drop-down list, and
select Last 24 Hours in the Advisor Runs drop-down list.
• When done, click the Go button.
• Then, select the ADDM task corresponding to the time of the problematic period.
• When done, click the View Result button.
b. This brings you to the Automatic Database Diagnostic Monitor (ADDM) page where you can see the
results of the Performance Analysis in question.
a. On the corresponding Automatic Database Diagnostic Monitor (ADDM) page, click the finding with
the highest impact on the database time. It should correspond to a SQL Tuning recommendation.
d. After the task has executed, you are given the details of the corresponding recommendations:
f. If you click the spectacles icon associated with the proposed SQL profile, you can see the new execution
plan.
g. Because the potential benefit of using the proposed SQL profile is very high, you implement the SQL
profile. To implement this tuning recommendation, click the Implement button after selecting the
appropriate SQL profile from the Recommendations table.
a. From the Database Control home page, click the Performance tab.
b. On the Performance page, you should see a dramatic drop for CPU Used, and all the wait class
categories on the Sessions: Waiting and Working graph. Similarly, you should see a dramatic drop in
the number of physical reads per second in the Instance Throughput graph.
Note: Depending on when you run the workload, you may see differences between your graph and the
one provided in this solution.
Answer: You must return to the history of your database activity. You can do this by using the
performance pages of Database Control.
a. From the Database Control home page, click the Performance tab.
b. On the Performance page, if the period for which you want to observe your database activity is still
visible on the Sessions: Waiting and Working graph, then you can use the current graph. However, if
the problematic period is no longer visible on the graph, you can select the Historical value from the
View Data drop-down list. This allows you to select the desired period in the Historical Interval
Selection region of the Performance page.
c. Returning to the current example, you should be able to see the problematic period without having to
define the historical information.
a. Looking at the Sessions: Waiting and Working graph for the critical period, the User I/O wait class is
probably the most significant one, along with the Concurrency category. Click the User I/O category in the
graph’s legend.
Note: Depending on when you run the workload, you may see differences between your graph and the
one provided in this solution.
b. This brings you to the Active Sessions Waiting: User I/O page. You should see that, within the User I/O
wait class, the read by other session wait event is the most significant. If necessary, move the time window to the
exact time when the workload was at its maximum activity. When done, the Detail region should be
refreshed to show you the corresponding Top Waiting SQL and Top Waiting Sessions graphs.
c. You should see that one SQL statement is using almost all of the available resources on your
system. Also, the Top Waiting Sessions graph shows that the top five sessions are connected as SH
and are consuming almost the same amount of resources. This seems to indicate that these top sessions are
executing the same statement.
d. In the legend of the Top Waiting SQL graph, select the top SQL statement.
Note: Depending on when you run the workload, you may see differences between your graph and the
one provided in this solution.
g. By clicking the Execution History tab, you can see what happened to the statement during the observed
period.
Note: Depending on when you run the workload, you may see differences between your graph and the
one provided in this solution.
9. To clean up your environment, execute the following command from your command-line
window: . ./cleanup_perflab.sh.
1. Connected as SYSDBA through SQL*Plus, flush the shared pool and execute the following four
scripts, in this order:
a. lab_06_02_01a.sql
b. lab_06_02_01b.sql: (Star query)
c. lab_06_02_01c.sql: (Star query)
d. lab_06_02_01d.sql: (Order by)
connect / as sysdba;
-- query 1:
SELECT /* QueryJFV 1*/
t.calendar_month_desc,sum(s.amount_sold) AS dollars
FROM sh.sales s
, sh.times t
WHERE s.time_id = t.time_id
AND s.time_id between TO_DATE('01-JAN-2000', 'DD-MON-YYYY')
AND TO_DATE('01-JUL-2000', 'DD-MON-YYYY')
GROUP BY t.calendar_month_desc;
SQL_TEXT
-----------------------------------------------------------------------------
---
SELECT /* QueryJFV 4 */ c.country_id, c.cust_city, c.cust_last_name FROM
sh.custome
rs c WHERE c.country_id in (52790, 52798) ORDER BY c.country_id, c.cust_city,
c.
cust_last_name
SQL_TEXT
-----------------------------------------------------------------------------
---
select sql_text from v$sql where sql_text like '%Query%'
SELECT /* QueryJFV 1*/ t.calendar_month_desc,sum(s.amount_sold) AS
dollars
FROM sh.sales s , sh.times t WHERE s.time_id = t.time_id AND
s.time_id between TO_DATE('01-JAN-2000', 'DD-MON-YYYY')
AND TO_DATE('01-JUL-2000', 'DD-MON-YYYY') GROUP BY t.calendar_month_desc
SQL_TEXT
-----------------------------------------------------------------------------
---
,'1999-02') GROUP BY ch.channel_class, c.cust_city, t.calendar_quarter_desc
2. Create a SQL tuning set named MY_STS_WORKLOAD, and load the statements you just executed
into it from the cursor cache:
SQL> DECLARE
2 sqlsetname VARCHAR2(30);
3 sqlsetcur dbms_sqltune.sqlset_cursor;
4 BEGIN
5 sqlsetname := 'MY_STS_WORKLOAD';
6
7 dbms_sqltune.create_sqlset(sqlsetname, 'Access Advisor data');
8
9 OPEN sqlsetcur FOR
10 SELECT VALUE(P)
11 FROM TABLE(
12 dbms_sqltune.select_cursor_cache(
13 'sql_text like ''SELECT /* Query %''',
14 NULL,
15 NULL,
16 NULL,
17 NULL,
18 NULL,
19 null)
20 ) P;
21
22 dbms_sqltune.load_sqlset(sqlsetname, sqlsetcur);
23 end;
24 /
SQL>
3. Connected as SYSDBA through Database Control, use the SQL Access Advisor to generate
recommendations for the MY_STS_WORKLOAD SQL tuning set.
a. From the Database Control home page, click the Advisor Central link.
b. On the Advisor Central page, click the SQL Access Advisor link.
c. On the SQL Access Advisor: Workload Source page, select the Import Workload from SQL
Repository option button, and set the SQL Tuning Set field to MY_STS_WORKLOAD. When done, click
the Next button.
d. On the SQL Access Advisor: Recommendation Options page, select the Both Indexes and
Materialized Views and Comprehensive Mode options. Then click the Show Advanced Options link,
and make sure you use the EXAMPLE tablespace and the SH schema for indexes and materialized views in
the Default Storage locations region. When done, click the Next button.
e. On the SQL Access Advisor: Schedule page, select Standard in the Schedule Type field. Make sure
that the Immediately option button is selected, and click the Next button.
f. On the SQL Access Advisor: Review page, click the Submit button.
g. Return to the Advisor Central page, wait for one minute, and click the Refresh button. Repeat this
operation until you see the COMPLETED status associated with your SQL Access Advisor task.
4. Looking at the Recommendations page for your SQL Access Advisor task, what are your
conclusions?
Answer: By implementing the two recommendations, three statements in your workload can
benefit from them.
Select SQL Statements Improved by Recommendations in the View field of the Recommendations for Task
page.
Answer: This shows that after you implement the first recommendation and run the analysis
again, only the second recommendation is produced. This is expected: the first recommendation
is already in place, so no additional recommendations are generated.
a. On the Recommendations page, make sure that you select only the Recommendation ID that has the
most Workload Cost Benefit. Then click the Schedule Implementation button.
b. On the Schedule Implementation page, ensure the Immediately option button is selected, and click the
Submit button.
c. On the Scheduler Jobs page, click the Refresh button until your job no longer appears as a Running job.
d. Click the Run History tab, and make sure that your job’s status is now SUCCEEDED.
e. When done, click the Database tab at the top of the Scheduler Jobs page.
f. Return to the Database Control home page and click the Advisor Central link.
g. On the Advisor Central page, click the SQL Access Advisor link.
h. On the SQL Access Advisor: Workload Source page, select the Import Workload from SQL
Repository option button, and set the SQL Tuning Set field to MY_STS_WORKLOAD. When done, click
the Next button.
i. On the SQL Access Advisor: Recommendation Options page, select the Both Indexes and
Materialized Views and Comprehensive Mode options. Then click the Show Advanced Options link,
and make sure you use the EXAMPLE tablespace and the SH schema for indexes and materialized views in
the Default Storage locations region. When done, click the Next button.
j. On the SQL Access Advisor: Schedule page, ensure the Immediately option button is selected, and then
click the Next button.
k. On the SQL Access Advisor: Review page, click the Submit button.
l. Return to the Advisor Central page, wait for one minute, and click the Refresh button. Repeat this
operation until you see the COMPLETED status associated with your SQL Access Advisor task.
m. Select your SQL Access Advisor task, and click the View Result button.
a. On the Recommendations page for your SQL Access Advisor task, click the Advisor Central link.
b. On the Advisor Central page, select your first SQL Access Advisor task, and click the View Result
button.
c. On the Recommendations page, click the Show SQL button.
d. On the Show SQL page, write down the names of the created objects, and click the OK button.
e. Return to the Recommendations page, click the Advisor Central link again.
f. On the Advisor Central page, select your SQL Access Advisor tasks, one at a time, and click the Delete
button. For each task, click the Yes button on the Confirmation page.
g. On the Advisor Central page, click the Database: orcl link.
h. On the Database Control home page, click the Administration tab.
i. On the Administration page, click the SQL Tuning Sets link.
j. On the SQL Tuning Sets page, select MY_STS_WORKLOAD, and click the Delete button.
k. On the Confirmation page, click the Yes button.
l. Return to the SQL Tuning Sets page, and click the Database: orcl link.
m. Return to the Administration page, and click the Jobs link.
n. On the Scheduler Jobs page, click the Run History tab.
o. On the Run History page, select your SQL Access Advisor implementation job, and click the Purge Log
button.
p. On the Confirmation page, click the Yes button.
*
ERROR at line 1:
ORA-13754: "SQL Tuning Set" "MY_STS_WORKLOAD" does not exist.
ORA-06512: at "SYS.DBMS_SQLTUNE_INTERNAL", line 2948
ORA-06512: at "SYS.DBMS_SQLTUNE", line 478
ORA-06512: at line 1
MVIEW_NAME
------------------------------
CAL_MONTH_SALES_MV
FWEEK_PSCAT_SALES_MV
MV$$_01620002
SQL> -- Use the last value returned by the previous query (MV$$_01620002 in this output).
SQL>
In this practice you will create two small tables, based on the SH schema. Using these two tables, you
investigate the difference between a partitioned outer join and a regular outer join. Unless specified
otherwise, you should be logging in as SH either through SQL*Plus or iSQL*Plus.
1. Connect to the SH schema, and alter the session so that the NLS_DATE_FORMAT is set to 'DD-
MON-YYYY'. Confirm the two tables T1 and S1 you create in the next step do not presently
exist. You can use the script lab_07_01_01.sql.
Table dropped.
Table created.
Table created.
SQL> begin
2 for i in 0..3 loop
3 insert into t1 values (to_date('02-JAN-2001') + i);
4 end loop;
5 end;
6 /
SQL> commit;
Commit complete.
TIME_ID
-----------
02-JAN-2001
03-JAN-2001
04-JAN-2001
05-JAN-2001
4. Define a break on PROD_ID (to enhance output readability), and execute the right outer join
query in the lab_07_01_04.sql script. What do you notice about the returned rows?
Answer: The regular outer join is adding rows for days without any sales at all: 05-JAN-2001.
SQL>
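The lab_07_01_04.sql script itself is not listed in this solution; a minimal sketch of the right outer
join it presumably contains:
break on prod_id
SELECT prod_id, time_id, quantity_sold
FROM s1
RIGHT OUTER JOIN t1
USING (time_id)
ORDER BY prod_id, time_id;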
5. Now, execute the partitioned outer join query in the lab_07_01_05.sql script.
SQL> SELECT prod_id, time_id, quantity_sold
2 , sum(quantity_sold) over
3 ( partition by prod_id
4 order by time_id
5 ) as cumulative
6 FROM s1
7 PARTITION BY (prod_id)
8 RIGHT OUTER JOIN t1
9 using (time_id)
10 ORDER BY prod_id, time_id;
8 rows selected.
6. Compare the results of the two queries executed in steps 4 and 5. What is the difference?
Answer: The regular outer join from step 4 is only adding rows for days without any sales at all:
05-JAN-2001. The partitioned outer join from step 5 has added additional rows for each day one
of the products was not sold. You have two products and four days, resulting in eight rows.
7. You will need the S1 table in the next practice; you can drop the T1 table now.
You can use the S1 table you created in the previous practice to experiment with the new MODEL
clause to perform inter-row calculations.
1. Connect as the SH schema and query all rows of the S1 table to see the table contents.
TIP: Remember to clear the format break set in the previous practice.
TIME_ID PROD_ID QS
--------- -------- --------
02-JAN-01 13 1
04-JAN-01 13 1
02-JAN-01 14 1
03-JAN-01 14 1
TIME_ID PROD_ID QS
--------- -------- --------
02-JAN-01 13 1
04-JAN-01 13 1
09-JAN-01 13 2
02-JAN-01 14 1
03-JAN-01 14 1
09-JAN-01 14 6
09-JAN-01 15 42
7 rows selected.
TIME_ID PROD_ID QS
--------- -------- --------
09-JAN-01 13 2
09-JAN-01 14 6
09-JAN-01 15 42
3 rows selected.
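The MODEL queries used in this practice come from the lab scripts and are not listed in this solution.
The following is a minimal sketch of the kind of upsert rule involved; the rule and its values are
hypothetical rather than the lab's exact code. The last output above (3 rows) is consistent with a
query ending in RETURN UPDATED ROWS, which returns only the cells created or updated by the rules.
SELECT prod_id, time_id, qs
FROM s1
MODEL
  DIMENSION BY (prod_id, time_id)
  MEASURES (quantity_sold qs)
  RULES (
    -- upsert: creates a new 09-JAN-2001 cell because no such row exists yet
    qs[13, DATE '2001-01-09'] = qs[13, DATE '2001-01-02'] + qs[13, DATE '2001-01-04']
  )
ORDER BY prod_id, time_id;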
1. Connect to the SH schema, and run the lab_07_03_01.sql script to ensure the MY_MV
materialized view does not exist.
2. Execute the lab_07_03_02.sql script to create a materialized view called MY_MV, and
execute the dbms_stats.gather_table_stats(USER, 'MY_MV') procedure to
gather statistics against MY_MV.
PROD_ID AVG_AMOUNT
---------- ----------
16 11.99
21 899.99
26 149.99
27 44.99
30 9.99
35 49.99
40 44.99
46 22.99
48 11.99
116 11.99
128 27.99
PROD_ID AVG_AMOUNT
---------- ----------
147 7.99
12 rows selected.
Answer: Full text match query rewrite is not possible because the quantity_sold column
does not appear in the underlying MY_MV materialized view definition.
from sales
*
ERROR at line 4:
ORA-30393: a query block in the statement did not rewrite
4. Fix the error: Change QUANTITY_SOLD into AMOUNT_SOLD on line 3, and repeat the test.
PROD_ID AVG(AMOUNT_SOLD)
-------- ----------------
16 11.99
21 899.99
26 149.99
27 44.99
30 9.99
35 49.99
40 44.99
46 22.99
48 11.99
116 11.99
128 27.99
147 7.99
12 rows selected.
5. Run the lab_07_03_05.sql script to execute EXPLAIN PLAN against the query in the
previous step and query the PLAN_TABLE table, to see the improved execution plan readability.
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------
Plan hash value: 3745461064
---------------------------------------------------------------------
| Id | Operation | Name |Rows |Bytes |Cost(%CPU)|
---------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 12 | 84 | 4 (25)|
| 1 | MAT_VIEW REWRITE ACCESS FULL| MY_MV | 12 | 84 | 4 (25)|
---------------------------------------------------------------------
6. Before you can use the DBMS_MVIEW.EXPLAIN_REWRITE procedure, you must create the
REWRITE_TABLE table with the utlxrw.sql script available in the
$ORACLE_HOME/rdbms/admin directory. Run the
$ORACLE_HOME/rdbms/admin/utlxrw.sql script now.
SQL> @$ORACLE_HOME/rdbms/admin/utlxrw.sql
Table created.
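With REWRITE_TABLE in place, you can call the procedure as in this sketch; the query text matches the
statement tested in step 4, and REW1 is an arbitrary statement identifier:
BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query        => 'SELECT prod_id, AVG(amount_sold) FROM sales GROUP BY prod_id',
    mv           => 'MY_MV',
    statement_id => 'REW1');
END;
/
SELECT message FROM rewrite_table WHERE statement_id = 'REW1';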
In this practice, you use the Database Control application to define and monitor the Scheduler and
automate tasks. Unless specified otherwise, you should be logging in as SYSDBA either through
Database Control or SQL*Plus.
1. Log in to EM Database Control as the SYSTEM user and grant the following roles to the HR user:
• CONNECT role
• RESOURCE role
• DBA role
Because you are going to use user HR to administer jobs through Database Control, you need to
make sure that HR is registered as a possible Administrator.
a. From the Database Control home page, click the Administration tab.
b. On the Administration page, click the Users link in the Security region.
c. On the Users page, click the HR username to edit the account.
d. On the Edit User page, click the Roles tab. Then click the Modify button on the right side of the page.
e. On the Modify Roles page, select the DBA role, and then click the Move button to grant this role to the HR
user. Repeat these steps for the RESOURCE role. Then click the OK button.
f. On the Edit User page, click Apply.
g. Click the Setup link.
h. On the Administrators page, click the Create button.
i. On the Create Administrators: Properties page, enter HR in the Name, Password, and Confirm
Password fields.
j. Click the Finish button.
k. On the Create Administrator: Review page, click the Finish button.
l. Back on the Administrators page, click the Database tab.
2. Log in to Database Control as the HR user. From the Administration tab, click the Jobs link in
the Scheduler region, at the bottom right corner of the page. Are there any existing jobs?
Answer: No.
a. In the upper right corner of the current page, click the Logout link.
b. Click the Login button to log in again.
c. For the username and password enter HR. Then click Login.
d. On the Oracle Database Licensing Information 10g page, click the I Agree button.
e. Click the Administration tab.
f. Click the Jobs link in the Scheduler region in the bottom right corner of the page.
3. Are there any existing programs? (Hint: Use the browser Back button).
a. Return to the Administration main page, and click the Programs link under the heading Scheduler.
b. There are no existing programs.
4. Are there any existing schedules?
a. Return to the Administration main page, and click the Schedules link under the heading Scheduler.
b. There is one schedule, called DAILY_PURGE_SCHEDULE.
5. Are there any existing windows? What resource plan is associated with each window?
a. Return to the Administration main page, and click the Windows link under the heading Scheduler.
b. There are two windows, named WEEKNIGHT_WINDOW and WEEKEND_WINDOW. The windows do not
have any resource plan associated with them.
6. Are there any existing job classes? If so, what resource consumer group is associated with each
job class?
a. Return to the Administration main page, and click the Job Classes link under the heading Scheduler.
b. There are two job classes:
• DEFAULT_JOB_CLASS: no resource consumer group.
• AUTO_TASKS_JOB_CLASS: associated with the AUTO_TASK_CONSUMER_GROUP resource
consumer group.
In this practice, you will use Database Control to create Scheduler objects and automate tasks. Unless
specified otherwise, you should be logging in as SYSDBA either through
Database Control or SQL*Plus.
1. While logged in to the database as the HR user in Database Control, click the Administration
tab. Under the heading Scheduler, click Jobs. Click the Create button to open the Create Job
window.
Create a simple job that runs a SQL script:
• General:
Name: CREATE_LOG_TABLE_JOB
Owner: HR
Description: Create the SESSION_HISTORY table for the next part of this practice
Logging level: RUNS
Command type: In-line Program: Executable
Executable: /home/oracle/labs/lab_09_02_01.sh
• Schedule:
Repeating: Do not Repeat
Start: Immediately
• Options:
No special options.
a. From the Database Console home page, click the Administration tab.
b. In the Scheduler section, click the Jobs link.
c. On the Scheduler Jobs page, click the Create button.
d. On the Create Job page, enter CREATE_LOG_TABLE_JOB in the Name field. Make sure that HR is
specified in the Owner field. Enter Create the SESSION_HISTORY table for the next part of this
practice in the Description field. Make sure that Logging Level is set to Log job runs only (RUNS).
Make sure that the Job Class is set to DEFAULT_JOB_CLASS. Make sure that Auto Drop is set to
FALSE. Make sure that Restartable is set to FALSE.
e. In the Command section, click the Change Command Type button.
f. On the Select Command Option page, select the In-line Program: Executable radio button, and click the
OK button.
g. Back on the Create Job page, enter /home/oracle/labs/lab_09_02_01.sh in the Executable
Name field.
h. Click the Schedule tab.
i. On the Schedule page, make sure that the Immediately radio button is selected, and that the Repeat field
is set to Do Not Repeat.
3. If the job does not appear on the Scheduler Jobs page, click the Refresh button. Then click the
Run History tab and verify that the job ran successfully.
a. From the Administration page, click the Programs link under the heading Scheduler.
b. Click Create.
c. Enter LOG_SESS_COUNT_PRGM for the name of the Program. Set Enabled to Yes.
d. Leave the type set to PL/SQL Block.
e. Enter the above PL/SQL text into the Source field.
f. Click OK.
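The PL/SQL source referred to above is not reproduced in this solution. A minimal sketch of what
such a block presumably contains (NUM_SESSIONS appears in a later step; SNAP_TIME is an assumed
column name):
BEGIN
  -- record the current session count in the log table
  INSERT INTO session_history (snap_time, num_sessions)
  SELECT SYSTIMESTAMP, COUNT(*) FROM v$session;
  COMMIT;
END;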
Connect hr/hr
BEGIN
DBMS_SCHEDULER.CREATE_SCHEDULE (
schedule_name => 'SESS_UPDATE_SCHED',
start_date => SYSTIMESTAMP,
repeat_interval => 'FREQ=SECONDLY;INTERVAL=3',
comments => 'Every three seconds');
END;
/
6. Return to the Database Control, and verify that the schedule was created.
Hint: You may have to refresh the page for the Schedule to appear.
a. Return to the Administration main page, and click the Schedules link under the heading Scheduler.
7. Using Database Control, create a job named LOG_SESSIONS_JOB that uses the
LOG_SESS_COUNT_PRGM program and the SESS_UPDATE_SCHED schedule. Make sure the
job uses FULL logging.
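A command-line sketch of an equivalent job creation follows; the exact call that Database Control
generates may differ:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name      => 'LOG_SESSIONS_JOB',
    program_name  => 'LOG_SESS_COUNT_PRGM',
    schedule_name => 'SESS_UPDATE_SCHED',
    enabled       => TRUE);
  -- FULL logging records job runs as well as job state changes
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    'LOG_SESSIONS_JOB', 'logging_level', DBMS_SCHEDULER.LOGGING_FULL);
END;
/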
8. Check the HR.SESSION_HISTORY table for rows. If there are rows in the table, are the
timestamps three seconds apart?
Answer: Yes, there are rows, and yes, the timestamps are three seconds apart.
9. Use Database Control to alter the SESS_UPDATE_SCHED schedule from every three seconds
to every three minutes.
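The command-line equivalent is a single attribute change, sketched here:
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SESS_UPDATE_SCHED',
    attribute => 'repeat_interval',
    value     => 'FREQ=MINUTELY;INTERVAL=3');
END;
/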
10. Connect as HR schema, and query the SESSION_HISTORY table to verify that the rows are
being added every three minutes now, instead of every three seconds.
Connect hr/hr
a. From the Administration page, click Tables under the heading Schema.
b. Enter HR for the schema and SESSION_HISTORY for the table name, then click Go.
c. Click the table name in the Results list.
d. Click the Add 5 Table Columns button.
e. In the first empty row (after NUM_SESSIONS) enter BACKGROUND_COUNT for the column name and
NUMBER for the data type.
f. Click Apply to alter the table.
a. From the Administration page, click Programs under the heading Scheduler.
b. Click the LOG_SESS_COUNT_PRGM link.
c. Change the Source code to match the above text.
d. Click Apply.
13. Run the LOG_SESSIONS_JOB job immediately, and verify that the new information was added
to the HR.SESSION_HISTORY table.
a. From the Administration page, click Jobs under the heading Scheduler.
b. With the job LOG_SESSIONS_JOB selected, click the Run Now button.
c. Click the Run History tab to verify that the job ran successfully. You might have to refresh the
Scheduler Jobs page in order to see the LOG_SESSIONS_JOB in the Scheduled tab.
d. Query the HR.SESSION_HISTORY table to verify that the newest rows contain the background session
count.
14. Drop the LOG_SESSIONS_JOB job, the LOG_SESS_COUNT_PRGM program, and the schedule
SESS_UPDATE_SCHED. Note: Make sure you do not delete the wrong schedule.
a. From the Administration page, click Jobs under the heading Scheduler.
b. With the LOG_SESSIONS_JOB job selected, click the Delete button. Select Drop the job and stop any
running instance, and then click Yes.
c. Click the database breadcrumb at the left top corner of the page to return to the Administration page.
Then click Programs under the heading Scheduler.
d. With the LOG_SESS_COUNT_PRGM program selected, click the Delete button. Click Yes to confirm.
e. Click the Database breadcrumb at the left top corner of the page to return to the Administration page.
Click Schedules under the heading Scheduler.
f. With the schedule SESS_UPDATE_SCHED selected, click the Delete button. Make sure you do not
delete the wrong schedule.
g. Select If there are dependent objects, it will not be dropped, then click Yes to confirm.
exec DBMS_SERVER_ALERT.SET_THRESHOLD(-
dbms_server_alert.tablespace_pct_full,-
NULL,NULL,NULL,NULL,1,1,NULL,-
dbms_server_alert.object_type_tablespace,NULL);
2. Check the database-wide threshold values for the Tablespace Space Usage metric by using the
following command:
SELECT warning_value,critical_value
FROM dba_thresholds
WHERE metrics_name='Tablespace Space Usage'
AND object_name IS NULL;
select warning_value,critical_value
from dba_thresholds
where metrics_name='Tablespace Space Usage' and object_name is null;
3. Create a new tablespace called TBSALERT with one 5 MB file called alert1.dbf. Make sure
this tablespace is locally managed and uses Automatic Segment Space Management. Also, do not
make it autoextensible, and do not specify any thresholds for this tablespace. Use Database
Control to create it. If this tablespace already exists in your database, drop it first, including its
files.
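If you use SQL*Plus instead of Database Control, the statement looks like this sketch (the file path
is an assumption for this environment):
CREATE TABLESPACE tbsalert
  DATAFILE '/u01/app/oracle/oradata/orcl/alert1.dbf' SIZE 5M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;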
select warning_value,critical_value
from dba_thresholds
where metrics_name='Tablespace Space Usage' and object_name='TBSALERT';
6. Select the reason and resolution columns from DBA_ALERT_HISTORY for the
TBSALERT tablespace. How do you explain the result?
Answer: If you used Database Control to create TBSALERT, you should see two identical rows.
This is because Database Control explicitly sets the tablespace thresholds when creating it. If you
used SQL*Plus, then you should see only one row, corresponding to the thresholds update.
select reason,resolution
from dba_alert_history
where object_name='TBSALERT';
-- exec dbms_workload_repository.create_snapshot();
BEGIN
  FOR i in 1..5 LOOP
    insert into employees1 select * from employees1;
    insert into employees2 select * from employees2;
    insert into employees3 select * from employees3;
    insert into employees4 select * from employees4;
    insert into employees5 select * from employees5;
    commit;
  END LOOP;
END;
/
-- exec dbms_workload_repository.create_snapshot();
-- 37.97%
-- select (select sum(bytes)
-- from dba_extents
-- where tablespace_name='TBSALERT')*100/5177344
-- from dual;
commit;
-- exec dbms_workload_repository.create_snapshot();
8. Check the fullness level of the TBSALERT tablespace using either Database Control or
SQL*Plus. The current level should be around 53%. Wait for approximately 10 minutes, and
check that the warning level is reached for the TBSALERT tablespace.
-- 53.16%
select (select sum(bytes)
from dba_extents
where tablespace_name='TBSALERT')*100/5177344
from dual;
9. Execute the lab_10_01_09.sql script to add data to TBSALERT. Wait for 10 minutes and
view the critical level in both the database and in Database Control. Verify that TBSALERT
fullness is around 63%.
insert into employees4 select * from employees4;
commit;
-- 58.22%
a. From the Database Control home page, you should see the new alert in the Space Usage region.
b. You should see the red flag instead of the yellow one.
c. To check the fullness level of the TBSALERT tablespace, from the Database Control home page:
Administration > Tablespaces
-- 63.29%
select (select sum(bytes)
from dba_extents
where tablespace_name='TBSALERT')*100/5177344
from dual;
10. Execute the lab_10_01_10.sql script. This script deletes rows from tables in TBSALERT.
12. Wait for approximately 10 more minutes, and check that there are no longer any outstanding
alerts for the TBSALERT tablespace.
a. From the Database Control home page, you should see the green flag for the Space Usage region.
14. Reset the database-wide default thresholds for the Tablespace Space Usage metric on the
TBSALERT tablespace.
a. From the context of the Tablespace Space Used (%): Tablespace Name TBSALERT page, click the
Edit Tablespace link.
b. This brings you to the Edit Tablespace: TBSALERT page.
c. Click the Thresholds tab, and then select the Use Default Thresholds option button.
d. Click the Apply button.
1. Create a database session connected as SYSDBA through SQL*Plus. This session is referred to as
the First session. Using either Database Control or SQL*Plus, create a new undo tablespace
called UT2 with only one 1MB file.
a. From the Database Control home page: Administration > Undo Management
b. On the Undo Management page, click the Change Tablespace button.
c. On the Change Undo Tablespace page, select UT2 and click the OK button.
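The SQL*Plus alternative for creating UT2 and switching to it is sketched below (the file path is an
assumption):
CREATE UNDO TABLESPACE ut2
  DATAFILE '/u01/app/oracle/oradata/orcl/ut2_01.dbf' SIZE 1M;
ALTER SYSTEM SET undo_tablespace = 'UT2';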
3. Using a second SQL*Plus session, connect as SYSDBA. This session is referred to as the
Second session. Execute the lab_10_02_03.sql script. If you get an error when executing
the script, switch your undo tablespace back to UNDOTBS1, and start again.
-- Second session
begin
for i in 1..100 loop
insert into b values(i, rpad('s',100));
end loop;
end;
/
commit;
Answer: The second session gets a Snapshot Too Old error. This is because UT2 is too
small to handle this workload.
-- Second session
declare
b number;
cursor c1 is select b from b;
begin
open c1;
loop
fetch c1 into b;
dbms_lock.sleep(1);
exit when c1%notfound;
end loop;
close c1;
end;
/
-- First session
begin
for i in 1..100 loop
update b set b=+1, s=rpad('t',100);
commit;
end loop;
end;
/
5. From the first session look at the alert history. What do you see? Use Database Control to locate
the warning, and click the corresponding alert link.
Answer: You can see that the alert history contains the Snapshot Too Old error. It was
automatically detected by the Oracle database.
-- First session
6. Use the Undo Advisor to get recommendations to correctly size UT2. Use the recommendation to
correctly size the UT2 tablespace.
a. From the Database Control home page: Administration > Undo Management
b. On the Undo Management page change the Analysis Time Period field to Last One Hour and click the
Update Analysis button.
c. On the Undo Management page, you should see a recommendation to size UT2 to 10 MB.
d. Click the Undo Advisor button to obtain more details about the recommendation. In particular, look at
the Required Tablespace Size by Undo Retention Length graph. You can, for example, change the
New Undo Retention field to see the impact on your undo tablespace size.
e. Click the Cancel button.
f. After you are returned to the Undo Management page, click the Edit Undo Tablespace button.
g. This brings you to the Edit Tablespace: UT2 page from where you can add a new 10 MB file to UT2.
h. Click the Add button.
i. On the Edit Tablespace: UT2: Add Datafile page, specify the name of your additional file, and also
specify 10 MB for its size.
j. Click the Continue button.
k. After you are returned to the Edit Tablespace: UT2 page, click the Apply button.
Answer: This time both sessions succeed without any error. The Undo Advisor gave you a good
recommendation.
-- Second session
declare
b number;
cursor c1 is select b from b;
begin
open c1;
loop
fetch c1 into b;
dbms_lock.sleep(1);
exit when c1%notfound;
end loop;
close c1;
end;
/
begin
for i in 1..100 loop
update b set b=+1, s=rpad('t',100);
commit;
end loop;
end;
/
8. Switch your undo tablespace back to UNDOTBS1, and drop UT2 including its data files, as well
as TBSALERT.
1. Using Database Control, create a new bigfile tablespace called TBSBF containing one 5 MB file.
2. Using Database Control, try to add a new file to TBSBF. What happens and why?
Answer: Because a bigfile tablespace can contain only one data file, it is not possible to add a
data file to TBSBF.
3. Using Database Control, how can you resize TBSBF to 10 MB? What simplification can you
observe?
Answer: With bigfile tablespaces, you do not need to explicitly select one data file. The only
possible one is already selected, and the resulting SQL statement is to apply the operation on the
whole tablespace, and not on its corresponding file.
5. Explain why the following statement is incorrect. Then fix it, and determine the correct output:
SELECT distinct DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID)
FROM sys.emp;
Answer: Because SYS.EMP is stored in a bigfile tablespace, you should use the BIGFILE value
as the second argument of the ROWID_RELATIVE_FNO function. The default value of the
second argument is SMALLFILE.
DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID)
------------------------------------
0
SQL>
SQL> SELECT distinct DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID,'BIGFILE')
2 FROM sys.emp;
DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID,'BIGFILE')
----------------------------------------------
1024
SQL>
6. Explain why the following statement is incorrect. Then fix it, and determine the correct output:
SELECT distinct DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID,'BIGFILE')
FROM hr.employees;
Answer: Because HR.EMPLOYEES is stored in a smallfile tablespace, you should use the
SMALLFILE value as the second argument of the ROWID_BLOCK_NUMBER function. The
default value of the second argument is SMALLFILE.
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID,'SMALLFILE')
------------------------------------------------
54378
54379
SQL>
Answer: It is correct because there is no other possible argument for this call. However, it shows
that you should no longer try to interpret restricted ROWIDs for rows from bigfile tablespaces.
DBMS_ROWID.ROWID_T
------------------
00000014.0000.0000
SQL>
8. Execute the following statement with the previously found restricted ROWID. Explain why it is
incorrect, and then fix it:
SELECT first_name
FROM sys.emp
WHERE rowid = (SELECT
DBMS_ROWID.ROWID_TO_EXTENDED('&rid',NULL,NULL,0) FROM dual);
Answer: It is incorrect because you are using a restricted ROWID without specifying the
corresponding table. So, in the above context, the ROWID is interpreted as a row of a
SMALLFILE tablespace. Thus, it cannot be interpreted. To fix the statement, you must explicitly
specify the table corresponding to this ROWID.
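The corrected statement supplies the owner and table name so that the restricted ROWID can be
expanded; it produces the output that follows:
SELECT first_name
FROM sys.emp
WHERE rowid = (SELECT
DBMS_ROWID.ROWID_TO_EXTENDED('&rid','SYS','EMP',0) FROM dual);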
FIRST_NAME
--------------------
Steven
SQL>
Tablespace dropped.
SQL>
1. In this lab you will be following Oracle’s best practices for managing database files and recovery
related files by establishing a database area and flash recovery area for your database. Use
Database Control to configure OMF to /u01/app/oracle/oradata/orcl. Ensure that
parameter changes are written to the current SPFILE. Turn on ARCHIVELOG mode for your
database. This requires a restart of your instance.
a. From the Database Control home page, select Administration > Instance > All Initialization
Parameters.
b. Select SPFile tab, and check Apply changes in SPFile mode to the current running instance(s) box.
c. In the Filter field, type in db_create and click Go. Set the path name to
/u01/app/oracle/oradata/orcl for db_create_file_dest and
db_create_online_log_dest_1.
d. Click the Apply button.
e. From the Database Control home page, click the Disabled link of the Archiving field in the High
Availability section.
f. On the Configure Recovery Settings page, check the ARCHIVELOG Mode box and click Apply.
g. On the Confirmation page, click Yes.
h. If needed, specify the necessary credentials on the Restart Database:Specify Host and Target
Database Credentials page, and click OK.
i. On the Restart Database:Confirmation page, click Yes.
j. On the Restart Database: Activity Information page, click the Refresh button after a while.
2. Using Database Control, check that you are now automatically using a flash recovery area. Then
make sure that the size of your flash recovery area is set to 3 GB. What happens to the Archive
Log Destination 10?
Answer: The Archive Log Destination 10 reflects the new extended syntax
USE_DB_RECOVERY_FILE_DEST.
a. From the Database Control home page, select Maintenance > Configure Recovery Settings
b. In the Flash Recovery Area Location field enter /u01/app/oracle/flash_recovery_area/.
c. In the Flash Recovery Area Size field, enter 3 and click Apply.
d. After the Flash Recovery Area has been set, Archive Log Destination # 10 reflects the new extended
syntax of USE_DB_RECOVERY_FILE_DEST.
SQL>
1. Using Database Control, enable fast incremental backups for your database. What is the default
location for the change tracking file? Ensure that your retention policy allows for recovery within
the last 31 days.
Answer: Because you enabled OMF in Practice 12-1, there is no need to specify a block-change
tracking file name. An Oracle-managed file is created in the database area for the block-change
tracking file.
a. From the Database Control home page: Maintenance > Configure Backup Settings
b. Click the Policy tab.
c. Select the Enable block change tracking for faster incremental backups checkbox.
d. Select the “Retain backups that are necessary for a recovery to any time within the specified number
of days” item.
e. Enter the host credentials as oracle/oracle.
f. When done, click the OK button.
2. Query the v$block_change_tracking view to show the status, file name, and size of the
file. You can use the lab_12_02_02.sql script.
FILENAME
--------------------------------------------------------------------------------
STATUS          BYTES
---------- ----------
/u01/app/oracle/oradata/orcl/ORCL_EDRSR14P1/changetracking/o1_mf_0259vq9m_.chg
ENABLED      11599872
SQL>
1. Using Database Control, back up the Oracle database using the Oracle Suggested Strategy. View
the backup logs as they are generated through the backup progress. The log generation is
dynamic, so refresh your browser to view more output.
a. From the Database Control home page: Maintenance > Schedule Backup
b. Ensure that Oracle-suggested is selected in the Backup Strategy pull-down list.
c. Select Disk as your backup destination, and enter your Host Credentials.
d. Click Next.
e. The flash recovery area is displayed in the Disk Settings region.
f. Click Next.
g. Determine your current time zone, choose values for Backup Time that cause the backup to begin the
soonest, and then click Next.
h. Review your Backup Settings, and click Submit Job.
i. When your backup is running, you can click View Job to track the progress of the backup job.
j. As your backup progresses, logs are generated. You can click the log name links to view the progress of
the backup job. Refresh your browser to view more output as it is generated.
1. Run the lab_12_04_01.sql script to create a new user called HR1, using the EXAMPLE
tablespace to store the created tables. Using Database Control, confirm the existence of the
following tables:
• BR_JOB_HISTORY
• BR_EMPLOYEES
• BR_JOBS
• BR_DEPARTMENTS
• BR_LOCATIONS
• BR_COUNTRIES
• BR_REGIONS
a. From the Database Control home page: Administration > Schema > Tables
b. Type in HR1 in the Schema field and click Go.
2. Run the Oracle Suggested Strategy again by creating a new backup job. Follow the same steps as
in Practice 12-3.
Click the Backup link on the Execution page after the operation has completed.
4. After the backup job has completed, run the lab_12_04_04.sql script to view the formatted
output of the number of blocks actually backed up.
SQL> @lab_12_04_04.sql
Connected.
5 rows selected.
SQL> @lab_12_05_01.sql
Session altered.
Connected.
****** Populating REGIONS table ....
1 row created.
1 row created.
1 row created.
1 row created.
Commit complete.
SQL>
2. Run the Oracle Suggested Strategy again by creating a new backup job. Follow the same steps as
in Practice 12-3.
Click the Backup link on the Execution page after the operation has completed.
NAME
--------------------------------------------------------------------------------
SPACE_LIMIT SPACE_USED SPACE_RECLAIMABLE NUMBER_OF_FILES
----------- ---------- ----------------- ---------------
/u01/app/oracle/flash_recovery_area
 3221225472  921220096                 0              11
1 row selected.
SQL>
2. Using Database Control, reduce the size of the flash recovery area so that a warning is issued on
the next backup.
a. From the Database Control home page, select Maintenance > Configure Recovery Settings
b. In the Flash Recovery Area Size field, enter a value that is only slightly (about 1 MB) larger than
what you see in Used Flash Recovery Area Size (MB), and click Apply.
3. Run the Oracle Suggested Strategy again by creating a new backup job. Follow the same steps as
in Practice 12-3.
4. When there is no more space available in the flash recovery area, the following actions occur:
• The RMAN backup job fails with an error because there is no more space for the backup file. View the
EM job output (if possible).
• An error is written to the alert.log.
• A row is inserted into the DBA_OUTSTANDING_ALERTS view.
Using Database Control, look at the latest entries in the alert.log.
a. From the Home page, in the Related Links section click the Alert Log Content link.
a. From the Database Control home page, select Maintenance > Configure Recovery Settings.
b. In the Flash Recovery Area Size field, enter 3 GB and click Apply.
1. In this exercise you will simulate a channel failover when using multiple channels and backing
up to tape. From a terminal window, use mkdir to create a temporary directory location for the
tape device to act as the pseudo-SBT device type at /home/oracle/tape. Set the RMAN
channel configuration by running the lab_12_07_01.sql script.
$ . ./sol_12_07_01.sh
mkdir /home/oracle/tape
RMAN>
RMAN>
using target database controlfile instead of recovery catalog
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored
RMAN>
new RMAN configuration parameters:
CONFIGURE CHANNEL 2 DEVICE TYPE 'SBT_TAPE' PARMS
'SBT_LIBRARY=oracle.disksbt, ;
new RMAN configuration parameters are successfully stored
RMAN>
new RMAN configuration parameters:
CONFIGURE CHANNEL 1 DEVICE TYPE 'SBT_TAPE' PARMS
'SBT_LIBRARY=oracle.disksbt, ;
new RMAN configuration parameters are successfully stored
RMAN>
a. From the Database Control home page, select Maintenance > Schedule Backup
b. Select Customized from the Backup Strategy drop-down list.
c. Select Whole Database.
d. Enter your host credentials and click Next.
e. On the Schedule Backup: Options, click Next.
f. On the Schedule Backup: Settings page, select Tape, then click Next.
g. On the Schedule Backup: Schedule page, click Next.
h. On the Schedule Backup: Review page, click Submit Job.
i. Click View Job to view the job’s progress.
j. Click the Backup link on the Execution page after the operation has completed.
a. From the Database Control home page, select Administration > Security > Users
b. Select HR1 from the list of usernames, and then click Delete.
c. Click Yes at the Confirmation window to delete with the CASCADE option.
d. From the Database Control home page, confirm that archiving is Enabled in the High Availability
region. Click the Enabled link, clear the ARCHIVELOG Mode checkbox, and click Apply. If the
Archiving field on the home page still shows Enabled, refresh the page.
e. On the Confirmation page, click the Yes button.
f. On the Restart Database: Specify Host and Target Database Credentials page, specify the host and
database credentials. Then click the OK button.
g. On the Restart Database: Confirmation page, click the Yes button.
h. After a while, click the Refresh button on the Restart Database: Activity Information page.
i. From your operating system terminal emulator window, remove the /home/oracle/tape directory.
1. Create a new locally managed tablespace called TBSFD containing only one 500 KB file. Also,
TBSFD should use Automatic Segment Space Management. Use either Database Control or
command line to create it. If this tablespace already exists in your database, drop it first,
including its files.
2. Create a new user called FD, identified by FD, having TBSFD as its default tablespace and TEMP
as its temporary tablespace. Make sure that user FD has the following roles granted: CONNECT,
RESOURCE, and DBA. If this user already exists on your system, drop it first.
a. From the Database Control home page:Administration > Users > Create
b. On the Create User page, specify the following fields: Name, Enter Password, Confirm
Password, Default Tablespace, Temporary Tablespace.
c. Click the Roles tab.
d. By default, CONNECT is already associated to the user.
e. Click the Modify button.
f. On the Modify Roles page, select both RESOURCE and DBA from the Available Roles list.
g. When done, click the Move link, and then the OK button.
h. After you are returned to the Create User page, click the OK button to create the FD user.
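The command-line equivalent of creating this user is a short sketch:
CREATE USER fd IDENTIFIED BY fd
  DEFAULT TABLESPACE tbsfd
  TEMPORARY TABLESPACE temp;
GRANT connect, resource, dba TO fd;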
3. Connect as user FD and execute the lab_13_01_03.sql script through SQL*Plus. This script
creates:
• Table EMP as a copy of HR.EMPLOYEES
• Table DEPT as a copy of HR.DEPARTMENTS
• The NOTHING trigger on EMP
• The EMP primary key
• The DEPT primary key
• The EMPFK constraint on EMP that references the DEPT primary key
• The EMPFKINDX index on EMPFK
• The EMPSALCONS check constraint on EMP
• The EMPIDMGRFK self-referencing constraint on EMP
• A materialized view log on EMP
4. Use Database Control to determine the available free space remaining on the TBSFD tablespace.
Connected as FD in SQL*Plus, list the segments and constraints created by user FD. In the report,
also include the size of each segment.
Answer: In addition to EMP, you should now see two indexes and the NOTHING trigger inside
the FD recycle bin. Note that the materialized view log is not in the recycle bin, which indicates
that it is now permanently lost.
6. Connect as user FD through SQL*Plus and determine the size of each free extent in the TBSFD
tablespace. What is your conclusion?
Answer: You should observe that you have four extents of eight blocks each. These extents
correspond to the EMP table, its associated two indexes, and the materialized view log.
BLOCKS
----------
8
8
8
8
SQL>
7. Although the EMP table has been dropped, it is still possible to query its content as long as it is
visible from the recycle bin. Query the content of the dropped EMP table using Database Control.
a. From Database Control: Still in the Recycle Bin page, click the View Content button for the
corresponding recycle bin row.
b. This brings you to the View Data for Table: FD.BIN$zImvyfFWP8ngNAgAINC/Yg==$0 page where
you can see the corresponding rows.
c. On this page you can refine the query by clicking the Refine Query button.
d. This allows you to select specific columns and define your WHERE clause.
e. Click the OK button.
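From SQL*Plus, you can query the same content directly by quoting the recycle-bin name (the name
shown above; it differs on every system):
SELECT COUNT(*) FROM "BIN$zImvyfFWP8ngNAgAINC/Yg==$0";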
Answer: You can see that renamed objects still belong to user FD. Also, the two referential
constraints defined on EMP are permanently lost, and all the other constraints that were defined
on EMP have been renamed as well.
OBJECT_NAME OBJECT_TYPE
-------------------------------------------------- -------------------
DEPT TABLE
BIN$08bRi60c5DDgMLmLciNQmQ==$0 TABLE
BIN$08bRi60b5DDgMLmLciNQmQ==$0 TRIGGER
DEPTPK INDEX
BIN$08bRi60a5DDgMLmLciNQmQ==$0 INDEX
BIN$08bRi60Z5DDgMLmLciNQmQ==$0 INDEX
6 rows selected.
CONSTRAINT_NAME C TABLE_NAME
------------------------------ - ------------------------------
BIN$08bRi60Y5DDgMLmLciNQmQ==$0 C BIN$08bRi60c5DDgMLmLciNQmQ==$0
SYS_C007682 C DEPT
BIN$08bRi60W5DDgMLmLciNQmQ==$0 C BIN$08bRi60c5DDgMLmLciNQmQ==$0
BIN$08bRi60V5DDgMLmLciNQmQ==$0 C BIN$08bRi60c5DDgMLmLciNQmQ==$0
BIN$08bRi60U5DDgMLmLciNQmQ==$0 C BIN$08bRi60c5DDgMLmLciNQmQ==$0
BIN$08bRi60T5DDgMLmLciNQmQ==$0 C BIN$08bRi60c5DDgMLmLciNQmQ==$0
DEPTPK P DEPT
BIN$08bRi60X5DDgMLmLciNQmQ==$0 P BIN$08bRi60c5DDgMLmLciNQmQ==$0
8 rows selected.
SQL>
a. Still in the Recycle Bin page, select the EMP recycle bin object, and click the Flashback Drop button.
b. This brings you to the Perform Recovery: Rename page where you can change the original name to
something different in case you created another EMP table after dropping the original one.
c. Leave the original name, and click the Next button.
d. On the Perform Recovery: Review page, click the Submit button.
e. On the Confirmation page, click the OK button.
f. This returns you to the Recycle Bin page, which is now empty.
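The SQL*Plus equivalent of this operation is the FLASHBACK TABLE statement, sketched here:
FLASHBACK TABLE emp TO BEFORE DROP;
-- or, if EMP has been re-created since the drop, restore under a new name:
FLASHBACK TABLE emp TO BEFORE DROP RENAME TO emp_restored;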
10. Connect as user FD through SQL*Plus, query the EMP table, and list the available free space in
tablespace TBSFD. What are your conclusions?
SQL>
SQL> select blocks from dba_free_space where tablespace_name='TBSFD';
BLOCKS
----------
8
SQL>
11. Connect as user FD through SQL*Plus and create a new table DEPT2 as a copy of
HR.DEPARTMENTS. Make sure that DEPT2 resides in TBSFD. When done, drop the table EMP
again, and create a new table EMP2 as a copy of HR.EMPLOYEES. Make sure that EMP2 is
stored in TBSFD. When done, try to flash back the dropped EMP table. What happens and why?
Table created.
Table dropped.
Table created.
SQL>
Answer: The flashback drop operation fails because the dropped EMP table is no longer in the
recycle bin. Because TBSFD is very small, creating DEPT2 and EMP2 forced the database to
reclaim the space occupied by the dropped table, and its recycle-bin entry was purged.
12. Using Database Control, drop the DEPT2 table and purge the corresponding entry in the FD
recycle bin.
13. Connected as SYSDBA through SQL*Plus, execute the lab_13_01_13.sql script to clean up
the environment.
-- Cleanup
connect / as sysdba
Unless specified otherwise, you should be logging in as SYSDBA through either SQL*Plus or
Database Control.
1. Connected as SYSDBA through SQL*Plus, execute the lab_13_02_01.sql script. This script
creates a new user called JFV identified by JFV, and also creates a new tablespace called
JFVTBS.
Tablespace created.
User created.
Grant succeeded.
SQL>
SQL>
SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 78
Current log sequence 80
SQL>
SQL> select flashback_on from v$database;
FLA
---
NO
SQL>
SQL> host ls -lr $ORACLE_BASE/flash_recovery_area/*
drwxr-x--- 2 oracle oinstall 4096 Feb 5 12:42 datafile
drwxr-x--- 3 oracle oinstall 4096 Feb 5 12:42 backupset
drwxr-x--- 3 oracle oinstall 4096 Feb 5 12:22 archivelog
a. From the Database Control home page, click the Maintenance tab.
b. On the Maintenance page, click the Configure Recovery Settings link.
c. On the Configure Recovery Settings page, select the ARCHIVELOG Mode checkbox and the Enable
flashback logging for fast database point-in-time recovery checkbox.
d. When done, click the Apply button.
e. On the Confirmation page, click the Yes button.
f. On the Restart Database: Specify Host and Target Database Credentials page, specify the host and
database credentials. Then click the OK button.
g. On the Restart Database: Confirmation page, click the Yes button.
h. After a while, click the Refresh button on the Restart Database: Activity Information page.
4. Using SQL*Plus, determine the list of processes associated with your instance. Then check that
your database is in ARCHIVELOG mode, and that it uses flashback logging. List the content of
your flash recovery area. What are your conclusions?
Answer: Because your database is now using flashback logging, you can see that the RVWR
process is started. Also, one file has already been created by the RVWR process.
FLA
---
YES
SQL>
5. Connected as user JFV under SQL*Plus, execute the lab_13_02_05.sql script. This script
creates a new table called EMP. It also selects the sum of all the salaries of the EMP table. Then
the script returns the current SCN of your database, and it looks at the contents of
V$UNDOSTAT, V$FLASHBACK_DATABASE_LOG, and V$FLASHBACK_DATABASE_STAT.
Write down the information provided by lab_13_02_05.sql.
Table created.
SUM(SALARY)
-----------
691400
SQL> -- scn2
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
518159
UNDOBLKS
----------
0
SQL>
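Judging from the step description and the output shown, lab_13_02_05.sql presumably runs
queries along these lines (a sketch, not the actual script):
SQL> select sum(salary) from emp;
SQL> select current_scn from v$database;
SQL> select sum(undoblks) from v$undostat;
SQL> select estimated_flashback_size from v$flashback_database_log;
SQL> select * from v$flashback_database_stat;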
6. Connected as user JFV under SQL*Plus, repeat the execution of the lab_13_02_06.sql
script three times. What are your conclusions?
Answer: Because the script modifies the same blocks, the overhead of flashback logging is less
during the second and third executions.
UNDOBLKS
----------
6250
108
187
SQL>
SQL> begin
2 for i in 1..10000 loop
3 update emp set salary=salary+1;
4 end loop;
5 commit;
6 end;
7 /
UNDOBLKS
----------
12429
108
187
SQL>
SQL> begin
2 for i in 1..10000 loop
3 update emp set salary=salary+1;
4 end loop;
5 commit;
6 end;
7 /
UNDOBLKS
----------
18604
108
187
7. Connected as user JFV under SQL*Plus, create a new tablespace called JFVTBS2. This
tablespace should have only one 500 KB data file. When done, disable flashback logging on
JFVTBS2. Then check that flashback logging is not enabled on JFVTBS2.
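A sketch of the commands this step implies (the data file path is the one shown later in this
practice, and in 10g the V$TABLESPACE view exposes the FLASHBACK_ON column queried here):
SQL> create tablespace jfvtbs2
  2  datafile '/u01/app/oracle/product/10.1.0/db_1/dbs/jfvtbs2.dbf' size 500K;
SQL> alter tablespace jfvtbs2 flashback off;
SQL> select name, flashback_on from v$tablespace;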
Tablespace created.
Tablespace altered.
NAME FLA
------------------------------ ---
SYSTEM YES
UNDOTBS1 YES
SYSAUX YES
USERS YES
TEMP YES
EXAMPLE YES
JFVTBS YES
JFVTBS2 NO
8 rows selected.
SQL>
Answer: Because the table blocks being modified are located in a tablespace that does not log
flashback data, you should not see a significant increase in the FLASHBACK_DATA statistic.
However, if you notice a large increase similar to the one shown in this solution, it is due to the
corresponding rollback data, which is still logged. If you do not see a significant increase, it is
probably because the same rollback segment blocks were reused.
Table created.
UNDOBLKS
----------
5
1396
18636
108
187
SQL> begin
2 for i in 1..10000 loop
3 update emp2 set salary=salary+1;
4 end loop;
5 commit;
6 end;
7 /
UNDOBLKS
----------
6172
1396
18636
108
SQL>
9. Connected as user JFV under SQL*Plus, execute the lab_13_02_09.sql script. Write down
the information returned by this script.
SUM(SALARY)
-----------
3901400
SQL> -- scn1
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
604083
SQL> commit;
Commit complete.
SUM(SALARY)
-----------
7802800
SQL>
Answer: It is not possible to flash back the database because one of its data files did not log
flashback data for its modifications.
SQL>
11. Using SQL*Plus, fix the problem, and redo step 10. When done, open your database in READ
ONLY mode, and check the result of your flashback database operation. Then shut down your
instance and start it up in MOUNT mode.
SQL> @sol_13_02_11.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> select name from v$datafile;
NAME
-----------------------------------------------------------------------------
7 rows selected.
SQL>
SQL> alter database
2 datafile '/u01/app/oracle/product/10.1.0/db_1/dbs/jfvtbs2.dbf' offline
3  for drop;
Database altered.
SQL>
SQL> -- scn1
SQL> flashback database to scn &scn;
Enter value for scn: 604083
old 1: flashback database to scn &scn
new 1: flashback database to scn 604083
Flashback complete.
SQL>
SQL> alter database open read only;
Database altered.
SQL>
SQL> select count(*) from jfv.emp;
COUNT(*)
----------
107
SQL>
SQL> select sum(salary) from jfv.emp;
SUM(SALARY)
-----------
3901400
SQL>
12. Connected as SYSDBA under SQL*Plus, flash back your database to the SCN returned in step 5.
Then open your database in READ WRITE mode, and check your database. What is your
conclusion?
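A minimal sketch of the commands this step implies (518159 is the SCN recorded in step 5;
opening the database read-write after a flashback requires RESETLOGS):
SQL> flashback database to scn 518159;
SQL> alter database open resetlogs;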
Answer: The JFVTBS2 tablespace is now gone, including its data file.
Flashback complete.
Database altered.
TABLESPACE_NAME
------------------------------
SYSTEM
UNDOTBS1
SYSAUX
TEMP
USERS
EXAMPLE
JFVTBS
7 rows selected.
COUNT(*)
----------
107
SUM(SALARY)
-----------
691400
SQL>
13. Still connected as SYSDBA under SQL*Plus, clean up your environment by doing the following:
• Drop the JFVTBS tablespace including its data file.
Tablespace dropped.
User dropped.
Database altered.
Database altered.
Database altered.
SQL>
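The transcript above suggests commands along these lines (a sketch; the three "Database
altered." messages are assumed to come from disabling flashback logging, switching to
NOARCHIVELOG mode, and reopening the database):
SQL> drop tablespace jfvtbs including contents and datafiles;
SQL> drop user jfv cascade;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database flashback off;
SQL> alter database noarchivelog;
SQL> alter database open;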
1. Execute the lab_14_01_01.sql script to create a new table that will be used to generate a
workload on your instance.
connect / as sysdba
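-- Assumption: the first argument is the snapshot retention in minutes
-- (10080 = 7 days) and the second is the snapshot interval; an interval
-- of 0 suspends automatic AWR snapshots while the workload runs.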
exec DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(10080,0);
begin
for i in 1 .. 10000 loop
insert into t_lfszadv values (i, 'a', 'a', 'a', NULL);
end loop;
commit;
end;
/
2. Use Database Control to shut down your instance, and start it up again using the
init_lfszadv.ora initialization parameter file located in your labs directory. Before doing
this, make sure that the init_lfszadv.ora parameter file can be used to start up your
instance.
a. From the Database Control home page, click the Shutdown button.
b. On the Startup/Shutdown: Specify Host and Target Database Credentials page, specify the necessary
credentials and make sure that you save them to disk.
c. Click the OK button.
d. On the Startup/Shutdown: Confirmation page click the Yes button.
e. After a while, click the Refresh button on the Startup/Shutdown: Activity Information page.
f. On the Database: orcl page, click the Startup button.
g. If necessary, specify all the required credentials on the Startup/Shutdown: Specify Host and Target
Database Credentials page, and then click the OK button.
h. On the Startup/Shutdown: Confirmation page, click the Advanced Options button.
i. On the Startup/Shutdown: Advanced Startup Options page, make sure that you select the Specify
parameter file (pfile) on the database server machine option button, and specify the location and name
of the parameter file you want to use. Then click the OK button.
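Equivalently, the instance can be restarted with the lab parameter file from SQL*Plus; a
sketch, with the labs directory path left as a placeholder:
SQL> shutdown immediate
SQL> startup pfile=<labs_directory>/init_lfszadv.ora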
3. Execute the lab_14_01_03.sql script. This script updates the previously defined
T_LFSZADV table. This is done to generate a workload on your instance.
begin
update t_lfszadv set
c5='1111111111111111111111111111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111111111111111111111111111111
1111111111111111111'
where mod(c1,1)=0;
commit;
end;
/
4. When done, determine the size advice for your redo log groups using Database Control.
a. From the Database Control home page, click the Administration tab.
b. On the Administration tab, click the Redo Log Groups link.
c. On the Redo Log Groups page, you can see that the size of each group is 10 MB.
d. Still on the Redo Log Groups page, select Sizing Advice in the Actions drop-down list.
e. Click the Go button.
f. In the Update Message region of the Redo Log Groups page, you should now see a recommended
optimal redo log file size of approximately 50 MB. Depending on background activity, the
recommendation may vary, but it should fall between 49 MB and 60 MB.
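The advice shown by Database Control can also be read directly from the instance; in 10g,
V$INSTANCE_RECOVERY exposes it in megabytes through the OPTIMAL_LOGFILE_SIZE column:
SQL> select optimal_logfile_size from v$instance_recovery;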
5. Implement the recommendation by adding two new redo log groups of 50 MB, and by dropping
the existing redo log groups.
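A hedged sketch of the implementation (group numbers 4 and 5 match the redo4.log and
redo5.log files removed by the cleanup script in step 8; a group can be dropped only while it
is neither CURRENT nor ACTIVE, hence the switch and checkpoint first):
SQL> alter database add logfile group 4
  2  ('/u01/app/oracle/oradata/orcl/redo4.log') size 50M;
SQL> alter database add logfile group 5
  2  ('/u01/app/oracle/oradata/orcl/redo5.log') size 50M;
SQL> alter system switch logfile;
SQL> alter system checkpoint;
SQL> alter database drop logfile group 1;
-- repeat the switch/checkpoint/drop sequence for each remaining old group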
exec DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(10080,0);
begin
for i in 1 .. 10000 loop
insert into t_lfszadv values (i, 'a', 'a', 'a', NULL);
end loop;
commit;
end;
/
a. From the Database Control home page, click the Shutdown button.
b. On the Startup/Shutdown: Specify Host and Target Database Credentials page, specify the necessary
credentials and make sure that you save them to disk.
c. Click the OK button.
d. On the Startup/Shutdown: Confirmation page click the Yes button.
e. After a while, click the Refresh button on the Startup/Shutdown: Activity Information page.
f. On the Database: orcl page, click the Startup button.
g. If necessary, specify all the required credentials on the Startup/Shutdown: Specify Host and Target
Database Credentials page, and then click the OK button.
h. On the Startup/Shutdown: Confirmation page, click the Advanced Options button.
i. On the Startup/Shutdown: Advanced Startup Options page, make sure that you select the Specify
parameter file (pfile) on the database server machine option button and specify the location and name
of the parameter file you want to use. Then click the OK button.
j. After you are returned to the Startup/Shutdown: Confirmation page, click the Yes button.
k. On the Login to Database: orcl page, specify your SYSDBA credentials, and then click the Login button.
Answer: After running the workload again, you should see size advice identical to the advice
provided in step 4. Because there may be some background activity, the recommendation can
differ slightly, but it should still fall between 49 MB and 60 MB.
begin
update t_lfszadv set
c5='1111111111111111111111111111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111111111111111111111111111111
1111111111111111111'
where mod(c1,1)=0;
commit;
end;
/
a. From the Database Control home page, click the Administration tab.
b. On the Administration tab, click the Redo Log Groups link.
c. On the Redo Log Groups page, you can see that the size of each group is 50 MB.
d. Still on the Redo Log Groups page, select Sizing Advice in the Actions drop-down list.
e. Click the Go button.
f. In the Update Message region of the Redo Log Groups page, you should now see a recommended
optimal redo log file size of approximately 50 MB. Depending on background activity, the
recommendation may vary, but it should fall between 49 MB and 60 MB.
8. To clean up the environment, log out from any session that you created so far and connect as
SYSDBA through SQL*Plus. Then execute the lab_14_01_08.sql script.
connect / as sysdba
exec DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(10080,30);
host rm /u01/app/oracle/oradata/orcl/redo4.log
host rm /u01/app/oracle/oradata/orcl/redo5.log
shutdown immediate;
startup;
1. Use DBCA to create the ASM instance on your machine. During the ASM instance creation,
DBCA asks you whether you want to change the default values for the ASM initialization
parameters. Make sure that the disk discovery string is set to /u02/asmdisks/*. Then DBCA
asks you to create new disk groups. Create one disk group called DGROUP1 that is using the
following four ASM disks:
• /u02/asmdisks/disk0
• /u02/asmdisks/disk1
• /u02/asmdisks/disk2
• /u02/asmdisks/disk3
Make sure to specify that DGROUP1 is using external redundancy. After the ASM instance and
the disk group are created, you can exit DBCA. Do not create a database.
a. $ dbca
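The disk group creation that DBCA performs corresponds to SQL along these lines (a sketch
you could run against the ASM instance once it is started):
SQL> create diskgroup dgroup1 external redundancy
  2  disk '/u02/asmdisks/disk0',
  3       '/u02/asmdisks/disk1',
  4       '/u02/asmdisks/disk2',
  5       '/u02/asmdisks/disk3';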
1. Connected as user oracle in your terminal emulator window, start your ASM instance and list
the processes associated with it. Then determine the characteristics of:
• The mounted disk groups
• The associated ASM disks
• The associated ASM files
$ ORACLE_SID=+ASM
$ export ORACLE_SID
$ echo $ORACLE_SID
+ASM
$ sqlplus / as sysdba
SQL> @sol_15_02_01.sql
SQL>
SQL> startup
ASM instance started
no rows selected
SQL>
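The characteristics asked for in this step are typically read from the ASM fixed views; a
sketch (with no files created yet, the V$ASM_FILE query returns no rows, as shown above):
SQL> select group_number, name, state, type, total_mb, free_mb
  2  from v$asm_diskgroup;
SQL> select group_number, disk_number, name, path from v$asm_disk;
SQL> select group_number, file_number, bytes, type from v$asm_file;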
2. Connected as SYSDBA under SQL*Plus in another terminal emulator window, determine the list
of disk groups that are visible from your database instance. Then list the processes associated with
your database instance. When done, create a new tablespace called TBSASM that is stored in
the ASM disk group DGROUP1 and that has only one 200 MB data file. Then determine the
list of processes associated with your database instance again, and list the data files associated
with your database. What do you observe?
Answer: As soon as the new tablespace is created, the ASM processes are started on the database
instance. They are used to communicate with the ASM instance.
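A sketch of the corresponding commands (the data file clause names only the disk group,
letting ASM generate the OMF-style file name shown in the listing below):
SQL> select name, state from v$asm_diskgroup;
SQL> create tablespace tbsasm datafile '+DGROUP1' size 200M;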
SQL>
SQL> host ps -ef | grep orcl
oracle 3240 1 0 Feb05 ? 00:00:01
/u01/app/oracle/product/10.1.0/p
oracle 18036 1 0 05:05 ? 00:00:00 ora_pmon_orcl
oracle 18038 1 0 05:05 ? 00:00:00 ora_mman_orcl
oracle 18040 1 0 05:05 ? 00:00:00 ora_dbw0_orcl
oracle 18042 1 0 05:05 ? 00:00:00 ora_lgwr_orcl
oracle 18044 1 0 05:05 ? 00:00:00 ora_ckpt_orcl
oracle 18046 1 0 05:05 ? 00:00:01 ora_smon_orcl
oracle 18048 1 0 05:05 ? 00:00:00 ora_reco_orcl
oracle 18050 1 0 05:05 ? 00:00:00 ora_cjq0_orcl
oracle 18052 1 0 05:05 ? 00:00:00 ora_d000_orcl
Tablespace created.
SQL>
SQL> col file_name format a46
SQL>
SQL> select file_name,tablespace_name
2 from dba_data_files;
FILE_NAME TABLESPACE_NAME
---------------------------------------------- ------------------------------
/u01/app/oracle/oradata/orcl/users01.dbf USERS
/u01/app/oracle/oradata/orcl/sysaux01.dbf SYSAUX
/u01/app/oracle/oradata/orcl/undotbs01.dbf UNDOTBS1
/u01/app/oracle/oradata/orcl/system01.dbf SYSTEM
/u01/app/oracle/oradata/orcl/example01.dbf EXAMPLE
+DGROUP1/orcl/datafile/tbsasm.256.1 TBSASM
6 rows selected.
SQL>
Answer: You can see that the disk activity is almost equally distributed across all ASM disks
during the tablespace creation. After the tablespace has been created, the free space is almost the
same on each ASM disk. This is because ASM tries to stripe ASM extents across all ASM disks.
When you then add a new ASM disk to the disk group, ASM automatically starts a rebalance
operation to redistribute some of the ASM extents to the new disk. In the end, each disk has the
same amount of free space. During the rebalance operation, disk4 had a lot of writes and few
reads, whereas the other disks had the same amount of reads and few writes.
Diskgroup altered.
SQL> /
SQL> /
no rows selected
SQL>
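The disk addition and the rebalance monitoring described in this answer look as follows (a
sketch; the disk4 path follows the pattern of the other ASM disks, and V$ASM_OPERATION
returns no rows once the rebalance completes, as shown above):
SQL> alter diskgroup dgroup1 add disk '/u02/asmdisks/disk4';
SQL> select group_number, operation, state, est_minutes
  2  from v$asm_operation;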
4. On your database instance, execute the lab_15_02_04.sql script. This script creates and
populates a new table called T, which is stored in TBSASM. When the script completes, turn on
timing in your SQL*Plus session and execute the following query:
SELECT count(distinct -
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID,'SMALLFILE'))
FROM t;
Table created.
1 row created.
SQL> commit;
Commit complete.
1 row created.
SQL> /
2 rows created.
4 rows created.
SQL> /
8 rows created.
.
output truncated
.
SQL> commit;
Commit complete.
SQL> commit;
Commit complete.
COUNT(DISTINCTDBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID,'SMALLFILE'))
---------------------------------------------------------------
1589
Elapsed: 00:01:39.78
SQL>
5. From your ASM instance, drop the ASM disk DGROUP1_0004 from DGROUP1.
Diskgroup altered.
SQL>
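The drop itself is a single command against the ASM instance (DGROUP1_0004 is the ASM disk
name given in the step, not an operating system path):
SQL> alter diskgroup dgroup1 drop disk DGROUP1_0004;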
Answer: When you drop an ASM disk, ASM automatically rebalances that disk's extents onto
the remaining disks. While this happens in the background, you can continue to execute queries
against the disk group without any interruption. The second execution of the query therefore
takes slightly longer than the first, but the impact is not noticeable.
COUNT(DISTINCTDBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID,'SMALLFILE'))
---------------------------------------------------------------
1589
Elapsed: 00:01:41.56
SQL> set timing off;
SQL>
7. Back in your ASM instance, check the impact on the ASM disk activity and free space. What are
your conclusions?
Answer: The same amount of free space is available on each ASM disk, and the main activity
on the ASM disks is the same amount of writes on each of them. The amount of reads is not
significant in this case.
SQL>
1. Connected as SYSDBA under SQL*Plus in your database instance, create a new tablespace called
TBSASMMIG. This tablespace should contain only one 10 MB file stored in your file system (not
using ASM). Create a table called T2 stored in TBSASMMIG. Insert one row into T2.
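A sketch of the commands this step implies (the data file path comes from the listing below,
and the single column C of T2 is inferred from the query result shown in step 2):
SQL> create tablespace tbsasmmig
  2  datafile '/u01/app/oracle/product/10.1.0/db_1/dbs/asmmig1.dbf' size 10M;
SQL> create table t2 (c number) tablespace tbsasmmig;
SQL> insert into t2 values (1);
SQL> commit;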
Tablespace created.
FILE_NAME TABLESPACE
---------------------------------------------------- ----------
/u01/app/oracle/oradata/orcl/users01.dbf USERS
/u01/app/oracle/oradata/orcl/sysaux01.dbf SYSAUX
/u01/app/oracle/oradata/orcl/undotbs01.dbf UNDOTBS1
/u01/app/oracle/oradata/orcl/system01.dbf SYSTEM
/u01/app/oracle/oradata/orcl/example01.dbf EXAMPLE
+DGROUP1/orcl/datafile/tbsasm.256.1 TBSASM
/u01/app/oracle/product/10.1.0/db_1/dbs/asmmig1.dbf TBSASMMIG
7 rows selected.
SQL>
Table created.
1 row created.
SQL> commit;
Commit complete.
SQL>
2. From your database instance, migrate TBSASMMIG to ASM storage. When done, check that the
migration was successful.
RMAN> exit
FILE_NAME TABLESPACE
---------------------------------------------------- ----------
/u01/app/oracle/oradata/orcl/users01.dbf USERS
/u01/app/oracle/oradata/orcl/sysaux01.dbf SYSAUX
/u01/app/oracle/oradata/orcl/undotbs01.dbf UNDOTBS1
/u01/app/oracle/oradata/orcl/system01.dbf SYSTEM
/u01/app/oracle/oradata/orcl/example01.dbf EXAMPLE
+DGROUP1/orcl/datafile/tbsasm.256.1 TBSASM
+DGROUP1/orcl/datafile/tbsasmmig.257.1 TBSASMMIG
7 rows selected.
C
----------
1
SQL>
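The migration hinted at by the RMAN> prompt above typically follows this pattern in 10g (a
hedged sketch, not the exact lab script):
SQL> alter tablespace tbsasmmig offline;
$ rman target /
RMAN> backup as copy tablespace tbsasmmig format '+DGROUP1';
RMAN> switch tablespace tbsasmmig to copy;
RMAN> exit
SQL> alter tablespace tbsasmmig online;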
SQL>
1 257 10493952
DATAFILE COARSE
SQL>
4. From your database instance, clean up your environment by dropping tablespace TBSASMMIG,
including its contents and data file. Do the same with tablespace TBSASM. Also, remove the file
system file that was originally created to store TBSASMMIG.
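A sketch of the cleanup commands (the file system path is the one shown in step 1):
SQL> drop tablespace tbsasmmig including contents and datafiles;
SQL> drop tablespace tbsasm including contents and datafiles;
SQL> host rm /u01/app/oracle/product/10.1.0/db_1/dbs/asmmig1.dbf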
Tablespace dropped.
Tablespace dropped.
SQL>
Unless specified otherwise, you should be logging in as SYSDBA either through Database Control or
SQL*Plus.
connect / as sysdba
connect vpd/vpd
commit;
2. Connect as user VPD through SQL*Plus and create a new package called
APP_SECURITY_CONTEXT. This package should contain only one procedure called
SET_EMPNO. The goal of the SET_EMPNO procedure is to assign to the EMPNO attribute of the
VPD_CONTEXT context the employee’s identifier corresponding to the connected user. Use the
procedure DBMS_SESSION.SET_CONTEXT to set the EMPNO attribute, and the
SYS_CONTEXT('USERENV','SESSION_USER') function to determine the name of the
connected user.
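A minimal sketch of the package this step asks for; how a database user maps to an employee
is defined by the lab setup, so the EMAIL lookup below is an assumption made for illustration:
CREATE OR REPLACE PACKAGE app_security_context IS
  PROCEDURE set_empno;
END app_security_context;
/
CREATE OR REPLACE PACKAGE BODY app_security_context IS
  PROCEDURE set_empno IS
    v_empno employees.employee_id%TYPE;
  BEGIN
    -- Assumption: the lab maps the session user to an employee row;
    -- the join column used here is illustrative only.
    SELECT employee_id INTO v_empno
      FROM employees
     WHERE email = SYS_CONTEXT('USERENV', 'SESSION_USER');
    DBMS_SESSION.SET_CONTEXT('vpd_context', 'empno', v_empno);
  END set_empno;
END app_security_context;
/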
connect vpd/vpd
connect vpd/vpd
connect vpd/vpd
5. Connect as user VPD through SQL*Plus and execute the lab_17_01_05.sql script. This
script creates a new package called VPD_SECURITY. This package contains one function called
EMPNO_SEC. The goal of this function is to return the VPD predicate used by your policy. In this
case the returned predicate is:
employee_id = SYS_CONTEXT('vpd_context', 'empno').
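The essential shape of the function is shown below (a sketch; per the answer later in this
practice, the real script also loops for a while to make each evaluation noticeably slow):
CREATE OR REPLACE PACKAGE vpd_security IS
  FUNCTION empno_sec(object_schema VARCHAR2, object_name VARCHAR2)
    RETURN VARCHAR2;
END vpd_security;
/
CREATE OR REPLACE PACKAGE BODY vpd_security IS
  FUNCTION empno_sec(object_schema VARCHAR2, object_name VARCHAR2)
    RETURN VARCHAR2 IS
  BEGIN
    -- A delay loop is omitted from this sketch.
    RETURN 'employee_id = SYS_CONTEXT(''vpd_context'', ''empno'')';
  END empno_sec;
END vpd_security;
/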
connect vpd/vpd
connect vpd/vpd
exec DBMS_RLS.ADD_POLICY( -
OBJECT_SCHEMA => 'vpd' ,-
OBJECT_NAME => 'employees' ,-
POLICY_NAME => 'vpd_policy' ,-
FUNCTION_SCHEMA => 'vpd' ,-
POLICY_FUNCTION => 'vpd_security.empno_sec',-
STATEMENT_TYPES => 'select' ,-
UPDATE_CHECK => false ,-
ENABLE => true ,-
STATIC_POLICY => false ,-
POLICY_TYPE => DBMS_RLS.DYNAMIC ,-
LONG_PREDICATE => false ,-
SEC_RELEVANT_COLS => 'SALARY,COMMISSION_PCT');
Answer: Each time you execute a statement that is not already parsed, the policy function is
evaluated, because the policy is dynamic. The policy function evaluation takes a long time in this
case simply because the EMPNO_SEC function loops for a while before returning the predicate.
Also, the last statement returns only one row, corresponding to the connected user, so it is clear
that the policy function is applied only in the last case.
connect jf/jf
Answer: The first two statements are already parsed in memory because of the previous step, so
the policy function is not re-evaluated for them. For the third statement, however, the function is
evaluated because that statement has never been executed. The last statement returns the salary of
the corresponding user; again, the policy function is applied only to the last statement because it
references the SALARY column.
connect mh/mh
9. Connect as user VPD and drop the VPD_POLICY policy, and re-create it with the exact same
characteristics except that it should now be a static policy instead of being dynamic. When done,
flush the shared pool of your instance. You can use the lab_17_01_09.sql script.
connect vpd/vpd
exec DBMS_RLS.DROP_POLICY( -
OBJECT_SCHEMA => 'vpd', -
OBJECT_NAME => 'employees', -
POLICY_NAME => 'vpd_policy');
exec DBMS_RLS.ADD_POLICY( -
OBJECT_SCHEMA => 'vpd' ,-
OBJECT_NAME => 'employees' ,-
POLICY_NAME => 'vpd_policy' ,-
FUNCTION_SCHEMA => 'vpd' ,-
POLICY_FUNCTION => 'vpd_security.empno_sec',-
STATEMENT_TYPES => 'select' ,-
UPDATE_CHECK => false ,-
ENABLE => true ,-
STATIC_POLICY => true ,-
POLICY_TYPE => NULL ,-
LONG_PREDICATE => false ,-
SEC_RELEVANT_COLS => 'SALARY,COMMISSION_PCT');
10. Connect as user JF through SQL*Plus and execute the following statements:
Answer: In this case, because the policy is declared to be static, the function is evaluated only
once.
connect jf/jf
11. Connect as SYSDBA and determine which statements are using the defined policy on your
instance. What are your conclusions?
Answer: This step confirms that only two statements were using the policy function. They are
the ones that reference the SALARY and COMMISSION_PCT columns. This can be verified by
using the V$VPD_POLICY view.
connect / as sysdba
select sql_text
from v$sql
where sql_id in (select sql_id from v$vpd_policy);
connect / as sysdba
1. Connect as SYSDBA and write a query with a single WHERE clause condition (using the
REGEXP_LIKE function) that asks for a search-string and then displays the view
definitions of all views with the name [DBA|USER|ALL]_search-string. Make sure your
query is case insensitive. You can use lab_18_01_01.sql.
When you have the solution, you can try these search-string alternatives: catalog,
constraints, clusters, data_files, db_links, extents, tablespaces, …
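One plausible shape of the query (a sketch; SET LONG ensures the LONG column TEXT displays
fully, and the 'i' match parameter makes the pattern case insensitive):
SET LONG 4000
SELECT view_name, text
  FROM dba_views
 WHERE REGEXP_LIKE(view_name, '^(DBA|USER|ALL)_&search_string$', 'i');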
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod
a. Use the REGEXP_INSTR function to alter this query to return the position of the fifth
word in this banner text. You can use lab_18_01_02a.sql.
BANNER WORD_5
---------------------------------------------------------------- ------
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod 32
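A sketch of the REGEXP_INSTR query; the pattern '[^ ]+' matches one word, and the fourth
argument asks for the fifth occurrence, whose starting position (32) matches the output above:
SELECT banner,
       REGEXP_INSTR(banner, '[^ ]+', 1, 5) AS word_5
  FROM v$version
 WHERE banner LIKE 'Oracle Database%';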
BANNER HIT_2
---------------------------------------------------------------- -----
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Prod 32
1. Connect to the HR schema, and create a table called NAMES, with first names using the following
statements:
create table names as
select first_name
from employees
where rownum <= 30;
update names
set first_name = lower(first_name)
where rownum <= 15;
FIRST_NAME
----------
Alberto
Britney
Bruce
Curtis
Daniel
Jennifer
John
Julia
Karen
Kelly
Kevin
Lex
Louise
Nanette
Pat
alexis
amit
3. By default, uppercase characters sort before lowercase characters. Using the ALTER SESSION
command, change NLS_SORT for your session to use case-insensitive binary sorting and repeat
the query from the previous step.
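In 10g, the BINARY_CI sort name provides case-insensitive binary sorting; a sketch:
SQL> alter session set nls_sort = BINARY_CI;
SQL> select first_name from names order by first_name;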
FIRST_NAME
----------
Alberto
alexis
amit
anthony
Britney
Bruce
Curtis
Daniel
david
david
elizabeth
ellen
gerald
harrison
hermann
Jennifer
John
Julia
Karen
Kelly
Kevin
laura
Lex
Louise
mozhe
Nanette
Pat
sarah
shelli
sundar
1. Start two sessions, one connected as SYSDBA and one connected as SH.
SQL>
2. From the SYSDBA session, determine the session ID (sid) and serial number (serial#) from
v$session for the SH user, and then describe the DBMS_MONITOR package. Then, from the
SYSDBA session, enable tracing using the sid and serial# values for the other session,
including the waits and bind information, with the following command:
execute dbms_monitor.session_trace_enable ( -
session_id => <sid> , -
serial_num => <serial#> , -
waits => true , -
binds => true ) ;
SID SERIAL#
---------- ----------
131 26696
SQL>
...
PROCEDURE SESSION_TRACE_ENABLE
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
SESSION_ID BINARY_INTEGER IN DEFAULT
SERIAL_NUM BINARY_INTEGER IN DEFAULT
WAITS BOOLEAN IN DEFAULT
BINDS BOOLEAN IN DEFAULT
SQL>
SQL>
3. From the SH session, execute the lab_18_03_03.sql script, and then exit your session.
SQL> exit
4. From the remaining SYSDBA session, determine your user_dump_dest location, locate the
trace file, and view the contents.
SQL> host
$ cd /u01/app/oracle/admin/orcl/udump
$ view orcl_ora_26997.trc
...
*** 2003-12-22 07:43:27.760
*** ACTION NAME:() 2003-12-22 07:43:27.759
*** MODULE NAME:(SQL*Plus) 2003-12-22 07:43:27.759
*** SERVICE NAME:(SYS$USERS) 2003-12-22 07:43:27.759
*** SESSION ID:(139.18972) 2003-12-22 07:43:27.759
PARSING IN CURSOR #1 len=259 dep=0 uid=61 oct=3 lid=61
tim=1046980281015229 hv=215424196 ad='57cd5ae8'
select c.cust_last_name
, t.calendar_year
, sum(s.amount_sold)
from sales s join
customers c using (cust_id) join
times t using (time_id)
group by c.cust_last_name, t.calendar_year
order by c.cust_last_name, t.calendar_year
END OF STMT
PARSE #1:c=20000,e=20382,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=1046980281015200
BINDS #1:
EXEC #1:c=0,e=198,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1046980281016198
WAIT #1: nam='SQL*Net message to client' ela= 11 p1=1650815232 p2=1 p3=0
WAIT #1: nam='db file sequential read' ela= 74 p1=5 p2=3715 p3=1
WAIT #1: nam='db file scattered read' ela= 10676 p1=5 p2=3716 p3=5
WAIT #1: nam='db file scattered read' ela= 15295 p1=5 p2=3769 p3=8
WAIT #1: nam='db file scattered read' ela= 2130 p1=5 p2=3778 p3=7
WAIT #1: nam='db file scattered read' ela= 328 p1=5 p2=3785 p3=8
...
WAIT #1: nam='direct path write temp' ela= 2 p1=201 p2=4687 p3=7
WAIT #1: nam='direct path write temp' ela= 2 p1=201 p2=4736 p3=7
WAIT #1: nam='direct path read temp' ela= 38 p1=201 p2=4745 p3=7
WAIT #1: nam='direct path read temp' ela= 23 p1=201 p2=4701 p3=7
...
FETCH #1:c=20330000,e=23361287,p=5085,cr=3232,cu=0,mis=0,r=1,dep=0,og=1,tim=1046980304385232
WAIT #1: nam='SQL*Net message from client' ela= 870 p1=1650815232 p2=1 p3=0
WAIT #1: nam='SQL*Net message to client' ela= 5 p1=1650815232 p2=1 p3=0
FETCH #1:c=0,e=158,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=1,tim=1046980304386861
...
*** 2003-12-22 07:44:05.697
WAIT #1: nam='SQL*Net message from client' ela= 11328652 p1=1650815232 p2=1 p3=0
XCTEND rlbk=0, rd_only=1
STAT #1 id=1 cnt=3026 pid=0 pos=1 obj=0
op='SORT GROUP BY (cr=3232 pr=5085 pw=1932 time=23370900 us)'
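To turn the raw trace into a readable report, the trace file can be run through tkprof; a
sketch using the trace file name from step 4 (the output file name is an assumption):
$ tkprof orcl_ora_26997.trc orcl_ora_26997.txt sys=no waits=yes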