Executive Overview.......................................................................................... 3
Introduction ....................................................................................................... 3
Database Migration to ASM overview ........................................................... 3
Cold Migration .............................................................................................. 3
Hot Migration................................................................................................ 4
Database Migration – Detailed Steps ............................................................. 5
Assumptions .................................................................................................. 5
Choosing the correct method ..................................................................... 6
Cold Migration.......................................................................................... 6
Hot Migration ........................................................................................... 6
Cold Migration to ASM ............................................................................... 8
1. Cold Migration - Pre Migration ......................................................... 8
2. Cold Migration - First Outage – Recovery Area moves to ASM.. 9
3. Cold Migration - Database Backup ................................................. 11
4. Cold Migration - Second Outage – Data Area moves to ASM... 12
5. Cold Migration - Post Migration ..................................................... 15
Hot Migration to ASM............................................................................... 16
1. Hot Migration - Pre Migration......................................................... 16
2. Hot Migration - The Switch ............................................................. 18
3. Hot Migration - Post Migration....................................................... 20
Conclusion........................................................................................................ 21
Appendix 1: Migrating ASM Disk Groups Back to Original Database
Storage............................................................................................................... 22
EXECUTIVE OVERVIEW
In Oracle Database 10g, storage management and provisioning for the database
have become much simpler with a new feature called Automatic Storage
Management (ASM). ASM provides file system and volume manager capabilities
built into the Oracle database kernel. With this capability, ASM simplifies storage
management tasks such as creating and laying out databases and managing disk
space. Because ASM allows disk management to be done using the familiar
create/alter/drop SQL statements, DBAs do not need to learn a new skill set or
make crucial decisions on provisioning.
This white paper describes two methods of migrating the Oracle database from its
current storage to ASM.
INTRODUCTION
With the introduction of Oracle Database 10g, Oracle now provides Automatic
Storage Management (ASM), which is optimized for Oracle files. You may decide to
migrate to ASM with a piecemeal approach by allocating new Oracle data files into
ASM as the database grows. However, to receive the full benefits of ASM, such as
the ability to add or remove storage from the database configuration with
automated rebalancing of the data files and without downtime, the entire database
should be migrated to ASM. This white paper introduces two methods to migrate
an existing database completely from legacy storage to ASM.
Cold Migration
The cold migration method is used when there is insufficient unallocated disk space
available to hold a full copy of the database. This method consists of two outage
phases: a first outage, during which the recovery area moves to ASM, and a second
outage, during which the data area moves to ASM.
Hot Migration
The hot migration method is used when there is sufficient unallocated disk space
that can be used for ASM. The amount of disk space required for hot migration
depends on the backup strategy and the amount of disk space used by disk-based
backups. The minimum amount of unallocated disk space required is equivalent to
at least the size of the database. This method consists of a preparation phase that
builds and configures the ASM storage while the database remains online followed
by a short outage phase that switches the database to the new ASM storage while
the database is offline.
Assumptions
The following assumptions have been made about the environment:
- The database area (init.ora:db_create_file_dest) for the database resides on the
  Unix file system /oradata, and is striped and mirrored over 4 individual disks.
- The recovery area (init.ora:db_recovery_file_dest) for the database resides on
  the Unix file system /flash_recovery, and is striped and mirrored over 4
  individual disks.
- The database has two redo log members per group, one on the /oradata file
  system and the other on the /flash_recovery file system.
- The database has two control files, one on the /oradata file system and the other
  on the /flash_recovery file system.
- There are 8 additional disks available for the hot migration.
The following picture represents the disk layout before and after the migration is
completed.
Note: For more information on ASM best practices and failure groups, please take
a look at the following links:
ASM on OTN
Oracle Database Administrator's Guide, Chapter 12
Note: The commands may differ from those described below depending upon the
operating system.
Note: The duration of the second outage is proportional to the time taken for the
database restore and recovery.
Cold Migration
Cold Migration is required when there is not enough disk space to make an ASM
disk group large enough to contain a copy of the database and recovery area files
during the course of the migration. This can be done using RMAN with two
methods: disk or tape.
Cold migration using disk relies on the ability to create a full RMAN backup in the
Flash Recovery Area to reduce the time taken for the database to be restored
during the second outage.
Cold migration using tape is the slowest method to migrate to ASM and requires
the use of tape devices to hold the backup of the database. The method described
below assumes that the default device type for backups has been set to tape using
the RMAN configure command.
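For reference, the default device type can be set to tape (SBT) with the RMAN
configure command as sketched below; the media-management channel settings are
installation specific and are not shown here:
RMAN> configure default device type to sbt;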
Hot Migration
Hot Migration is possible when there is sufficient disk space available to create an
ASM disk group to contain a duplicate of the database during the course of the
migration.
Hot Migration to new storage relies upon the additional allocation of disk space to
be available on a permanent basis. This method is ideal in the event that new
storage is being added to hold the database, and the current storage is remaining in
place to be used to hold disk backups.
Hot Migration with current storage utilizes the temporary presence of additional
storage. The method has additional steps that are documented in Appendix 1 of the
paper, and is used when the disk space is available for a limited amount of time.
The ultimate goal is to migrate the database back to the original storage system after
it is configured with ASM.
The ASM instance is created using a special init.ora file. A typical ASM init.ora file
is as follows:
*.instance_type='asm'
*.remote_login_passwordfile='SHARED'
*.large_pool_size=12M
*.asm_power_limit=10
*.background_dump_dest='/u01/app/oracle/admin/+ASM/bdump'
*.core_dump_dest='/u01/app/oracle/admin/+ASM/cdump'
*.user_dump_dest='/u01/app/oracle/admin/+ASM/udump'
*.asm_diskstring='/dev/raw/raw*'
Oracle defaults the ASM instance to an Oracle SID of '+ASM'.
It is considered best practice to create an spfile for the ASM Instance so that the
asm_diskgroups parameter is automatically updated when a new disk group is
created. With this parameter set, the proper ASM disk groups are mounted when
the ASM Instance is subsequently started.
$ export ORACLE_SID=+ASM
$ sqlplus "/ as sysdba"
Connected to an idle instance.
SQL> create spfile from pfile;
File created.
The ASM instance is started in NOMOUNT mode in the same way that any other
Oracle instance may be started.
SQL> startup nomount
During this phase, the database must be shut down so that the storage currently
used for the recovery area can be reformatted for use by ASM:
- Clear the old recovery area
- Prepare the disks for use by ASM
- Change the permissions on the disk device files
- Create the Recovery Area disk group
- Prepare the production database to use the ASM disk group
Remove the redo log members from the /flash_recovery file system. Query
v$logfile for all files residing in the /flash_recovery file system, and then drop
those redo log members. This should be completed for both online and standby
redo log files.
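A query along the following lines identifies the members to drop (the LIKE pattern
matches the assumed /flash_recovery layout); each member returned is then
dropped as shown below:
SQL> select group#, member from v$logfile
     where member like '/flash_recovery/%';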
SQL> alter database drop logfile member
'/flash_recovery/ORCL/onlinelog/o1_mf_1_0fpqygx6_.log';
Shut down the database; it will be remounted after the disk group is created.
RMAN> shutdown immediate;
This step varies depending on the operating system. For example, on Linux, the
device that the original file system was built on must be removed, the RAID device
must be stopped, and then the disks must have a RAW device created over the
block device.
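As an illustrative Linux sketch only (the RAID and block device names /dev/md0
and /dev/sdb1 are assumptions for this example, not values from the paper):
# umount /flash_recovery
# mdadm --stop /dev/md0
# raw /dev/raw/raw5 /dev/sdb1
The raw binding is repeated for each disk that will be given to ASM.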
The ASM Instance runs as the Oracle user and therefore the permissions of the
disk device files for all the disks that will be used by ASM should be changed so
that the ASM instance has write access.
# chown oracle:dba /dev/raw/raw[5-8]
# chmod 640 /dev/raw/raw[5-8]
Using the disk device files created previously, the ASM disk group for the recovery
area can be built. Assuming the recovery area is to be built on 4 disks with two
failure groups, the command would be:
SQL> create diskgroup RECOVERY_AREA normal redundancy
     failgroup controller1 disk '/dev/raw/raw5','/dev/raw/raw6'
     failgroup controller2 disk '/dev/raw/raw7','/dev/raw/raw8';
Diskgroup created.
With the creation of the ASM disk group, the Oracle database instance must be
instructed to use the ASM disk group for recovery related files.
The Oracle database instance must be mounted to allow the changes to proceed
SQL> startup mount
Change the db_recovery_file_dest parameter to point to the RECOVERY_AREA
disk group.
Note: When an ASM disk group is used in an Oracle database, the disk group name
is prefixed by a "+" sign.
SQL> alter system set db_recovery_file_dest='+RECOVERY_AREA'
scope=both;
Re-enable flashback database if required
SQL> alter database flashback on;
Re-establish the redo logfile members back into the database.
SQL> alter database add logfile member '+RECOVERY_AREA' to group 1;
Query v$controlfile to confirm the current control file location:
SQL> select name from v$controlfile;
NAME
--------------------------------------------------------------------
/oradata/ORCL/controlfile/o1_mf_0fpqyfw7_.ctl
SQL> shutdown
The database is available and the Oracle instance is using the original storage for
the database area and the ASM disk group (+RECOVERY_AREA) for flash recovery area
files. We can now prepare for the second phase of the migration.
- Enable optimized incremental backups
- Make the initial database backup to the ASM disk group
- Remove the redo log file member
- Create an incremental backup of the database to the ASM disk group
Oracle 10g introduced optimized incremental backups via the use of the block
change tracking file. If block change tracking has not been enabled previously on
the database, then it should be enabled for the duration of the ASM migration. The
use of the block change tracking file will reduce the time taken by the final
incremental backups.
SQL> alter database enable block change tracking;
Ensure that the directory structure exists in the new ASM Disk Group for the
control files.
SQL> alter database backup controlfile to '+RECOVERY_AREA';
Determine the value of the db_unique_name init.ora parameter.
Note: If the DB_UNIQUE_NAME is not set, then the DB_UNIQUE_NAME
defaults to the value of the DB_NAME parameter.
SQL> show parameter db_unique_name
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      ORCL
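The db_unique_name value determines the path used for files inside the disk group
(for example +RECOVERY_AREA/ORCL/CONTROLFILE). If that directory
structure does not yet exist, it can be created from the ASM instance; the ORCL
path below assumes the db_unique_name shown above:
SQL> alter diskgroup RECOVERY_AREA
     add directory '+RECOVERY_AREA/ORCL', '+RECOVERY_AREA/ORCL/CONTROLFILE';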
List the temporary files:
SQL> select bytes, name from v$tempfile;
     BYTES NAME
---------- ------------------------------------------------------------
  20971520 /oradata/ORCL/datafile/o1_mf_temp_0fpr0dbs_.tmp
Drop the temporary files:
SQL> alter database tempfile
'/oradata/ORCL/datafile/o1_mf_temp_0fpr0dbs_.tmp' drop;
We now need to shut down the database in preparation for the creation of the
ASM DATA_AREA disk group.
SQL> shutdown immediate
Unmount the Data Area file system
# umount /oradata
Repeat "Prepare the disks for use by ASM" and "Change the permissions on the
disk device file" on page 10 for the new disks that will be added to the Data
Area disk group.
Create the Data Area disk group. Assuming that the data area is to be built
on the following 4 disks, with two failure groups:
SQL> create diskgroup DATA_AREA normal redundancy
     failgroup controller1 disk '/dev/raw/raw1','/dev/raw/raw2'
     failgroup controller2 disk '/dev/raw/raw3','/dev/raw/raw4';
Diskgroup created.
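The backup taken earlier can now be restored into the new disk group. A hedged
RMAN sketch of the restore step (datafile 1 is shown as an example; a SET
NEWNAME is issued for each datafile in the database):
RMAN> run {
        set newname for datafile 1 to '+DATA_AREA';
        # ...one set newname per datafile...
        restore database;
        switch datafile all;
        recover database;
      }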
Migrate the control file to its permanent locations on both ASM disk groups
At present, there is only one redo log file member per group that resides in the
ASM RECOVERY_AREA Disk Group.
Re-establish the second redo log member on the +DATA_AREA diskgroup for
all online and standby redo log groups.
SQL> alter database add logfile member '+DATA_AREA' to group 1;
All that remains is to validate that all files have been moved to the ASM disk
groups. We can now query the database and ensure that all files reside in either the
DATA_AREA or RECOVERY_AREA ASM disk group:
SQL> select name from v$datafile
union
select name from v$tempfile
union
select member from v$logfile
union
select name from v$controlfile
union
select filename from v$block_change_tracking
union
select name from v$flashback_database_logfile;
NAME
--------------------------------------------------------------------
+DATA_AREA/orcl/changetracking/ctf.262.1
+DATA_AREA/orcl/controlfile/mycontrol.ctl
+DATA_AREA/orcl/datafile/sysaux.260.1
+DATA_AREA/orcl/datafile/system.258.1
+DATA_AREA/orcl/datafile/undotbs1.259.1
+DATA_AREA/orcl/datafile/users.261.1
+DATA_AREA/orcl/onlinelog/group_1.264.1
+DATA_AREA/orcl/onlinelog/group_2.265.1
+DATA_AREA/orcl/onlinelog/group_3.266.1
+DATA_AREA/orcl/tempfile/temp.263.1
+RECOVERY_AREA/orcl/controlfile/mycontrol.ctl
+RECOVERY_AREA/orcl/flashback/log_1.256.1
+RECOVERY_AREA/orcl/onlinelog/group_1.257.1
+RECOVERY_AREA/orcl/onlinelog/group_2.258.1
+RECOVERY_AREA/orcl/onlinelog/group_3.259.
During this phase of the migration, there is no outage to the primary database:
- Prepare the ASM instance
- Create the Data Area disk group
- Create the Recovery Area disk group
- Prepare the production database for ASM disk group usage
- Migrate the current RMAN backups to the recovery area
- Make the initial copy of the Oracle data files
- Migrate the Oracle redo log and standby redo log files to ASM disk groups
- Migrate the tempfiles to ASM disk groups
- Refresh the previous copy of the Oracle data files
Repeat “Prepare the disks for use by ASM” through “Start the ASM Instance”
commencing on page 10 for the new disks that will be added to the Data
Area Disk Group.
Create the Data Area disk group using the same create diskgroup syntax as in the
cold migration.
Diskgroup created.
Assuming that the recovery area is to be built on the following 4 disks (raw13
through raw16 here), with two failure groups:
SQL> create diskgroup RECOVERY_AREA normal redundancy
     failgroup controller1 disk '/dev/raw/raw13','/dev/raw/raw14'
     failgroup controller2 disk '/dev/raw/raw15','/dev/raw/raw16';
Diskgroup created.
The next phase is to configure the production database to use ASM disk groups
for all new data files as well as all recovery area usage.
Change the db_create_file_dest init.ora parameter to point to the
DATA_AREA diskgroup.
SQL> alter system set db_create_file_dest='+DATA_AREA' scope=both;
Change the db_recovery_file_dest init.ora parameter to point to the
RECOVERY_AREA diskgroup.
SQL> alter system set db_recovery_file_dest='+RECOVERY_AREA'
scope=both;
This phase will migrate all the current RMAN backups in the recovery area to the
ASM Disk Group.
Move current backup sets to the ASM disk groups
RMAN> backup backupset all delete input;
Move current data file copies to the ASM disk groups
RMAN> backup as copy datafilecopy all delete input;
Move current archive log files
RMAN> backup as copy archivelog all delete input;
If database block change tracking has been enabled previously, the file must be
recreated in the ASM disk groups.
Note: The block change tracking file cannot be moved to the ASM disk group,
which means that all level 1 backups taken after the block change tracking file has
been recreated will not be able to exploit the block change tracking file.
SQL> alter database disable block change tracking;
Database altered.
SQL> alter database enable block change tracking;
Database altered.
If database block change tracking has not been enabled previously on the
database, then it must be enabled for the duration of the ASM migration.
Note: In order to exploit the block change tracking file, a new level 0 backup must
be taken.
SQL> alter database enable block change tracking;
Database altered.
This phase will make copies of all the Oracle data files into the DATA_AREA
ASM disk group.
Using RMAN, back up the database using the 'AS COPY' syntax:
RMAN> backup device type disk incremental level 0 as copy tag
'ASM_Migration' database format '+DATA_AREA';
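The later refresh of this copy can reuse the same tag; a hedged sketch of the
incremental refresh step, assuming the ASM_Migration tag from the copy above:
RMAN> backup device type disk incremental level 1
      for recover of copy with tag 'ASM_Migration' database;
RMAN> recover copy of database with tag 'ASM_Migration';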
Migrate the Oracle Redo Log and Standby Redo Log files to ASM Disk Groups
This phase will move the Oracle redo log files and Oracle standby redo log files to
the ASM disk groups. How the new redo log files are added depends upon the …
Query v$logfile to list the current redo log members:
SQL> select member from v$logfile;
MEMBER
--------------------------------------------------------------------
/oradata/ORCL/onlinelog/o1_mf_1_0fs38tdh_.log
/flash_recovery/ORCL/onlinelog/o1_mf_1_0fs38tyq_.log
/oradata/ORCL/onlinelog/o1_mf_2_0fs38vmw_.log
/flash_recovery/ORCL/onlinelog/o1_mf_2_0fs393bj_.log
/oradata/ORCL/onlinelog/o1_mf_3_0fs3942r_.log
/flash_recovery/ORCL/onlinelog/o1_mf_3_0fs39c12_.log
For each redo log group:
Note: Since you cannot drop a member from the current logfile group, you must
switch logs at least once.
Drop one of the two current redo log members.
SQL> alter database drop logfile member
'/flash_recovery/ORCL/onlinelog/o1_mf_1_0fs38tyq_.log';
Add the two new redo log members.
SQL> alter database add logfile member
'+DATA_AREA','+RECOVERY_AREA' to group 1;
And finally drop the other original redo log member.
Note: Before a logfile member can be dropped, a new member must have been
initialized, so each logfile group …
SQL> alter database drop logfile member
'/oradata/ORCL/onlinelog/o1_mf_1_0fs38tdh_.log';
List the current temporary files:
SQL> select bytes, name from v$tempfile;
     BYTES NAME
---------- ------------------------------------------------------------
  20971520 /oradata/ORCL/datafile/o1_mf_temp_0fs3bq8w_.tmp
Add the new temporary file:
SQL> alter tablespace temp add tempfile size 20m;
Then remove the original temporary file
SQL> alter database tempfile
'/oradata/ORCL/datafile/o1_mf_temp_0fs3bq8w_.tmp' drop;
This is the start of the outage phase, which should be kept as short as possible.
Prepare the control files in the ASM disk groups.
We must first ensure that the directory structure exists in the new ASM disk
groups for the control files.
SQL> alter database backup controlfile to '+DATA_AREA';
SQL> alter database backup controlfile to '+RECOVERY_AREA';
We must now determine the value of the db_unique_name init.ora parameter.
Note: If the DB_UNIQUE_NAME is not set, then the DB_UNIQUE_NAME
defaults to the value of the DB_NAME parameter.
SQL> show parameter db_name
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      ORCL
SQL> show parameter db_unique_name
Query v$controlfile to identify the current control files:
SQL> select name from v$controlfile;
NAME
--------------------------------------------------------------------
/oradata/ORCL/controlfile/o1_mf_0fs38sx3_.ctl
/flash_recovery/ORCL/controlfile/o1_mf_0fs38t2w_.ctl
We must now disable and re-enable Flashback Database so that the flashback log
files are recreated in the ASM recovery area disk group.
RMAN> sql "alter database flashback off";
RMAN> sql "alter database flashback on";
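The switch of the database onto the ASM datafile copies is typically driven from
RMAN with the database mounted; a hedged sketch, assuming the datafile copies
tagged ASM_Migration have been refreshed with the final incremental backup:
RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> switch database to copy;
RMAN> recover database;
RMAN> alter database open;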
All that remains is to remove the block change-tracking file and to validate that all
files have been moved to the ASM Disk Groups
If Block Change Tracking was enabled for the purpose of the migration, then
this should now be disabled
SQL> alter database disable block change tracking;
We can now query the database and ensure that all files reside in either the
DATA_AREA or RECOVERY_AREA ASM Disk Group
SQL> select name from v$controlfile
union
select name from v$datafile
union
select name from v$tempfile
union
select member from v$logfile
union
select filename from v$block_change_tracking
union
select name from v$flashback_database_logfile;
NAME
--------------------------------------------------------------------
+DATA_AREA/orcl/changetracking/ctf.256.1
+DATA_AREA/orcl/controlfile/mycontrol.ctl
+DATA_AREA/orcl/datafile/sysaux.259.1
+DATA_AREA/orcl/datafile/system.257.1
+DATA_AREA/orcl/datafile/undotbs1.258.1
+DATA_AREA/orcl/datafile/users.260.1
+DATA_AREA/orcl/onlinelog/group_1.263.1
+DATA_AREA/orcl/onlinelog/group_2.264.1
+DATA_AREA/orcl/onlinelog/group_3.265.1
+DATA_AREA/orcl/tempfile/temp.266.1
+RECOVERY_AREA/orcl/controlfile/mycontrol.ctl
+RECOVERY_AREA/orcl/flashback/log_1.276.1
+RECOVERY_AREA/orcl/onlinelog/group_1.265.1
+RECOVERY_AREA/orcl/onlinelog/group_2.266.1
+RECOVERY_AREA/orcl/onlinelog/group_3.267.1
Query v$asm_disk to verify the failure group membership of the disks:
SQL> select failgroup, name from v$asm_disk;
FAILGROUP                     NAME
----------------------------- ------------------------------
CONTROLLER2 RECOVERY_AREA_0003
CONTROLLER2 RECOVERY_AREA_0002
CONTROLLER1 RECOVERY_AREA_0001
CONTROLLER1 RECOVERY_AREA_0000
CONTROLLER2 DATA_AREA_0003
CONTROLLER2 DATA_AREA_0002
CONTROLLER1 DATA_AREA_0001
CONTROLLER1 DATA_AREA_0000
Now we can modify the disk group, remove the temporary storage and add the
original storage back into the ASM Disk Group. Before the disks can be
added into the appropriate disk group, the original storage must be formatted
for use by ASM.
SQL> alter diskgroup data_area
     drop disk data_area_0000, data_area_0001, data_area_0002, data_area_0003
     add failgroup controller1 disk '/dev/raw/raw1','/dev/raw/raw2'
     failgroup controller2 disk '/dev/raw/raw3','/dev/raw/raw4';
Diskgroup altered.
To check the status of the rebalance operation, which occurs in the background,
query the v$asm_operation view:
SQL> select * from v$asm_operation;
Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.
Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
www.oracle.com