For systems running Solaris 11 Express, follow MOS note 1431284.1 to upgrade to
Solaris 11. After upgrading to Solaris 11, apply Solaris 11 SRU 7 or later.
Note: All references to ORACLE_HOME in this procedure are to the RDBMS ORACLE_HOME directory (usually /u01/app/oracle/product/11.2.0/dbhome_1) except where specifically noted. All references to GI_HOME should be replaced with the ORACLE_HOME directory for the Grid Infrastructure (GI).
By convention, the dollar sign ($) prompt signifies a command run as the oracle user (or Oracle software owner account) and the hash (#) prompt signifies a command run as root. This is further clarified by prefixing the $ or # with (oracle)$ or (root)#.
If running Solaris 11, the system must be running Solaris 11 SRU 07 or later. Additionally, the libfuse package must be installed. Presence of the libfuse package can be verified with "pkg list libfuse" (it should return one line).
To verify the SRU currently on the system, run "pkg info entire | grep SRU" as root; the output will reference the SRU. The SRU version delivered with each Exadata release may be found in note 888828.1. If the system is running SRU 06 or earlier, it must be updated before the libfuse package can be installed. If the system is running SRU 07 or later, skip to the next step to install libfuse.
After reviewing note 1021281.1 to configure repository access, run: pkg update
The system applies the latest package updates, creates a new boot environment, and sets it as the default. To confirm, run: beadm list. An "R" appears next to the boot environment that will be active upon reboot, and an "N" marks the boot environment that is active now. At this stage, these two flags should appear on different lines until you reboot the system.
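The beadm comparison above can be sketched in shell. The output lines below are illustrative only (the real Active column may show other flag combinations, e.g. "NR" when no reboot is pending); on the server, run beadm list and inspect the Active column the same way.

```shell
# Sketch: decide whether a reboot is pending by comparing the boot environment
# that is active now (flag N) with the one active on reboot (flag R).
# "sample" stands in for `beadm list` output and is an assumption, not real data.
sample='solaris     N      /      4.2G  static  2023-01-01
solaris-1   R      -      6.1G  static  2023-02-01'
now=$(echo "$sample" | awk '$2 ~ /N/ {print $1}')    # active now
next=$(echo "$sample" | awk '$2 ~ /R/ {print $1}')   # active on reboot
if [ "$now" != "$next" ]; then
  echo "reboot pending: active now=$now, active on reboot=$next"
fi
```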
Reboot the server to have it boot to the updated SRU environment.
If running Solaris 11, ensure that the libfuse package is installed by running "pkg info libfuse" at the prompt. If it returns no rows or an error, follow the steps below to install libfuse.
After reviewing note 1021281.1 to configure repository access, run this command
to install libfuse: pkg install libfuse
Confirm that it installed by running: pkg verify libfuse
The pkg verify command should have no output if successful.
In the procedures listed in this note, both Solaris and Linux database servers are assumed to have user equivalence for the root and DBFS repository database (typically "oracle") users. Each of those users is assumed to have a dbs_group file in their $HOME directory that contains a list of cluster hostnames. The dcli utility is assumed to be available on both Solaris and Linux database nodes.
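A minimal sketch of preparing the dbs_group file follows. The hostnames and the /tmp path are examples; the procedures in this note expect the file at $HOME/dbs_group.

```shell
# Build a dbs_group file listing one cluster hostname per line.
# dbnode01/dbnode02 are example names, not taken from this note.
printf '%s\n' dbnode01 dbnode02 > /tmp/dbs_group
nodes=$(wc -l < /tmp/dbs_group | tr -d ' ')
# Sanity check of user equivalence (requires dcli and ssh key setup):
#   dcli -g /tmp/dbs_group -l oracle hostname
```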
When non-root commands are shown, it is assumed that proper environment variables for ORACLE_SID and ORACLE_HOME have been set and that the PATH has been modified to include $ORACLE_HOME/bin. These settings may be applied automatically by the oraenv script on Linux or Solaris systems.
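For reference, a sketch of the environment that oraenv establishes is shown below. The SID and home path are examples; substitute your instance name and RDBMS ORACLE_HOME.

```shell
# Manual equivalent of the environment oraenv sets up (values are examples).
ORACLE_SID=fsdb1
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID ORACLE_HOME PATH
```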
For Linux database servers, there are several steps to perform as root; Solaris database servers do not require them and can skip ahead. First, add the oracle user to the fuse group on Linux. Run this command as the root user.
(root)# dcli -g ~/dbs_group -l root usermod -a -G fuse oracle
Create the /etc/fuse.conf file with the user_allow_other option, and ensure proper privileges are applied to this file.
(root)# dcli -g ~/dbs_group -l root "echo user_allow_other > /etc/fuse.conf"
(root)# dcli -g ~/dbs_group -l root chmod 644 /etc/fuse.conf
For Solaris database servers, to enable easier debugging and troubleshooting, it is suggested to add a line to the /etc/user_attr file giving the oracle user the ability to mount filesystems directly. As root, run this on a database server:
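The exact command is not reproduced here. As an illustrative sketch only (the entry format and the privilege name are assumptions based on the user_attr(4) layout, not taken from this note), a grant of the sys_mount privilege could look like the following, exercised against a scratch copy rather than the real /etc/user_attr:

```shell
# Hypothetical user_attr entry granting the oracle user sys_mount; verify the
# exact line against this note's attachment / platform docs before editing
# /etc/user_attr. This sketch only touches a temp copy.
entry='oracle::::type=normal;defaultpriv=basic,priv_sys_mount'
f=/tmp/user_attr.example
touch "$f"
grep -q '^oracle:' "$f" || echo "$entry" >> "$f"
```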
Create a tablespace to hold the DBFS filesystem. The example below uses autoextend on, allocating an additional 8 GB to the tablespace as needed. You should size your tablespace according to your expected DBFS utilization. A bigfile tablespace is used in this example for convenience, but smallfile tablespaces may be used as well.
SQL> create bigfile tablespace dbfsts datafile '+DBFS_DG' size 32g autoextend on next 8g maxsize 300g NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;
SQL> create user dbfs_user identified by dbfs_passwd default tablespace dbfsts quota unlimited on dbfsts;
SQL> grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;
With the user created and privileges granted, create the database objects that will hold DBFS.
(oracle)$ cd $ORACLE_HOME/rdbms/admin
(oracle)$ sqlplus dbfs_user/dbfs_passwd
SQL> start dbfs_create_filesystem dbfsts FS1
This script takes two arguments:
dbfsts: tablespace for the DBFS database objects
FS1: filesystem name; this can be any string and will appear as a directory under the mount point
For more information about these arguments, see the DBFS documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e18294/adlob_client.htm
Check the output of the dbfs_create_filesystem script for errors.
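The error check can be sketched as below. The log file name is an example, and the "No errors." line is a stand-in for a clean run of the script (the assumption being that a clean run contains no ORA- or SP2- messages).

```shell
# Sketch: capture the script output to a log and scan it for errors.
log=/tmp/dbfs_create.log
# Real run (not executed here):
#   sqlplus dbfs_user/dbfs_passwd @dbfs_create_filesystem dbfsts FS1 > "$log"
printf '%s\n' 'No errors.' > "$log"     # stand-in for a clean run's output
if grep -Eq 'ORA-|SP2-' "$log"; then
  status=errors                         # investigate before proceeding
else
  status=clean
fi
```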
Perform the one-time setup steps for mounting the filesystem. The mount-dbfs.sh script attached to this note provides the logic and necessary scripting to mount DBFS as a cluster resource. The one-time setup steps required for each of the two mount methods (dbfs_client or mount) are outlined below. There are two options for mounting the DBFS filesystem, and each results in the filesystem being available at /dbfs_direct. Choose one of the two options.
The first option is to utilize the dbfs_client command directly, without using an Oracle Wallet. There are no additional setup steps required to use this option.
The second option is to use the Oracle Wallet to store the password and make use of the mount command. The wallet directory ($HOME/dbfs/wallet in the example here) may be any oracle-writable directory (creating a new, empty directory is recommended). All commands in this section should be run by the oracle user unless otherwise noted.
On Linux DB nodes, set the library path on all nodes using the commands that follow (substitute the proper RDBMS ORACLE_HOMEs):
(root)# dcli -g dbs_group -l root mkdir -p /usr/local/lib
(root)# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/11.2.0/dbhome_1/lib/libnnz11.so /usr/local/lib/libnnz11.so
(root)# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so.11.1 /usr/local/lib/libclntsh.so.11.1
(root)# dcli -g dbs_group -l root ln -s /lib64/libfuse.so.2 /usr/local/lib/libfuse.so.2
(root)# dcli -g dbs_group -l root 'echo /usr/local/lib >> /etc/ld.so.conf.d/usr_local_lib.conf'
(root)# dcli -g dbs_group -l root ldconfig
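A quick way to verify the links resolve to the intended home is sketched below against a temp directory (the real links live in /usr/local/lib; paths are the examples used above). On the real system you could additionally run ldconfig -p | grep libnnz11 to confirm the cache entry.

```shell
# Sketch: create one of the symlinks as root would, then confirm it resolves.
home=/u01/app/oracle/product/11.2.0/dbhome_1   # example RDBMS home
mkdir -p /tmp/locallib                         # stand-in for /usr/local/lib
ln -sf "$home/lib/libnnz11.so" /tmp/locallib/libnnz11.so
target=$(readlink /tmp/locallib/libnnz11.so)   # should print the library path
```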
Create a new TNS_ADMIN directory ($HOME/dbfs/tnsadmin) for exclusive use by the
DBFS mount script.
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
-type local_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
SCRIPT_TIMEOUT=300"
##### end script add-dbfs-resource.sh
Then run this as the Grid Infrastructure owner (typically oracle) on one database server only:
(oracle)$ sh ./add-dbfs-resource.sh
When successful, this command has no output.
It is not necessary to restart the database resource at this point; however, you should review the following note regarding restarting the database now that the dependencies have been added.
Note: After creating the $RESNAME resource, in order to stop the $DBNAME database when the $RESNAME resource is ONLINE, you will have to specify the force flag when using srvctl. For example: "srvctl stop database -d fsdb -f". If you do not specify the -f flag, you will receive an error like this:
(oracle)$ srvctl stop database -d fsdb
PRCD-1124 : Failed to stop database fsdb and its services
PRCR-1065 : Failed to stop resource (((((NAME STARTS_WITH ora.fsdb.) && (NAME ENDS_WITH .svc)) && (TYPE == ora.service.type)) && ((STATE != OFFLINE) || (TARGET != OFFLINE))) || (((NAME == ora.fsdb.db) && (TYPE == ora.database.type)) && (STATE != OFFLINE)))
CRS-2529: Unable to act on 'ora.fsdb.db' because that would require stopping or relocating 'dbfs_mount', but the force option was not specified
Using the -f flag allows a successful shutdown and results in no output.
Also note that once the $RESNAME resource is started and the database it depends on is then shut down as shown above (with the -f flag), the database will remain down. However, if Clusterware is then stopped and started, because the $RESNAME resource still has a target state of ONLINE, it will cause the database to be started automatically when normally it would have remained down. To remedy this, ensure that $RESNAME is taken offline (crsctl stop resource $RESNAME) at the same time the DBFS database is shut down.
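The suggested shutdown order can be sketched as a short helper. The resource and database names are the examples used in this note; the helper only prints the sequence (drop the echo to actually run the commands).

```shell
# Sketch: take DBFS offline everywhere, then stop the repository database.
stop_dbfs_stack() {
  echo "crsctl stop resource dbfs_mount"   # unmount DBFS on all nodes first
  echo "srvctl stop database -d fsdb -f"   # -f required due to the dependency
}
plan=$(stop_dbfs_stack)
```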
Managing DBFS mounting via Oracle Clusterware
After the resource is created, you should be able to see the dbfs_mount resource
by running crsctl stat res dbfs_mount and it should show OFFLINE on all nodes.
For example:
(oracle)$ <GI_HOME>/bin/crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_mount
               OFFLINE OFFLINE      dscbac05
               OFFLINE OFFLINE      dscbac06
To bring dbfs_mount online, run crsctl start resource dbfs_mount from any cluster node; this mounts the filesystem on all nodes. For example:
(oracle)$ <GI_HOME>/bin/crsctl start resource dbfs_mount
CRS-2672: Attempting to start 'dbfs_mount' on 'dscbac05'
CRS-2672: Attempting to start 'dbfs_mount' on 'dscbac06'
CRS-2676: Start of 'dbfs_mount' on 'dscbac06' succeeded
CRS-2676: Start of 'dbfs_mount' on 'dscbac05' succeeded
(oracle)$ <GI_HOME>/bin/crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_mount
               ONLINE  ONLINE       dscbac05
               ONLINE  ONLINE       dscbac06
Once the dbfs_mount Clusterware resource is online, you should be able to observe the mount point with df -h on each node. Also, the default startup for this resource is "restore", which means that if it is online before Clusterware is stopped, it will attempt to come online after Clusterware is restarted. For example:
(oracle)$ df -h /dbfs_direct
Filesystem            Size  Used Avail Use% Mounted on
dbfs                  1.5M   40K  1.4M   3% /dbfs_direct
To unmount DBFS on all nodes, run this as the oracle user:
(oracle)$ <GI_HOME>/bin/crsctl stop res dbfs_mount
Steps to Perform If Grid Home or Database Home Changes
There are several cases where the ORACLE_HOMEs used in the management or mounting of DBFS may change. The most common case is when performing an out-of-place upgrade or doing out-of-place patching by cloning an ORACLE_HOME. When the Grid Infrastructure ORACLE_HOME or RDBMS ORACLE_HOME changes, a few changes are required. The items that require changing are:
Modifications to the mount-dbfs.sh script. This is also a good time to consider
updating to the latest version of the script attached to this note.
If using the wallet-based mount on Linux hosts, the shared libraries must be reset.
For example, if the new RDBMS ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/dbhome_1 *AND* the wallet-based mounting method using /etc/fstab is chosen, then the following commands will be required as the root user. If the default method (using dbfs_client directly) is used, these steps may be skipped.
(root)# dcli -l root -g ~/dbs_group rm -f /usr/local/lib/libnnz11.so /usr/local/lib/libclntsh.so.11.1
(root)# dcli -l root -g ~/dbs_group "cd /usr/local/lib; ln -sf /u01/app/oracle/product/11.2.0.2/dbhome_1/lib/libnnz11.so"
(root)# dcli -l root -g ~/dbs_group "cd /usr/local/lib; ln -sf /u01/app/oracle/product/11.2.0.2/dbhome_1/lib/libclntsh.so.11.1"
(root)# dcli -l root -g ~/dbs_group ldconfig
(root)# dcli -l root -g ~/dbs_group rm -f /sbin/mount.dbfs ### remove this, new deployments don't use it any longer
For all deployments, the mount-dbfs.sh script must be located in the new Grid Infrastructure ORACLE_HOME (<GI_HOME>/crs/script/mount-dbfs.sh). When the ORACLE_HOMEs change, the latest mount-dbfs.sh script should be downloaded from this note's attachments and deployed using steps 14-16 detailed earlier in this note. Since the custom resource is already registered, it does not need to be registered again.
With the new script deployed into the correct location in the new ORACLE_HOME, the next step is to modify the cluster resource to change the location of the mount-dbfs.sh script. Also, if not already configured, take the opportunity to change RESTART_ATTEMPTS to 10. Run these commands from any cluster node (replace <NEW_GI_HOME> with the full path appropriately):
(oracle)$ crsctl modify resource dbfs_mount -attr "ACTION_SCRIPT=<NEW_GI_HOME>/crs/script/mount-dbfs.sh"
(oracle)$ crsctl modify resource dbfs_mount -attr "RESTART_ATTEMPTS=10"
After these changes are complete, verify that the status of the resources is still online. This concludes the changes required when the ORACLE_HOMEs change.
Removing DBFS configuration
The steps in this section deconfigure the components configured by the steps above; they remove only the parts that were configured by this procedure.
Stop the dbfs_mount resource in Clusterware using the oracle account.
(oracle)$ <GI_HOME>/bin/crsctl stop resource dbfs_mount
CRS-2673: Attempting to stop 'dbfs_mount' on 'dadzab06'
CRS-2673: Attempting to stop 'dbfs_mount' on 'dadzab05'
CRS-2677: Stop of 'dbfs_mount' on 'dadzab05' succeeded
CRS-2677: Stop of 'dbfs_mount' on 'dadzab06' succeeded
Confirm that the resource is stopped, and then remove the Clusterware resource for dbfs_mount as the oracle (or Grid Infrastructure owner) user.
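A sketch of the removal sequence follows. The GI_HOME path is an example; the sketch only prints the commands (drop the echo to execute). crsctl delete resource will typically refuse to act while the resource is still ONLINE, which is why it is stopped first.

```shell
# Sketch: stop, then delete, the dbfs_mount resource.
GI_HOME=${GI_HOME:-/u01/app/11.2.0/grid}   # example Grid Infrastructure home
for cmd in "stop resource dbfs_mount" "delete resource dbfs_mount"; do
  echo "$GI_HOME/bin/crsctl $cmd"
done
```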
On Solaris hosts, if you want to inspect the arguments that dbfs_client was invoked with, identify the process ID using "ps -ef | grep dbfs_client", then use the "pargs <PID>" command to see the complete options. The Solaris ps output truncates the command line at 80 characters, which is typically not enough to display all options.
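The lookup can be sketched as below. pargs is Solaris-only, so the sketch guards against the case where no dbfs_client process is running.

```shell
# Sketch: locate a running dbfs_client and print its full argument list.
pid=$(pgrep -f dbfs_client | head -n 1 || true)
if [ -n "$pid" ]; then
  pargs "$pid"          # Solaris: prints each argument untruncated
  found=yes
else
  echo "no dbfs_client process found"
  found=no
fi
```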
If you receive an error saying "File system already present at specified mount point <mountpoint>", ensure that the mount point directory is empty. Any contents in the mount point directory will prevent the filesystem mount from succeeding. Seasoned system administrators will note that this behavior differs from typical filesystem mounts, where a mount point directory can have contents that remain hidden (overlaid) while the mounted filesystem remains mounted. With fuse-mounted filesystems, the mount point directory must be empty prior to mounting the fuse (in this case, DBFS) filesystem.
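A pre-mount check for this condition can be sketched as follows. The directory is an example stand-in for the /dbfs_direct mount point used in this note.

```shell
# Sketch: FUSE requires an empty mount point; verify before mounting.
mp=/tmp/dbfs_direct.example      # stand-in for /dbfs_direct
mkdir -p "$mp"
if [ -n "$(ls -A "$mp")" ]; then
  state=not-empty                # move the contents aside before mounting
else
  state=empty                    # safe to mount
fi
```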