
Oracle Optimized Solution for Backup and Recovery
Zero to Sun ZFS Storage Appliance Backup and Recovery in 60 Minutes
ORACLE HANDS ON LAB
ORACLE OPEN WORLD 2016

Table of Contents

Lab Overview
Environment Start Up
Sun ZFS Storage Appliance setup process
    Initial Setup for Sun ZFS Storage Appliance Simulator
    Start all three Virtual Machines
How to login into the VMs
Preparation Work
    Create Project
    Mount your share on Solaris and write some files
How to setup replication, on demand, scheduled and continuous
    Adding a replication target
    Adding a SHARE to replicate
Switch directions of replication with continuous replication share
    Breaking the continuous replication
    Promote the shares on the remote system ZBA-02
    Move files on the promoted share
    Switch directions of replication back to ZBA-01
    Check new content after switching directions
How to replicate an iSCSI LUN on a schedule
    Configuring iSCSI devices on the host and ZFS appliance
    Adding an iSCSI LUN
    Adding the iSCSI LUN to the host
    Replicate the iSCSI LUN
Save ZFS Appliance configuration files
    Create NFS share to replicate from ZBA-01 to ZBA-02
    Replicate NFS share config to ZBA-02
Configure NDMP on the ZFS appliance
    Preparation Work for NDMP Backup share
How to backup file systems to ZFS disk with NDMP via Oracle Secure Backup
    Create Backup Dataset
    Perform a Backup
    Destroy some files to restore
    Perform the OSB filesystem Restore
How to backup volumes (iSCSI or FC LUN) to ZFS
    Create Backup Dataset
    Create a LUN with files
    Perform a backup
    Delete Data on the LUN
    Perform a LUN Data Restore

Lab Overview
We want to show backup to the ZFS Storage Appliance, replication to another datacenter, and NDMP backup to ZFS Storage Appliance disk.
This lab will take you through 3 basic steps:
1. We will protect data to disk using replication and backup.
2. Then we will damage or change the data.
3. Finally we will recover the data from disk using the remote replica and the backup.

The lab is not meant to be exhaustive for any single product. Along the way, for those 3 steps, we have tried to let you get your hands dirty with the primary products used for backup and recovery: Oracle Secure Backup, NDMP, replication and the ZFS Storage Appliance.

During this lab you will use 3 VirtualBox VMs: HOL1773-ZBA-01, HOL1773-ZBA-02 and HOL1773-OSB-01.

ZBA-01 is your primary source disk using the ZFS Storage Appliance.
ZBA-02 is your secondary remote target disk using the ZFS Storage Appliance.
OSB-01 is our Oracle Secure Backup machine using NDMP to back up to ZFS Storage Appliance disk.

VM NAME          SHORT NAME   LONG NAME         IP ADDRESS
HOL1773-OSB-01   solaris      solaris-node-1    192.168.56.60
HOL1773-ZFS-01   zba-01       zba-01.hol.com    192.168.56.70
HOL1773-ZFS-02   zba-02       zba-02.hol.com    192.168.56.80

The user account logins are:

root / welcome1 for the ZFS Storage Appliance
oracle / welcome1 for the Solaris VM user oracle
admin / oracle for the OSB login (obtool)


Environment Start Up
The environment may or may not already be running when you sit down to do the lab. The
following steps show how to start the virtual machines if they are not already running. If
any other virtual machines are running that are not shown below, please close them
for the best possible hands-on lab experience. The three virtual machines used in the
lab are shown in the following diagram.
Sun ZFS Storage Appliance setup process
Initial Setup for Sun ZFS Storage Appliance Simulator
These steps were performed for you before the lab, or can be done on your own if you
are building the lab later on.

First you have to download the Sun ZFS Storage Appliance Simulator and import it into
VirtualBox. The exact download instructions change from time to time and are therefore
not covered in this hands-on lab. The simulator comes with good documentation on how
to download and import the appliance into VirtualBox, which can be found by searching
for Sun ZFS Simulator on Oracle.com.
Start all three Virtual Machines
If not done already, start the virtual machines HOL1773-ZFS-01 and HOL1773-ZFS-02
first and HOL1773-OSB-01 last.
Step 1. Start the VirtualBox Manager.
Step 2. Highlight HOL1773-ZFS-01 and click the start arrow.
Step 3. Highlight HOL1773-ZFS-02 and click the start arrow.
Step 4. Highlight HOL1773-OSB-01 and click the start arrow.
Step 5. Be patient, this might take a few minutes; wait until you see three login prompts.

Start the ZFS VMs, but do not log in on the HOL1773-ZFS-01 and HOL1773-ZFS-02
consoles; leave them as they are.
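If you prefer to start the VMs from the host's command line instead of the VirtualBox Manager, a minimal sketch is shown below, assuming the VBoxManage tool is on your PATH and the VM names match those listed above:
$ VBoxManage startvm "HOL1773-ZFS-01" --type gui
$ VBoxManage startvm "HOL1773-ZFS-02" --type gui
$ VBoxManage startvm "HOL1773-OSB-01" --type gui
$ VBoxManage list runningvms     # confirm all three VMs are running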


How to login into the VMs

Step 1. Go to the Solaris console of OSB-01 and login as:
user oracle
password welcome1
Step 2. Start the Firefox browser within this VM. It should open an empty page with the
pre-defined shortcut links in the toolbar, as the picture below shows.

From this point forward you will configure the appliance via the BUI in Firefox. You will
not need the command line interface any more.
Click on ZBA-01. Your browser may complain about a non-secure site or unsupported
browser; just continue. The ZFS Storage Appliance is already preconfigured and ready for use.
Login here with:
User: root
Password: welcome1
One important thing to remember: press the RIGHT-CTRL key to release the mouse and
keyboard and get out of the VMs.


Preparation Work
Create Project

Step 1. Click on Shares.
Step 2. Click on Projects (left side of the screen).
Step 3. Click on the + in front of ALL.
Step 4. Enter Project-A and press Apply.

NOTE:
We have now successfully created a new Project named Project-A. We are going to
create a share within this Project in the next steps.

Create Share

Step 1. Click on the project named Project-A; you should see this screen:

Step 2. Click on the + in front of Filesystems.
Step 3. Enter the share name MyShare01 in the Name field.
Step 4. Click on Apply.
Step 5. You will now see the share named MyShare01, as in the picture above.


NOTE:
We have finished the preparation work and are ready to set up replication in the
next chapter.
Mount your share on Solaris and write some files

When prompted for the sudo password, use welcome1.

Step 1. Go to your Solaris console.
Step 2. Start a terminal session (see top bar).
Step 3. Mount the share locally by typing the following:
$ sudo mount 192.168.56.70:/export/MyShare01 /mnt
Password: (welcome1)
NOTE: The first mount might take a short while because of the VirtualBox image.
Step 4. Write some files to the share by typing the following:
$ sudo cp files/file-* /mnt
Step 5. Look at the files by typing the following:
$ sudo ls -l /mnt
You should see some files here.
Step 6. Keep the share mounted on /mnt
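As an optional sanity check, you can confirm that the appliance is exporting the share and that the mount is live. A minimal sketch, assuming the standard Solaris NFS client utilities are present in the OSB-01 VM:
$ /usr/sbin/showmount -e 192.168.56.70     # list the exports published by ZBA-01
$ df -h /mnt                               # confirm the NFS share is mounted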


How to setup replication, on demand, scheduled and continuous


NOTE:
We use replication between ZBA-01 and ZBA-02 to demonstrate a site failover. A
site failover of the ZFS Storage Appliance can be part of a disaster-recovery plan. Our
goal is to replicate a SHARE named MyShare01 from ZBA-01 to ZBA-02. Then we'll
power off ZBA-01 and recover the share and the LUN from ZBA-02. Later,
when ZBA-01 is available again, we will reverse the replication and the share and LUN
will be restored.
Adding a replication target

Step 1. Go to https://192.168.56.70:215 main menu (ZBA-01).
(When needed, accept the Firefox certificate.)
Step 2. Click on Configuration.
Step 3. Click on Remote Replication and switch it to ON by clicking the ON/OFF button
(it should be green and online).
Step 4. Click on the + in front of Targets.
The following screen should pop up:

192.168.56.80 = the second appliance, zba-02

Step 5. Fill in the specifics above (the password is welcome1) and press the Add button.
If everything went well, the following screen should be presented:

Adding a SHARE to replicate

Step 1. Stay on (ZBA-01)


Step 2. Click on Shares (top bar)
Step 3. Click on Projects and then Project-A (left side)
Step 4. Click on Replication


Step 5. Click on the + in front of Actions.

The following screen should pop up:

NOTE:
Notice that the Target and Pool are already filled in automatically.
There are some options here; feel free to play around with the possibilities such as
On-Demand, Scheduled or Continuous. By default the replication stream will be compressed.
Step 6. Click on Continuous and press the ADD button.
You will see that the Target is green and the Status is Continuous, which is good:


NOTE:
You have successfully set up a continuously replicated SHARE. To see it work,
go to the second appliance, ZBA-02, via the 2nd tab within Firefox:
Step 1. Go to https://192.168.56.80:215 main menu (ZBA-02).
Step 2. Click on Shares (top view).
Step 3. Click on Projects (left side).
Step 4. Click on All and then on zba-01:Project-A (left side).
Step 5. Click on Replication (screen middle).
Here you see the project that is replicated from the remote appliance to this unit.

Step 6. Another way to see it: click on Status (top view).
Step 7. Click on Replication (left side); it should be green and running.
Step 8. Click on Sources (right side).

Any problems will be reported here.

NOTE:
We are now ready for the disaster, which we will create on the next page.


Switch directions of replication with continuous replication share


NOTE:
This part of the lab will take you through the following steps:
1. We create a disaster at site A and continue on site B.
2. We break the continuous replication.
3. We promote the shares on the remote system ZBA-02.
4. We change some files on the promoted share.
5. We switch the direction of replication back to ZBA-01.
6. We check the new content after switching directions.
Breaking the continuous replication

Shut down ZBA-01 by pressing the following button on ZBA-01:

In this way we simulate a ZFS appliance or site disaster.
In any case the unit is no longer available, and we are going to
promote the remote site as the new primary.
Step 1. Login to ZBA-01 and press the shutdown button,
continue with the POWER-OFF selection and press OK.
You should see this in your browser:

Step 2. Notice that the NFS share MyShare01 is no longer working.
Step 3. Start a terminal session (see top bar).
Step 4. Try to list the files in /mnt by typing the following:
$ sudo ls -ltr /mnt

You will see that the session hangs, which confirms the share is unavailable.
Step 5. Use CTRL-C to stop the command.
Step 6. Remove the mount of the broken ZFS share by typing:
$ sudo umount -f /mnt
NOTE: We are now going to promote the second appliance with the replicated share.


Promote the shares on the remote system ZBA-02

Step 1. Go to https://192.168.56.80:215 main menu (ZBA-02).
Step 2. Click on Shares.
Step 3. Click on zba-01:Project-A.
Step 4. Click on Replication.
On the right side you will see 5 options:
1. Import update from external media
2. Enable/Disable replication
3. Clone most recently received project snapshot
4. Sever (split) the replication connection
5. Reverse the direction of replication

Step 5. Click on Reverse Direction of Replication, the last of the five options.
Step 6. You are asked for a new name, which will be used as the new project name.

Step 7. Accept the warning by pressing OK.

You should see that the project name has changed to DR-Project-A.

Step 8. Mount the filesystem on the new primary ZBA-02.
Step 9. Start a new terminal session (see top bar).
Step 10. Mount the share and list the files in /mnt by typing the following (if prompted for a
password, use welcome1):
$ sudo mount 192.168.56.80:/export/MyShare01 /mnt
$ sudo ls -l /mnt


Move files on the promoted share

Step 1. Go to the /mnt mount point and move some files, so we can see the changes
later when we are back on ZBA-01.
$ sudo su
$ cd /mnt
$ mkdir updated
$ mv file-* updated
$ du -a
$ exit
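Before you unmount and reverse the replication, you can optionally confirm the new layout of the share using only standard Solaris commands:
$ find /mnt -type f        # all files should now live under /mnt/updated
$ ls -l /mnt/updated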
Switch directions of replication back to ZBA-01

Step 1. Unmount the filesystem. Do this nice and clean:
$ sudo umount /mnt
Step 2. Start the ZBA-01 VM again within VirtualBox by pressing START.
Step 3. Watch the console until you see the login prompt; this takes a few minutes.
Step 4. When it is available again, go to https://192.168.56.70:215 main menu (ZBA-01)
and login with root again.
Step 5. Click on Shares (top view).
Step 6. Click on Projects (left side) and look around.
(We could even mount the shares again, but then we would lose the changes.)
NOTE:
You can see here that this unit is not aware of the promoted replicated share and has made
its shares available to mount again, but the data was updated on the remote device, so
this local data is old. We are therefore going to replicate the data back from the remote site
to the original primary site.
Step 7. Go to https://192.168.56.80:215 main menu (ZBA-02).
Step 8. Click on Shares.
Step 9. Click on All Projects (left side).
Step 10. Click on DR-Project-A.
Step 11. Click on Replication.
Here you see the following:


NOTE:
After the role reversal, the replication is automatically set to manual mode. The mode
is not automatically changed to continuous, since the primary site may be in an
unavailable state.
Step 12. Click on the two arrows in front of Sync Now.
Wait until it says Synced.
Then you'll see the following:

Step 13. Go to https://192.168.56.70:215 main menu (ZBA-01).
Step 14. Click on Shares.
Step 15. Click on All Projects.

You will now see that the old Project-A has lost the share, which has moved to DR-Project-A.
Step 16. Click on zba-02:DR-Project-A.
Step 17. Click on Replication.
Step 18. Click on Reverse Direction of Replication.
Step 19. You are asked for a new name; use Project-A-new. Press OK.
NOTE:
After the role reversal, the replication is again automatically set to manual mode. The mode
is not automatically changed to continuous.


You might notice some errors in this virtual ZFS Storage VM that cause Firefox to stop
working. If this happens, please restart Firefox at this point.
(TIP: hard kill it with # pkill firefox from a terminal.)
Check new content after switching directions.

Step 1. Start a terminal session (see top bar).
Step 2. Mount the share on /mnt by typing the following:
$ sudo mount 192.168.56.70:/export/MyShare01 /mnt
$ sudo ls -l /mnt
You should now see the directory with the updated folder in it.

Step 3. Unmount the share, as we do not need it anymore:
$ sudo umount /mnt

NOTE:
Congratulations. You have successfully switched replication directions. We have
shown this with only a single share, but you can imagine that this works the same way
for all shares and LUNs. Working with projects is therefore important.


How to replicate an iSCSI LUN on a schedule

Configuring iSCSI devices on the host and ZFS appliance
NOTE:
This is preparation work we have already done to save some lab time. We followed this
procedure, which contains a few steps:
http://www.oracle.com/technetwork/server-storage/sun-unifiedstorage/documentation/iscsi-quickstart-v1-2-051512-1641594.pdf
Adding an iSCSI LUN

Step 1. Go to https://192.168.56.70:215 main menu (ZBA-01).
Step 2. Click on Shares.
Step 3. Click on Projects (left side).
Step 4. Click on the + in front of ALL.
Step 5. Enter the name solaris-servers and press Apply.
Step 6. Click on the project solaris-servers.
Step 7. Click on LUNs.
Step 8. Click on the + in front of LUNs.
Step 9. Enter the following specifics and press Apply (case sensitive):

NOTE: You have successfully added an iSCSI LUN.


Adding the iSCSI LUN to the host

Step 1. Go back to your Solaris prompt.
Step 2. Start a terminal session (see top bar).
Step 3. First add the iSCSI discovery address:
$ sudo iscsiadm add discovery-address 192.168.56.70
$ sudo iscsiadm modify discovery -t enable
$ sudo iscsiadm list discovery-address
Discovery Address: 192.168.56.70:3260
Step 4. Initiate access to the iSCSI LUN:
$ sudo devfsadm -i iscsi -v
Step 5. Check the results:
$ sudo tail -100 /var/adm/messages | grep -i scsi
Jul 10 16:25:35 solaris-node-1 genunix: [ID 408114 kern.info]
/scsi_vhci/disk@g600144f0c86a791a0000559ff0c10001 (sd2) online
NOTE:
In this example, the multipath status is shown as degraded because IP multipathing has
not been configured. In a production environment, IP multipathing would typically be
configured. The disk device is now available just like an internal server disk.
Step 6. Format the disk:
NOTE:
We don't need to format the disk because this is done automatically when we create
a ZFS zpool from the disk. However, the format command is handy for showing the disk
name within Solaris, and we need the disk name when we create the zpool.
$ echo | sudo format
You should see one more device now:
AVAILABLE DISK SELECTIONS:
0. c0t600144F0C86A791A0000???????????d0 <SUN-Sun Storage 7000-1.0 cyl 1022 alt 2 hd 64 sec 32>
/scsi_vhci/disk@g600144f0c86a791a0000559ff0c10001
1. c1t0d0 <ATA-VBOX HARDDISK-1.0-10.18GB>
/pci@0,0/pci8086,2829@d/disk@0,0
Step 7. Please copy the greyed disk name above (every system gets its own name):
c0t600144F0C86A791A0000???????????d0
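If you prefer not to copy the name by hand, you can capture it in a shell variable. This is only a hypothetical convenience, assuming the ZFSSA LUN is the only disk whose name starts with c0t600144F0:
$ DISK=$(echo | sudo format 2>/dev/null | awk '$2 ~ /^c0t600144F0/ {print $2; exit}')
$ echo $DISK
You could then pass $DISK to the zpool create command in the next step instead of pasting the name.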


Step 8. Create the zpool:
$ sudo zpool create iscsi-demo c0t600144F0C86A791A0000???????????d0
NOTE: paste the disk name from Step 7.

Step 9. Create a ZFS filesystem:
$ sudo zfs create iscsi-demo/Files
If you have trouble getting this created, you can run the script ./create.zpool, which we
have pre-defined with the right commands.
Step 10. Check the filesystem:
$ df | grep -i iscsi
/iscsi-demo        (iscsi-demo      ): 1998599 blocks 1998599 files
/iscsi-demo/Files  (iscsi-demo/Files): 1998599 blocks 1998599 files
$ sudo zpool status -v iscsi-demo
$ sudo zpool list
NOTE:
It is not necessary to format the iSCSI disk; this is done automatically by ZFS when the disk
is added to the zpool. Also, as the df command shows, the iSCSI-backed filesystems are
mounted automatically.
Step 11. Put some files on the filesystem:
$ sudo su
$ cd /iscsi-demo/Files
$ tar xvf /install/lundata2.tar .
$ exit
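At this point the LUN-backed pool should be populated. As an optional check, using only the standard Solaris ZFS utilities:
$ sudo zfs list -r iscsi-demo      # pool and Files filesystem with space used
$ du -sh /iscsi-demo/Files         # rough size of the extracted test data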


Replicate the iSCSI LUN

Step 1. Go to https://192.168.56.70:215 main menu (ZBA-01).
Step 2. Click on Shares.
Step 3. Click on All Projects and then the project solaris-servers.
Step 4. Click on Replication.
Step 5. Click on the + in front of Actions.
Step 6. Select the defaults plus Scheduled, Half Hour, at 00 minutes after.

Step 7. Press the Add button.

Step 8. Start the replication by pressing the Sync arrows.

Step 9. On ZBA-02, under Shares, Projects (left side), Replica, zba-01:solaris-servers,
LUNs, you should be able to see that the LUN MySolarisLun01 has been created remotely.
NOTE:
If you need to restore this LUN, the same steps are taken as in the earlier section Switch
directions of replication back to ZBA-01. We do not do this now because of the
limited time we have for this lab. If you have some time left over at the end, we
challenge you to perform the reverse replication for MySolarisLun01.


Backup the ZFSSA configuration, save it to an NFS share on the array and replicate it to the
second array
Save ZFS Appliance configuration files

Step 1. Go to https://192.168.56.70:215 main menu (ZBA-01).
Step 2. Click on Maintenance.
Step 3. Click on System.
Step 4. Click on BACKUP.
Step 5. Enter Configuration backup in the Comment line and press COMMIT.
Step 6. In the following screen you will see the backup line Configuration backup; when
your mouse moves over VERSION at the end of the line you'll see this picture, click on it.
Step 7. Save the config file locally.
NOTE:
The created backup ONLY contains configuration data of the ZFS Storage Appliance
and NOT the actual data that is on the shares and/or LUNs. You can keep the backup of the
configuration in any safe place you want. We save it on the other ZFS Storage
Appliance using replication.
Create NFS share to replicate from ZBA-01 to ZBA-02

Step 1. Stay on (ZBA-01).
Step 2. Click on Shares.
Step 3. Click on Default (left side).
Step 4. Click on the + in front of Filesystems.
Step 5. Enter the name config, keep the rest at defaults, and Apply.
Step 6. Start a terminal session (see top bar).
Step 7. Mount the share:
$ sudo mount 192.168.56.70:/export/config /mnt
Step 8. Copy the downloaded configuration file to this share:
$ sudo cp /export/home/oracle/Downloads/zba-01*.conf /mnt
$ sudo ls -ltr /mnt
Step 9. Unmount the share:
$ sudo umount /mnt


Replicate NFS share config to ZBA-02

Step 1. Go to https://192.168.56.70:215 main menu (ZBA-01).
Step 2. Click on Shares.
Step 3. Click on the project Default.
Step 4. Click on edit for the config filesystem.
Step 5. Click on Replication.

Step 6. Uncheck Inherit from project and Apply.
Step 7. Click on the + in front of Actions.
Step 8. Select the defaults plus Scheduled, Monthly, on the 1st at 00:10.
Read the warning, and understand what it says.
Step 9. Press the update now button.
Step 10. On ZBA-02, under Shares, Projects, Replica, you should be able to see that the
project Default from zba-01 has been created remotely.
NOTE:
In real life, our advice is to place all configuration-related information on this replicated
share, so you are always able to find out what the configuration is on the remote
side.
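To make this a repeatable habit, the copy from the previous section could be wrapped in a few commands that keep date-stamped copies. This is only a sketch under the assumptions used in this lab (the config bundle is downloaded to /export/home/oracle/Downloads and the config share is exported as /export/config):
$ sudo mount 192.168.56.70:/export/config /mnt
$ LATEST=$(ls -t /export/home/oracle/Downloads/zba-01*.conf | head -1)
$ sudo cp "$LATEST" "/mnt/$(basename "$LATEST" .conf)-$(date +%Y%m%d).conf"
$ sudo ls -ltr /mnt
$ sudo umount /mnt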


Configure NDMP on the ZFS appliance


We first need to configure NDMP on ZBA-01.
Step 1. Go to https://192.168.56.70:215 main menu (ZBA-01).
Step 2. Click on Configuration.
Step 3. Click on NDMP.
Step 4. The following fields are already filled in:

Password: welcome1
Step 5. Click on the power button to enable the service.
The NDMP LED will turn green and the service goes online.
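Optionally, you can confirm from the Solaris VM that the NDMP service is reachable. NDMP listens on TCP port 10000 by default; the sketch below assumes a telnet client is available in the OSB-01 VM (press CTRL-] and type quit to leave):
$ telnet 192.168.56.70 10000      # a "Connected to 192.168.56.70" line means NDMP is listening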


Preparation Work for NDMP Backup share

Create Backup Share

Step 1. Go to Shares and click on the project named Backup.
Step 2. Click on the + in front of Filesystems.
Step 3. Enter the share name MyBackupShare in the Name field.
Step 4. Click on Apply.
Step 5. Start another terminal, mount the filesystem and put some files on the share:
$ sudo mkdir /export/MyBackupShare
$ sudo mount 192.168.56.70:/export/MyBackupShare /export/MyBackupShare
$ sudo cp files/file-* /export/MyBackupShare
$ sudo ls -ltr /export/MyBackupShare

How to backup file systems to ZFS disk with NDMP via Oracle Secure Backup
Create Backup Dataset

NOTE:
For this lab we use OSB, which stands for Oracle Secure Backup. The backup dataset has
already been prepared for you. For background, see the Oracle technical white paper
(June 2016): NDMP Implementation Guide for the Oracle ZFS Storage Appliance.
On the Solaris host we have already registered the ZFS SA zba-01 as an NDMP media server
for OSB, named zba-01-dump. We created this as follows:
ob> mkhost -a ndmp -o -r mediaserver,client -i 192.168.56.70 -u zba01 -B dump -q zba-01-dump
ob> catds mnt.ds
include host solaris-node-1
include path /export/MyBackupShare
ob> lsdev --long disk pool
disk:
    Device type:          disk pool
    In service:           yes
    Debug mode:           no
    Capacity:             (not set)
    Consumption:          9.8 MB
    Free space goal:      (system default)
    Concurrent jobs:      25
    Blocking factor:      (default)
    Max blocking factor:  (default)
    UUID:                 860c5fe0-23e0-1033-8dd5-f404d058a767
    Attachment 1:
        Host:             zba-01
        Directory:        /export/store/solaris
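For reference, if you were building this yourself, the mnt.ds dataset above could be created the same way the MyLunBackup.ds dataset is created later in this lab: mkds typically opens an editor in which you enter the include lines. A sketch:
ob> mkds mnt.ds
   include host solaris-node-1
   include path /export/MyBackupShare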


Perform a Backup

Step 1. Login as the Oracle Solaris user with password welcome1.
Step 2. Start a terminal session (see top bar).

Step 3. First start OSB:
$ obtool
Login : admin
Password : welcome1
ob>
Step 4. Start the filesystem dump backup:
ob> backup -D mnt.ds -l 0 --go
Use the lsjob and catxcr commands in obtool to monitor the backup progress.
Step 5. Monitor the job:
ob> lsjob
ob> catxcr admin/<nr>
ob> lsjob --log admin/<nr>
The job must end like this:
ob> lsjob --log admin/<nr>
Job ID      Sched time  Contents        State
----------  ----------  --------------  ---------------------------------------------
admin/<nr>  none        dataset mnt.ds  completed successfully at 2016/08/22.15:52
2016/08/22.15:51:46 Dataset processed; host backups scheduled.
2016/08/22.15:52:57 Job completed successfully.

Destroy some files to restore

Step 6. Remove some files:
$ sudo rm -f /export/MyBackupShare/file-1 /export/MyBackupShare/file-3
$ sudo ls -ltr /export/MyBackupShare
NOTE:
We have now deleted file-1 and file-3 from MyBackupShare, which we are going to restore.
Perform the OSB filesystem Restore

Step 7. Go back to obtool and start the filesystem dump restore:
$ obtool
ob> set host solaris-node-1
ob> cd /export/MyBackupShare
ob> ls
file-1 file-02 file-03 file-04 etc.


ob> restore -h solaris-node-1 -s latest --go /export/MyBackupShare

Step 8. Monitor the job:
ob> lsjob
ob> catxcr admin/<nr>
ob> lsjob --log admin/<nr>
Check if you see:
ob> lsjob --log admin/<nr>
Job ID      Sched time  Contents                          State
----------  ----------  --------------------------------  ---------------------------------------------
admin/<nr>  none        restore 1 item to solaris-node-1  completed successfully at 2016/08/22.16:00
2016/08/22.15:59:37 Dispatching job to run on administrative server.
2016/08/22.16:00:06 Restore completed with no error.
2016/08/22.16:00:07 Job completed successfully.

ob> logout
Step 9. Go back to your terminal and check the mount point again:
$ sudo ls -ltr /export/MyBackupShare
The files should be back in place.
Step 10. Unmount the share:
$ sudo umount /export/MyBackupShare
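If you would rather not overwrite the files in place, obtool can also restore to an alternate path, as the LUN restore later in this lab does with the -a option. A sketch, assuming a hypothetical target directory /export/MyBackupShare/restored:
ob> restore -h solaris-node-1 -s latest --go /export/MyBackupShare -a /export/MyBackupShare/restored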


How to backup volumes (iSCSI or FC LUN) to ZFS

Create Backup Dataset

NOTE:
On ZBA-01 we have created a LUN in the project solaris-servers named MySolarisLun01.
On the Solaris host we created a backup dataset named MyLunBackup.ds:
ob> mkds MyLunBackup.ds
   include host solaris-node-1
   include path /iscsi-demo/Files
ob> mkhost -a ndmp -o -r mediaserver,client -i 192.168.56.70 -u admin -B zfs -q aie7000-zfs

Create a LUN with files

NOTE:
We have already created a LUN named MySolarisLun01 and placed some files on it, in
the chapters Adding an iSCSI LUN and Adding the iSCSI LUN to the host (Steps 1 to 11).
If you skipped those steps, you can now run the command ./create_zpool.
Step 1. List the files:
$ sudo ls -ltr /iscsi-demo/Files
Step 2. Add more files to the folder:
$ cd /iscsi-demo/Files
$ ls -l
$ sudo tar xvf /install/lundata.tar
$ sudo ls -ltr

Perform a backup

Step 3. First start OSB:

$ obtool
Login : admin
Password : oracle
ob>


Step 4. Start the filesystem dump backup:
ob> backup -D MyLunBackup.ds -l 0 --go
Use the lsjob and catxcr commands in obtool to monitor the backup progress, or use
the web-based GUI accessible via https://192.168.56.60/index.php.
Step 5. Monitor the job:
ob> lsjob
ob> catxcr admin/<nr>
ob> lsjob --log admin/<nr>
It is successful when you see this message:
2016/08/25.13:06:58 Dataset processed; host backups scheduled.
2016/08/25.13:07:53 Job completed successfully.

NOTE:
The logging will look like this:
ob> backup -D MyLunBackup.ds -l 0 --go
Info: backup request 1 (dataset MyLunBackup.ds) submitted; job id is admin/30.
ob> lsj
Job ID      Sched time  Contents                State
----------  ----------  ----------------------  ---------------------------------------------
admin/30    none        dataset MyLunBackup.ds  processed; host backup(s) scheduled
admin/30.1  none        backup zba-01-zfs       running since 2016/08/14.09:03
ob> lsj --log admin/30
Job ID      Sched time  Contents                State
----------  ----------  ----------------------  ---------------------------------------------
admin/30    none        dataset MyLunBackup.ds  completed with warnings at 2016/08/14.09:04 -
                                                one or more warnings or non-critical errors reported
2016/08/14.09:03:22 Dataset processed; host backups scheduled.
2016/08/14.09:04:35 Job completed with warnings.
You have successfully completed the iSCSI LUN backup via the NDMP method on the
ZFS Appliance. Now we are going to destroy the LUN data first and then restore it.


Delete Data on the LUN

Step 1. Delete the files in the folder:
$ sudo ls -l /iscsi-demo/Files
$ sudo rm -rf /iscsi-demo/Files/*
$ sudo ls -ltr /iscsi-demo/Files

Now your data is completely destroyed and gone!

But we have the backup.


Perform a LUN Data Restore

Step 1. First check the zpool status:
$ zpool status iscsi-demo
$ zpool list
Step 2. Start the LUN restore:
ob> set host solaris-node-1
ob> cd /iscsi-demo/Files
ob> ls
ob> restore -h solaris-node-1 -p 1 -s latest --go /iscsi-demo/Files
NOTE:
This will restore into the original LUN named MySolarisLun01. It is also possible to restore
to a new LUN name; in that case use the following command:
Optional: ob> restore -h solaris-node-1 -p 1 -s latest --go /iscsi-demo/Files -a /iscsi-demo/Files-restored
Step 3. Monitor the job:
ob> lsjob
ob> catxcr admin/<nr>
ob> lsjob --log admin/<nr>
You should see something like this:

Step 4. After the successful restore, check on ZBA-01 whether the LUN has been restored.
Step 5. OPTIONAL: only with a new LUN (see the Note above).
Scan for the new iSCSI disk MySolarisLun01:
$ sudo svcadm restart svc:/network/iscsi/initiator:default
Step 6. OPTIONAL: only with a deleted and restored LUN.
Fix the zpool with the restored LUN:
$ sudo zpool clear iscsi-demo

Step 7. Go back to your folder and check whether all the files are there again:
$ sudo ls -ltr /iscsi-demo/Files
If you see the files, the exercise is completed.
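As a final optional check after the restore, you can have ZFS verify the pool data end to end. This is a standard ZFS step, not part of the lab script:
$ sudo zpool scrub iscsi-demo
$ sudo zpool status iscsi-demo     # shows scrub progress and any errors found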
NOTE:
Congratulations. You have completed the Hands On Lab.
You are now able to configure shares and LUNs on ZFS Storage Appliances and start
replication to a second site. You are also able to back up and restore using the Oracle
Secure Backup tooling, which is the preferred backup application from Oracle.
Thank you for joining the lab, and good luck using the product in the future.


Oracle Corporation, World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065, USA

Worldwide Inquiries
Phone: +1.650.506.7000
Fax: +1.650.506.7200

CONNECT WITH US
blogs.oracle.com/oracle
twitter.com/oracle
facebook.com/oracle
oracle.com

Copyright 2014, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. 0916
