
CREATING AND ASSIGNING AN LV AS A DISK TO AN LPAR FROM VIO (STEP-BY-STEP SCREENSHOTS)

http://aix-administration.blogspot.com/p/index-of-posts-available-in-this-blog.html

THIS DOCUMENT PROVIDES STEP-BY-STEP INFORMATION ON ASSIGNING A LOGICAL VOLUME AS A DISK TO AN LPAR FROM A VIO SERVER

TABLE OF CONTENTS

CREATE A SCSI SERVER ADAPTER ON THE VIO

MAP LV AS A DISK TO THE ADAPTER

REMOVING DISK AND ADAPTER PROVIDED FROM VIO

On the vio server

Check status of virtual adapters before creating adapters

$ lsmap -all |grep vhost


vhost9 Available Virtual SCSI Server Adapter
vhost10 Available Virtual SCSI Server Adapter
vhost11 Available Virtual SCSI Server Adapter
vhost12 Available Virtual SCSI Server Adapter
vhost13 Available Virtual SCSI Server Adapter
vhost14 Available Virtual SCSI Server Adapter
vhost15 Available Virtual SCSI Server Adapter
vhost16 Available Virtual SCSI Server Adapter
vhost18 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter

CREATE A SCSI SERVER ADAPTER ON THE VIO

To create a SCSI server adapter, we first create it on the VIO server side.

In the HMC Servers section, select the managed system you are going to work on.
Select the LPAR from the list of LPARs and click the small arrow beside it; this arrow is a shortcut to the Tasks option at the top.
From the DLPAR options select Virtual Adapters, then choose Create > SCSI Adapter.

When we create a SCSI server adapter we have to be careful about the server adapter ID: this ID will correspond to the respective client adapter that this adapter will serve. This is just like creating a target and an initiator, where the server adapter is the target and the client adapter is the initiator.

Note the server adapter ID, here 31. While creating the client adapter, this ID will be used to create the mapping.


Note: SCSI server adapters can only be created on the VIO server, and client adapters can only be created on client LPARs.
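The same DLPAR operation can also be scripted from the HMC command line with chhwres. This is only a sketch: the managed system name Server-9117, VIO server partition name vios1, client partition name lpar5, and the client slot number 2 are placeholders, not values from the screenshots.

```shell
# Add a virtual SCSI *server* adapter in slot 31 of the VIO server partition.
# Server-9117, vios1, lpar5 and remote_slot_num=2 are hypothetical values.
chhwres -r virtualio -m Server-9117 -o a -p vios1 --rsubtype scsi -s 31 \
  -a "adapter_type=server,remote_lpar_name=lpar5,remote_slot_num=2"
```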

Check whether the new virtual adapter is available now

$ lsmap -all |grep vhost

vhost14 Available Virtual SCSI Server Adapter


vhost15 Available Virtual SCSI Server Adapter

vhost16 Available Virtual SCSI Server Adapter

vhost17 Available Virtual SCSI Server Adapter

vhost18 Available Virtual SCSI Server Adapter

vhost19 Available Virtual SCSI Server Adapter

vsa0 Available LPAR Virtual Serial Adapter

# exit
We can see a new virtual adapter, vhost19, which we created just now.

Check the space availability in the volume group on the VIO server

$ lsvg lpar_vg
VOLUME GROUP:  lpar_vg      VG IDENTIFIER: 00cf21f100004c000000011bda37a9cc
VG PERMISSION: read/write   TOTAL PPs:     2730 (698880 megabytes)
MAX LVs:       512          FREE PPs:      1098 (281088 megabytes)
LVs:           6            USED PPs:      1632 (417792 megabytes)

Check what LVs are available in this VG

$ lsvg -lv lpar_vg


lpar_vg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNTPOINT
D3-APP111-LV00 jfs 272 272 1 open/syncd N/A
D4-APP109-LV00 jfs 272 272 1 open/syncd N/A
D4-APP123-LV00 jfs 272 272 1 open/syncd N/A
D3-APP112-LV00 jfs 272 272 1 open/syncd N/A
D2-APP124-LV00 jfs 272 272 1 open/syncd N/A
D2-APP110-LV00 jfs 272 272 1 open/syncd N/A
Now we will create an LV named testlv in the VG lpar_vg; this LV will act as a disk device for the client LPAR.

Create a logical volume testlv in the lpar_vg

Having checked the space availability in the VG above, create the LV:

$ mklv -lv testlv lpar_vg 8
testlv
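Assuming the trailing 8 in mklv requests 8 logical partitions (as in AIX mklv), the size of the disk the client will see follows from the VG's physical partition size. From the lsvg output above, 2730 PPs cover 698880 MB, so one PP is 256 MB and testlv comes out at 2 GB. A quick sanity check of that arithmetic:

```shell
# Values taken from the lsvg lpar_vg output above
pp_size=$((698880 / 2730))   # MB per physical partition
lv_size=$((pp_size * 8))     # 8 logical partitions requested with mklv
echo "PP=${pp_size}MB LV=${lv_size}MB"   # prints: PP=256MB LV=2048MB
```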

Now we will add a client adapter in the client LPAR. Note that the server adapter ID is the same one we created above, and the client adapter ID also matches the one given above.

Add a client SCSI adapter on the client LPAR

SELECT DYNAMIC LOGICAL PARTITIONING >> SELECT VIRTUAL ADAPTERS
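As with the server adapter, the client-side DLPAR add can be sketched from the HMC command line; the names below (Server-9117, lpar5, vios1) and the client slot number 2 are placeholders, not values from the screenshots:

```shell
# Add a virtual SCSI *client* adapter in slot 2 of the client LPAR,
# pointing back at server slot 31 on the VIO server (hypothetical names).
chhwres -r virtualio -m Server-9117 -o a -p lpar5 --rsubtype scsi -s 2 \
  -a "adapter_type=client,remote_lpar_name=vios1,remote_slot_num=31"

# List the virtual SCSI slots on the managed system to confirm the pairing
lshwres -r virtualio --rsubtype scsi -m Server-9117 --level lpar
```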
Now, to verify the client adapter, go to the client and run the device configuration utility.

Run cfgmgr on the client

# cfgmgr

Check the availability of adapter


# lsdev -Cc adapter
ent0      Available       Logical Host Ethernet Port (lp-hea)
ent1      Available       Logical Host Ethernet Port (lp-hea)
lhea0     Available       Logical Host Ethernet Adapter (l-hea)
sissas0   Available 03-08 PCI-X266 Planar 3Gb SAS Adapter
sisscsia0 Available 00-08 PCI-XDDR Dual Channel Ultra320 SCSI Adapter
sisscsia1 Defined   07-08 PCI-XDDR Dual Channel Ultra320 SCSI Adapter
usbhc0    Available 04-08 USB Host Controller (33103500)
usbhc1    Available 04-09 USB Host Controller (33103500)
usbhc2    Available 04-0a USB Enhanced Host Controller (3310e000)
vsa0      Available       LPAR Virtual Serial Adapter
vscsi0    Available       Virtual SCSI Client Adapter
vscsi1    Available       Virtual SCSI Client Adapter
vscsi2    Available       Virtual SCSI Client Adapter

Now we can see the new adapter vscsi2 available on the client side.

For verification, here are the disks currently present on the client:

# lspv
hdisk0 00cf21f1258309f0 rootvg active
hdisk1 00cf21f12c59a36a rootvg active
hdisk2 00cf21f14e0d976f nimvg active
hdisk3 00cf21f14e0d9862 nimvg active
hdisk4 00cf21f1aaba221c nimvg active
hdisk5 00cf21f161c6a6c9 nimvg active

Now we will map the testlv LV to the virtual adapter created above. First, just for verification, check for any target device attached to the newly created virtual adapter vhost19.

Map testlv to the new virtual adapter

$ lsmap -vadapter vhost19
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost19         U9117.MMA.06F21F1-V7-C31                     0x00000000

VTD             NO VIRTUAL TARGET DEVICE FOUND

Run the mkvdev command to create a virtual target device on the virtual adapter
$ mkvdev -vdev testlv -vadapter vhost19

vtscsi0 Available
$ lsmap -all -field vtd
VTD D2-APP124-DSK0
VTD D2-APP110-DSK0
VTD D3-APP111-DSK0
VTD D3-APP112-DSK0
VTD D8-APP125-DSK0
VTD D9-APP122-DSK0
VTD D10-APP105-DSK0
VTD D4-APP109-DSK0
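To confirm the new mapping on the VIO server, the adapter can also be queried directly again; lsmap's -field option (shown here as a sketch, output omitted) narrows the listing to the VTD and its backing device:

```shell
$ lsmap -vadapter vhost19
$ lsmap -vadapter vhost19 -field vtd backing
```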

Now go to the client side and run the device configuration utility again. Verify that the new disk is the same size as the LV created on the VIO server.
# lspv
hdisk0 00cf21f1258309f0 rootvg active
hdisk1 00cf21f12c59a36a rootvg active
hdisk2 00cf21f14e0d976f nimvg active
hdisk3 00cf21f14e0d9862 nimvg active
hdisk4 00cf21f1aaba221c nimvg active
hdisk5 00cf21f161c6a6c9 nimvg active
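To verify that the new disk really is the size of testlv (2 GB under the assumption above), either of these standard AIX commands reports a disk's size in MB; hdisk6 is a placeholder for whatever name cfgmgr assigned to the new disk:

```shell
# hdisk6 is hypothetical -- substitute the new disk name shown by lspv
bootinfo -s hdisk6             # size in MB
getconf DISK_SIZE /dev/hdisk6  # size in MB
```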
VIO Backup

Backup:

Create a mksysb file of the system on an NFS mount: backupios -file /mnt/vios.mksysb -mksysb

Create a backup of all structures of VGs and/or storage pools: savevgstruct vdiskvg (data will be stored under /home/ios/vgbackups)

List all backups made with savevgstruct: restorevgstruct -ls

Back up the entire system to an NFS-mounted file system: backupios -file /mnt
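For completeness, a VG structure saved with savevgstruct can later be restored with restorevgstruct; the VG name vdiskvg matches the backup example above, and the target disk name is a placeholder:

```shell
# Restore the saved VG structure onto a disk (hdiskN is hypothetical)
restorevgstruct -vg vdiskvg hdiskN
```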
