
VIRTUAL SCSI

Virtual SCSI is based on a client/server relationship. The Virtual I/O Server owns the physical resources and
acts as the server or, in SCSI terms, the target device. The client logical partitions access the virtual SCSI
backing storage devices provided by the Virtual I/O Server as clients.

Virtual SCSI server adapters can be created only on the Virtual I/O Server. For HMC-managed systems, virtual
SCSI adapters are created and assigned to logical partitions using partition profiles.

The vhost SCSI adapter works like a normal SCSI adapter: you can have multiple disks assigned to it. Usually one
virtual SCSI server adapter is mapped to one virtual SCSI client adapter, presenting backing devices to an
individual LPAR (see the lsmap sketch below). It is also possible to make a virtual SCSI server adapter
connectable by multiple LPARs, which is useful for creating virtual optical and/or tape devices, allowing
removable media devices to be shared between multiple client partitions.

on VIO server:
root@vios1: / # lsdev -Cc adapter
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
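
To see how several backing devices can sit behind one server adapter, lsmap can be run against a single vhost
adapter on the VIOS. The output below is only a sketch; the device names, slot and partition ID are made-up
examples:

padmin@vios1: /home/padmin # lsmap -vadapter vhost0               <--list the VTDs and backing devices behind vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost0          U9117.MMA.06B5641-V1-C13                     0x00000006

VTD                   vtscsi0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk34
...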

The client partition accesses its assigned disks through a virtual SCSI client adapter. The client adapter sees
the disks, logical volumes or file-backed storage presented through this adapter as virtual SCSI disk devices.

on VIO client:
root@aix21: / # lsdev -Cc adapter
vscsi0 Available Virtual SCSI Client Adapter

root@aix21: / # lscfg -vpl hdisk2
hdisk2 U9117.MMA.06B5641-V6-C13-T1-L890000000000 Virtual SCSI Disk Drive

In SCSI terms:
virtual SCSI server adapter: target
virtual SCSI client adapter: initiator
(Analogous to the client/server model, where the client is the initiator.)

Physical disks presented to the Virtual I/O Server can be exported and assigned to a client partition in a
number of different ways:
- The entire disk is presented to the client partition.
- The disk is divided into several logical volumes, which can be presented to a single client or multiple
different clients.
- With the introduction of Virtual I/O Server 1.5, files can be created on these disks and presented as
file-backed storage.
- With the introduction of Virtual I/O Server 2.2 Fixpack 24 Service Pack 1, logical units from a shared
storage pool can be created (see the sketch below).
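
As a sketch only (the cluster, pool and LU names below are made up, and exact options can differ between VIOS
levels), a logical unit from a shared storage pool could be created and mapped roughly like this:

padmin@vios1: /home/padmin # lssp -clustername bb_cluster         <--list the shared storage pool of the cluster
padmin@vios1: /home/padmin # mkbdsp -clustername bb_cluster -sp bb_pool 20G -bd bb_lu1 -vadapter vhost0
                                                                  <--create a 20G logical unit and map it to vhost0
padmin@vios1: /home/padmin # lsmap -vadapter vhost0               <--the LU shows up as the backing device of a new VTD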

The IVM and HMC environments present two different interfaces for storage management under different names. The
Storage Pool interface under IVM is essentially the same as LVM under HMC. (These terms are sometimes used
interchangeably.) So volume group can refer to both volume groups and storage pools, and logical volume can
refer to both logical volumes and storage pool backing devices.
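
On the VIOS command line this overlap is easy to see; the listing below is only a sketch (pool names are
examples), but the same volume groups appear under both command families:

padmin@vios1: /home/padmin # lsvg                                 <--LVM view: volume groups
rootvg
testvg_vios
padmin@vios1: /home/padmin # lssp                                 <--storage pool view: the same vgs listed as pools
Pool            Size(mb)    Free(mb)    Alloc Size(mb)    BDs Type
rootvg          ...
testvg_vios     ...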

Once these virtual SCSI server/client adapter connections have been set up, one or more backing devices
(whole disks, logical volumes or files) can be presented using the same virtual SCSI adapter.

When using Live Partition Mobility, the storage also needs to be assigned to the Virtual I/O Servers on the
target server.

----------------------------

Number of LUNs attached to a VSCSI adapter:

VSCSI adapters have a fixed queue depth that varies depending on how many VSCSI LUNs are configured for the
adapter. There are 512 command elements of which 2 are used by the adapter, 3 are reserved for each VSCSI
LUN for error recovery and the rest are used for IO requests. Thus, with the default queue_depth of 3 for
VSCSI LUNs, that allows for up to 85 LUNs to use an adapter: (512 - 2) / (3 + 3) = 85.

So if we need higher queue depths for the devices, then the number of LUNs per adapter is reduced. E.g., if we
want to use a queue_depth of 25, that allows (512 - 2) / (25 + 3) = 18 LUNs. We can configure multiple VSCSI
adapters to handle many LUNs with high queue depths; each adapter requires additional memory. One may have more
than one VSCSI adapter on a VIOC connected to the same VIOS if more bandwidth is needed.
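
A quick way to redo this arithmetic for other queue depths is plain ksh arithmetic on any AIX LPAR; the prompt
and the queue depth value below are just examples:

root@aix21: / # QD=25                                             <--planned queue_depth for the VSCSI LUNs
root@aix21: / # echo $(( (512 - 2) / (QD + 3) ))                  <--max LUNs per adapter; 18 here, 85 with the default of 3
18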

Also, one should set the queue_depth attribute on the VIOC's hdisk to match that of the mapped hdisk's
queue_depth on the VIOS.

Note that changing the queue_depth on an hdisk at the VIOS requires unmapping the disk from the VIOC and
remapping it back; a simpler approach is to change the value in the ODM only (e.g. # chdev -l hdisk30 -a
queue_depth=20 -P) and then reboot the VIOS.
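
For the unmap/remap variant, the sequence on the VIOS would look roughly like this (the hdisk, vhost and VTD
names are only examples; check the real ones with lsmap -all first):

padmin@bb_vios1: /home/padmin # lsmap -all | grep -p hdisk30      <--find the vhost and VTD that use hdisk30
padmin@bb_vios1: /home/padmin # rmvdev -vdev hdisk30              <--remove the mapping (the client loses this path)
padmin@bb_vios1: /home/padmin # chdev -dev hdisk30 -attr queue_depth=20   <--change the attribute while the disk is unmapped
padmin@bb_vios1: /home/padmin # mkvdev -vdev hdisk30 -vadapter vhost0 -dev vclient_disk   <--recreate the mapping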

----------------------------

File Backed Virtual SCSI Devices

Virtual I/O Server (VIOS) version 1.5 introduced file-backed virtual SCSI devices. These virtual SCSI
devices serve as disks or optical media devices for clients.

In the case of file-backed virtual disks, the client is presented with a file from the VIOS, which it accesses
as a SCSI disk. With file-backed virtual optical devices, you can store, install and back up media on the
VIOS, and make it available to clients.
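
As a sketch of the optical case (the repository size, ISO name and vhost below are assumptions, not taken from
a real setup):

padmin@vios1: /home/padmin # mkrep -sp rootvg -size 4G            <--create the virtual media repository in a storage pool
padmin@vios1: /home/padmin # mkvopt -name aix_base -file /home/padmin/aix_base.iso -ro   <--import an ISO image as read-only virtual media
padmin@vios1: /home/padmin # mkvdev -fbo -vadapter vhost0         <--create a file backed optical device (vtoptX) on vhost0
padmin@vios1: /home/padmin # loadopt -vtd vtopt0 -disk aix_base   <--load the media into the virtual optical device
padmin@vios1: /home/padmin # unloadopt -vtd vtopt0                <--unload it when the client is finished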

----------------------------

Check VSCSI adapter mapping on client:

root@bb_lpar: / # echo "cvai" | kdb | grep vscsi <--cvai is a kdb subcommand


read vscsi_scsi_ptrs OK, ptr = 0xF1000000C01A83C0
vscsi0 0x000007 0x0000000000 0x0 aix-vios1->vhost2 <--shows which vhost is used
on which vio server for this client
vscsi1 0x000007 0x0000000000 0x0 aix-vios1->vhost1
vscsi2 0x000007 0x0000000000 0x0 aix-vios2->vhost2

Checking for a specific vscsi adapter (vscsi0):

root@bb_lpar: /root # echo "cvscsi\ncvai vscsi0"| kdb |grep -E "vhost|part_name"


priv_cap: 0x1 host_capability: 0x0 host_name: vhost2 host_location:
host part_number: 0x1 os_type: 0x3 host part_name: aix-vios1

----------------------------

Another way to find out the VSCSI to vhost adapter mapping:

If the whole disk is assigned to a VIO client, then the PVID can be used to trace the connection between the VIO
server and the VIO client.

1. root@bb_lpar: /root # lspv | grep hdisk0                       <--check the pvid of the disk in question
                                                                     on the client
hdisk0 00080e82a84a5c2a rootvg

2. padmin@bb_vios1: /home/padmin # lspv | grep 5c2a               <--check which disk has this pvid on the
                                                                     vio server
hdiskpower21 00080e82a84a5c2a None

3. padmin@bb_vios1: /home/padmin # lsmap -all -field SVSA "Backing Device" VTD "Client Partition ID" Status
-fmt ":" | grep hdiskpower21
vhost13:0x0000000c:hdiskpower21:pid12_vtd0:Available <--check vhost adapter of the
given disk

----------------------------

Managing VSCSI devices (server-client mapping)

1. HMC -> VIO Server -> DLPAR -> Virtual Adapter (create the vscsi server adapter, select the client partition
which can use it, then create the same adapter in the profile)
(the profile can be updated: Configuration -> Save Current Config.)
(in case of an optical device, check the "Any client partition can connect" option)
2. HMC -> VIO Client -> DLPAR -> Virtual Adapter (create the matching client adapter, the adapter IDs should be
mapped to each other; do it in the profile as well)
3. cfgdev (VIO server), cfgmgr (client)                           <--it will bring up vhostX on the vio server,
                                                                     vscsiX on the client
4. create needed disk assignments:
-using physical disks:
mkvdev -vdev hdisk34 -vadapter vhost0 -dev vclient_disk           <--for easier identification it is useful to
                                                                     give a name with the -dev flag
rmvdev -vdev <backing dev.>                                       <--the backing device can be checked with
                                                                     lsmap -all (here vclient_disk)

-using logical volumes:

mkvg -vg testvg_vios hdisk34                                      <--creating a vg for the lv
lsvg                                                              <--listing vgs
reducevg <vg> <disk>                                              <--removing a disk from a vg (the vg is deleted
                                                                     with its last disk)

mklv -lv testlv_client testvg_vios 10G                            <--creating the lv that will be mapped to the
                                                                     client
lsvg -lv <vg>                                                     <--lists lvs under a vg
rmlv <lv>                                                         <--removes an lv

mkvdev -vdev testlv_client -vadapter vhost0 -dev <any_name>       <--for easier identification it is useful to
                                                                     give a name with the -dev flag (here the
                                                                     backing device is an lv, testlv_client)
rmvdev -vdev <back. dev.>                                         <--removes an assignment to the client

-using logical volumes just with storage pool commands:
(vg=sp, lv=bd)

mksp <vgname> <disk>                                              <--creating a vg (sp)
lssp                                                              <--listing storage pools (vgs)
chsp -add -sp <sp> PhysicalVolume                                 <--adding a disk to the sp (vg)
chsp -rm -sp bb_sp hdisk2                                         <--removing hdisk2 from bb_sp (storage pool)

mkbdsp -bd <lv> -sp <vg> 10G <--creates an lv with given size in the sp
lssp -bd -sp <vg> <--lists lvs in the given vg (sp)
rmbdsp -bd <lv> -sp <vg> <--removes an lv from the given vg (sp)

mkvdev..., rmvdev... also applies

-using a file backed storage pool:

first a normal (LV) storage pool should be created with mkvg or mksp, after that:
mksp -fb <fb sp name> -sp <vg> -size 20G                          <--creates a file backed storage pool in the
                                                                     given storage pool with the given size
                                                                     (it will look like an lv, and a fs will be
                                                                     created automatically as well)
lssp                                                              <--it will show as FBPOOL
chsp -add -sp clientData -size 1G                                 <--increases the size of the file storage
                                                                     pool (clientData) by 1G

mkbdsp -sp fb_testvg -bd fb_bb -vadapter vhost2 10G <--it will create a file backed device and
assigns it to the given vhost
mkbdsp -sp fb_testvg -bd fb_bb1 -vadapter vhost2 -tn balazs 8G <--it will also specify a virt. target
device name (-tn)

lssp -bd -sp fb_testvg <--lists the lvs (backing devices) of the
given sp
rmbdsp -sp fb_testvg -bd fb_bb1 <--removes the given lv (bd) from the sp
rmsp <file sp name>                                               <--removes the given file storage pool

removing a vhost adapter together with all of its mappings:
rmdev -dev vhost1 -recursive
----------------------------
On client partitions, MPIO for virtual SCSI devices currently only supports failover mode (which means only
one path is active at a time):
root@bb_lpar: / # lsattr -El hdisk0
PCM PCM/friend/vscsi Path Control Module False
algorithm fail_over Algorithm True

----------------------------

Multipathing with dual VIO config:

on VIO SERVER:
# lsdev -dev <hdisk_name> -attr <--checking disk attributes
# lsdev -dev <fscsi_name> -attr <--checking FC attributes

# chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm <--reboot is needed for these
fc_err_recov=fast_fail <--in case of a link event IO will fail
immediately
dyntrk=yes <--allows the VIO server to tolerate
cabling changes in the SAN

# chdev -dev hdisk3 -attr reserve_policy=no_reserve               <--each disk must be set to no_reserve
reserve_policy=no_reserve                                         <--if this is configured, dual vio
                                                                     servers can present a disk to the client

on VIO client:
# chdev -l vscsi0 -a vscsi_path_to=30 -a vscsi_err_recov=fast_fail -P <--path timeout checks the health of the
VIOS and detects if the VIO Server adapter isn't responding
vscsi_path_to=30 <--by default it is disabled (0), each client adapter must
be configured, minimum is 30
vscsi_err_recov=fast_fail <--failover will happen immediately rather than delayed

# chdev -l hdisk0 -a queue_depth=20 -P <--it must match the queue depth value used for the
physical disk on the VIO Server
queue_depth <--it determines how many requests will be queued on the
disk

# chdev -l hdisk0 -a hcheck_interval=60 -a hcheck_mode=nonactive -P <--health check updates the path state
                                                                       automatically (otherwise a failed path
                                                                       must be recovered manually)
hcheck_interval=60 <--how often do hcheck, each disk must be configured
(hcheck_interval=0 means it is disabled)
hcheck_mode=nonactive <--hcheck is performed on nonactive paths (paths with no
active IO)

Never set the hcheck_interval lower than the read/write timeout value of the underlying physical disk on
the Virtual I/O Server. Otherwise, an error detected by the Fibre Channel adapter causes new healthcheck
requests to be sent before the running requests time out.

The minimum recommended value for the hcheck_interval attribute is 60 for both Virtual I/O and non Virtual
I/O configurations.
In the event of adapter or path issues, setting the hcheck_interval too low can cause severe performance
degradation or possibly cause I/O hangs.

It is best not to configure more than 4 to 8 paths per LUN (to avoid too much health check IO), and to set the
hcheck_interval to 60 in the client partition and on the Virtual I/O Server.
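
A quick way to check these settings on a running system; this is only a sketch, the disk names are examples
(hdisk0 on the client, hdisk3 as its backing disk on the VIOS):

root@bb_lpar: / # lspath -l hdisk0 | wc -l                        <--number of paths to the LUN (keep it at 4 to 8)
root@bb_lpar: / # lsattr -El hdisk0 -a hcheck_interval -a hcheck_mode   <--health check settings of the client disk
padmin@bb_vios1: /home/padmin # lsdev -dev hdisk3 -attr rw_timeout      <--read/write timeout of the backing disk on the VIOS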

----------------------------

TESTING PATH PRIORITIES:

By default all paths are defined with priority 1, meaning that traffic will go through the first path.
If you want to control which path is used, the path priority has to be updated.
The priority of the vscsi0 path remains 1, so it is the primary path.
The priority of the vscsi1 path will be changed to 2, so it will be the lower priority path.

PREPARATION ON CLIENT:
# lsattr -El hdisk1 | grep hcheck
hcheck_cmd test_unit_rdy <--hcheck is configured, so path should come back
automatically from failed state
hcheck_interval 60
hcheck_mode nonactive

# chpath -l hdisk1 -p vscsi1 -a priority=2 <--I changed priority=2 on vscsi1 (by default both
paths are priority=1)

# lspath -AHE -l hdisk1 -p vscsi0
priority 1 Priority True

# lspath -AHE -l hdisk1 -p vscsi1
priority 2 Priority True

So, configuration looks like this:
VIOS1 -> vscsi0 -> priority 1
VIOS2 -> vscsi1 -> priority 2

TEST 1:

1. ON VIOS2: # lsmap -all                                         <--checking disk mapping on VIOS2
VTD testdisk
Status Available
LUN 0x8200000000000000
Backing device hdiskpower1
...

2. ON VIOS2: # rmdev -dev testdisk <--removing disk mapping from VIOS2

3. ON CLIENT: # lspath
Enabled hdisk1 vscsi0
Failed hdisk1 vscsi1                                              <--it will show a failed path on vscsi1 (this
                                                                     is the path coming from VIOS2)

4. ON CLIENT: # errpt                                             <--error report will show "PATH HAS FAILED"
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
DE3B8540 0324120813 P H hdisk1 PATH HAS FAILED

5. ON VIOS2: # mkvdev -vdev hdiskpower1 -vadapter vhost0 -dev testdisk <--configure back disk mapping
from VIOS2

6. ON CLIENT: # lspath                                            <--in 30 seconds the path will come back
                                                                     automatically
Enabled hdisk1 vscsi0
Enabled hdisk1 vscsi1                                             <--because of hcheck, the path came back
                                                                     automatically (no manual action was needed)

7. ON CLIENT: # errpt <--error report will show path has been recovered
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
F31FFAC3 0324121213 I H hdisk1 PATH HAS RECOVERED

TEST 2:

I did the same on VIOS1 (rmdev ... of the disk mapping; this path has priority 1, so IO goes there by default):

ON CLIENT: # lspath
Failed hdisk1 vscsi0
Enabled hdisk1 vscsi1

ON CLIENT: # errpt                                                <--an additional disk operation error will be
                                                                     in errpt
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
DCB47997 0324121513 T H hdisk1 DISK OPERATION ERROR
DE3B8540 0324121513 P H hdisk1 PATH HAS FAILED

----------------------------

How to change a VSCSI adapter on client:

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi2                                             <--we want to change vscsi2 to vscsi1

On VIO client:
1. # rmpath -p vscsi2 -d <--remove paths from vscsi2 adapter
2. # rmdev -dl vscsi2 <--remove adapter

On VIO server:
3. # lsmap -all <--check assignment and vhost device
4. # rmdev -dev vhost0 -recursive <--remove assignment and vhost
device

On HMC:
5. Remove the deleted adapter from the client (from the profile too)
6. Remove the deleted adapter from the VIOS (from the profile too)
7. Create the new adapter on the client (in the profile too)      <--cfgmgr on client
8. Create the new adapter on the VIOS (in the profile too)        <--cfgdev on VIO server

On VIO server:
9. # mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev rootvg_hdisk0 <--create new assignment

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1                                             <--vscsi1 is there (cfgmgr may be needed)

----------------------------

Assigning and moving a DVD-RAM between LPARs

1. lsdev -type optical                                            <--check if the VIOS owns the optical device
                                                                     (you should see something like:
                                                                     cd0 Available SATA DVD-RAM Drive)
2. lsmap -all <--to see if cd0 is already mapped and which vhost to use for
assignment (lsmap -all | grep cd0)
3. mkvdev -vdev cd0 -vadapter vhost0 <--it will create vtoptX as a virtual target device (check with
lsmap -all )

4. cfgmgr (on client lpar)                                        <--bring up the cd0 device on the client
                                                                     (before moving cd0 to another client, rmdev
                                                                     the device on the current client first)

5. rmdev -dev vtopt0 -recursive <--to move cd0 to another client, remove assignment from vhost0
6. mkvdev -vdev cd0 -vadapter vhost1 <--create new assignment to vhost1

7. cfgmgr (on other client lpar) <--bring up cd0 device on other client

(Because the VIO server adapter here is configured with the "Any client partition can connect" option, these
adapter pairs are not suited for client disks.)

----------------------------
