

SIMPLE LUN ALLOCATION STEPS – VNX


Creating a LUN: You need information such as the required LUN size, the disk type and the RAID type (if there is any specific requirement). Based on these requirements, and on the disk and RAID types used in the different pools, you have to select the correct pool. From Unisphere, go to Storage > LUNs and click the Create button.
Fill in the details on the screen, including the Size and the Pool, and select the checkbox depending on whether the LUN needs to be created as Thin or Thick. Once all the fields are filled in, note the LUN ID and submit the form. Done! You have created the LUN; you can find it in the list and verify the details.
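If you prefer the CLI, the same LUN can be created with naviseccli. This is only a minimal sketch: the SP address (10.0.0.1), pool name, LUN name, LUN ID and size below are hypothetical, and the exact switches can vary with the Block OE release.
naviseccli -h 10.0.0.1 lun -create -type Thin -capacity 100 -sq gb -poolName "Pool 0" -l 25 -name NEW_LUN_01 # create a 100 GB thin pool LUN
naviseccli -h 10.0.0.1 lun -list -l 25 # verify the new LUN details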
Adding a new host: Your requirement may be to allocate the new LUN to a new host. Once the host is connected via the fabric and the zoning is done, the host connectivity should be visible in the Initiators list (Unisphere > Hosts > Initiators). If the Unisphere Host Agent is installed on the host, or if it is an ESXi host, the host gets auto-registered and you will see the host details in the Initiators list.
Otherwise you will see only the new host WWNs in the list. Select the WWNs and register them, filling in the host details (name and IP) and the ArrayCommPath and failover mode settings. Once the host is registered, it will appear in the hosts list (Unisphere > Hosts > Hosts).
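To cross-check the initiator logins from the CLI, naviseccli can list the HBAs seen by the array. A quick sketch, with a hypothetical SP address:
naviseccli -h 10.0.0.1 port -list -hba # lists the HBA WWNs, the SP ports they are logged in to and the registration/host details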

Storage Group: You now have to make the LUN visible to the hosts. A Storage Group is the way to do this on VNX/CLARiiON. Create a new storage group for the hosts (Unisphere > Hosts > Storage Groups). You can name the new storage group to match the host/cluster name for easy identification, and then add the hosts to the group.

If multiple hosts will be sharing the LUNs, add all of those hosts to the storage group. You also have to add the LUNs to the storage group and set the HLU (Host LUN ID) for each LUN. Be careful when assigning the HLU: changing it later requires downtime, as it cannot be modified on the fly.
Once the LUNs and hosts are added to the storage group, the allocation is done! You can now request the host team to do a rescan to see the new LUNs.
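For reference, the same storage group operations can also be done with naviseccli. A hedged sketch, assuming the SP at 10.0.0.1, a host already registered as host01, a group named host01_SG and a LUN with ALU 25 (all hypothetical):
naviseccli -h 10.0.0.1 storagegroup -create -gname host01_SG # create the storage group
naviseccli -h 10.0.0.1 storagegroup -connecthost -host host01 -gname host01_SG # add the registered host
naviseccli -h 10.0.0.1 storagegroup -addhlu -gname host01_SG -hlu 0 -alu 25 # present LUN 25 to the host as HLU 0
naviseccli -h 10.0.0.1 storagegroup -list -gname host01_SG # verify the hosts and the HLU/ALU mapping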

SCANNING NEW LUNS FROM CELERRA/VNX FILE


Once you have provisioned a new LUN (or Symmetrix device), you have to scan for it from the Celerra/VNX File components to make use of it on the file side – for creating a filesystem and then a CIFS share or NFS export. Here we discuss the simple steps to scan the new LUN.
1. From the GUI – Unisphere Manager

Log in to the Unisphere Manager console by entering the Control Station IP address in a web browser. Select the system you have to scan from the top-left drop-down and navigate to the System > System Information tab. There you will find a Rescan All Storage Systems button, which will do the rescan for you.

Once the rescan is completed, the devices/LUNs will be visible under Disks, and the space will be available as potential storage for the respective pool (the Storage Pool for File).

2. From the CLI


From the CLI we have to scan for the new LUN on all Data Movers, using the server_devconfig command. We can run the command for each DM (Data Mover) separately, starting with the standby one. The syntax for a dual Data Mover system is:
server_devconfig server_3 -create -scsi -all # for standby DM
server_devconfig server_2 -create -scsi -all # for primary DM
This is the recommended way, though I have never heard of any issue occurring while scanning all DMs at a time. On a multi-DM system, if we want to scan all Data Movers with a single command, the only change is putting ALL in place of the server name.
server_devconfig ALL -create -scsi -all # for all DMs
After a successful scan you can find the new devices/LUNs at the bottom of the nas_disk -l output.
$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CK1234512345-0000 CLSTD root_disk 1,2
============== Shortened output ==============
18 n 51200 CK1234512345-0010 CLSTD d18 1,2

You can verify the increased space of the respective pool by running nas_pool -size [Pool Name].

EXPANDING A (EMC CELERRA/VNX) NAS POOL


In this post let's discuss expanding an EMC Celerra/VNX File NAS pool by adding new LUNs from the backend storage. A NAS pool, on which we create filesystems for NFS/CIFS (SMB), should have sufficient space to cater for the NAS requests. Here our pool is running out of space, with only a few MBs left.
[nasadmin@beginnersNAS ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 0
[nasadmin@beginnersNAS ~]$
Let's see how we can get this pool extended.
First, let's have a look at the existing disks (LUNs from the backend). Here we already have 9 disks assigned; a 10th one is needed to add space to the pool.
[nasadmin@beginnersNAS ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
[nasadmin@beginnersNAS ~]$
As per the requirement, we have to assign the LUNs from the backend storage. It is recommended to add new LUNs of the same size as the existing LUNs in the pool for best performance.
Now to the most important part – rescanning the new disks. We have to use the server_devconfig command for the rescan. We can also run the command against individual Data Movers; the recommended way is to start with the standby DMs first and then the primary ones. Listing the disks with nas_disk -l will show the servers on which each disk has been scanned.
[nasadmin@beginnersNAS ~]$ server_devconfig ALL -create -scsi -all
Discovering storage (may take several minutes)
server_2 : done
server_3 : done
[nasadmin@beginnersNAS ~]$
That completed successfully. Now let's check the disk list: we can see the 10th disk with inuse=n, scanned on both servers (Data Movers).
[nasadmin@beginnersNAS ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
10 n 547418 CKxxxxxxxxxxx-0006 CLSTD d10 1,2
[nasadmin@beginnersNAS ~]$
Let’s check the pool again to see the available and potential storage capacity.
[nasadmin@beginnersNAS ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 547418
[nasadmin@beginnersNAS ~]$
Now, as you can see, the expanded capacity is available to the pool (see potential_mb).
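The potential capacity gets consumed when a filesystem is created on, or extended into, the pool. A minimal sketch of both operations, using a hypothetical filesystem name and example sizes (the exact behaviour depends on your pool and AVM configuration):
nas_fs -name bforum_fs01 -create size=200G pool=Bforum_Pool # create a new filesystem from the pool
nas_fs -xtend bforum_fs01 size=100G # or extend an existing filesystem
nas_pool -size Bforum_Pool # re-check used, available and potential capacity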

BASIC HEALTH CHECK COMMANDS FOR EMC CELERRA/VNX

For a Celerra or a VNX File/Unified system we can verify the system health by running the nas_checkup command. This performs all the checks, covering the file and block hardware components as well as configuration checks (NTP, DNS and so on).
A sample output is given below.
[nasadmin@LABVNX ~]$ nas_checkup
Check Version: 7.1.72-1
Check Command: /nas/bin/nas_checkup
Check Log : /nas/log/checkup-run.123456-654321.log
————————————-Checks————————————-
Control Station: Checking statistics groups database………………….. Pass
Control Station: Checking if file system usage is under limit………….. Pass
Control Station: Checking if NAS Storage API is installed correctly…….. Pass
Control Station: Checking if NAS Storage APIs match…………………… N/A
All warnings, errors and informational messages are listed at the bottom of the output, with corrective actions where required.
The commands below will help you collect the information needed when registering a Service Request:
/nas/sbin/model # to find the VNX/Celerra Model
/nas/sbin/serial # to find the VNX/Celerra Serial number
nas_server -l # to list the Data movers and their status. A sample result is as
below.
———————————————–
[nasadmin@LABVNX ~]$ nas_server -l
id type acl slot groupID state name
1 1 0 2 0 server_2
2 4 0 3 0 server_3
[nasadmin@LABVNX ~]$
———————————————–
/nas/sbin/getreason # to see the Data movers and Control Station boot
status. A sample result is as below.
———————————————–
[nasadmin@LABVNX ~]$ /nas/sbin/getreason
10 - slot_0 primary control station
5 - slot_2 contacted
5 - slot_3 contacted
———————————————–
For a healthy system, the Control Station status should be 10 and the Data Movers should show 5 with state contacted.
And finally, collecting the logs – the support materials. Running /nas/tools/collect_support_materials collects the logs and saves them under /nas/var/emcsupport; the file location and name are displayed at the bottom of the command output. You can use FTP/SCP tools to copy the file to your desktop.
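All of the above can be run back to back from the Control Station as a quick health check. This sketch only strings together the commands already listed in this post:
/nas/sbin/model # VNX/Celerra model
/nas/sbin/serial # serial number for the Service Request
/nas/sbin/getreason # Control Station (10) and Data Mover (5, contacted) status
nas_server -l # Data Mover list and state
nas_checkup # full file/block health check
/nas/tools/collect_support_materials # gather logs under /nas/var/emcsupport if support asks for them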

VNX/CELERRA – SP COLLECTS FROM CONTROL STATION COMMAND LINE


Personally, I prefer the Control Station CLI for getting SP Collects from a VNX/Celerra with an attached CLARiiON; it is quicker. Opening Unisphere Manager takes time, since it is Java based. Here let us see how this can be done via the CLI.
Open an SSH/Telnet session to the Control Station and log in. Navigate to /nas/tools (the basic Linux command "cd /nas/tools" will do this). In that directory there is a hidden script, .get_spcollect, which is used to gather the SP Collects (you will have to use ls -la to list it, as it is a hidden file).
Now run the script with the command below.
./.get_spcollect [don't miss the dots before and after the /]
This runs the SP Collects script, gathers all the logs from both Storage Processors and creates a single SPCOLLECT.zip file. A sample output is shown below.
[nasadmin@SYSTEM_NAME ~]$ cd /nas/tools/
[nasadmin@SYSTEM_NAME tools]$ ./.get_spcollect
Generating spcollect zip file for Clariion(s)
Creating spcollect zip file for the Service Processor SP_A. Please wait…
spcollect started to pull out log files(it will take several minutes)…
Wait until new _data.zip file size becomes final(it will take several minutes)
Retrieving new _data.zip file…
-- truncated output --
Wait until new _data.zip file size becomes final(it will take several minutes)
Retrieving new _data.zip file…
spcollect zip file SYS_SERIAL_SPA_DATE_TIME-STAMP_data.zip for the Service Processor
SP_A was created
Creating spcollect zip file for the Service Processor SP_B. Please wait…
spcollect started to pull out log files(it will take several minutes)…
Wait until new _data.zip file size becomes final(it will take several minutes)
Retrieving new _data.zip file…
-- truncated output --
Wait until new _data.zip file size becomes final(it will take several minutes)
Retrieving new _data.zip file…
spcollect zip file SYS_SERIAL_SPB_DATE_TIME-STAMP_data.zip for the Service Processor
SP_B was created
Deleting old SPCOLLECT.zip file from /nas/var/log directory…
Old SPCOLLECT.zip deleted
Zipping all spcollect zip files in one SPCOLLECT.zip file and putting it in the /nas/var/log
directory…
adding: SYS_SERIAL_SPA_DATE_TIME-STAMP_data.zip (stored 0%)
adding: SYS_SERIAL_SPB_DATE_TIME-STAMP_data.zip (stored 0%)
[nasadmin@SYSTEM_NAME tools]$

Now, as mentioned towards the end of the output, the SPCOLLECT.zip will be located in the /nas/var/log directory. How can we access it? I use WinSCP to collect it via SCP: enter the IP address/CS name and login credentials, and once the session is open, navigate to /nas/var/log in the right panel and your target directory in the left. Select the log file and press F5 (or select Copy).
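If you are on a Linux/Mac machine (or have any scp client handy), the same file can be pulled without WinSCP; the Control Station address below is a placeholder:
scp nasadmin@<CS_IP>:/nas/var/log/SPCOLLECT.zip . # copy the SP Collects zip to the current directory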

That's it! You have the SP Collects on your desktop. Much quicker, right? Hope this post helped you.
