Wednesday, January 1, 2014

How do I setup multiple LUNs on Red Hat Enterprise Linux?


 Issue
  • How do I setup multiple LUNs on Red Hat Enterprise Linux?
  • receive message "kernel: scsi: On host 3 channel 0 id 1 only 511 (max_scsi_report_luns) of 522 luns reported, try increasing max_scsi_report_luns"
  • need to see more than 511 luns
  • how do I configure max_luns and max_report_luns for the scsi_mod kernel module?
  • In releases prior to RHEL 6 the scsi_mod module parameters such as, max_luns and max_report_luns could be changed by modifying /etc/modprobe.conf, how do I do the same in RHEL 6?

Resolution
  • The SCSI core module (scsi_mod) has a parameter that controls the maximum number of LUNs.
  • Listed below are the steps to configure multiple LUNs.
For Red Hat Enterprise Linux 3, 4, 5

  1. Modify the kernel module configuration file.
In Red Hat Enterprise Linux 3, edit /etc/modules.conf and add the following line:
options scsi_mod max_scsi_luns=xxx
In Red Hat Enterprise Linux 4 and Red Hat Enterprise Linux 5, /etc/modprobe.conf is used instead:
options scsi_mod max_luns=xxx
  2. Rebuild the initial ramdisk to apply the change:
# mkinitrd -f /boot/newimage-2.6.xx 2.6.xx
(use the exact kernel version you have in place of 2.6.xx)
  3. Reboot the system. After the reboot, the LUNs should appear.
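For example, a minimal sketch on a hypothetical RHEL 5 host (the kernel version 2.6.18-348.el5, the initrd filename, and the LUN count of 1024 are placeholders, not values from this article):
# echo "options scsi_mod max_luns=1024" >> /etc/modprobe.conf
# mkinitrd -f /boot/initrd-2.6.18-348.el5.img 2.6.18-348.el5
# reboot
(after the reboot, /sys/module/scsi_mod/parameters/max_luns should report the new value)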
For Red Hat Enterprise Linux 6
  1. In RHEL 6, scsi_mod is built into the kernel and is no longer a loadable module as in prior versions, so its options can no longer be changed by adding a .conf file entry within the /etc/modprobe.d directory. The setting must go on the kernel command line instead. Append the following to your grub.conf 'kernel' line:
scsi_mod.max_luns=xxx
  2. Note that some newer arrays also require the report-luns value to be raised. If so, also append it to your grub.conf kernel line:
scsi_mod.max_report_luns=xxx
  3. Reboot the system. After the reboot, the LUNs should appear.
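For reference, a grub.conf stanza with both options appended might look like the following (the kernel version, root device, and LUN counts shown are placeholders, not values from this article):
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet scsi_mod.max_luns=1024 scsi_mod.max_report_luns=1024
        initrd /initramfs-2.6.32-358.el6.x86_64.img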
Diagnostic Steps
  • The default setting for lpfc.lpfc_max_luns (Emulex HBAs) is 255. This can be checked with the following command.
# cat /sys/module/lpfc/parameters/lpfc_max_luns
255
  • The default setting for qla2xxx.ql2xmaxlun is 65535. This can be checked with the following command (RHEL5 and above only).
# cat /sys/module/qla2xxx/parameters/ql2xmaxlun
65535
  • The default setting for scsi_mod.max_luns (SCSI mid layer) is 512. This can be checked with the following command.
# cat /sys/module/scsi_mod/parameters/max_luns
512

How to change the bonding mode without rebooting the system?


Resolution

In Red Hat Enterprise Linux 5 or 6, the bonding mode can be changed through sysfs without a reboot. Here is an example of changing the bonding mode from 0 to 1 in RHEL5 or 6:

1. Find the current bonding mode:
# cd /sys/class/net/bond0/bonding
# cat mode
balance-rr 0
The current bonding mode is round-robin.

2. Change the bonding mode:
# ifdown bond0
# echo 1 > mode
# cat mode
active-backup 1
Now the bonding mode has been changed to active-backup.
3. Bring bond0 up again:
# ifup bond0
4. Check whether the bonding mode has been changed:
# cat /proc/net/bonding/bond0
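The output should now show the new mode; a trimmed sketch is shown below (the driver version, active slave name, and polling interval will differ on your system):
Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)

Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100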

In Red Hat Enterprise Linux 4, the bonding mode is changed by reloading the bonding module. Here is an example of changing the bonding mode from 0 to 1 in RHEL4:
1. Find current bonding mode:
# cat /proc/net/bonding/bond0
...
Bonding mode: round-robin
The current bonding mode is round-robin.

2. Bring down all bonding interfaces and change the bonding mode:
# ifdown bond0
(also bring down any other bonding interfaces you have)
# rmmod bonding
- Edit /etc/modprobe.conf to change the bonding options. The line should look like the following:
options bonding mode=0 miimon=100
- Change it to:
options bonding mode=1 miimon=100

3. Bring bond0 (and any other bonding interfaces) up again. The bonding module will be reloaded automatically:
# ifup bond0

4. Check whether the bonding mode has been changed:

# cat /proc/net/bonding/bond0

How do I configure multiple network bonding channels on Red Hat Enterprise Linux? 


Resolution
  • Multiple bonding setup is different for Red Hat Enterprise Linux 4 and Red Hat Enterprise Linux 5.
For Red Hat Enterprise Linux 5:

In Red Hat Enterprise Linux 5.3 (or after updating to initscripts-8.45.25-1.el5) and later, configuring multiple bonding channels is similar to configuring a single bonding channel. Set up the ifcfg-bondN and ifcfg-ethX files as if there were only one bonding channel (a sample slave ifcfg-ethX file is sketched after the examples below). You can specify different BONDING_OPTS for different bonding channels so that they can have different modes and other settings.
  • For example, you can add the following lines to /etc/modprobe.conf:
    alias bond0 bonding
    alias bond1 bonding
  • And here is an example for ifcfg-bond0 and ifcfg-bond1:
ifcfg-bond0:
    DEVICE=bond0
    IPADDR=192.168.50.111
    NETMASK=255.255.255.0
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    BONDING_OPTS="mode=0 miimon=100"
ifcfg-bond1:
    DEVICE=bond1
    IPADDR=192.168.30.111
    NETMASK=255.255.255.0
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    BONDING_OPTS="mode=1 miimon=50"

Note: Bonding mode=0 is the round-robin policy, which provides load balancing and fault tolerance. Bonding mode=1 is the active-backup policy, which provides fault tolerance.

Note: When configuring BONDING_OPTS via ifcfg-bondX scripts, you should not configure "max_bonds=X" in either the ifcfg-bondX scripts or in /etc/modprobe.conf.  If bonding options are configured in modprobe.conf, "max_bonds=X" should be used if there is more than 1 bonding interface on the system.  See below for an example.

For Red Hat Enterprise Linux 4:

To configure multiple bonding channels, first set up the ifcfg-bondN and ifcfg-ethX files as if there were only one bonding channel.
  • The difference is in setting up /etc/modprobe.conf. If the two bonding channels have the same bonding options, such as the bonding mode, monitoring frequency, etc., add the max_bonds option. For example:
    alias bond0 bonding
    alias bond1 bonding

    options bonding max_bonds=2 mode=balance-rr miimon=100

How do I add raw device mapping in Red Hat Enterprise Linux 5?


Issue
How do I add raw device mapping in Red Hat Enterprise Linux 5?

Environment
  • Red Hat Enterprise Linux 5
  • raw device
Resolution
Previously (Red Hat Enterprise Linux 5.0 through Red Hat Enterprise Linux 5.3), support for raw devices was deprecated in the upstream kernel. That support has since been returned to the kernel, and consequently, in Red Hat Enterprise Linux 5.4, support for raw devices has also returned. Additionally, the initscripts package has been updated, restoring the previously dropped raw device functionality.
So in Red Hat Enterprise Linux 5 there are two methods to configure a raw device.

Method 1. Using rawdevices service (Not available on RHEL5.0 -- RHEL5.3)

1. Edit the file /etc/sysconfig/rawdevices
    # raw device bindings
    # format:  <rawdev> <major> <minor>
    #          <rawdev> <blockdev>
    # example: /dev/raw/raw1 /dev/sda1
    #          /dev/raw/raw2 8 5
    /dev/raw/raw1 /dev/hda5
    /dev/raw/raw2 /dev/hda6
(Note: /dev/raw/raw0 is not allowed because the minor number cannot be zero.)

2. Start the rawdevices service
    # service rawdevices start
    # chkconfig rawdevices on
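To confirm the bindings, query the raw driver; the major/minor numbers below correspond to the hypothetical /dev/hda5 and /dev/hda6 used in this example:
    # raw -qa
    /dev/raw/raw1:  bound to major 3, minor 5
    /dev/raw/raw2:  bound to major 3, minor 6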

Method 2. Using udev to configure RAW device

1. Creating the raw devices using udev:
To create raw devices with udev, add entries to /etc/udev/rules.d/60-raw.rules in the following formats:
For device names:
ACTION=="add", KERNEL=="<device name>", RUN+="raw /dev/raw/rawX %N"
For major / minor numbers:
ACTION=="add", ENV{MAJOR}=="A", ENV{MINOR}=="B", RUN+="raw /dev/raw/rawX %M %m"
Note: Replace <device name> with the name of the device to bind (such as /dev/sda1). "A" and "B" are the major / minor numbers of the device to bind, and "X" is the raw device number you want to use.

2. Creating persistent raw devices for single path LUNs: 
If using unpartitioned LUNs, to create a single raw device for the whole LUN use this rule format:
ACTION=="add", KERNEL=="sd*[!0-9]", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="3600601601bd2180072193a9242c3dc11", RUN+="/bin/raw /dev/raw/raw1 %N"
Note: Set the RESULT value to the output of scsi_id -g -u -s /block/sdX (where sdX is the current path to the LUN). This will create the raw device /dev/raw/raw1 that will be persistently bound to the LUN with WWID 3600601601bd2180072193a9242c3dc11.
If using partitioned LUNs, where raw devices are created for each of the partitions on the LUN, use this rule format:
 ACTION=="add", KERNEL=="sd*[0-9]", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="3600601601bd2180072193a9242c3dc11", RUN+="/bin/raw /dev/raw/raw%n %N"
Again, set RESULT to the output of scsi_id -g -u -s /block/sdX. This will create the raw devices /dev/raw/raw1, /dev/raw/raw2, etc. for each partition on the LUN, and they will be persistently bound to the LUN with WWID 3600601601bd2180072193a9242c3dc11.

3. Setting ownership and permissions on the raw devices: 
To set specific ownership and/or permissions for the raw devices, add entries to /etc/udev/rules.d/60-raw.rules in the following format:
 ACTION=="add", KERNEL=="raw*", OWNER="root", GROUP="disk", MODE="0660"

4. Testing and implementing the udev rules: 
Before implementing them, use the udevtest command to verify that the udev rules work as expected. To verify that the raw device is created for a specific disk or partition, e.g. /dev/sdb1:
 [root@rhel5 rules.d]# udevtest /block/sdb/sdb1 | grep raw
 main: run: '/bin/raw /dev/raw/raw1 /dev/.tmp-8-17'
To check the ownership/permission settings for a particular raw device, e.g. /dev/raw/raw1:
 [root@rhel5 rules.d]# udevtest /class/raw/raw1 | grep mode
 udev_node_add: creating device node '/dev/raw/raw1', major = '162', minor = '1', mode = '0600', uid = '0', gid = '0'
Finally, to actually create the raw device(s), use the start_udev command:
 [root@rhel5 rules.d]# start_udev
 Starting udev:                                             [  OK  ]
Check that the raw device(s) have been created:
 [root@rhel5 rules.d]# raw -qa
 /dev/raw/raw1:  bound to major 8, minor 17
 [root@rhel5 rules.d]# ls -l /dev/raw
 total 0
 crw-rw---- 1 root   disk 162,  1 Jan 29 02:47 raw1
5. Creating persistent raw devices for multipathed LUNs or LVM device: 
Unfortunately it is not possible to write udev rules for creating raw devices on multipath devices (/dev/dm-*) without manipulating existing udev rules. Modifying existing rules for this purpose could cause unforeseen problems and is not supported by Red Hat Global Support Services. If absolutely necessary, an alternative is to create the raw devices from /etc/rc.d/rc.local, as long as the raw devices are not required before rc.local is executed. For example:
 /bin/raw /dev/raw/raw1 /dev/mpath/mpath1p1
 /bin/raw /dev/raw/raw2 /dev/mpath/mpath1p2
 /bin/chmod 660 /dev/raw/raw1
 /bin/chown root:disk /dev/raw/raw1
 /bin/chmod 660 /dev/raw/raw2
 /bin/chown root:disk /dev/raw/raw2
If you absolutely must create raw devices for multipathed LUNs or LVM devices using udev, you can add the following udev rules to the file /etc/udev/rules.d/60-raw.rules:

# Device mapper raw rules
KERNEL!="dm-[0-9]*", GOTO="skip_dm"
ACTION!="change", GOTO="skip_dm"
PROGRAM!="/sbin/dmsetup ls --exec /bin/basename -j %M -m %m", GOTO="skip_dm"
RESULT=="mpath2", RUN+="/bin/raw /dev/raw/raw2 /dev/mapper/mpath2"
RESULT=="mpath1", RUN+="/bin/raw /dev/raw/raw1 /dev/mapper/mpath1"
LABEL="skip_dm"
KERNEL=="raw1", ACTION=="add", OWNER="root", GROUP="disk", MODE="0660"
KERNEL=="raw2", ACTION=="add", OWNER="root", GROUP="disk", MODE="0660"


Note: The first three rules make sure we only act on dm devices, only on change actions, and only when dmsetup returns success; if not, we skip down to the "skip_dm" label. After that, the rules match the device name returned by dmsetup and fire off the appropriate raw command.
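Once these rules have fired (device-mapper devices emit "change" uevents when their tables are loaded or resumed, for example at boot), you can confirm the bindings the same way as before:
# raw -qa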

How to reduce the size of a non-root LVM and assign that space to the root LVM?

Issue
  • Need to reduce the size of a non-root LVM and assign that space to the root LVM as / is running out of space.
Environment
  • Red Hat Enterprise Linux 5.6
Resolution
  • The following instructions can be followed in order to achieve this. Since the file system to be reduced is not the root file system, these steps can be done from the current runlevel itself. To reduce the root logical volume, you would need to boot into rescue mode.
  • Unmount the file system which needs to be reduced.
  • # umount <file_system>
  • Reduce the file system using the resize2fs command.
  • # resize2fs <full_path_of_LV> <new_size_of_file_system>
Example:
# resize2fs /dev/VolGroup00/LogVol05 400G
  • Then, reduce the corresponding logical volume of that file system.
  • # lvreduce -L -<size> <full_path_of_LV>
(size is the amount by which to reduce the LV)
Example:
# lvreduce -L -62G /dev/VolGroup00/LogVol05


(For RHEL5) The following command reduces the LV and the file system together. (If you have already run the resize2fs and lvreduce steps above, you do not need this step.)

# lvreduce -L -62G -r /dev/VolGroup00/LogVol05


In this example, the file system is presently 462 GB and is being reduced to 400 GB.
  • Then, increase the size of the corresponding logical volume of /.
  • # lvextend -L +<size> <full_path_of_root_LV>
  • Extend the / file system online.
  • # resize2fs <full_path_of_root_LV>
Verify the sizes using the df -h command.
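Putting it together, a minimal sketch of the whole sequence, assuming the non-root LV /dev/VolGroup00/LogVol05 is mounted on a hypothetical /data and the root LV is /dev/VolGroup00/LogVol00 (both mount point and root LV name are assumptions):
# umount /data
# e2fsck -f /dev/VolGroup00/LogVol05
(resize2fs requires a clean forced fsck before it will shrink a filesystem)
# resize2fs /dev/VolGroup00/LogVol05 400G
# lvreduce -L -62G /dev/VolGroup00/LogVol05
# lvextend -L +62G /dev/VolGroup00/LogVol00
# resize2fs /dev/VolGroup00/LogVol00
# mount /data
# df -h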
Note:
Shrinking a filesystem online is not supported, and it will probably never be supported because of the problems it can raise.

Root Cause
Shrinking a filesystem is possible while it is unmounted, depending on the filesystem (not all filesystems can be shrunk). But doing it online is almost impractical.
When growing a filesystem online, the filesystem does not need to worry about hashing or locking the new blocks being added (operations that could cause deadlocks or data corruption): since these blocks are new, no users are using them and no pages or buffers are mapped to them.

To shrink a filesystem, the filesystem must account for blocks and inodes already allocated in the space to be freed. Reallocating those inodes and/or data blocks is not only an expensive operation for the filesystem; it must also ensure that no pages or buffers are using the blocks and/or inodes being reallocated, which can lead to deadlocks and filesystem corruption.

How do I resize the root partition (/) after installation on Red Hat Enterprise Linux 4?



Issue
  • How do I resize the root partition (/) after installation on Red Hat Enterprise Linux 4?
Note:
  • Resizing non-LVM root partitions is not covered in this document and is not supported by Red Hat Global Support Services.
Environment
  • Red Hat Enterprise Linux 4
  • The root partition is an LVM logical volume.
Resolution
Assumption:
  • Logical Volume Manager (LVM) is already configured on the system.
Steps to resize the root partition (/):
  • Verify on which LV device / is mounted:
Eg:
# df /
  • Check the Filesystem column in the output of the above command. It will show something like /dev/mapper/Vol00-LogVol00.
Here, "LogVol00" is the logical volume on which / is mounted and "Vol00" is the volume group.
  • Verify the free Physical Extents (PE) in the volume group in which the Logical Volume is residing.
Eg:
# vgdisplay <volumegroup>
  • After confirming the free Physical Extents (PE), resize the Logical Volume.
Eg:
# lvextend -l +125 /dev/<volumegroup>/<logicalvolume>
OR
# lvextend -L +500M /dev/<volumegroup>/<logicalvolume>
  • Resize the mounted filesystem.
Eg:
# ext2online /dev/<volumegroup>/<logicalvolume>
Note: If you miss this step, no error will occur; however, only the logical volume size will increase, i.e. the filesystem itself will still be the same size.
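As a worked sketch, assuming the root LV from the example above is /dev/Vol00/LogVol00 (the volume group and LV names are just the illustrative ones used here):
# df /
# vgdisplay Vol00
(check the "Free  PE / Size" line for available space)
# lvextend -L +500M /dev/Vol00/LogVol00
# ext2online /dev/Vol00/LogVol00
# df -h /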

Extra Notes:
  • The ext2online command is in the e2fsprogs rpm, which is installed by default on RHEL4 systems.
The resize2fs command in the e2fsprogs rpm can only be used on an unmounted filesystem on RHEL4 systems; from RHEL5 onwards, resize2fs can also grow a mounted filesystem online.

Check whether hyperthreading is enabled or not in Linux?


# cat /proc/cpuinfo
If the number of siblings and the number of cpu cores you see are the same, then hyper-threading is not enabled. If they differ, it is enabled.
If it is enabled, "siblings" gives the number of logical cores per physical package and "cpu cores" gives the number of actual physical cores.
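For a quick check, the two values can be compared directly (this sketch assumes all CPUs in the box report the same topology):
# grep -m1 'siblings' /proc/cpuinfo
# grep -m1 'cpu cores' /proc/cpuinfo
(if siblings equals cpu cores, hyper-threading is off; if siblings is double cpu cores, it is on)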

Detect or find out a dual-core CPU in Linux
Type the following command to get overall cpu info:

$ less /proc/cpuinfo
Task: Identify whether a CPU is dual-core or not
$ grep cores /proc/cpuinfo
Output:
cpu cores       : 2

Physical Processor

If you want to know how many physical CPUs are in the system, you can use the above information to calculate it. However, simply counting the distinct "physical id" fields is easier.
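For example, this counts the distinct physical packages (sockets) reported in /proc/cpuinfo:
# grep 'physical id' /proc/cpuinfo | sort -u | wc -l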

What to do when "vxdisk list" shows status of 'online dgdisabled'.

Details:




HPSRV01:# vxdisk -o alldgs list
DEVICE           TYPE            DISK             GROUP        STATUS
EMC_CLARiiON0_0  auto:cdsdisk    EMC_CLARiiON0_0  dygy2502     online
EMC_CLARiiON0_1  auto:cdsdisk    -               (dvgy2500)    online
EMC_CLARiiON0_2  auto:cdsdisk    EMC_CLARiiON0_4  dvgyappl     online
EMC_CLARiiON0_3  auto:cdsdisk    EMC_CLARiiON0_3  dvgy2503     online
EMC_CLARiiON0_4  auto:cdsdisk    EMC_CLARiiON0_4  dvgy2504     online
EMC_CLARiiON0_5  auto:cdsdisk    EMC_CLARiiON0_5  dvgy25       online
EMC_CLARiiON0_6  auto:cdsdisk    EMC_CLARiiON0_9  dvgy26       online dgdisabled
EMC_CLARiiON0_7  auto:cdsdisk    EMC_CLARiiON0_8  dygy2501     online
EMC_CLARiiON0_8  auto:cdsdisk    -               (dvgy2506)    online
EMC_CLARiiON0_9  auto:cdsdisk    -               (dvgy2505)    online
EMC_CLARiiON0_10 auto:cdsdisk    -               (dvgy2507)    online
EMC_CLARiiON0_11 auto:cdsdisk    EMC_CLARiiON0_11 dvgy25db2    online


This situation can happen when every disk in a disk group is lost, for example due to a bad power supply, power turned off to the disk array, a disconnected cable, zoning problems, etc.

The disk group will not show in the output from vxprint -ht.


HPSRV01:# vxprint -htg dvgy26
VxVM vxprint ERROR V-5-1-582 Disk group dvgy26: No such disk group


The disk group will show as disabled in vxdg list:


HPSRV01:# vxdg list
NAME         STATE                ID
dygy2501     enabled,cds          1189621899.78.HPSRV01
dvgyappl     enabled,cds          1190904062.52.HPSRV01
dvgy25       enabled,cds          1189622068.88.HPSRV01
dvgy25db2    enabled,cds          1189622043.86.HPSRV01
dvgy26       disabled             1189538508.74.HPSRV01
dvgy2503     enabled,cds          1189621988.82.HPSRV01
dvgy2504     enabled,cds          1189622014.84.HPSRV01
dygy2502     enabled,cds          1189621955.80.HPSRV01


This is the output of vxdg list dvgy26:


HPSRV01:# vxdg list dvgy26
Group:            dvgy26
dgid:             1189538508.74.HPSRV01
import-id:        1024.22
flags:            disabled
version:          0
alignment:        0 (bytes)
local-activation: read-write
ssb:              off
detach-policy:    invalid
copies:           nconfig=default nlog=default
config:           seqno=0.1103 permlen=1280 free=1259 templen=11 loglen=192
config disk EMC_CLARiiON0_6 copy 1 len=1280 state=clean online
log disk EMC_CLARiiON0_6 copy 1 len=192


Your filesystems will of course fail, and the operating system will report them as corrupted.


HPSRV01:# df -k > /dev/null
df: /db2/dwins26q: I/O error
df: /backup: I/O error
df: /db/dwdb26q/dwins25q/NODE0000: I/O error
df: /db/dwins26q/dwdb26q/syscatspace/NODE0000: I/O error
df: /db/dwins26q/dwdb26q/tempspace01/NODE0000: I/O error
df: /dba/dwins26q: I/O error
df: /db2/dwmysld: I/O error
df: /backup/wiminst: I/O error


Once you have confirmed that the disk storage is powered up, running, and operational (and, if the LUNs are in a SAN, that zoning is configured correctly), this problem can be remedied by deporting and then importing the disk group:


# vxdg deport dvgy26

# vxdg import dvgy26
VxVM vxdg ERROR V-5-1-587 Disk group dvgy26: import failed: No valid disk found containing disk group


If Volume Manager cannot see the disks, and your SAN or storage administrator has confirmed that the LUNs are fine and presented to your server, then rescan the disks:


HPSRV01:# vxdisk scandisks
HPSRV01:# vxdctl enable
HPSRV01:# vxdg import dvgy26


The disk group should now show up as enabled:


HPSRV01:# vxdg list
NAME         STATE           ID
dygy2501     enabled,cds          1189621899.78.HPSRV01
dvgyappl     enabled,cds          1190904062.52.HPSRV01
dvgy25       enabled,cds          1189622068.88.HPSRV01
dvgy25db2    enabled,cds          1189622043.86.HPSRV01
dvgy26       enabled,cds          1189538508.74.HPSRV01
dvgy2503     enabled,cds          1189621988.82.HPSRV01
dvgy2504     enabled,cds          1189622014.84.HPSRV01
dygy2502     enabled,cds          1189621955.80.HPSRV01


The disk group now shows in vxprint -ht with the volumes and plexes disabled:


HPSRV01:# vxprint -htg dvgy26
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
ST NAME         STATE        DM_CNT   SPARE_CNT         APPVOL_CNT
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
CO NAME         CACHEVOL     KSTATE   STATE
VT NAME         NVOLUME      KSTATE   STATE
V  NAME         RVG/VSET/CO  KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
SC NAME         PLEX         CACHE    DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO

dg dvgy26       default      default  9000     1189538508.74.HPSRV01

dm EMC_CLARiiON0_9 EMC_CLARiiON0_6 auto 2048   67102464 -

v  backup       -            DISABLED ACTIVE   4194304  SELECT    -        fsgen
pl backup-01    backup       DISABLED ACTIVE   4194304  CONCAT    -        RW
sd EMC_CLARiiON0_9-02 backup-01 EMC_CLARiiON0_9 8388608 4194304 0 EMC_CLARiiON0_6 ENA

v  db           -            DISABLED ACTIVE   1048576  SELECT    -        fsgen
pl db-01        db           DISABLED ACTIVE   1048576  CONCAT    -        RW
sd EMC_CLARiiON0_9-04 db-01  EMC_CLARiiON0_9 16777216 1048576 0   EMC_CLARiiON0_6 ENA

v  dba          -            DISABLED ACTIVE   4194304  SELECT    -        fsgen
pl dba-01       dba          DISABLED ACTIVE   4194304  CONCAT    -        RW
sd EMC_CLARiiON0_9-03 dba-01 EMC_CLARiiON0_9 12582912 4194304 0   EMC_CLARiiON0_6 ENA

v  db2          -            DISABLED ACTIVE   8388608  SELECT    -        fsgen
pl db2-01       db2          DISABLED ACTIVE   8388608  CONCAT    -        RW
sd EMC_CLARiiON0_9-01 db2-01 EMC_CLARiiON0_9 0 8388608  0         EMC_CLARiiON0_6 ENA

v  dwmysld      -            DISABLED ACTIVE   2097152  SELECT    -        fsgen
pl dwmysld-01   dwmysld      DISABLED ACTIVE   2097152  CONCAT    -        RW
sd EMC_CLARiiON0_9-09 dwmysld-01 EMC_CLARiiON0_9 55574528 2097152 0 EMC_CLARiiON0_6 ENA

v  lg1          -            DISABLED ACTIVE   10485760 SELECT    -        fsgen
pl lg1-01       lg1          DISABLED ACTIVE   10485760 CONCAT    -        RW
sd EMC_CLARiiON0_9-08 lg1-01 EMC_CLARiiON0_9 45088768 10485760 0  EMC_CLARiiON0_6 ENA

v  syscat       -            DISABLED ACTIVE   2097152  SELECT    -        fsgen
pl syscat-01    syscat       DISABLED ACTIVE   2097152  CONCAT    -        RW
sd EMC_CLARiiON0_9-05 syscat-01 EMC_CLARiiON0_9 17825792 2097152 0 EMC_CLARiiON0_6 ENA

v  tp01         -            DISABLED ACTIVE   4194304  SELECT    -        fsgen
pl tp01-01      tp01         DISABLED ACTIVE   4194304  CONCAT    -        RW
sd EMC_CLARiiON0_9-07 tp01-01 EMC_CLARiiON0_9 40894464 4194304 0  EMC_CLARiiON0_6 ENA

v  ts01         -            DISABLED ACTIVE   20971520 SELECT    -        fsgen
pl ts01-01      ts01         DISABLED ACTIVE   20971520 CONCAT    -        RW
sd EMC_CLARiiON0_9-06 ts01-01 EMC_CLARiiON0_9 19922944 20971520 0 EMC_CLARiiON0_6 ENA


Verify that the disks in the disk group are all online:


HPSRV01:# vxdisk -o alldgs list
DEVICE           TYPE            DISK             GROUP        STATUS
EMC_CLARiiON0_0  auto:cdsdisk    EMC_CLARiiON0_0  dygy2502     online
EMC_CLARiiON0_1  auto:cdsdisk    -               (dvgy2500)    online
EMC_CLARiiON0_2  auto:cdsdisk    EMC_CLARiiON0_4  dvgyappl     online
EMC_CLARiiON0_3  auto:cdsdisk    EMC_CLARiiON0_3  dvgy2503     online
EMC_CLARiiON0_4  auto:cdsdisk    EMC_CLARiiON0_4  dvgy2504     online
EMC_CLARiiON0_5  auto:cdsdisk    EMC_CLARiiON0_5  dvgy25       online
EMC_CLARiiON0_6  auto:cdsdisk    EMC_CLARiiON0_9  dvgy26       online
EMC_CLARiiON0_7  auto:cdsdisk    EMC_CLARiiON0_8  dygy2501     online
EMC_CLARiiON0_8  auto:cdsdisk    -               (dvgy2506)    online
EMC_CLARiiON0_9  auto:cdsdisk    -               (dvgy2505)    online
EMC_CLARiiON0_10 auto:cdsdisk    -               (dvgy2507)    online
EMC_CLARiiON0_11 auto:cdsdisk    EMC_CLARiiON0_11 dvgy25db2    online


Now the volumes can be started:


HPSRV01:# vxvol -g dvgy26 startall

HPSRV01:# vxprint -htg dvgy26 | egrep '^v|^pl'
v  backup       -            ENABLED  ACTIVE   4194304  SELECT    -        fsgen
pl backup-01    backup       ENABLED  ACTIVE   4194304  CONCAT    -        RW
v  db           -            ENABLED  ACTIVE   1048576  SELECT    -        fsgen
pl db-01        db           ENABLED  ACTIVE   1048576  CONCAT    -        RW
v  dba          -            ENABLED  ACTIVE   4194304  SELECT    -        fsgen
pl dba-01       dba          ENABLED  ACTIVE   4194304  CONCAT    -        RW
v  db2          -            ENABLED  ACTIVE   8388608  SELECT    -        fsgen
pl db2-01       db2          ENABLED  ACTIVE   8388608  CONCAT    -        RW
v  dwmysld      -            ENABLED  ACTIVE   2097152  SELECT    -        fsgen
pl dwmysld-01   dwmysld      ENABLED  ACTIVE   2097152  CONCAT    -        RW
v  lg1          -            ENABLED  ACTIVE   10485760 SELECT    -        fsgen
pl lg1-01       lg1          ENABLED  ACTIVE   10485760 CONCAT    -        RW
v  syscat       -            ENABLED  ACTIVE   2097152  SELECT    -        fsgen
pl syscat-01    syscat       ENABLED  ACTIVE   2097152  CONCAT    -        RW
v  tp01         -            ENABLED  ACTIVE   4194304  SELECT    -        fsgen
pl tp01-01      tp01         ENABLED  ACTIVE   4194304  CONCAT    -        RW
v  ts01         -            ENABLED  ACTIVE   20971520 SELECT    -        fsgen
pl ts01-01      ts01         ENABLED  ACTIVE   20971520 CONCAT    -        RW


The filesystems on these volumes may not be in a consistent state, so run a filesystem check before mounting them:

HPSRV01:# for i in `grep dvgy26 /etc/fstab | awk '{ print $1 }'`
> do
>   fsck -y $i
>   mount $i
> done