Thursday, July 5, 2012

Adding LVM from SAN Storage and resizing later

Goal: Add a slice from SAN storage using LVM, so that it can be resized later whenever needed.

Storage Network: 192.168.3.1 - 8

1. Prepare a volume at the storage side (or let the storage administrator do it). We assume here that the IP of the host on the storage IP block is 192.168.3.21.

2. Make the host visible at the storage side; its HBA/initiator should show up there.
- multipathd and iscsiadm must be installed.
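
On CentOS, these come from the iscsi-initiator-utils and device-mapper-multipath packages. A typical install and service setup (assuming a yum-based CentOS 5/6 box):

yum install -y iscsi-initiator-utils device-mapper-multipath
chkconfig iscsid on          # start the iSCSI daemon at boot
chkconfig multipathd on      # start the multipath daemon at boot
service iscsid start
service multipathd start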

3. Commands to discover the iSCSI targets on the storage portals:



iscsiadm --mode discoverydb --type sendtargets --portal 192.168.3.1 --discover

iscsiadm --mode discoverydb --type sendtargets --portal 192.168.3.2 --discover


Then log in to the discovered targets so the host shows up at the SAN storage:




iscsiadm -m node -l

[root@localhost ~]# iscsiadm -m node -l


Logging in to [iface: default, target: iqn.2002-03.com.compellent:5000d310003e7b2a, portal: 192.168.3.1,3260] (multiple)
Logging in to [iface: default, target: iqn.2002-03.com.compellent:5000d310003e7b17, portal: 192.168.3.2,3260] (multiple)


Then check at the SAN storage side: the iSCSI interface IP of the host should now be visible there. Map the prepared volume to this server.
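
To confirm the logins actually established sessions from the host side (one session per portal is expected here):

iscsiadm -m session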

4. Creating a multipath.

Run the following commands:

multipath -v2
multipath -ll

Actual output below:

[root@localhost ~]# multipath -v2

[root@localhost ~]# multipath -ll
mpath1 (36000d310003e7b00000000000000000b) dm-2 COMPELNT,Compellent Vol
[size=2.0T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=4][active]
 \_ 14:0:0:1 sda        8:0   [active][ready]
 \_ 17:0:0:1 sdb        8:16  [active][ready]
 \_ 18:0:0:1 sdc        8:32  [active][ready]
 \_ 16:0:0:1 sdd        8:48  [active][ready]

The WWID in parentheses (36000d310003e7b00000000000000000b) is the key that will be defined in multipath.conf so the device can be given a friendly name.
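
If you want to read the WWID straight from one of the path devices instead of from the multipath -ll output, scsi_id can report it. Note the syntax differs by release; the first form is CentOS 5 style, the second CentOS 6:

scsi_id -g -u -s /block/sda
scsi_id --whitelisted --device=/dev/sda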

5. Multipath.conf entry:

#/etc/multipath.conf (Centos OS)
defaults {
        user_friendly_names yes
        queue_without_daemon no
}

multipaths {
        multipath {
                wwid                    36000d310003e7b00000000000000000b
                alias                   SANDISK01
                path_grouping_policy    multibus
                path_selector           "round-robin 0"
                failback                manual
                rr_weight               priorities
                no_path_retry           5
        }
}
#eof
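
For the new alias to take effect, flush and rebuild the maps (flushing only succeeds while the map is not in use), or simply restart the daemon:

multipath -F                 # flush unused multipath maps
multipath -v2                # rebuild maps, picking up the alias

or:

service multipathd restart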


If you now run multipath -ll:


[root@localhost ~]# multipath -ll
SANDISK01 (36000d310003e7b00000000000000000b) dm-2 COMPELNT,Compellent Vol
[size=2.0T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=4][active]
 \_ 14:0:0:1 sda        8:0   [active][ready]
 \_ 17:0:0:1 sdb        8:16  [active][ready]
 \_ 18:0:0:1 sdc        8:32  [active][ready]
 \_ 16:0:0:1 sdd        8:48  [active][ready]
[root@localhost ~]# ll /dev/mapper/
total 0
crw------- 1 root root  10, 60 Aug 27 03:34 control
brw-rw---- 1 root disk 253,  2 Aug 27 04:49 SANDISK01
brw-rw---- 1 root disk 253,  0 Aug 27 03:34 VolGroup00-LogVol00
brw-rw---- 1 root disk 253,  1 Aug 27 03:34 VolGroup00-LogVol01



You now have the above disk mapped via multipath under its friendly name.


6. Creating LVM


[root@localhost ~]# pvcreate /dev/mapper/SANDISK01 
  Writing physical volume data to disk "/dev/mapper/SANDISK01"
  Physical volume "/dev/mapper/SANDISK01" successfully created




[root@localhost ~]# vgcreate SANVG01 /dev/mapper/SANDISK01 
  Volume group "SANVG01" successfully created


[root@localhost ~]# vgdisplay 
  --- Volume group ---
  VG Name               SANVG01
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               2.00 TB
  PE Size               4.00 MB
  Total PE              524287
  Alloc PE / Size       0 / 0   
  Free  PE / Size       524287 / 2.00 TB
  VG UUID               A4Jydq-6V7Y-RV3w-3TbT-sALZ-jdGX-aeOULL


[root@localhost ~]# lvcreate -l 100%FREE -n SANLV01 SANVG01
  Logical volume "SANLV01" created

[root@localhost ~]# lvdisplay 
  --- Logical volume ---
  LV Name                /dev/SANVG01/SANLV01
  VG Name                SANVG01
  LV UUID                Edj7Gw-HGhR-g529-89mE-ab5a-HWGY-LnwWGq
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                2.00 TB
  Current LE             524287
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
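
As a quick sanity check of all three LVM layers, the short-form listing commands give a compact one-line-per-object view:

pvs
vgs
lvs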

7. Formatting - creating filesystem

[root@localhost ~]# mkfs.ext3 /dev/SANVG01/SANLV01 
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
268435456 inodes, 536869888 blocks
26843494 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
16384 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
        102400000, 214990848, 512000000

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost ~]# 
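
Since a periodic fsck on a 2 TB volume can stall a reboot for a long time, you may want to disable the mount-count and interval checks, as the message above suggests:

tune2fs -c 0 -i 0 /dev/SANVG01/SANLV01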


8. Mounting and adding entry at fstab.

mkdir /SAN01

add the entry below to /etc/fstab (the _netdev option defers mounting until the network and iSCSI are up):

/dev/mapper/SANVG01-SANLV01     /SAN01          ext3    _netdev,defaults,noatime,acl    0 0

mount /SAN01

df -h

/dev/mapper/SANVG01-SANLV01   2.0T  199M  1.9T   1% /SAN01

The volume is now mounted.
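
Note that on CentOS, _netdev entries are mounted at boot by the netfs init script, so make sure it and the underlying services are enabled (assuming the stock SysV init setup):

chkconfig netfs on
chkconfig iscsi on
chkconfig multipathd on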



9. RESIZING


At the SAN storage side (or by informing the storage admin), expand the volume by 1 TB first. Once done, go back to the server and check.

First check with fdisk -l. If the size still shows the same, rescan the iSCSI sessions and restart the multipath service, as shown below.

After the rescan, check with fdisk again; the disk should now show 3 TB, since the existing size was 2 TB.
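
A minimal rescan sequence, assuming the stock CentOS services (newer multipath-tools releases can also resize a live map via multipathd -k):

iscsiadm -m session --rescan    # re-read LUN sizes on all sessions
service multipathd restart
multipath -ll                   # the map should now report 3.0T
fdisk -l | grep Disk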

Now do:

[root@dvo-zimbra01 home]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/SANDISK01
  VG Name               SANVG01
  PV Size               2.00 TB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB

Then restart the multipath service and resize the physical volume:



[root@dvo-zimbra01 home]# pvresize /dev/mapper/SANDISK01
  Physical volume "/dev/mapper/SANDISK01" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[root@dvo-zimbra01 home]#

Then verify:

[root@dvo-zimbra01 home]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/SANDISK01
  VG Name               SANVG01
  PV Size               3.00 TB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
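
The volume group should now show roughly 1 TB of free extents; vgs gives a one-line summary:

vgs SANVG01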


Then extend the logical volume:

[root@dvo-zimbra01 home]# lvextend -L+1TB /dev/SANVG01/SANLV01
  Extending logical volume SANLV01 to 3.00 TB
  Logical volume SANLV01 successfully resized
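
Alternatively, extending by percentage of free extents consumes everything pvresize added without worrying about unit rounding:

lvextend -l +100%FREE /dev/SANVG01/SANLV01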

[root@dvo-zimbra01 home]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/SANVG01/SANLV01
  VG Name                SANVG01
  LV UUID                Edj7Gw-HGhR-g529-89mE-ab5a-HWGY-LnwWGq
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.00 TB
  Current LE             786431
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

/dev/SANVG01/SANLV01 is now 3 TB in size, but df -h still shows the filesystem at 2 TB:


/dev/mapper/SANVG01-SANLV01   2.0T  199M  1.9T   1% /SAN01

Resizing Online

The last part is running resize2fs on /dev/SANVG01/SANLV01. This works even while the device is mounted (online resizing), but it will take a while, especially on a large disk.



[root@dvo-zimbra01 home]# resize2fs /dev/SANVG01/SANLV01
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/SANVG01/SANLV01 is mounted on /SAN01; on-line resizing required
old desc_blocks = 32, new_desc_blocks = 64
Performing an on-line resize of /dev/SANVG01/SANLV01 to 268434432 (4k) blocks.
The filesystem on /dev/SANVG01/SANLV01 is now 268434432 blocks long.


Run df -h again to confirm the filesystem now reports 3 TB.