Thursday, July 5, 2012

Adding LVM from SAN Storage and resizing later

Goal: Add a slice from the SAN storage using LVM, so that it can be resized later when needed.

Storage Network: 192.168.3.1 - 8

1. Prepare a volume at the storage side (or ask the storage administrator to do it). We assume here that the IP of the host on the storage IP block is 192.168.3.21.

2. Make the host visible at the storage side; its HBA should show up there.
- multipathd and iscsiadm must be installed on the host.
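On CentOS these come with the iscsi-initiator-utils and device-mapper-multipath packages. A quick setup sketch, assuming CentOS 5/6 package and service names:

yum install -y iscsi-initiator-utils device-mapper-multipath
service iscsid start
service multipathd start
chkconfig iscsid on
chkconfig multipathd on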

3. Discover the iSCSI targets advertised by the storage:

iscsiadm --mode discoverydb --type sendtargets --portal 192.168.3.1 --discover

iscsiadm --mode discoverydb --type sendtargets --portal 192.168.3.2 --discover


Then log in to the discovered targets so the host is seen at the SAN storage:




iscsiadm -m node -l

[root@localhost ~]# iscsiadm -m node -l


Logging in to [iface: default, target: iqn.2002-03.com.compellent:5000d310003e7b2a, portal: 192.168.3.1,3260] (multiple)
Logging in to [iface: default, target: iqn.2002-03.com.compellent:5000d310003e7b17, portal: 192.168.3.2,3260] (multiple)


Then check at the SAN storage: the iSCSI interface IP of the host should now be visible there. Map the volume to this server.

4. Creating the multipath device.

Run the following commands:

multipath -v2
multipath -ll

Actual output below:

[root@localhost ~]# multipath -v2

[root@localhost ~]# multipath -ll
mpath1 (36000d310003e7b00000000000000000b) dm-2 COMPELNT,Compellent Vol
[size=2.0T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=4][active]
 \_ 14:0:0:1 sda        8:0   [active][ready]
 \_ 17:0:0:1 sdb        8:16  [active][ready]
 \_ 18:0:0:1 sdc        8:32  [active][ready]
 \_ 16:0:0:1 sdd        8:48  [active][ready]

The number in parentheses above (the WWID, 36000d310003e7b00000000000000000b) is the key that will be defined in multipath.conf so the device can be given a friendly name.
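If you prefer to read the WWID straight from one of the path devices instead of from multipath -ll, scsi_id can report it. A sketch, assuming the RHEL/CentOS 6 location of scsi_id (older releases keep it in /sbin with different flags):

/lib/udev/scsi_id --whitelisted --device=/dev/sda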

5. Multipath.conf entry:

#/etc/multipath.conf (CentOS)
defaults {
        user_friendly_names yes
        queue_without_daemon no
}

multipaths {
        multipath {
                wwid                    36000d310003e7b00000000000000000b
                alias                   SANDISK01
                path_grouping_policy    multibus
                path_selector           "round-robin 0"
                failback                manual
                rr_weight               priorities
                no_path_retry           5
        }
}
#eof
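After editing multipath.conf, reload the configuration so the alias takes effect; a sketch using the CentOS init script (a full restart also works):

service multipathd reload
multipath -v2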


If you now run multipath -ll, the alias shows up:


[root@localhost ~]# multipath -ll
SANDISK01 (36000d310003e7b00000000000000000b) dm-2 COMPELNT,Compellent Vol
[size=2.0T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=4][active]
 \_ 14:0:0:1 sda        8:0   [active][ready]
 \_ 17:0:0:1 sdb        8:16  [active][ready]
 \_ 18:0:0:1 sdc        8:32  [active][ready]
 \_ 16:0:0:1 sdd        8:48  [active][ready]
[root@localhost ~]# ll /dev/mapper/
total 0
crw------- 1 root root  10, 60 Aug 27 03:34 control
brw-rw---- 1 root disk 253,  2 Aug 27 04:49 SANDISK01
brw-rw---- 1 root disk 253,  0 Aug 27 03:34 VolGroup00-LogVol00
brw-rw---- 1 root disk 253,  1 Aug 27 03:34 VolGroup00-LogVol01



You now have the above disk mapped via multipath under its friendly name.


6. Creating LVM


[root@localhost ~]# pvcreate /dev/mapper/SANDISK01 
  Writing physical volume data to disk "/dev/mapper/SANDISK01"
  Physical volume "/dev/mapper/SANDISK01" successfully created




[root@localhost ~]# vgcreate SANVG01 /dev/mapper/SANDISK01 
  Volume group "SANVG01" successfully created


[root@localhost ~]# vgdisplay 
  --- Volume group ---
  VG Name               SANVG01
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               2.00 TB
  PE Size               4.00 MB
  Total PE              524287
  Alloc PE / Size       0 / 0   
  Free  PE / Size       524287 / 2.00 TB
  VG UUID               A4Jydq-6V7Y-RV3w-3TbT-sALZ-jdGX-aeOULL


[root@localhost ~]# lvcreate -l 100%FREE -n SANLV01 SANVG01
  Logical volume "SANLV01" created

[root@localhost ~]# lvdisplay 
  --- Logical volume ---
  LV Name                /dev/SANVG01/SANLV01
  VG Name                SANVG01
  LV UUID                Edj7Gw-HGhR-g529-89mE-ab5a-HWGY-LnwWGq
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                2.00 TB
  Current LE             524287
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

7. Formatting - creating filesystem

[root@localhost ~]# mkfs.ext3 /dev/SANVG01/SANLV01 
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
268435456 inodes, 536869888 blocks
26843494 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
16384 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
        102400000, 214990848, 512000000

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost ~]# 


8. Mounting and adding an entry in fstab.

mkdir /SAN01

Add the entry below to /etc/fstab. The _netdev option ensures the filesystem is mounted only after the network (and thus the iSCSI session) is up:

/dev/mapper/SANVG01-SANLV01     /SAN01          ext3    _netdev,defaults,noatime,acl    0 0

mount /SAN01

df -h

/dev/mapper/SANVG01-SANLV01   2.0T  199M  1.9T   1% /SAN01

The volume is now mounted.



9. RESIZING


At the SAN storage (or ask the storage admin), expand the volume by 1TB, from 2TB to 3TB. Once done, go back to the server and check.

First check: run fdisk -l. If the size still shows the old value, restart the multipathd and iscsi services.

After restarting the iscsi service, check with fdisk again; this should now show a 3TB disk, since the existing one was 2TB.
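The new size can often be picked up online without full service restarts as well; a sketch, assuming the multipath alias SANDISK01 defined above (verify the map name with multipath -ll):

iscsiadm -m session --rescan
multipathd -k"resize map SANDISK01"

The first command rescans all logged-in iSCSI sessions; the second uses multipathd's interactive mode to make the map pick up the new path size.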

Now do:

[root@dvo-zimbra01 home]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/SANDISK01
  VG Name               SANVG01
  PV Size               2.00 TB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB

Then restart the multipath service and run the command below:



[root@dvo-zimbra01 home]# pvresize /dev/mapper/SANDISK01
  Physical volume "/dev/mapper/SANDISK01" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[root@dvo-zimbra01 home]#

then

[root@dvo-zimbra01 home]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/SANDISK01
  VG Name               SANVG01
  PV Size               3.00 TB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB


then

[root@dvo-zimbra01 home]# lvextend -L+1TB /dev/SANVG01/SANLV01
  Extending logical volume SANLV01 to 3.00 TB
  Logical volume SANLV01 successfully resized

[root@dvo-zimbra01 home]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/SANVG01/SANLV01
  VG Name                SANVG01
  LV UUID                Edj7Gw-HGhR-g529-89mE-ab5a-HWGY-LnwWGq
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.00 TB
  Current LE             524287
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

/dev/SANVG01/SANLV01 is now 3TB in size, but df -h still shows the filesystem at 2TB:


/dev/mapper/SANVG01-SANLV01   2.0T  199M  1.9T   1% /SAN01

Resizing Online

So the last part is running resize2fs on /dev/SANVG01/SANLV01. This works even while the device is mounted (online resizing), though it will take a while, especially if the disk is large.



[root@dvo-zimbra01 home]# resize2fs /dev/SANVG01/SANLV01
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/SANVG01/SANLV01 is mounted on /SAN01; on-line resizing required
old desc_blocks = 32, new_desc_blocks = 64
Performing an on-line resize of /dev/SANVG01/SANLV01 to 268434432 (4k) blocks.
The filesystem on /dev/SANVG01/SANLV01 is now 268434432 blocks long.


Run df -h again to confirm it now shows 3TB.





Friday, June 15, 2012

Postgresql 9 - some reminder


1. Grant select-only access on tables to a certain user.

psql databasename

GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonlyuser;
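Note this only covers tables that already exist. On PostgreSQL 9.0+, default privileges can cover future tables too; a sketch using the same readonlyuser:

ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonlyuser;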




Tuesday, May 29, 2012

A note in RHEV installation

I have 1 rhevm and 3 rhevh. Storage is Dell Compellent.

After Red Hat fixed the 512-byte limit, I should now be able to mount the iSCSI store. There should be a detailed process at the storage itself so that every hypervisor can become SPM, or connect to the storage pool immediately once the existing SPM goes into maintenance mode.

At the storage, create a server cluster and join all the rhevh hosts as members. Go to each rhevh and make sure you can log in manually to the iSCSI LUN advertised by the storage, so the HBA is available on every server in the cluster. All members should then be mapped to the volume intended as storage for the rhevh.

At the rhevh, first discover the iSCSI targets:

iscsiadm --mode discoverydb --type sendtargets --portal 192.168.1.1 --discover

assuming the storage iSCSI IP is 192.168.1.1.

Then log in:

iscsiadm -m node -l ---> this logs in to all discovered portals.
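To confirm the logins succeeded, list the active sessions:

iscsiadm -m session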

Then go back to the rhevm and try to activate the rhevh; it should now be able to become SPM.


Thursday, May 17, 2012

Testing DD to write file to disk



time sh -c "dd if=/dev/zero of=ddfile1 bs=8k count=200000"

- this will write a 1.6GB file; see the output below.

time sh -c "dd if=/dev/zero of=ddfile1 bs=8k count=200000"


200000+0 records in
200000+0 records out
1638400000 bytes (1.6 GB) copied, 7.48931 s, 219 MB/s
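Writing through the page cache like this can report optimistic numbers. For a figure closer to real disk throughput, flush at the end; a variant using dd's conv=fdatasync flag:

time sh -c "dd if=/dev/zero of=ddfile1 bs=8k count=200000 conv=fdatasync"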




Tuesday, May 15, 2012

iptables NAT notes

I have an IP that I need to exclude from masquerading; after some googling, this is what worked. 10.254.1.87 needs to bypass NAT, while the rest of 10.254.x.x should be NATed. The entries are below; note the ACCEPT rule must come before the MASQUERADE rule, since NAT rules are matched in order.


iptables -A POSTROUTING -t nat -s 10.254.1.87/255.255.255.255 -j ACCEPT
#
iptables -A POSTROUTING -t nat -s 10.254.0.0/255.255.0.0 -d 192.168.0.0/255.255.0.0 -o eth0 -j MASQUERADE
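To verify the rule order and see hit counters:

iptables -t nat -L POSTROUTING -n -v --line-numbers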




Well, I just copied the information from this link. Thanks!

Monday, January 30, 2012

Changing IP Address of Cisco AP c1260 Gigabit Interface



Well, I need to put this here so I don't forget; there are still other APs whose IP I have not yet set.


I was able to log in to enable mode, and the command to configure the IP is:

lwapp ip ... (try lwapp ? for the succeeding commands / parameters).
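From memory (worth confirming with lwapp ?), the full form on LWAPP-mode APs looks something like the lines below; the address, mask, and gateway are placeholders:

lwapp ap ip address 192.168.10.50 255.255.255.0
lwapp ap ip default-gateway 192.168.10.1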






Friday, January 27, 2012

Duplicating a copy of schema to another schema name on an existing postgresql database

Problem: Need to duplicate the contents of, say, schema001 inside a DB named dbtest001 into a new schema named schema999.


Specs: Postgres 8.2.17


Solution:

1. Log in as root, then su - postgres

2. Create a backup of the DB dbtest001 first, command below:
pg_dump -U postgres dbtest001 -f /tmp/dbtest001-backup.sql

3. psql dbtest001

4. We need to rename schema001 with an ALTER command:
ALTER SCHEMA schema001 RENAME TO schema999;

Then Ctrl-D (or \q) to exit psql.

5. After renaming it to the new schema name, dump that particular schema, structure only:
pg_dump -U postgres dbtest001 --schema=schema999 --schema-only -f /tmp/schema999.sql

6. Rename schema999 back to its original name - schema001.

psql dbtest001

ALTER SCHEMA schema999 RENAME TO schema001;

7. Now create schema999


CREATE SCHEMA schema999;

Ctrl-D to exit the psql console again.

8. Now restore the dump file into the database.
psql -U postgres -d dbtest001 < /tmp/schema999.sql


schema999 should now contain the same structure as schema001. Note that since the dump was made with --schema-only, only the definitions are copied, not the row data.
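If the data is needed as well, the same flow works with a full dump of the renamed schema in step 5, i.e. the same pg_dump command minus --schema-only:

pg_dump -U postgres dbtest001 --schema=schema999 -f /tmp/schema999.sql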


-Ohbet


Edit:


01302012

There was a request to do this on a live system, so I cannot rename the schema. From searching, some mentioned dumping the original schema to a file, renaming the schema name in that file to the new name, then creating the new schema on the DB and restoring the dump. Here I'm using sed to do the rename.


1. Of course, create a backup first - see step 2 above.


2. Dump the existing schema.
pg_dump -U postgres dbtest001 --schema=schema001 --schema-only -f /tmp/schema001.sql

3. Rename the schema001 entries in that file and write the result to another file named schema999.sql. Note that sed does not accept backslash as a delimiter, so use the usual forward slashes:
sed 's/schema001/schema999/g' /tmp/schema001.sql > /tmp/schema999.sql

4.  psql dbtest001

5. Create the new schema.

CREATE SCHEMA schema999;

6. Restore the edited dump file schema999.sql:

psql -U postgres -d dbtest001 < /tmp/schema999.sql


Note: This gives the same result as the method above; the advantage is that you don't need to rename the schema, which matters since it's a live system.