Monday, 18 November 2013

Linux Quota Management

The quota system allows an administrator to establish limits on the amount of disk resources a user can consume.

The quota is set on the /home partition: usrquota for users and grpquota for groups.

If we want user quotas, first initialize quota support in /etc/fstab:

# vim /etc/fstab
/home /home ext3 defaults,usrquota,grpquota 0 0

# mount -o remount /home

# quotacheck -c /home   ----- creates the quota database file aquota.user in /home
# quotacheck -cg /home  ----- also creates aquota.group, for group quotas
            

# edquota -u username    ----- edit quotas for a user
# edquota -g groupname   ----- edit quotas for a group


Turn the assigned quotas on (or off) with:

# quotaon /home
# quotaoff /home
 
blocks ----- disk space currently used (in blocks)
soft   ----- soft limit; the user gets a warning when it is exceeded
hard   ----- hard limit; the maximum size the quota allows
inodes ----- number of files


While giving soft and hard limits, add the currently used block count to them as well; i.e. if the used block count is 32, then for a soft limit of 100 and a hard limit of 200 you enter 132 and 232.
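
For example, if a user's current usage is 32 blocks and we want soft/hard limits of 100 and 200, the edquota session might look roughly like this (the device name, uid, and numbers are only illustrative placeholders, not taken from this setup):

Disk quotas for user ctechz (uid 500):
  Filesystem       blocks    soft    hard   inodes   soft   hard
  /dev/sdaX            32     132     232        4      0      0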

To check the quota, first switch to that user:

# su - ctechz
Then create a file in the user's home directory that exceeds the limits:

dd if=/dev/zero of=anyfileName bs=1024 count=5     (generates a file of bs x count bytes)

To check the status of user quotas:

# repquota /home
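
A couple of other standard checks that can be handy here:

# repquota -a          ----- report on all quota-enabled filesystems
# quota -u ctechz      ----- show usage and limits for a single user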

To delete quota

# quotaoff /home
# vim /etc/fstab
/home /home ext3 defaults 0 0  ---- remove usrquota (and grpquota) from the options

# edquota -u ctechz
    and set the hard and soft limits back to 0
   
# quotaoff /home

# mount -o remount /home

LVM export and import: Move a VG to another machine or group

vgexport and vgimport are not strictly necessary to move drives from one system to another.

They are an administrative policy tool that prevents access to the volumes during the time it takes to move them.
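
As a rough overview (using the same names as the walkthrough below), the whole move boils down to this sequence; the detailed steps follow:

On the old host:
# umount /lvm-ctechz
# vgchange -an vg-ctechz        ----- deactivate the VG
# vgexport vg-ctechz            ----- mark it as exported
(shut down, move the disk(s) to the new machine)

On the new host:
# pvscan                        ----- the disks show up as "in exported VG"
# vgimport vg-ctechz
# vgchange -ay vg-ctechz        ----- activate the VG
# mount /dev/vg-ctechz/lvm-ctechz /LVM-import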

 Exporting Volume Group

1. Unmount the file system

First make sure no users are accessing files on the active volume, then unmount it.

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              25G  4.9G   19G  21% /
tmpfs                 593M     0  593M   0% /dev/shm
/dev/mapper/vg--ctechz-lvm--ctechz
                      664M  542M   90M  86% /lvm-ctechz

# umount /lvm-ctechz/

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              25G  4.9G   19G  21% /
tmpfs                 593M     0  593M   0% /dev/shm

2. Mark the Volume Group inactive

Marking the volume group inactive removes it from the kernel and prevents any further activity on it.

# vgchange -an vg-ctechz(VG name)
0 logical volume(s) in volume group "vg-ctechz" now active

3. Export the VG

It is now necessary to export the Volume Group; this prevents it from being accessed on the "old" host system and prepares it to be removed.

# vgexport vg-ctechz(vg name)
  Volume group "vg-ctechz" successfully exported

When the machine is next shut down, the disk can be unplugged and then connected to its new machine.

 Import the Volume Group(VG)

When plugged into the new system the disk shows up as /dev/sdb or whatever name the system assigns, so an initial pvscan shows:

1. # pvscan
PV /dev/sda3 is in exported VG vg-ctechz[580.00MB/0 free]
PV /dev/sda4 is in exported VG vg-ctechz[484.00MB/312.00MB free]
PV /dev/sda5 is in exported VG vg-ctechz[288.00MB/288.00MB free]
 Total: 3 [1.32 GB] / in use: 3 [1.32 GB] / in no VG: 0[0]

2. We can now import the Volume Group, then activate it and mount the file system.

If you are importing on an LVM2 system run,

# vgimport vg-ctechz
Volume group "vg-ctechz" successfully imported

If you are importing on an LVM1 system, also list the PVs that need to be imported:

# vgimport vg-ctechz /dev/sda3 /dev/sda4 /dev/sda5

3. Activate the Volume Group

You must activate the volume group before you can access it

# vgchange -ay vg-ctechz
1 logical volume(s) in volume group "vg-ctechz" now active

Now mount the file system
# mount /dev/vg-ctechz/lvm-ctechz /LVM-import/

# mount
/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/mapper/vg--ctechz-lvm--ctechz on /LVM-import type ext3 (rw)
[root@localhost ~]#

[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              25G  4.9G   19G  21% /
tmpfs                 593M     0  593M   0% /dev/shm
/dev/mapper/vg--ctechz-lvm--ctechz
                      664M  542M   90M  86% /LVM-import

 Using vgscan

# pvs

  PV         VG        Fmt  Attr PSize   PFree
  /dev/sda3  vg-ctechz lvm2 ax-  580.00M 0
  /dev/sda4  vg-ctechz lvm2 ax-  484.00M 312.00M
  /dev/sda5  vg-ctechz lvm2 ax-  288.00M 288.00M

pvs shows which disks are attached to the VG.

# vgscan
Reading all physical volumes.  This may take a while...
Found exported volume group "vg-ctechz" using metadata type lvm2

# vgimport vg-ctechz
Volume group "vg-ctechz" successfully imported

# vgchange -ay vg-ctechz
1 logical volume(s) in volume group "vg-ctechz" now active

# mkdir /LVM-vgscan
# mount /dev/vg-ctechz/lvm-ctechz /LVM-vgscan

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              25G  4.9G   19G  21% /
tmpfs                 593M     0  593M   0% /dev/shm
/dev/mapper/vg--ctechz-lvm--ctechz
                      664M  542M   90M  86% /LVM-vgscan

# mount
/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/mapper/vg--ctechz-lvm--ctechz on /LVM-vgscan type ext3 (rw)

vgscan is used when we did not export the VG: first unmount the Logical Volume, move the disk to the other machine, and then run # vgscan.

It will detect the volume group on the disk, after which it can be imported, activated, and mounted on the new system.

How to remove LVM Snapshot

When the backup has finished you can unmount the volume and remove it from the system. You should remove snapshot volumes when you have finished with them, because they keep a copy of all data written to the original volume and this can hurt performance.

# vgdisplay
  --- Volume group ---
  VG Name               vg-ctechz
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               1.04 GB
  PE Size               4.00 MB
  Total PE              266
 Alloc PE / Size       250 / 1000.00 MB
  Free  PE / Size       16 / 64.00 MB
  VG UUID         ui4JTr-JOwC-VCvG-5Evc-poOD-3Klr-hM3feq

# vgs
  VG        #PV #LV #SN Attr   VSize VFree
  vg-ctechz   2   2   1 wz--n- 1.04G 64.00M


# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              25G  4.9G   19G  21% /
tmpfs                 593M     0  593M   0% /dev/shm
/dev/mapper/vg--ctechz-lvm--ctechz
                      591M  542M   20M  97% /lvm-ctechz
/dev/mapper/vg--ctechz-lvm--ctechz--spapshot1
                      591M  542M   20M  97% /snapshot

# umount /snapshot/

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              25G  4.9G   19G  21% /
tmpfs                 593M     0  593M   0% /dev/shm
/dev/mapper/vg--ctechz-lvm--ctechz
                      591M  542M   20M  97% /lvm-ctechz

# ls /dev/vg-ctechz/
lvm-ctechz  lvm-ctechz-spapshot1

# lvremove /dev/vg-ctechz/lvm-ctechz-spapshot1
Do you really want to remove active logical volume lvm-ctechz-spapshot1? [y/n]: y
Logical volume "lvm-ctechz-spapshot1" successfully removed

# vgs
  VG        #PV #LV #SN Attr   VSize VFree
  vg-ctechz   2   1   0 wz--n- 1.04G 464.00M

We got our 400MB back in the Volume Group (VFree went from 64MB back up to 464MB).


How to take LVM Snapshots / Backing up LVM via snapshot

A snapshot volume is a special type of volume that presents all the data that was in the volume at the time the snapshot was created.

This means we can back up that volume without having to worry about data being changed while the backup is going on, and we don't have to take the database volume offline while the backup is taking place.

A snapshot essentially points at the original volume's data: if the data on an LVM volume is constantly changing and we need a backup of it at a particular time, say 6:00 pm, we first take a snapshot at 6:00, mount it somewhere, and take the backup from that mount. This allows the administrator to create a new block device which presents an exact copy of a logical volume, frozen at some point in time.
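
As a sketch, that 6:00 pm scenario could look like this (the snapshot name, mount point, and archive path are just examples):

# lvcreate -L 400M -s /dev/vg-ctechz/lvm-ctechz -n snap-6pm
# mkdir /mnt/snap-6pm
# mount -o ro /dev/vg-ctechz/snap-6pm /mnt/snap-6pm       ----- mount the frozen view read-only
# tar -czf /backup/data-6pm.tar.gz -C /mnt/snap-6pm .     ----- back up the 6:00 pm data
# umount /mnt/snap-6pm
# lvremove /dev/vg-ctechz/snap-6pm                        ----- drop the snapshot when done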


Typically this would be used when some batch processing, a backup for instance, needs to be performed on the logical volume, but you don't want to halt a live system that is changing the data.


When the snapshot device has been finished with, the system administrator can simply remove it.


This facility does require that the snapshot be made at a time when the data on the logical volume is in a consistent state.

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              25G  4.9G   19G  21% /
tmpfs                 593M     0  593M   0% /dev/shm
/dev/mapper/vg--ctechz-lvm--ctechz
                      591M  542M   20M  97% /lvm-ctechz


Creating an LVM snapshot takes space from the volume group, so first make sure the volume group contains enough free space.

# vgs or vgdisplay

Make sure the snapshot has enough space, ideally the same as the origin size (though you can create it with any size).

1. Check the Volume Group for free space

# vgs
VG        #PV #LV #SN Attr   VSize VFree
vg-ctechz   2   2   1 wz--n- 1.04G 464.00M

Free space is available, so create a snapshot of the current LV in this VG:

# lvcreate -L +400M -s /dev/vg-ctechz/lvm-ctechz -n lvm-ctechz-spapshot1
Logical volume "lvm-ctechz-spapshot1" created


Here, in place of +400M we can give any size; we could even use +10M.
-L|--size      LogicalVolumeSize
-s|--snapshot  OriginalLogicalVolume[Path]
-n|--name      LogicalVolumeName

General form:   lvcreate -L size -s OriginLVPath -n snapshotName

# ls /dev/vg-ctechz/
lvm-ctechz  lvm-ctechz-spapshot1

# mkdir /snapshot
# mount /dev/vg-ctechz/lvm-ctechz-spapshot1 /snapshot/

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              25G  4.9G   19G  21% /
tmpfs                 593M     0  593M   0% /dev/shm
/dev/mapper/vg--ctechz-lvm--ctechz
                      591M  542M   20M  97% /lvm-ctechz
/dev/mapper/vg--ctechz-lvm--ctechz--spapshot1
                      591M  542M   20M  97% /snapshot
 
# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg-ctechz/lvm-ctechz
  VG Name                vg-ctechz
  LV UUID           jZEuoN-16MG-30eX-SZaI-8ETO-ZguH-YRVyeN
  LV Write Access        read/write
  LV snapshot status    source of
              /dev/vg-ctechz/lvm-ctechz-spapshot1[active]
  LV Status              available
   # open                 1
  LV Size                600.00 MB
  Current LE             150
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/vg-ctechz/lvm-ctechz-spapshot1
  VG Name                vg-ctechz
  LV UUID                kk3Cvm-Il4z-J8Kt-qt8s-TstZ-Peyw-gU0Uho
  LV Write Access        read/write
  LV snapshot status     active destination for 

                         /dev/vg-ctechz/lvm-ctechz
  LV Status              available
  # open                 1
  LV Size                600.00 MB
  Current LE             150
  COW-table size         400.00 MB
  COW-table LE           100
  Allocated to snapshot  0.01%
  Snapshot chunk size    4.00 KB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1


# lvdisplay /dev/vg-ctechz/lvm-ctechz
--- Logical volume ---
  LV Name                /dev/vg-ctechz/lvm-ctechz
  VG Name                vg-ctechz
  LV UUID                jZEuoN-16MG-30eX-SZaI-8ETO-ZguH-YRVyeN
  LV Write Access        read/write
LV snapshot status     source of 

               /dev/vg-ctechz/lvm-ctechz-spapshot1[active]
  LV Status              available
  # open                 1
  LV Size                600.00 MB
  Current LE             150
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

Here we can get the status of the active snapshot

Now that we have created the snapshot of our Logical Volume we can do the backup: first create the snapshot, then back up from it.


For the backup itself, use whatever method or strategy you prefer, for example:

# tar -cvf lvmsnap1.tar /snapshot     ----- archive the data from the mounted snapshot
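
An alternative, shown only as a sketch (the /backup path is an example), is a raw block-level image of the snapshot device with dd instead of a file-level archive:

# dd if=/dev/vg-ctechz/lvm-ctechz-spapshot1 of=/backup/lvm-ctechz.img bs=1M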

How to Extend Volume Group / Adding physical volumes to a volume group

To increase the volume group size, create another partition on the same disk or on a separate disk.

# vgdisplay
  --- Volume group ---
  VG Name               vg-ctechz
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  16
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               1.04 GB
  PE Size               4.00 MB
  Total PE              266
  Alloc PE / Size       188 / 752.00 MB
  Free  PE / Size       78 / 312.00 MB
  VG UUID       ui4JTr-JOwC-VCvG-5Evc-poOD-3Klr-hM3feq

[root@localhost ~]# vgs
  VG        #PV #LV #SN Attr  VSize VFree
  vg-ctechz  2   1   0 wz--n- 1.04G 312.00M

# fdisk -l

Disk /dev/sda: 31.8 GB, 31890341888 bytes
255 heads, 63 sectors/track, 3877 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot  Start End  Blocks   Id  System
/dev/sda1   *    1    3315 26627706 83  Linux
/dev/sda2       3316  3446 1052257+ 82  Linux swap/Solaris
/dev/sda3       3447  3520 594405   8e  Linux LVM
/dev/sda4       3521  3582 498015   8e  Linux LVM

# fdisk /dev/sda

The number of cylinders for this disk is set to 3877.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Selected partition 2
First cylinder (3316-3877, default 3316):
Using default value 3316
Last cylinder or +size or +sizeM or +sizeK (3316-3446, default 3446): +600M

Command (m for help): p

Disk /dev/sda: 31.8 GB, 31890341888 bytes
255 heads, 63 sectors/track, 3877 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot  Start  End Blocks   Id  System
/dev/sda1   *    1     3315 26627706 83  Linux
/dev/sda2       3316   3389 594405   5   Extended
/dev/sda3       3447   3520 594405   8e  Linux LVM
/dev/sda4       3521   3582 498015   8e  Linux LVM

Command (m for help):
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

# partprobe

Then create the new logical partition inside the extended partition:

# fdisk /dev/sda     (add a new logical partition, which becomes /dev/sda5)

# fdisk -l

Disk /dev/sda: 31.8 GB, 31890341888 bytes
255 heads, 63 sectors/track, 3877 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot Start  End   Blocks   Id  System
/dev/sda1   *  1      3315  26627706 83  Linux
/dev/sda2      3316   3389  594405   5  Extended
/dev/sda3      3447   3520  594405   8e  Linux LVM
/dev/sda4      3521   3582  498015   8e  Linux LVM
/dev/sda5      3316   3352  297171   83  Linux

sda5 is the new partition.

# pvcreate /dev/sda5
  Writing physical volume data to disk "/dev/sda5"
  Physical volume "/dev/sda5" successfully created

# vgs
  VG        #PV #LV #SN Attr   VSize VFree
  vg-ctechz   2   1   0 wz--n- 1.04G 312.00M

To extend the VG:   # vgextend vgname /dev/newPartition

# vgextend vg-ctechz /dev/sda5
  Volume group "vg-ctechz" successfully extended

# vgs
  VG        #PV #LV #SN Attr   VSize VFree
  vg-ctechz   3   1   0 wz--n- 1.32G 600.00M

Check the old VG size and the new VG size: here it has been extended from 1.04G to 1.32G.

How to reduce LVM

Be careful while reducing an LVM volume, or there is a chance the file system will get corrupted.

Use this simple technique when reducing an LVM volume: leave a 10% margin when resizing the file system. Say you are reducing 100GB to 60GB; 10% of 60 is 6, and 60 - 6 = 54, so first resize the file system to 54GB, then reduce the LV to 60GB. Check another example:

Total space is, say, 300GB and we need the final size to be 200GB. Take 90% of 200GB: 10*200/100 = 20, and 200 - 20 = 180GB. Use this 180GB when resizing the file system (resize2fs), and use the correct size we actually need (200GB) when reducing the LV (lvreduce).
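
As a sketch, the commands for this 300GB -> 200GB example would be roughly the following (vgname, lvname, and the mount point are placeholders; the generic procedure and a real 900MB session follow below):

# umount /mountPoint
# e2fsck -f /dev/vgname/lvname
# resize2fs /dev/vgname/lvname 180G      ----- 200GB minus the 10% margin
# lvreduce -L 200G /dev/vgname/lvname    ----- reduce the LV to the real target size
# resize2fs /dev/vgname/lvname           ----- optionally grow the fs back to fill the 200GB LV
# mount /dev/vgname/lvname /mountPoint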

# umount mount point

Check the file system
# e2fsck -f /dev/vgname/lvname

Here the LV size is 900MB and we need to reduce it to 750MB. Find 10% of 750MB: 10*750/100 = 75, and 750 - 75 = 675, so resize the file system using 675MB:


# resize2fs /dev/vgname/lvname 675M

# lvreduce -L 750M /dev/vgname/lvname

# mount mountPoint

------

# umount /lvm-ctechz

# e2fsck -f /dev/vg-ctechz/lvm-ctechz

e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vg-ctechz/lvm-ctechz: 12/122880 files (8.3% non-contiguous), 142573/230400 blocks

# resize2fs /dev/vg-ctechz/lvm-ctechz 675M

resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/vg-ctechz/lvm-ctechz to 172800 (4k) blocks.
The filesystem on /dev/vg-ctechz/lvm-ctechz is now 172800 blocks long.

# lvreduce -L 750M /dev/vg-ctechz/lvm-ctechz

  Rounding up size to full physical extent 752.00 MB
  WARNING: Reducing active logical volume to 752.00 MB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvm-ctechz? [y/n]: y
  Reducing logical volume lvm-ctechz to 752.00 MB
  Logical volume lvm-ctechz successfully resized

# mount /dev/vg-ctechz/lvm-ctechz /lvm-ctechz/

# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg-ctechz/lvm-ctechz
  VG Name                vg-ctechz
  LV UUID                jZEuoN-16MG-30eX-SZaI-8ETO-ZguH-YRVyeN
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                752.00 MB
  Current LE             188
  Segments               2
  Allocation             inherit
   Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

[root@localhost ~]# lvs
  LV         VG        Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lvm-ctechz vg-ctechz -wi-ao 752.00M
 

So we successfully reduced our LV from 900MB to 750MB (752MB after rounding up to a full physical extent).

How to extend LVM

Say the disk is 97% full; what approach do you take?

1. Check whether the disk is full using df -h

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              25G  4.9G   19G  21% /
tmpfs                 593M     0  593M   0% /dev/shm
/dev/mapper/vg--ctechz-lvm--ctechz
                      591M  542M   20M  97% /lvm-ctechz


2. Find out the Volume group(VG) containing the logical volume(LV)

# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg-ctechz/lvm-ctechz
  VG Name                vg-ctechz
  LV UUID          jZEuoN-16MG-30eX-SZaI-8ETO-ZguH-YRVyeN
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                600.00 MB
  Current LE             150
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

[root@localhost ~]# lvs
  LV         VG        Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lvm-ctechz vg-ctechz -wi-ao 600.00M

3. Check whether the Volume Group (VG) can be extended, i.e. whether there is any free space left in the volume group.

# vgdisplay
  --- Volume group ---
  VG Name               vg-ctechz
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  14
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               1.04 GB
  PE Size               4.00 MB
  Total PE              266
  Alloc PE / Size       150 / 600.00 MB
  Free  PE / Size       116 / 464.00 MB
  VG UUID          ui4JTr-JOwC-VCvG-5Evc-poOD-3Klr-hM3feq

[root@localhost ~]# vgs
  VG        #PV #LV #SN Attr   VSize VFree
  vg-ctechz   2   1   0 wz--n- 1.04G 464.00M

If we have free space in the VG, extend the LV:

# lvextend -L +sizeToAdd /dev/vgname/lvname      (or -L fullNewSize)

If we want to extend an LV of size 110MB to 150MB then we can give:

# lvextend -L +40M /dev/vgname/lvname  OR

# lvextend -L 150M /dev/vgname/lvname

We can also give the size in extents (the -l option takes a number of physical extents):

# lvextend -l +numberOfExtents /dev/vgname/lvname

In the above case, if the PE size is 4MB, we can do the extension with (+10 extents = 40MB):

lvextend -l +10 /dev/vgname/lvname

In this setup
# lvextend -L +300M /dev/vg-ctechz/lvm-ctechz
  Extending logical volume lvm-ctechz to 900.00 MB
  Logical volume lvm-ctechz successfully resized


4. Then grow the ext3 file system into the extended space:
     # resize2fs /dev/vgname/lvname

# resize2fs /dev/vg-ctechz/lvm-ctechz
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/vg-ctechz/lvm-ctechz is mounted on /lvm-ctechz; on-line resizing required
Performing an on-line resize of /dev/vg-ctechz/lvm-ctechz to 230400 (4k) blocks.
The filesystem on /dev/vg-ctechz/lvm-ctechz is now 230400 blocks long.

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              25G  4.9G   19G  21% /
tmpfs                 593M     0  593M   0% /dev/shm
/dev/mapper/vg--ctechz-lvm--ctechz
                      885M  542M  301M  65% /lvm-ctechz


Earlier only 20MB was available and the file system was 97% full; now that we have extended the LV, 301MB is available and usage is down to 65%.



# vgs
  VG        #PV #LV #SN Attr   VSize VFree
  vg-ctechz   2   1   0 wz--n- 1.04G 164.00M

# lvs
  LV         VG        Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lvm-ctechz vg-ctechz -wi-ao 900.00M
 

Now the total lvm size is 900MB

# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg-ctechz/lvm-ctechz
  VG Name                vg-ctechz
  LV UUID                jZEuoN-16MG-30eX-SZaI-8ETO-ZguH-YRVyeN
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                900.00 MB
  Current LE             225
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

Tuesday, 12 November 2013


How to create LVM (Logical Volume Management)

LVM creates a higher level of abstraction than traditional Linux disk partitions. This allows great flexibility in allocating storage.

Logical volumes can be resized and moved between physical devices easily.

Here we create two partitions and toggle their type to Linux LVM.

In order to create a Logical Volume, first we need a
1.Physical Volume, then a 
2.Volume Group and
3.Logical Volume

We start with a hard disk, /dev/sda. First you need to create physical volumes; to do this you need partitions or a whole disk.
 

It is possible to run the pvcreate command on /dev/sda directly, but I prefer to use partitions, and from the partitions I later create physical volumes.

So first create two partitions to use as physical volumes.

# fdisk -l

Disk /dev/sda: 22.9 GB, 22922297344 bytes
255 heads, 63 sectors/track, 2786 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

 Device Boot  Start End  Blocks   Id  System
/dev/sda1   *  1    2525 20282031 83  Linux
/dev/sda2     2526  2786 2096482+ 82  Linux swap/Solaris


Creating two Partitions for PV

There are two ways we can go here, depending upon the partitions you have. If you already have more than 3 primary partitions, go with the extended partition method. If there are only two primary partitions, you can create two more and toggle them as LVM. Either way works, depending on your disk's partitions.

# fdisk -l

Disk /dev/sda: 32.8 GB, 32841072640 bytes
255 heads, 63 sectors/track, 3992 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start End  Blocks   Id System
/dev/sda1   *   1    3315 26627706 83 Linux
/dev/sda2      3316  3446 1052257+ 82 Linux swap/Solaris

Primary partitions have a disadvantage: you only have four of them. (Comes from my earlier days... 64k will always be sufficient... why more than 4 partitions on a 10MB disk?) Therefore, with bigger disks a specific type of primary partition was defined - the "extended partition", which is nothing more than a construct to create more partitions (themselves sometimes called extended partitions, sometimes "logical drives"). But still the space defined for the "extended partition" (the one in the primary partition table) covers the sum of all sub-partitions.

Primary partitions are old-school... I guess nowadays you get away with defining one huge "extended partition" and creating lots of sub-partitions in there. Technically speaking extended partitions take time to scan, since an auto-created sub-partition table is at the beginning of each sub-partition, with a single entry plus a pointer to the next sub-partition... but that's no real show stopper: How often do you need to scan those tables?

If you know that you will have four or less partitions on the disk, stick with simple primary partitions. If unsure, let at least p4 be an extended partition with the remainder of the disk.

# fdisk /dev/sda

The number of cylinders for this disk is set to 3992.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (3447-3992, default 3447):
Using default value 3447
Last cylinder or +size or +sizeM or +sizeK (3447-3992, default 3992):
Using default value 3992

Command (m for help):
Command (m for help): p

Disk /dev/sda: 32.8 GB, 32841072640 bytes
255 heads, 63 sectors/track, 3992 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot  Start End  Blocks   Id System
/dev/sda1   *    1    3315 26627706 83 Linux
/dev/sda2       3316  3446 1052257+ 82 Linux swap/Solaris

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (3447-3992, default 3447):
Using default value 3447
Last cylinder or +size or +sizeM or +sizeK (3447-3992, default 3992): +600M

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Selected partition 4
First cylinder (3521-3992, default 3521):
Using default value 3521
Last cylinder or +size or +sizeM or +sizeK (3521-3992, default 3992): +500M

Command (m for help): p

Disk /dev/sda: 32.8 GB, 32841072640 bytes
255 heads, 63 sectors/track, 3992 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot  Start End   Blocks   Id  System
/dev/sda1   *    1    3315  26627706 83  Linux
/dev/sda2       3316  3446  1052257+ 82  Linux swap/Solaris
/dev/sda3       3447  3520  594405   83  Linux
/dev/sda4       3521  3582  498015   83  Linux

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Linux LVM)

Command (m for help): t
Partition number (1-4): 4
Hex code (type L to list codes): 8e
Changed system type of partition 4 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sda: 32.8 GB, 32841072640 bytes
255 heads, 63 sectors/track, 3992 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot  Start End  Blocks   Id  System
/dev/sda1   *    1    3315 26627706 83  Linux
/dev/sda2       3316  3446 1052257+ 82  Linux swap/Solaris
/dev/sda3       3447  3520 594405   8e  Linux LVM
/dev/sda4       3521  3582 498015   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

# partprobe

# fdisk -l

Disk /dev/sda: 32.8 GB, 32841072640 bytes
255 heads, 63 sectors/track, 3992 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot  Start End  Blocks   Id  System
/dev/sda1   *    1    3315 26627706 83  Linux
/dev/sda2       3316  3446 1052257+ 82  Linux swap/Solaris
/dev/sda3       3447  3520 594405   8e  Linux LVM
/dev/sda4       3521  3582 498015   8e  Linux LVM


We created two new partitions and toggled their type to Linux LVM. Now create a Physical Volume.

Creating Physical Volume PV

# pvcreate /dev/sda{3,4}
  dev_is_mpath: failed to get device for 8:3
  Writing physical volume data to disk "/dev/sda3"
  Physical volume "/dev/sda3" successfully created
  dev_is_mpath: failed to get device for 8:4
  Writing physical volume data to disk "/dev/sda4"
  Physical volume "/dev/sda4" successfully created

# pvdisplay
  "/dev/sda3" is a new physical volume of "580.47 MB"
  --- NEW Physical volume ---
  PV Name               /dev/sda3
  VG Name
  PV Size               580.47 MB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID         SaxPtr-TI0S-3ITI-0zrh-j9ZB-0xOf-yX76sB

  "/dev/sda4" is a new physical volume of "486.34 MB"
  --- NEW Physical volume ---
  PV Name               /dev/sda4
  VG Name
  PV Size               486.34 MB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID         UXpSUV-auh9-2gDg-IBJ6-UXij-FAid-CNLLdL

OR
  

# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda3       lvm2 a--  580.47M 580.47M
  /dev/sda4       lvm2 a--  486.34M 486.34M
 

Now create a volume group on top of the Physical Volumes.

Creating a Volume Group VG 

# vgcreate vg-ctechz /dev/sda{3,4}
  Volume group "vg0" successfully created

# vgdisplay
  --- Volume group ---
  VG Name               vg-ctechz
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               1.04 GB
  PE Size               4.00 MB
  Total PE              266
  Alloc PE / Size       0 / 0
  Free  PE / Size       266 / 1.04 GB
  VG UUID          ui4JTr-JOwC-VCvG-5Evc-poOD-3Klr-hM3feq

Or

# vgs
  VG        #PV #LV #SN Attr   VSize VFree
  vg-ctechz   2   0   0 wz--n- 1.04G 1.04G
 
We can also specify the physical extent (PE) size when creating a volume group,

Ex: # vgcreate -s PESize vgname /dev/sda3 /dev/sda4

Now Create a Logical Volume 

Creating Logical Volume LV 

lvcreate -L +600 -n lvmName /dev/vgName

# lvcreate -L +600 -n lvm-ctechz /dev/vg-ctechz
Logical volume "lvm-sda" created

# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg-ctechz/lvm-ctechz
  VG Name                vg-ctechz
  LV UUID                jZEuoN-16MG-30eX-SZaI-8ETO-ZguH-YRVyeN
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                600.00 MB
  Current LE             150
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

OR

# lvs
  LV           VG       Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lvm-ctechz vg-ctechz -wi-a- 600.00M


Now format the LVM volume with the needed file system and mount it:

# mkdir /lvm-ctechz


# mkfs.ext3 /dev/vg-ctechz/lvm-ctechz
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
76800 inodes, 153600 blocks
7680 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=159383552
5 block groups
32768 blocks per group, 32768 fragments per group
15360 inodes per group
Superblock backups stored on blocks: 32768, 98304

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
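
If desired, these periodic checks can be tuned or turned off with tune2fs (shown here only as an optional sketch):

# tune2fs -c 0 -i 0 /dev/vg-ctechz/lvm-ctechz     ----- disable the mount-count and time-interval checks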

Mount it in fstab 

# vim /etc/fstab
 /dev/vg-ctechz/lvm-ctechz /lvm-ctechz  ext3  defaults  0 0

# mount -a


# mount
/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/mapper/vg--ctechz-lvm--ctechz on /lvm-ctechz type ext3 (rw)

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              25G  4.4G   20G  19% /
tmpfs                 593M     0  593M   0% /dev/shm
/dev/mapper/vg--ctechz-lvm--ctechz
                      591M   17M  545M   3% /lvm-ctechz


# fdisk -l

Disk /dev/sda: 31.8 GB, 31890341888 bytes
255 heads, 63 sectors/track, 3877 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot Start End  Blocks   Id System
/dev/sda1   *   1    3315 26627706 83 Linux
/dev/sda2      3316  3446 1052257+ 82 Linux swap/Solaris
/dev/sda3      3447  3520 594405   8e Linux LVM
/dev/sda4      3521  3582 498015   8e Linux LVM



Create a big file and put it in the mounted LVM directory to test other scenarios:

# dd if=/dev/zero of=lvm-sizefile bs=1000024 count=500
This creates a file of about 500MB to put in the LVM volume.

LVM Introduction

LVM is a Logical Volume Manager for the Linux operating system.

This gives the system administrator much more flexibility in allocating storage to applications and users.
Storage volumes created under the control of the logical volume manager can be resized and moved around.

The logical volume manager also allows management of storage volumes in user-defined groups, allowing the system administrator to deal with sensibly named volume groups such as "development" and "sales" rather than physical disk names such as "sda" and "sdb".

Logical volume management is traditionally associated with large installations containing many disks but it is equally suited to small systems with a single disk or maybe two.

When new drives are added to the system, it is no longer necessary to move users files around to make the best use of the new storage;
simply add the new disk into an existing volume group or groups and extend the logical volumes as necessary.


It is also easy to take old drives out of service by moving the data from them onto newer drives - this can be done online, without disrupting user service.

NOTE: If a system is full and there is no more space, we can either add a new disk to an existing volume group and extend the logical volumes, OR export the LVM volume group to another system.
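
As a sketch of the first option (the device name /dev/sdb1 and the +10G size are just examples), adding a new disk and growing an LV looks like this:

# pvcreate /dev/sdb1                              ----- make the new partition a PV
# vgextend vg-ctechz /dev/sdb1                    ----- add it to the existing VG
# lvextend -L +10G /dev/vg-ctechz/lvm-ctechz      ----- grow the LV
# resize2fs /dev/vg-ctechz/lvm-ctechz             ----- grow the ext3 filesystem online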


    hda1   hdc1        (PVs: on partitions or whole disks)
       \   /
        \ /
      diskvg           (VG)
      /  |  \
     /   |   \
 usrlv rootlv varlv    (LVs)
   |     |     |
 ext2 reiserfs xfs     (filesystems)


# volume group (VG)

The Volume Group is the highest level abstraction used within the LVM. It gathers together a collection of Logical Volumes and Physical Volumes into one administrative unit.

# physical volume (PV)

A physical volume is typically a hard disk, though it may well just be a device that 'looks' like a hard disk (eg. a software raid device).

# logical volume (LV)

The equivalent of a disk partition in a non-LVM system. The LV is visible as a standard block device; as such the LV can contain a file system (eg. /home).

# physical extent (PE)

Each physical volume is divided into chunks of data, known as physical extents; these extents have the same size as the logical extents for the volume group.

# logical extent (LE)

Each logical volume is split into chunks of data, known as logical extents. The extent size is the same for all logical volumes in the volume group.

# Tying it all together

Example:
  Let's suppose we have a volume group called VG1; this volume group has a physical extent size of 4MB.
  Into this volume group we introduce 2 hard disk partitions, /dev/hda1 and /dev/hdb1.

These partitions will become physical volumes PV1 and PV2 (more meaningful names can be given at the administrator's discretion).
 

The PVs are divided up into 4MB chunks, since this is the extent size for the volume group.
 
The disks are different sizes and we get 99 extents in PV1 and 248 extents in PV2.


  We can now create ourselves a logical volume; this can be any size between 1 and 347 (99 + 248) extents. When the logical volume is created, a mapping is defined between logical extents and physical extents,

  e.g. logical extent 1 could map onto physical extent 51 of PV1, and data written to the first 4MB of the logical volume would in fact be written to the 51st extent of PV1.
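
As a rough sketch, the commands that realise this example would be (the LV name LV1 is just an example):

# pvcreate /dev/hda1 /dev/hdb1                ----- turn both partitions into PVs
# vgcreate -s 4M VG1 /dev/hda1 /dev/hdb1      ----- VG1 with a 4MB physical extent size
# lvcreate -l 347 -n LV1 VG1                  ----- an LV using all 347 (99 + 248) extents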