A snapshot volume is a special type of volume that presents all the data that was in the volume at the time the snapshot was created.
This means we can back up that volume without having to worry about the data changing while the backup is in progress, and we don't have to take the volume, a live database for example, offline while the backup is taking place.
A snapshot is not a full copy of the data; it is a copy-on-write view that points back at the origin volume. If the data on an LVM volume keeps changing and we need a backup of it as it existed at a particular time, say 6:00 pm, we create a snapshot at 6:00, mount it somewhere, and take the backup from that mount point. This allows the administrator to create a new block device which presents an exact copy of a logical volume, frozen at that point in time.
Typically this would be used when some batch processing, a backup for instance, needs to be performed on the logical volume, but you don't want to halt a live system that is changing the data.
When the snapshot device is no longer needed, the system administrator can simply remove it.
This facility does require that the snapshot be made at a time when the data on the logical volume is in a consistent state.
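One way to get the on-disk data into a consistent state is to flush and briefly freeze the filesystem while the snapshot is taken. A minimal sketch, assuming the origin volume used in this post is mounted at /lvm-ctechz (on recent kernels LVM itself suspends supported filesystems during snapshot creation, so this is an extra precaution rather than a requirement):
# sync                        # flush dirty buffers to disk
# fsfreeze -f /lvm-ctechz     # suspend new writes to the mounted filesystem
# lvcreate -L 400M -s /dev/vg-ctechz/lvm-ctechz -n lvm-ctechz-spapshot1
# fsfreeze -u /lvm-ctechz     # resume writes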
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 25G 4.9G 19G 21% /
tmpfs 593M 0 593M 0% /dev/shm
/dev/mapper/vg--ctechz-lvm--ctechz
591M 542M 20M 97% /lvm-ctechz
Creating an LVM snapshot allocates space from the volume group, so first make sure the volume group contains enough free space, using vgs or vgdisplay.
Give the snapshot enough space to hold the changes expected while it exists; it can be anywhere from a small fraction of the origin's size up to the same size as the origin.
1. Check the volume group for free space
# vgs
VG #PV #LV #SN Attr VSize VFree
vg-ctechz 2 2 1 wz--n- 1.04G 464.00M
Free space is available, so create a snapshot of the current logical volume in this volume group:
# lvcreate -L +400M -s /dev/vg-ctechz/lvm-ctechz -n lvm-ctechz-spapshot1
Logical volume "lvm-ctechz-spapshot1" created
Here, in place of 400M any size can be given, even as little as 10M; the snapshot only needs to be big enough to hold the blocks that change on the origin while the snapshot exists.
-L|--size LogicalVolumeSize
-s|--snapshot OriginalLogicalVolume[Path]
-n|--name LogicalVolumeName
lvcreate -L size -s OriginalLogicalVolume -n SnapshotName
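If the snapshot's copy-on-write space fills up completely, the snapshot becomes invalid and can no longer be used for the backup, so it can be grown while in use if it starts running low. A minimal sketch (the extra 200M is only an example figure):
# lvextend -L +200M /dev/vg-ctechz/lvm-ctechz-spapshot1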
# ls /dev/vg-ctechz/
lvm-ctechz lvm-ctechz-spapshot1
# mkdir /snapshot
# mount /dev/vg-ctechz/lvm-ctechz-spapshot1 /snapshot/
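Since the snapshot is only read from during the backup, it can also be mounted read-only to rule out accidental writes to it. A small variation on the mount above (some filesystems, XFS for example, may additionally need -o nouuid because the snapshot carries the same filesystem UUID as the origin):
# mount -o ro /dev/vg-ctechz/lvm-ctechz-spapshot1 /snapshot/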
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 25G 4.9G 19G 21% /
tmpfs 593M 0 593M 0% /dev/shm
/dev/mapper/vg--ctechz-lvm--ctechz
591M 542M 20M 97% /lvm-ctechz
/dev/mapper/vg--ctechz-lvm--ctechz--spapshot1
591M 542M 20M 97% /snapshot
# lvdisplay
--- Logical volume ---
LV Name /dev/vg-ctechz/lvm-ctechz
VG Name vg-ctechz
LV UUID jZEuoN-16MG-30eX-SZaI-8ETO-ZguH-YRVyeN
LV Write Access read/write
LV snapshot status source of
/dev/vg-ctechz/lvm-ctechz-spapshot1[active]
LV Status available
# open 1
LV Size 600.00 MB
Current LE 150
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Name /dev/vg-ctechz/lvm-ctechz-spapshot1
VG Name vg-ctechz
LV UUID kk3Cvm-Il4z-J8Kt-qt8s-TstZ-Peyw-gU0Uho
LV Write Access read/write
LV snapshot status active destination for
/dev/vg-ctechz/lvm-ctechz
LV Status available
# open 1
LV Size 600.00 MB
Current LE 150
COW-table size 400.00 MB
COW-table LE 100
Allocated to snapshot 0.01%
Snapshot chunk size 4.00 KB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
# lvdisplay /dev/vg-ctechz/lvm-ctechz
--- Logical volume ---
LV Name /dev/vg-ctechz/lvm-ctechz
VG Name vg-ctechz
LV UUID jZEuoN-16MG-30eX-SZaI-8ETO-ZguH-YRVyeN
LV Write Access read/write
LV snapshot status source of
/dev/vg-ctechz/lvm-ctechz-spapshot1[active]
LV Status available
# open 1
LV Size 600.00 MB
Current LE 150
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
Here we can get the status of the active snapshot
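The same status is available in a more compact form from lvs, where the Data% column (Snap% on older LVM versions) shows how much of the snapshot's copy-on-write space has been used, matching the "Allocated to snapshot" figure above:
# lvs vg-ctechz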
Now that the snapshot of the logical volume has been created, we can take the backup from it: first create the snapshot, then back up from the snapshot.
Use whatever backup method or strategy you prefer; for example, archive the contents of the mounted snapshot with tar (archiving the /dev/mapper device node itself would only capture the node, not the data):
# tar -cvf lvmsnap1.tar /snapshot
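Once the backup has finished, the snapshot has done its job and can be removed so it no longer consumes volume group space or copy-on-write overhead:
# umount /snapshot
# lvremove /dev/vg-ctechz/lvm-ctechz-spapshot1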