DISCARD EFFECT ON THIN VOLUMES
Notes by JondZ
2017-03-14
This note was prompted by my need to use snapper to protect a massive amount of data. This morning I realized the very good effect of discard on space savings: when dealing with terabytes of data it is good to save as much space as possible.
In this example the thin POOL is tp1 and the thin VOLUME of interest is te1. It is set up like this because I am merely testing a configuration that already exists.
These are dumped-out unedited notes.
INITIAL CONDITIONS
------------------------------------------------------------------------
te1 is a 1-Gig (thin) volume mounted on /volumes/te1.
The thin POOL that actually holds the data, tp1, is sized at 10.35 Gigs right now.
lvs -a | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 4.77
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 22.62 15.28
[tp1_tdata] bmof Twi-ao---- 10.35g
[tp1_tmeta] bmof ewi-ao---- 8.00m
root@epike-OptiPlex-7040:/volumes/te1#
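For reference, a thin pool and thin volume like these could be created roughly as follows. This is a sketch with names and sizes taken from the output above, not the exact commands used on this machine:

lvcreate --type thin-pool -L 10.35g -n tp1 bmof
lvcreate -V 1g --thinpool bmof/tp1 -n te1
mkfs.ext4 /dev/mapper/bmof-te1
mkdir -p /volumes/te1
mount /dev/mapper/bmof-te1 /volumes/te1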
EFFECT 1: A 500-MEG FILE INSERTED
-------------------------------------------------------------------
Notice that the usage of "te1" is now up to 52.36. Usage of the thin pool tp1 increased as well, to 27.22.
root@epike-OptiPlex-7040:/volumes/te1# dd if=/dev/zero of=500MFILE21 bs=1024 count=500000
root@epike-OptiPlex-7040:/volumes/te1# !lvs
lvs -a | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 52.36
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 27.22 18.26
[tp1_tdata] bmof Twi-ao---- 10.35g
[tp1_tmeta] bmof ewi-ao---- 8.00m
root@epike-OptiPlex-7040:/volumes/te1#
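As a sanity check on the numbers: dd wrote 500,000 KiB, about 488 MiB, which is 47.7% of the 1-Gig volume; added to the initial 4.77 that gives roughly 52.4, matching the 52.36 shown. On the pool side, 488 MiB is about 4.6% of 10.35 Gigs, which matches the rise from 22.62 to 27.22 exactly.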
EFFECT 2: THE 500-MEG FILE REMOVED
---------------------------------------------------------------------
Removing the file did not reduce the thin volume or pool usage. The filesystem reports the space as free (df below shows only 1.3M used), but the lvs percentages are essentially unchanged.
root@epike-OptiPlex-7040:/volumes/te1# rm 500MFILE21
root@epike-OptiPlex-7040:/volumes/te1# df -h -P .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/bmof-te1 976M 1.3M 908M 1% /volumes/te1
root@epike-OptiPlex-7040:/volumes/te1#
root@epike-OptiPlex-7040:/volumes/te1# !lvs
lvs -a | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 52.45
te1-snapshot1 bmof Vri---tz-k 1.00g tp1 te1
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 27.23 18.46
[tp1_tdata] bmof Twi-ao---- 10.35g
[tp1_tmeta] bmof ewi-ao---- 8.00m
root@epike-OptiPlex-7040:/volumes/te1#
EFFECT 3: fstrim
-----------------------------------------------------------------
fstrim reclaims space on the thin volume AND in the thin pool:
root@epike-OptiPlex-7040:/volumes/te1# fstrim -v /volumes/te1
/volumes/te1: 607.2 MiB (636727296 bytes) trimmed
root@epike-OptiPlex-7040:/volumes/te1# !lvs
lvs -a | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 4.77
te1-snapshot1 bmof Vri---tz-k 1.00g tp1 te1
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 27.23 18.55
[tp1_tdata] bmof Twi-ao---- 10.35g
[tp1_tmeta] bmof ewi-ao---- 8.00m
Well, the pool reduction does not show here: the thin VOLUME usage dropped back to 4.77, but I recall that the thin POOL usage normally goes down as well. Perhaps the snapshot gets in the way? It was created automatically (by snapper) while I was composing this text. After deleting it, much better:
root@epike-OptiPlex-7040:/volumes/te1# snapper -c te1 delete 1
root@epike-OptiPlex-7040:/volumes/te1# !lvs
lvs -a | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 4.77
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 22.62 15.28
[tp1_tdata] bmof Twi-ao---- 10.35g
[tp1_tmeta] bmof ewi-ao---- 8.00m
root@epike-OptiPlex-7040:/volumes/te1# fstrim -v /volumes/te1
/volumes/te1: 239.4 MiB (251031552 bytes) trimmed
root@epike-OptiPlex-7040:/volumes/te1# !lvs
lvs -a | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 4.77
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 22.62 15.28
[tp1_tdata] bmof Twi-ao---- 10.35g
[tp1_tmeta] bmof ewi-ao---- 8.00m
root@epike-OptiPlex-7040:/volumes/te1#
The numbers are back down to 4.77 percent consumed on the thin VOLUME, and 22.62 percent on the thin POOL.
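Incidentally, fstrim does not have to be run by hand: util-linux ships a systemd timer that periodically trims all capable mounted filesystems, and enabling it is a one-liner:

systemctl enable --now fstrim.timer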
EFFECT 4: mount with DISCARD option automatically reclaims THIN space
----------------------------------------------------------------------
This example demonstrates that thin volume space is reclaimed and returned to the POOL automatically, without needing to manually run fstrim.
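One assumption worth checking here (not shown in the session below): discards only reach the pool if the thin pool's discards mode is not set to "ignore". The mode can be inspected, and changed to passdown, like this:

lvs -o+discards bmof/tp1
lvchange --discards passdown bmof/tp1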
root@epike-OptiPlex-7040:/volumes/te1# mount -o remount,discard /dev/mapper/bmof-te1
root@epike-OptiPlex-7040:/volumes/te1# !dd
dd if=/dev/zero of=500MFILE24 bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 0.553593 s, 925 MB/s
root@epike-OptiPlex-7040:/volumes/te1# !lvs
lvs -a | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 52.39
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 27.22 18.26
[tp1_tdata] bmof Twi-ao---- 10.35g
[tp1_tmeta] bmof ewi-ao---- 8.00m
root@epike-OptiPlex-7040:/volumes/te1# rm 500MFILE24
root@epike-OptiPlex-7040:/volumes/te1# !lvs
lvs -a | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 52.39
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 27.22 18.26
[tp1_tdata] bmof Twi-ao---- 10.35g
[tp1_tmeta] bmof ewi-ao---- 8.00m
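The reclamation is not instantaneous; running lvs again a moment later, the discarded space has made it back to the pool: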
root@epike-OptiPlex-7040:/volumes/te1# !lvs
lvs -a | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 4.79
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 22.62 15.28
[tp1_tdata] bmof Twi-ao---- 10.35g
[tp1_tmeta] bmof ewi-ao---- 8.00m
root@epike-OptiPlex-7040:/volumes/te1# ls
lost+found
root@epike-OptiPlex-7040:/volumes/te1#
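To confirm that a mount is really using the discard option, findmnt (or /proc/mounts) shows the active mount options:

findmnt -no OPTIONS /volumes/te1
grep te1 /proc/mounts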
-----------------------------------------------------------------------
But does the reclamation work through snapshot layers? It would be difficult to test all combinations, but let's at least verify that the space is reclaimed when the snapshots are deleted.
First, mount with the discard option:
root@epike-OptiPlex-7040:~# !mount
mount -o remount,discard /dev/mapper/bmof-te1
root@epike-OptiPlex-7040:~#
The initial conditions are:
te1 bmof Vwi-aotz-- 1.00g tp1 4.77
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 22.62 15.28
OK, so the LV is at 4.77 percent and the LV POOL at 22.62 percent.
So: consume space.
root@epike-OptiPlex-7040:/volumes/te1# !dd
dd if=/dev/zero of=500MFILE24 bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 0.559463 s, 915 MB/s
root@epike-OptiPlex-7040:/volumes/te1# !lvs
lvs | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 52.37
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 27.22 18.26
root@epike-OptiPlex-7040:/volumes/te1#
Snapshot, then consume some more space:
root@epike-OptiPlex-7040:/volumes/te1# snapper -c te1 create
root@epike-OptiPlex-7040:/volumes/te1# !lvs
lvs | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 52.45
te1-snapshot1 bmof Vri---tz-k 1.00g tp1 te1
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 27.23 18.36
root@epike-OptiPlex-7040:/volumes/te1#
root@epike-OptiPlex-7040:/volumes/te1# !dd:p
dd if=/dev/zero of=500MFILE24 bs=1024 count=500000
root@epike-OptiPlex-7040:/volumes/te1# dd if=/dev/zero of=200mfile bs=1024 count=200000
200000+0 records in
200000+0 records out
204800000 bytes (205 MB, 195 MiB) copied, 0.211273 s, 969 MB/s
root@epike-OptiPlex-7040:/volumes/te1# !snapper
snapper -c te1 create
root@epike-OptiPlex-7040:/volumes/te1# !lvs
lvs | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 71.53
te1-snapshot1 bmof Vri---tz-k 1.00g tp1 te1
te1-snapshot2 bmof Vri---tz-k 1.00g tp1 te1
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 29.08 19.82
root@epike-OptiPlex-7040:/volumes/te1#
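The arithmetic checks out again: the 200,000-KiB file is about 195 MiB, or 19.1% of the 1-Gig volume (52.45 + 19.1 ≈ 71.5), and about 1.8% of the 10.35-Gig pool (27.23 + 1.85 ≈ 29.08).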
Then remove the files. The numbers should not go down, since the snapshot volumes still hold the data.
root@epike-OptiPlex-7040:/volumes/te1# rm 200mfile 500MFILE24
root@epike-OptiPlex-7040:/volumes/te1# !lvs
lvs | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 4.78
te1-snapshot1 bmof Vri---tz-k 1.00g tp1 te1
te1-snapshot2 bmof Vri---tz-k 1.00g tp1 te1
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 29.08 20.12
OK, so I stand corrected: the thin VOLUME usage went down, but the thin POOL usage did not.
That actually makes sense, since the snapshots still consume the space.
What happens when the snapshots are removed? Is the space reclaimed into the thin POOL?
root@epike-OptiPlex-7040:/volumes/te1# snapper -c te1 delete 1
root@epike-OptiPlex-7040:/volumes/te1# snapper -c te1 delete 2
root@epike-OptiPlex-7040:/volumes/te1# !lvs
lvs | grep tp1
te1 bmof Vwi-aotz-- 1.00g tp1 4.78
te2 bmof Vwi-aotz-- 1.00g tp1 97.66
te3 bmof Vwi-aotz-- 1.00g tp1 4.75
te4 bmof Vwi-aotz-- 3.00g tp1 42.32
tp1 bmof twi-aotz-- 10.35g 22.62 15.28
root@epike-OptiPlex-7040:/volumes/te1#
It does! When the snapshot volumes are removed, the space is reclaimed into the thin pool.
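Since stale snapshots are what pin the space, it pays to keep snapper's automatic cleanup tight. The limits live in the config file for the volume (here that would be /etc/snapper/configs/te1); the values below are illustrative, not the ones used in this test:

NUMBER_CLEANUP="yes"
NUMBER_LIMIT="10"
TIMELINE_CLEANUP="yes"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_MONTHLY="2"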
CONCLUSION:
--------------------
When working with thin volumes, use the discard mount option, even (or, especially) when not using SSDs.
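For example, a matching fstab entry for the test volume above would look something like this (filesystem type assumed to be ext4):

/dev/mapper/bmof-te1  /volumes/te1  ext4  defaults,discard  0  2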
OTHER TESTS
-----------
I tested mounting normally, consuming space, and then remounting with the discard option. The space is not reclaimed just by remounting: fstrim needs to run, and snapshots need to be deleted. Still, there is no harm, and in fact an advantage, in adding the "discard" option to fstab even for existing (thin volume) mounts.
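In other words, after adding discard to an existing mount, a one-time catch-up along these lines reclaims whatever space the filesystem has already freed:

mount -o remount,discard /volumes/te1
fstrim -v /volumes/te1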
JondZ 20170314
ADDITIONAL NOTES
--------------------
This is fstrim running on REAL DATA (BIG DISKS). I X'd out the real directory names. The "fstrim -va" takes a while to run. These are the trimmed (empty) spaces of disks which are typically > 90% full :-)
/xxxx/xxxxxxx: 0 B (0 bytes) trimmed
/xxxx/xxxxxxx: 0 B (0 bytes) trimmed
/xxxx/xxxxxxx: 370.8 GiB (398135660544 bytes) trimmed
/xxxx/xxxxxxx: 653.3 GiB (701471526912 bytes) trimmed
/xxxx/xxxxxxx: 473.4 GiB (508268302336 bytes) trimmed
/xxxx/xxxxxxx: 314.3 GiB (337448353792 bytes) trimmed
/xxxx/xxxxxxx: 284.6 GiB (305584029696 bytes) trimmed
/xxxx/xxxxxxx: 476.8 GiB (511989387264 bytes) trimmed
/xxxx/xxxxxxx: 385.7 GiB (414081814528 bytes) trimmed
/xxxx/xxxxxxx: 256.9 GiB (275816914944 bytes) trimmed
/xxxx/xxxxxxx: 416.4 GiB (447090442240 bytes) trimmed
/xxxx/xxxxxxx: 375.2 GiB (402878091264 bytes) trimmed
/xxxx/xxxxxxx: 324.2 GiB (348118593536 bytes) trimmed
/xxxx/xxxxxxx: 284.5 GiB (305479589888 bytes) trimmed
/xxxx/xxxxxxx: 372.7 GiB (400199221248 bytes) trimmed
/xxxx/xxxxxxx: 981.8 GiB (1054140956672 bytes) trimmed
/xxxx/xxxxxxx: 732.1 GiB (786098515968 bytes) trimmed
/xxxx/xxxxxxx: 318.7 GiB (342231486464 bytes) trimmed
/xxxx/xxxxxxx: 301 GiB (323175542784 bytes) trimmed
jondz 20170314