Sunday, October 15, 2017

drbd and lvm: so many combinations

On vacation after major dental surgery I am currently learning and testing these four DRBD/LVM combinations and thinking about which one I would use on a real production setup. 

1. DRBD over plain device
2. DRBD over LVM
3. LVM over DRBD
4. LVM over DRBD over LVM

1. DRBD over plain device.  This puts actual device names such as sdb1 in the drbd configuration.  I don't like that.  There are ways around this, such as using multipath or /dev/disk/by-id.  I haven't tested those yet with drbd, but the point is that the actual device names are in the configuration files, and they had better agree with the real devices (after years of uptime and changeover of sysadmins :).
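If I were to go this route I would probably at least use the by-id path instead of sdb1.  A rough sketch of what the resource stanza might look like (the resource name r0 and the id string are made up for illustration; the real id comes from ls -l /dev/disk/by-id/):

resource r0 {
  ...
  device /dev/drbd0;
  # by-id path instead of /dev/sdb1; this particular id is made up
  disk /dev/disk/by-id/scsi-360000000000000000e00000000000001-part1;
  ...
  }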

2. DRBD over LVM.  This puts an abstraction layer at the very bottom of the stack and avoids having to place actual device names in drbd resource files.  For example:

/etc/drbd.d/some-resource.res

resource __ {
  ...
  ...
  device /dev/drbd0;
  disk /dev/vg/lvdisk0;
  ...
  }

There you go, no /dev/sdb1 or whatever in the disk configuration.  This avoids problems arising from devices switching names on reboot.
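The backing logical volume is created the usual way before bringing the resource up.  A minimal sketch, assuming a spare /dev/sdb1 and the vg/lvdisk0 names from the example above (the size and the resource name some-resource are placeholders; the resource name must match whatever is actually in the .res file):

    pvcreate /dev/sdb1
    vgcreate vg /dev/sdb1
    lvcreate -L 10G -n lvdisk0 vg
    # then, with the .res file in place on both nodes:
    drbdadm create-md some-resource
    drbdadm up some-resource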

3. LVM over DRBD

As the name implies, this puts the flexibility of LVM provisioning on top of the DRBD layer, closer to the application.  It would make typical provisioning tasks such as disk allocation, destruction, extension and shrinking much easier.  However, I still do not like writing device names in the config files...
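A rough sketch of what day-to-day provisioning looks like in this layout (the volume group and LV names here are made up; also note that the lvm.conf filter has to be tuned so LVM scans /dev/drbd0 rather than its backing device):

    # on the current primary node only
    pvcreate /dev/drbd0
    vgcreate vgdrbd /dev/drbd0
    lvcreate -L 5G -n lvapp vgdrbd
    # growing a volume (and its filesystem) later is a one-liner
    lvextend -r -L +2G /dev/vgdrbd/lvapp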

4. LVM over DRBD over LVM.

LVM over DRBD over LVM is probably the most flexible solution.  There are no actual device names in the DRBD configuration, and LVM is very resilient across machine restarts because it auto-detects its metadata in whatever order the physical disks come up.  With this combination I can rearrange the physical backing storage and at the same time have the flexibility of LVM on the upper layer.  The only issue is having to adjust a few settings in /etc/lvm/lvm.conf. 

in /etc/lvm/lvm.conf

    # filter example -- 
    # /dev/vd* on the physical layer, 
    # /dev/drbd* on the drbd layer
    filter = [ "a|^/dev/vd.*|", "a|drbd.*|", "r|.*/|" ]
    # do not persist the device scan cache between runs
    write_cache_state = 0
    # bypass lvmetad so the filter above is actually applied
    use_lvmetad = 0

Just a few lines of config.  This is fine.  The problem is having to remember what all these configuration lines mean after 2 years of uptime...
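For my own future reference, the whole option-4 stack goes roughly like this from the bottom up (the device, volume group and resource names are placeholders from my test VMs, not a recommendation):

    # lower LVM layer: backing store for DRBD
    pvcreate /dev/vdb
    vgcreate vglower /dev/vdb
    lvcreate -L 20G -n drbdback vglower
    # DRBD on top of the lower LV (disk /dev/vglower/drbdback in the .res file)
    drbdadm create-md r0
    drbdadm up r0
    # upper LVM layer on the replicated device, on the primary node only
    pvcreate /dev/drbd0
    vgcreate vgupper /dev/drbd0
    lvcreate -L 5G -n lvdata vgupper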

---

JondZ 20171015



1 comment:

JondZ said...

note to self:

3. "LVM over DRBD" is actually stacked LVM. need to tune filter parameter in /etc/lvm/lvm.conf. LVM will see native /dev/ device as a physical volume before drbd.
