I’ve only recently begun my journey as a “data hoarder” and I keep running into the need for more storage space. Specifically, a coherent way to manage and balance out my storage demands on my NAS box!
And Red Hat (the wonderful folks they are) just keep trying to help me out! Their documentation on logical volumes is so easy to read and understand. Lovely folks. Anyway…
What is LVM?
Logical Volume Management (LVM) is a technology built into the Linux kernel which lets you quickly group several physical volumes (i.e. hard disks) into volume groups (i.e. a rack of drives) and split those into various logical volumes (i.e. whatever you want).
This is extensible to massive enterprise grade systems with hundreds of disks with hundreds of logical volumes! But, it’s still really useful on the smaller scale too.
I’ve been curating used hard drives (lots of “game drives” – people undervalue them since they don’t understand it’s just a hard drive) and I needed a way to easily get them to work together…and LVM came along to help me!
Show me the code!
Okay okay, so my `lsblk` looks like this:
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
...
sdb        8:16   0  1.8T  0 disk
└─sdb1     8:17   0  1.8T  0 part
sdc        8:32   0  1.8T  0 disk
└─sdc1     8:33   0  1.8T  0 part
...
And I want to make one big ~4TB volume using `/dev/sdb` and `/dev/sdc`. How do I do that? Well:
1. Make some physical volumes
This sounds kinda strange because…I already have the physical volumes. They’re sitting on my desk. What we’re really doing is writing a special signature to the partition in question – the `LVM2_member` label you’ll see in `lsblk -f` – which marks it as usable within a volume group.
You can mess around with physical volumes using the `pv*` tools: `pvs`, `pvcreate`, `pvchange`, and so on. RTFM if you want to know more than the basics.
Anyway, you just need to run `pvcreate </dev link>` to set up a partition to operate under LVM. You can either choose one partition on a drive, or use the whole drive. To use the whole drive, you’ll want to destroy any existing partitions, then run `pvcreate` against the bare device.
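If you go the whole-drive route, the dance looks something like this – a sketch only, with `/dev/sdd` standing in as a hypothetical disk (triple-check the device name, this destroys data):

```shell
# wipefs --all /dev/sdd
# pvcreate /dev/sdd
```

`wipefs --all` clears any leftover filesystem and partition-table signatures so `pvcreate` doesn’t complain about them.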
# pvcreate /dev/sdb1
# pvcreate /dev/sdc1
Those did the trick for me. We can then run `pvs` to see our physical volumes:
PV         VG  Fmt  Attr PSize  PFree
/dev/sdb1  vg0 lvm2 a--  <1.82t      0
/dev/sdc1  vg0 lvm2 a--   1.77t <3.96g
Now, I’ve already allocated both drives to a logical volume, so the `PFree` column shows (almost) no free space left on them. If you’ve just created the physical volumes, `PFree` should match the `PSize` column.
2. Set up the volume group
Now we need to create the overall “pool” of storage we’ll draw upon in our logical volume. Like the physical volumes, you’ll interact with volume groups using the `vg*` suite of tools.
`vgcreate <volume group name> <devices...>` is the syntax for `vgcreate`. There are some specifics in `man 8 lvm` about the names of volume groups and logical volumes, but it’s nothing shocking. Your volume group will end up in `/dev` as `/dev/<vg name>`, so sticking to naming conventions is good! I chose `vg0`:
# vgcreate vg0 /dev/sdb1 /dev/sdc1
…and as we can see in `vgs`:
VG  #PV #LV #SN Attr   VSize VFree
vg0   2   1   0 wz--n- 3.59t <3.96g
Again, I’ve already set up a logical volume on these disks, so the `#LV` and `VFree` columns reflect that. But you can see there are two physical volumes tied to this volume group!
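A nice bonus of the pool model: when another drive joins the collection later, the group can be widened in place. A sketch, assuming a hypothetical `/dev/sdd1` that’s already been through `pvcreate`:

```shell
# vgextend vg0 /dev/sdd1
```

`vgs` would then show `#PV` tick up and `VFree` grow by roughly the new disk’s size.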
3. Set up the logical volume
Finally, we must take our pool and give it form. I’ve chosen a simple approach for these disks: they’re all going in one logical volume. There are some references at the bottom of this post if you need something more complex!
Good naming conventions strike again! The `lv*` tools will be our friends here. This command will be the most complex, because the way a logical volume operates can be changed radically – we can do thin and RAID volumes, among others.
I chose a “linear” type – it just extends the space across disks without performing any striping. It’s a lot more flexible as I can add disks willy nilly and grow the space that way. Striping would offer possible parallel operations…but extending striped volumes can be complicated and linear works well enough for me.
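For contrast, asking for striping is just a couple of extra flags – a sketch of what I *didn’t* run, with `-i` setting the stripe count and `-I` the stripe size in kilobytes:

```shell
# lvcreate -l 100%FREE -i 2 -I 64 -n behemoth vg0
```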
# lvcreate -l 100%FREE -n behemoth vg0
This syntax is a little strange, but if you stare at it for a second you can pick apart what’s happening:
- `-l 100%FREE` asks for 100% of the remaining free space in the specified volume group. There are a number of suffixes you can use, or you can specify an explicit size with `-L` instead, like `10G` or `1500` (the default unit is MB)
- `-n behemoth` is the name of the new logical volume. I was feeling silly this time and didn’t obey good naming conventions…
- `vg0` asks to provision space from that volume group!
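So, for example, a more modestly sized slice of the pool could be carved out like this (the name `scratch` and the size are just made up for illustration):

```shell
# lvcreate -L 500G -n scratch vg0
```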
…and now `lvs`:
LV       VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
behemoth vg0 -wi-ao---- <3.59t
Yay! `lsblk` also looks a little different now:
NAME             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdb                8:16   0  1.8T  0 disk
└─sdb1             8:17   0  1.8T  0 part
  └─vg0-behemoth 254:0    0  3.6T  0 lvm
sdc                8:32   0  1.8T  0 disk
└─sdc1             8:33   0  1.8T  0 part
  └─vg0-behemoth 254:0    0  3.6T  0 lvm
This is just wonderful. Now I have one big 3.6 TB volume to play around with! We still have to do the normal disk preparations, but we can basically treat this volume as we would a normal disk:
# mkfs.ext4 /dev/vg0/behemoth
# mount /dev/vg0/behemoth /media
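To make that mount stick across reboots, an `/etc/fstab` entry along these lines does the job (same paths as above; tune the mount options and fsck pass number to taste):

```
/dev/vg0/behemoth  /media  ext4  defaults  0  2
```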
And then `lsblk` again:
NAME             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdb                8:16   0  1.8T  0 disk
└─sdb1             8:17   0  1.8T  0 part
  └─vg0-behemoth 254:0    0  3.6T  0 lvm  /media
sdc                8:32   0  1.8T  0 disk
└─sdc1             8:33   0  1.8T  0 part
  └─vg0-behemoth 254:0    0  3.6T  0 lvm  /media
And now you’re on your merry way…storing files as you please…
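…and when the next cheap “game drive” shows up, the whole stack grows in place. A sketch, again assuming a hypothetical `/dev/sdd1` that’s already a physical volume:

```shell
# vgextend vg0 /dev/sdd1
# lvextend -l +100%FREE /dev/vg0/behemoth
# resize2fs /dev/vg0/behemoth
```

(`lvextend -r` can fold that `resize2fs` step into the extend, too.)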