Partitioning Disks

Do this for every disk that will be part of the raid

# parted -a optimal /dev/sdb 
(parted) mklabel gpt
(parted) mkpart primary 2048s 100%
(parted) align-check optimal 1
1 aligned
(parted) set 1 raid on                                                    
(parted) print                                                                
Model: ATA WDC WD30EFRX-68E (scsi)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
 
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3001GB  3001GB               primary  raid
 
(parted) quit                                                             
Information: You may need to update /etc/fstab.
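
If you have several disks to prepare, the same partitioning can also be done non-interactively. This is a minimal sketch of the session above in script mode; adjust the device name for each disk

parted -s -a optimal /dev/sdb mklabel gpt mkpart primary 2048s 100% set 1 raid on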

Create Raid

Create a raid level 1
-n2 = 2 raid members

mdadm -C /dev/md0 -l1 -n2 /dev/sdb1 /dev/sdc1

Or create a raid level 5 with 3 disks

mdadm -C /dev/md0 -l5 -n3 /dev/sdb1 /dev/sdc1 /dev/sdd1

At this point you can check your raid status

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      7813770240 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 2/30 pages [8KB], 65536KB chunk
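
To have the array assembled under the same name at boot, you can record it in mdadm's config file and rebuild the initrd afterwards. The paths and commands below are the Debian/Ubuntu ones; other distributions use /etc/mdadm.conf and their own initrd tool

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u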

Create Raid with missing disk

This is just a side note and not part of creating an LVM encrypted Raid!
Move on to ‘Create LVM’.

Create a backup!

Let’s say you have a running system on a single disk /dev/sda and you add another disk to that system.
Now you want to have the OS running from a Raid 1 instead of the single disk.
Create a raid level 1 with one disk missing

mdadm --create /dev/md0 --level 1 --raid-devices=2 missing /dev/sdb1
  • Start a rescue system!
  • Copy the whole disk sda to md0 or whatever your raid device is named. I’m not describing this process. Maybe you want to use ‘dd’ or similar tools.
  • Mount md0 somewhere and change the boot records to the new raid device
  • Re-create the initrd
  • Reboot system
  • Check everything works

If everything is fine, you can add your initial disk sda to the raid.
Create one single partition on sda (as described above) and add it.
After adding sda1 to the raid 1, it will be overwritten with everything currently on md0

/sbin/mdadm --add /dev/md0 /dev/sda1

Create LVM

Create Physical Volume

Since we put all our disks together in one raid, we only need to add /dev/md0 as a physical volume

pvcreate /dev/md0
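
You can verify the new physical volume with pvs

# pvs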

Create Volume Group

vgcreate vg1 /dev/md0

Show volume group

# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  vg1   1   3   0 wz--n- <7.28t <5.67t

Create Logical Volume

We can now create our first volume within volume group ‘vg1’

lvcreate -L 10G -n <volume name> <volume group>

lvcreate -L 10G -n data1 vg1
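
If you would rather use all remaining free space in the volume group instead of a fixed size, lvcreate also accepts extents as a percentage

lvcreate -l 100%FREE -n data1 vg1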

Show logical volume(s)

# lvs
  LV               VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data1            vg1 -wi-ao----   1.22t
  home             vg1 -wi-ao---- 200.00g

Encrypt Logical Volume

You can skip this step if you don’t want encryption

The first command encrypts (LUKS-formats) the volume data1. You will be asked to set an encryption passphrase.
Be sure to remember it!
After that, the second command opens (decrypts) the volume data1, which creates another device-mapper device named /dev/mapper/<name>, in this case /dev/mapper/data1.decrypted

# cryptsetup -c aes-cbc-essiv:sha256 -y -s256 luksFormat /dev/vg1/data1 

# cryptsetup luksOpen /dev/vg1/data1 data1.decrypted
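
If the volume should be opened automatically at boot (you will be prompted for the passphrase), you can add a line to /etc/crypttab. The exact options vary between distributions; this is the plain passphrase-prompt variant

# /etc/crypttab
data1.decrypted  /dev/vg1/data1  none  luks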

Create Filesystem

Create a filesystem and mount the volume

mkfs.ext4 /dev/mapper/data1.decrypted

If you skipped the ‘Encrypt Logical Volume’ part above, format the LVM logical volume directly

mkfs.ext4 /dev/vg1/data1
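
Then mount the volume somewhere. /mnt/data1 is just an example mount point; use /dev/vg1/data1 instead of the mapper device if you skipped encryption

mkdir -p /mnt/data1
mount /dev/mapper/data1.decrypted /mnt/data1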

Extend Volume

Show Available Space in VG

# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <7.28 TiB
  PE Size               4.00 MiB
  Total PE              1907658
  Alloc PE / Size       422400 / 1.61 TiB
  Free  PE / Size       1485258 / <5.67 TiB
  VG UUID               tVgBeB-3MU4-fZAZ-Z4Zn-Bol3-X0CK-XN8q6I

Extend Logical Volume

lvextend -L +100G /dev/vg1/data1

Extend DM-Crypt Container

cryptsetup resize data1.decrypted

Extend Filesystem

Use resize2fs for ext2/3/4 filesystems, or xfs_growfs for XFS (XFS is grown via its mount point)

# resize2fs /dev/mapper/data1.decrypted

# xfs_growfs /mount/point

Delete Volume

# umount /mount/point

# cryptsetup luksClose data1.decrypted

# dmsetup remove /dev/mapper/vg1-data1

# lvremove /dev/vg1/data1

Shrink Volume

You should NOT do this unless you really have to!
Be prepared for data loss.

Let’s say you currently have a 150GB volume and want to end up with 50GB

# umount /dev/mapper/data1.decrypted

# e2fsck -f /dev/mapper/data1.decrypted

Shrink the filesystem a bit below the final size first. The LUKS header uses a few MB of the volume, so if you aimed for exactly 50GB here, the filesystem could end up larger than the shrunken container

# resize2fs /dev/mapper/data1.decrypted 45G

Now shrink the logical volume to the final size. Note that lvreduce operates on the LV itself, not on the decrypted mapping

# lvreduce -L 50G /dev/vg1/data1

Shrink the dm-crypt container so it fills the reduced volume

# cryptsetup resize data1.decrypted

And finally grow the filesystem back to fill the container

# resize2fs /dev/mapper/data1.decrypted

Failed Disk

Mark disk as ‘failed’ and remove from raid

# mdadm --manage /dev/md0 --fail /dev/sdd1

# mdadm --remove /dev/md0 /dev/sdd1
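
You can check with mdadm --detail that the disk is no longer part of the array

# mdadm --detail /dev/md0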

If you have three identical disks, get the serial number of the failed disk so you know which physical drive to replace

sdparm -i /dev/sdd
...
ST31000340AS                                        9QJ0W04D
...

Replace disk sdd in your computer/server.

Copy the partition table

sfdisk -d /dev/sdc | sfdisk /dev/sdd
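
Since the disks use GPT, an alternative is sgdisk from the gdisk package. Be careful with the direction: the device given with -R is the destination. The second command gives the copy new random GUIDs

sgdisk -R /dev/sdd /dev/sdc
sgdisk -G /dev/sdd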

Clear any old metadata (magic bytes) at the beginning of the new partition

dd if=/dev/zero of=/dev/sdd1 bs=1024 count=1000

Add the new disk back to the raid

mdadm /dev/md0 -a /dev/sdd1

Wait for the recovery/resync to finish

# cat /proc/mdstat
...
[>....................] recovery = 4.1% (40587520/976761408) finish=227.8min
...

If the new disk is bigger than the old one, grow the array to the new size and resize the physical volume. The write-intent bitmap has to be removed before growing

# mdadm --grow /dev/md0 --bitmap none

# mdadm --grow /dev/md0 --size=max

# pvresize /dev/md0
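
If you removed the write-intent bitmap above, you may want to add it back once the grow has finished

# mdadm --grow /dev/md0 --bitmap internal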
