Well, I recently learned that one of my VMs had its root partition full. The following documents how the root partition on that VM was resized.
It's not gonna be anything exciting or special. However, I feel like I should document it anyway, in case I forget how to do this in the future, and partly because this type of work is no longer part of my daily job.
On Sep 15 08:08:20…
➜ ssh thounchey3
Have a lot of fun...
Last login: Sun Sep 15 08:07:20 2024 from 2406:3400:xxx:xxxx:xxxx:xxxx:xxxx:xxxx
Have a lot of fun...
mktemp: failed to create file via template ‘/tmp/.psub.XXXXXXXXXX’: No space left on device
thread 'main' panicked at library/std/src/io/stdio.rs:1030:9:
failed printing to stdout: Broken pipe (os error 32)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Welcome to fish, the friendly interactive shell
Type help for instructions on how to use fish
kenno@thounchey3 ~> df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda3       32G   32G   20K 100% /
devtmpfs        4.0M  8.0K  4.0M   1% /dev
tmpfs           966M  112K  966M   1% /dev/shm
tmpfs           387M   37M  350M  10% /run
/dev/xvda2       33M  3.9M   30M  12% /boot/efi
tmpfs           194M  1.4M  192M   1% /run/user/1000
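(As an aside: when the root file system is completely full like this, it can be worth finding out what is actually eating the space before deciding to grow the disk. The check below is my own sketch, not something from the original session; run as root, it lists the top-level directories on the root file system only and sorts them by size.)
thounchey3:~ # du -xh --max-depth=1 / | sort -h   # -x stays on the root file system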
Checking the file system types on all partitions:
kenno@thounchey3 ~> df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/xvda3     xfs        32G   32G   20K 100% /
devtmpfs       devtmpfs  4.0M  8.0K  4.0M   1% /dev
tmpfs          tmpfs     966M  112K  966M   1% /dev/shm
tmpfs          tmpfs     387M   37M  350M  10% /run
/dev/xvda2     vfat       33M  3.9M   30M  12% /boot/efi
tmpfs          tmpfs     194M  1.4M  192M   1% /run/user/1000
So, the root partition resides on partition 3 of the xvda device, and the device is about 32 GiB in size. At this point, the VM was shut down and the xvda device was expanded to 64 GiB.
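(The post doesn't show the hypervisor side of that step. The xvd* device names suggest a Xen-based host; assuming XCP-ng/XenServer, which this post doesn't confirm, the resize could be sketched with the xe CLI on the host while the VM is powered off. The VM name and the 64 GiB size below are just taken from this post; the VDI UUID comes out of the first command.)
xe vm-disk-list vm=thounchey3                  # note the uuid of the VDI backing xvda
xe vdi-resize uuid=<vdi-uuid> disk-size=64GiB  # run on the host, not inside the VM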
Fast forward a few minutes: let's list the block devices on the VM after powering it back up.
thounchey3:~ # lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sr0      11:0    1 1024M  0 rom
xvda    202:0    0   64G  0 disk
├─xvda1 202:1    0    2M  0 part
├─xvda2 202:2    0   33M  0 part /boot/efi
└─xvda3 202:3    0 31.8G  0 part /
Alright, we can see that the xvda device is now detected as 64 GiB. Next, I can start growing the file system on the root partition. (Or can I?)
thounchey3:~ # xfs_growfs -d /
meta-data=/dev/xvda3             isize=512    agcount=168, agsize=49855 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=0    bigtime=0 inobtcount=0 nrext64=0
data     =                       bsize=4096   blocks=8326139, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=855, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data size unchanged, skipping
Hmm… data size unchanged, skipping. But why? Oh… silly me. I forgot to grow the root partition first. Let's fix that:
thounchey3:~ # growpart /dev/xvda 3
CHANGED: partition=3 start=73728 old: size=66609119 end=66682847 new: size=134143967 end=134217695
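(For the record, growpart ships with the cloud-utils growpart tooling; the package is called growpart or cloud-utils-growpart depending on the distribution. If it isn't available, a hypothetical alternative for this same step is parted's resizepart, using the device and partition number from above; parted may ask for confirmation since the partition is mounted.)
thounchey3:~ # parted /dev/xvda resizepart 3 100%   # grow partition 3 to the end of the disk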
thounchey3:~ # lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sr0      11:0    1 1024M  0 rom
xvda    202:0    0   64G  0 disk
├─xvda1 202:1    0    2M  0 part
├─xvda2 202:2    0   33M  0 part /boot/efi
└─xvda3 202:3    0   64G  0 part /
Expanding the file system for the root partition again:
thounchey3:~ # xfs_growfs -d /
meta-data=/dev/xvda3             isize=512    agcount=168, agsize=49855 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=0    bigtime=0 inobtcount=0 nrext64=0
data     =                       bsize=4096   blocks=8326139, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=855, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 8326139 to 16767995
Finally, confirm the new root partition size:
thounchey3:~ # df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/xvda3     xfs        64G   19G   46G  29% /
devtmpfs       devtmpfs  4.0M  8.0K  4.0M   1% /dev
tmpfs          tmpfs     966M  112K  966M   1% /dev/shm
tmpfs          tmpfs     387M  5.5M  381M   2% /run
/dev/xvda2     vfat       33M  3.9M   30M  12% /boot/efi
tmpfs          tmpfs     194M  1.3M  192M   1% /run/user/1000
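To sum up, once the virtual disk itself had been enlarged on the hypervisor, growing the root file system came down to two commands: growpart extends partition 3 to use the new space on the disk, and xfs_growfs -d grows the mounted XFS file system to fill the partition.
thounchey3:~ # growpart /dev/xvda 3
thounchey3:~ # xfs_growfs -d /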
Reference: