
Snapshots using lots of disk space

Alastair Grant | Wednesday 20 June 2018

I started to run low on space on a Linux virtual machine recently.  I found this surprising, as I try to keep usage down whilst leaving plenty of breathing space on the disks.

I started some investigation, using a combination of the following commands:

Show how much raw space is being used on each drive: df -h

Show how much space a btrfs file system is consuming: btrfs fi usage [path]

Show how much space is actually being used by the current file system: du -x --exclude=.snapshots --max-depth=1 -h /

It's at this point that I really narrowed down the source of my problems.  You'll notice that with the above command I exclude ".snapshots" from my scan.  This is because on a btrfs file system, snapshots don't take up extra space unless a change is made to a file.  So a 100MB file will only ever take up 100MB, even across a thousand snapshots, provided nothing inside it has changed.

Note: with the above command, the -x argument restricts du to the current file system, as one disk can host multiple file systems.  You can also pass --exclude as many times as you want to omit areas you don't want to account for (tmpfs, /proc etc).
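To see what -x and --exclude actually do without pointing du at the real root, here's a small sketch on a scratch directory (all paths and sizes below are invented for the demo):

```shell
# Build a scratch tree (illustrative paths, not from the original system)
mkdir -p /tmp/du-demo/data /tmp/du-demo/.snapshots /tmp/du-demo/cache
dd if=/dev/zero of=/tmp/du-demo/data/big bs=1M count=5 2>/dev/null
dd if=/dev/zero of=/tmp/du-demo/cache/big bs=1M count=5 2>/dev/null

# -x stays on one file system; each --exclude drops a subtree from the total.
# Only "data" and the overall total should appear in the output.
du -x --exclude=.snapshots --exclude=cache --max-depth=1 -h /tmp/du-demo
```

The same pattern scales up to the real invocation against / — just keep adding --exclude flags for the areas you want left out.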

When I ran this command, I found that I was using 13GB, yet the previous commands were showing closer to 50GB in use.  A big red finger pointing at snapshots, then.

I used this btrfs-df snapshot sizing tool to help me visualise the source of the problem.  You first have to enable quotas on the subvolume you're interested in: btrfs quota enable /.  This will then take a while to account for all the data on the drive.  Once that's settled, you can use the script to give you a breakdown of usage.  The interesting column is the exclusive one: how much data a snapshot holds exclusively (i.e. not shared with previous ones).

When I ran this, I found a couple of eye-wateringly large snapshots.  I checked these subvolumes against the output of snapper list.  This highlighted the problem: orphaned snapshots.  Snapper is a great tool and tidies up after itself, but somehow (probably during a distribution upgrade or similar) I had acquired two snapshots from a year ago.  These had grown ever more different from the current file system and were thus taking up valuable delta space.

As they were orphaned from snapper, they needed to be deleted directly by removing the subvolume and then the directory:

btrfs subvolume delete /.snapshots/[number]/snapshot
rm -rf /.snapshots/[number]
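Finding which snapshot numbers are orphaned boils down to comparing what snapper knows about with what's actually on disk.  A sketch with invented snapshot numbers (in practice the first list comes from snapper list and the second from ls /.snapshots):

```shell
# Snapshot numbers snapper still tracks (invented sample)
snapper_ids='120
121
122'

# Snapshot directories actually present on disk (invented sample)
disk_ids='98
99
120
121
122'

# Orphans are on disk but unknown to snapper; each one would then need
#   btrfs subvolume delete /.snapshots/[number]/snapshot
#   rm -rf /.snapshots/[number]
orphans=$(printf '%s\n' "$disk_ids" | grep -Fxv "$snapper_ids")
printf '%s\n' "$orphans"
```

Here the orphans come out as 98 and 99 — the two stale directories snapper no longer manages.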

btrfs will reclaim the freed space over time, and you should see the output of df -h steadily drop until it's closer to what you were expecting.

Breaking from the voyeuristic norms of the Internet, any comments can be made in private by contacting me.