Disk Space Trouble On EC2 Instance

Hey all,

Ok, so I’ve been having some trouble for a while with an EC2 instance running CentOS 5.11 with a disk volume reporting 100% usage. Root is on an EBS volume.

So I’ve tried the whole ‘du -sk * | sort -nr | head -10’ routine all around this volume, getting rid of files as I went. On the first pass I cleared out about 50MB of files, yet the volume remains at 100% capacity.
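For reference, the sweep looks roughly like this (the paths are just examples):

# ten biggest items directly under a directory
cd /var
du -sk * | sort -nr | head -10

# or size up one whole subtree in human-readable form
du -sh /var/www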

Thinking that maybe the OS was just holding on to the inodes of deleted files that some process still had open, I tried rebooting the instance. After logging back in I ran df -h / on the root volume. And look! Still at 100% capacity used. Grrr….
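The usual way to check that theory without a reboot is something like this (generic, not specific to this box):

# list files that have been unlinked but are still held open by a process
lsof +L1
# the space only comes back once the owning process closes them or restarts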

Ok, so I then did a du -h on the /var/www directory, which sits on the root volume, and saw that it was gobbling up 190MB of disk space.

So then I reasoned that I could create an EBS volume, rsync the data over to it, blow away the contents of /var/www/* and then mount the EBS volume on the /var/www directory. I went through that exercise and, lo and behold, still at 100% capacity. Rebooted the instance again, logged in and... still at 100% capacity.

Here’s how the volumes are looking now.

[root@ops:~] # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             9.9G  9.3G   49M 100% /
none                  312M     0  312M   0% /dev/shm
/dev/sdi              148G  116G   25G  83% /backup/tapes
/dev/sdh              9.9G  385M  9.0G   5% /backup/tapes/bacula-restores
/dev/sdf              9.9G  2.1G  7.4G  22% /var/lib/mysql
fuse                  256T     0  256T   0% /backup/mysql
fuse                  256T     0  256T   0% /backup/svn
/dev/sdg              197G  377M  187G   1% /var/www

There are some really important services this machine needs to run that it simply can't while the root volume is at 100% capacity: neither MySQL nor my backup program (Bacula) will even think about starting up.

I’m at a loss to explain how I can delete 190MB worth of data, reboot the instance and still be at 100% usage.

I’m at my wits’ end over this. Can someone please offer some advice on how to solve this problem?

Thanks,
Tim

4 thoughts on - Disk Space Trouble On EC2 Instance

  • 49MB out of 9.9GB is less than one-half of one percent, so the df command is probably rounding that up to 100% instead of showing you 99.51%. Whatever is checking for free disk space is likely doing the same thing.
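    To put rough numbers on it (a quick back-of-the-envelope check with bc, using the figures from the df output above):

      # free space as a percentage of the 9.9GB volume
      echo "scale=2; 49 * 100 / (9.9 * 1024)" | bc    # prints .48
      # df rounds the Use% figure up, so anything under about half a percent free shows as 100%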

  • 190MB is only about two percent of 9.9GB (roughly 9900MB), so freeing it was never going to make a visible dent.

    BTW, for cases like this, I’d suggest using df -k or -m rather than -h to get more precise and consistent values.

    Also note that Unix (and Linux) file systems usually keep some reserved free space that only root can write into; most modern file systems suffer from severe fragmentation if you completely fill them. On ext*fs you adjust this with `tune2fs -m 1 /dev/sdXX`. XFS treats these reserved blocks as inviolable, so they don't show up as free space at all; they can be changed with xfs_io, but modify them at your own risk.
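    For instance, on the ext3 root in this thread (device name taken from the df output above; double-check yours before touching anything):

      # show how many blocks are currently reserved for root
      tune2fs -l /dev/sda1 | grep -i 'reserved block count'

      # drop the reservation from the default 5% to 1%
      tune2fs -m 1 /dev/sda1

      # df should then report roughly 4% more of the volume as available
      df -m /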

  • Hey guys,

    Thanks for the responses. I just wanted to get back to you to let you know how I was able to resolve this. And yeah, I agree df -m or df -k is more informative, so I'll try to stick to those from now on, especially when posting to the lists.

    But I took a look around on the disk and saw that the /var/www and /usr/local directories were the biggest. So I solved this in a way that only really seems easy on AWS: I grabbed the smallest EBS volumes I could (1GB for /var/www and 2GB for /usr/local respectively), 1GB being the smallest EBS volume you can create.

    So like I said earlier, I had around 195MB of data in /var/www and about 1.5GB of data in /usr/local. I mounted the new volumes on /mnt/www and /mnt/local and rsynced the contents of those directories there, blew away the contents of the original directories with rm -rf (scary, but I was very careful while doing this), and then remounted the new volumes on the original paths. And voila!
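    In rough outline, for anyone who wants to replay it (the device name matches the df output below, but treat the exact mkfs and rsync flags as illustrative; the volume itself gets created and attached from the AWS console first):

      # format the new volume and mount it somewhere temporary
      mkfs -t ext3 /dev/sdj
      mkdir -p /mnt/www
      mount /dev/sdj /mnt/www

      # copy the data across, preserving permissions and ownership
      rsync -avH /var/www/ /mnt/www/

      # clear out the old directory, then move the volume to its final home
      rm -rf /var/www/*
      umount /mnt/www
      mount /dev/sdj /var/www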

    [root@ops:~] # df -m
    Filesystem           1M-blocks      Used Available Use% Mounted on
    /dev/sda1                10080      8431      1546  85% /
    none                       312         0       312   0% /dev/shm
    /dev/sdi                151190    122853     20658  86% /backup/tapes
    /dev/sdh                 10080       385      9183   5% /backup/tapes/bacula-restores
    /dev/sdf                 10080      2064      7504  22% /var/lib/mysql
    fuse                 268435456         0 268435456   0% /backup/mysql
    fuse                 268435456         0 268435456   0% /backup/svn
    /dev/sdj                  1008       223       735  24% /var/www
    /dev/sdk                  2016      1335       579  70% /usr/local

    Problem solved.

    So right now my root EBS volume is down to about 85% used instead of 100% used.

    Maybe a little unconventional, but at least it got the job done.

    Thanks again, guys!
    Tim

  • Adding disks is unconventional in the physical-server world because each disk has a minimum base cost and you eventually run out of drive bays in the chassis, so you hit practical limits quickly.

    In the VM world, you're just creating yet another large-ish file inside a much larger pool of storage.

    The only problem I can think of with doing this is that it fools the OS into believing it has multiple disks it can access in parallel, when in reality accesses to both will cause the same set of disk arms to seek back and forth, tripping each other up.

    This is no worse in practice than using partitions instead of separate physical volumes, however.