Fix VMware VCSA /storage/log filesystem out of disk space

This morning I ran into an issue where users reported that our production VCSA 6.0 would not let them connect via the web client or the thick client. Another administrator rebooted the VCSA, which helped only briefly. I then logged into the VCSA appliance management UI (https://<VCENTER_IP>:5480) and immediately noticed the following health status:

The /storage/log filesystem is out of disk space or inodes

[Image: VCSA health alert showing /storage/log full]

So I opened PuTTY, ran df -h, and confirmed the issue:
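Before growing anything, it is worth confirming whether the partition is out of space or out of inodes, since the health alert covers both conditions. A minimal check (on the appliance you would pass /storage/log; the default of / below is just so the sketch runs anywhere):

```shell
# Check both space and inode usage on a mount point.
# On the VCSA, run: this-script /storage/log
FS=${1:-/}      # mount point to inspect (pass /storage/log on the appliance)
df -h "$FS"     # human-readable space usage -- look for Use% at 100%
df -i "$FS"     # inode usage -- a full inode table raises the same alert
```

If df -i shows IUse% at 100% while df -h still shows free space, the filesystem is out of inodes rather than blocks, and growing the disk alone will not help until the excess small files (typically logs) are cleaned up.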

[Image: df -h output showing the VMware VCSA /storage/log VMDK full]

How to fix VCSA /storage/log filesystem out of disk space

Luckily, the fix is rather easy. While looking for a solution I found a blog post by @lamw, who mentions that in VCSA 6.x you can expand VMDKs on the fly because the appliance uses LVM.

  1. Open the vSphere client (web or thick) and expand the VMDK (see the table below for which VMDK to expand).
    [Image: expanding the VCSA /storage/log VMDK]
  2. Open PuTTY (or another terminal) and run the following command on your VCSA:
    vpxd_servicecfg storage lvm autogrow

    [Image: output of vpxd_servicecfg storage lvm autogrow]

And that is all there is to it; we now have a healthy VCSA again!
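Put together, the steps above look roughly like this from the appliance shell. Note that vpxd_servicecfg exists only on the VCSA itself, so this sketch guards for that; run it after expanding the VMDK in step 1:

```shell
# Sketch of the fix from the VCSA shell; degrades gracefully off-appliance.
LOG_FS=/storage/log
df -h "$LOG_FS" 2>/dev/null || echo "no $LOG_FS here (not a VCSA)"   # before: at/near 100%
if command -v vpxd_servicecfg >/dev/null 2>&1; then
  vpxd_servicecfg storage lvm autogrow   # grow LVM into the newly expanded VMDK
else
  echo "vpxd_servicecfg not found - run this on the VCSA after expanding the VMDK"
fi
df -h "$LOG_FS" 2>/dev/null || true      # after: Use% should have dropped
```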

[Image: VMware VCSA health status showing healthy]

VMware VCSA 6.0 VMDK list and purpose

| VMDK   | Disk Size | Mount Point         | Purpose                                       |
|--------|-----------|---------------------|-----------------------------------------------|
| VMDK1  | 12 GB     | / and /boot         | Boot                                          |
| VMDK2  | 1.3 GB    | /tmp/mount          | Temp mount                                    |
| VMDK3  | 25 GB     | SWAP                | Swap space                                    |
| VMDK4  | 25 GB     | /storage/core       | Core dumps                                    |
| VMDK5  | 10 GB     | /storage/log        | System logs                                   |
| VMDK6  | 10 GB     | /storage/db         | Postgres DB location                          |
| VMDK7  | 5 GB      | /storage/dblog      | Postgres DB logs                              |
| VMDK8  | 10 GB     | /storage/seat       | Stats, events, and tasks (SEAT) for Postgres  |
| VMDK9  | 1 GB      | /storage/netdump    | Netdump collector                             |
| VMDK10 | 10 GB     | /storage/autodeploy | Auto Deploy repository                        |
| VMDK11 | 5 GB      | /storage/invsvc     | Inventory service bootstrap and tomcat config |

So in the issue above, VMDK5 was expanded and then the autogrow command was run, which resolved our issue.
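The table maps directly to a small lookup you could keep in a shell profile. The vmdk_for_mount helper below is purely my own hypothetical naming, not a VMware tool; it just encodes the rows above so you expand the right virtual disk:

```shell
# Hypothetical helper (not a VMware utility): map a VCSA 6.0 mount point
# to the VMDK that backs it, per the table above.
vmdk_for_mount() {
  case "$1" in
    /storage/core)       echo VMDK4 ;;
    /storage/log)        echo VMDK5 ;;
    /storage/db)         echo VMDK6 ;;
    /storage/dblog)      echo VMDK7 ;;
    /storage/seat)       echo VMDK8 ;;
    /storage/netdump)    echo VMDK9 ;;
    /storage/autodeploy) echo VMDK10 ;;
    /storage/invsvc)     echo VMDK11 ;;
    *)                   echo unknown; return 1 ;;
  esac
}

vmdk_for_mount /storage/log    # prints VMDK5 -> the disk to expand in this case
```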


