A while back I was welcomed to the office by a vCenter Server Appliance critical health alert, specifically, ‘The /storage/log filesystem is out of disk space or inodes’. This error usually stems from a failed automated log clean-up process, so in this article I detail how to implement a temporary ‘get out of jail’ fix, followed by a more permanent fix: identifying the offending files and tidying them up.
Firstly, let’s take a look at the file system itself to confirm our UI findings. SSH onto the VCSA, enter BASH, then list all mounted file systems via the df -h command. The below screenshot confirms the UI warning: the file system in question has been completely consumed.
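As a quick sketch, that same check can be scripted so you don’t have to eyeball the df output. The mount point and threshold below are illustrative (not from this article); on the VCSA you would point it at /storage/log.

```shell
# Report the usage percentage of a mount point and warn above a threshold.
# MOUNT and THRESHOLD are illustrative; on the VCSA you would use /storage/log.
MOUNT="${1:-/}"
THRESHOLD=90

# -P forces POSIX single-line output, so the awk parsing is reliable.
usage=$(df -P "$MOUNT" | awk 'NR==2 { sub("%", "", $5); print $5 }')
echo "usage of $MOUNT: ${usage}%"

if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "WARNING: $MOUNT is ${usage}% full"
fi
```

Dropping this into cron would give you a heads-up before the health alert fires, though the built-in VCSA alarms should normally do that job.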
The ‘Get Out of Jail’ Temporary Fix
In the unfortunate event that this issue is preventing you from accessing vCenter, we can implement a quick fix by extending the affected disk. Note, this is a quick fix only and should be implemented to restore vCenter access only. This should not be relied on as a permanent resolution.
As we have already identified the problematic disk, jump over to the vSphere Client and extend the disk in question (by how much is your call, but in my environment I added an additional 5 GB). This leaves us the final task of initiating the extension so the VCSA can see the additional space. Depending on your VCSA version, there are two options:
VCSA v6.0
vpxd_servicecfg storage lvm autogrow
VCSA v6.5 and 6.7
/usr/lib/applmgmt/support/scripts/autogrow.sh
Lastly, list all file systems to confirm the extension has been realised.
So, we’re out of jail, but we still have an offending consumer. In my instance, checking within the file system identified a number of large log files which hadn’t been cleared automatically by the VCSA, so manual intervention was required: specifically, the removal of the localhost_access_log, vmware-identity-sts, and vmware-identity-sts-perf logs. These can be removed via the below command.
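As a safe sketch of that clean-up, the snippet below demonstrates the pattern against a throw-away temporary directory so nothing real is deleted. On the appliance you would point LOG_DIR at the SSO log directory instead (the exact path varies by version, so verify with du before removing anything).

```shell
# Demo against a temp dir; on the VCSA, set LOG_DIR to the SSO log
# directory instead (path varies by version - check with du first).
LOG_DIR=$(mktemp -d)
touch "$LOG_DIR/localhost_access_log.2019-01-01.txt" \
      "$LOG_DIR/vmware-identity-sts.log.1" \
      "$LOG_DIR/vmware-identity-sts-perf.log.1"

# Review how much space the candidates consume before deleting.
du -sh "$LOG_DIR"

# Remove the rotated logs the automated clean-up missed.
rm -f "$LOG_DIR"/localhost_access_log* \
      "$LOG_DIR"/vmware-identity-sts*

ls -A "$LOG_DIR" | wc -l   # prints 0 - the directory is now empty
```

Note the vmware-identity-sts* glob also catches the vmware-identity-sts-perf files, so two patterns cover all three log types.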
Following the removal, another df -h shows we’re back in business.
Lastly, in this instance, restart the Security Token Service so that new log files are created.
service vmware-stsd restart
For this specific issue, please see VMware KB article 2143565; however, if in doubt, do call upon VMware Support. The team will be able to assist you in identifying the offending files/directories which can be safely removed.
Also works on vCenter appliance 6.7
This saved me. Thank you for publishing this. I didn’t realize that one full mount could bring a company down!
It’s always the little things!
Great post – got us out of a serious jam. We could not power on VMs because of the log files “stuck” in the SSO directory
Glad to hear it helped Bryan. Like you, I experienced this the ‘bad way’.
Great article and thank you. Following your article I was able to free up 4% of space. I was wondering what else I can safely remove? Thank you
localhost:/storage/log # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 11G 6.1G 4.1G 60% /
udev 4.0G 164K 4.0G 1% /dev
tmpfs 4.0G 44K 4.0G 1% /dev/shm
/dev/sda1 128M 41M 81M 34% /boot
/dev/mapper/core_vg-core 25G 7.2G 17G 31% /storage/core
/dev/mapper/log_vg-log 9.9G 8.9G 475M 96% /storage/log
/dev/mapper/db_vg-db 9.9G 304M 9.1G 4% /storage/db
/dev/mapper/dblog_vg-dblog 5.0G 267M 4.5G 6% /storage/dblog
/dev/mapper/seat_vg-seat 9.9G 507M 8.9G 6% /storage/seat
/dev/mapper/netdump_vg-netdump 1001M 18M 932M 2% /storage/netdump
/dev/mapper/autodeploy_vg-autodeploy 9.9G 151M 9.2G 2% /storage/autodeploy
/dev/mapper/invsvc_vg-invsvc 5.0G 168M 4.6G 4% /storage/invsvc
localhost:/storage/log # du -h
Great news Michael! This post has certainly been interesting to follow in terms of replies. Remember though, this is only a get out of jail card. Once you’ve got your VCSA back up and running, I’d suggest logging a support call with GSS (if you haven’t already) to fully resolve the issue. Also, the following KB might help further – https://kb.vmware.com/s/article/2143565.