I recently ran into a small issue while applying a patch to one of my lab’s vCenter Servers. Specifically, the update attempt failed with the error ‘Exception occurred in install precheck phase’.
vSAN deployments in brownfield environments are simple. New hosts are configured based on projected workloads (plus points for utilising vSAN Ready Nodes), they’re purchased, racked, built, and absorbed into an existing vCenter workload domain before vSAN is finally enabled and configured. But how would we deploy vSAN into a greenfield environment? An environment with no vCenter, no shared storage, but only brand new ESXi hosts with valid (yet unconfigured) cache and capacity vSAN disks? As vSAN is reliant on vCenter for its operations, we seemingly have a chicken-and-egg scenario.
In this article, I detail the process of deploying (Stage 1) and configuring (Stage 2) a vCenter Server Appliance into a greenfield environment and, more specifically, onto a single-node vSAN cluster in hybrid-mode (Note – this is in no way supported by VMware for anything other than deploying vCenter and vSAN into a greenfield environment). I then add additional hosts to the cluster and configure vSAN storage and networking via the brilliant Cluster Quickstart tool (Stage 3), before applying a vSAN VM Storage policy to the vCenter Server Appliance (Stage 4). Once complete, our vSAN cluster will be ready to host live workloads.
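For context, the trick that resolves the chicken-and-egg problem is bootstrapping vSAN on the first host before vCenter exists; depending on version, the VCSA installer can take care of this during Stage 1. As a rough sketch only (not necessarily the exact steps used here), the manual equivalent from the first host’s ESXi shell looks something like the below, where the device identifiers are placeholders and a vSAN-tagged VMkernel port is assumed to already exist:
esxcli vsan cluster new    # create a single-node vSAN cluster on this host
esxcli vsan storage add --ssd naa.cache_device_id --disks naa.capacity_device_id    # claim the cache and capacity disks into a disk group
esxcli vsan cluster get    # confirm the host now reports itself as master of its one-node cluster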
Patching my lab’s vCenter Server Appliance this evening raised an issue whereby the root password had expired. Although unable to log in as root, I could still administer the appliance via a vCenter SSO domain account (administrator@vsphere.local, for instance); however, no updates can be applied until the appliance’s root account password is reset. This is an easy exercise, but it cannot be done via the vSphere UI or the appliance console, only from bash.
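As a rough sketch of the fix (assuming you can reach a root bash shell on the appliance, for example via a rescue boot), the reset itself boils down to the below; the 365-day value is just an example:
chage -l root    # confirm the root password has indeed expired
passwd root    # set a new root password
chage -M 365 root    # optionally, extend the password expiry window (in days)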
Following a recent upgrade of VMware NSX Data Center for vSphere from 6.4.1 to 6.4.4, the option to access NSX’s Networking and Security extension from within the vSphere Client (HTML 5) had simply disappeared. This left me scratching my head a little, more so as I’ve completed this upgrade what seems like a million times.
Scenario-wise, I had completed the initial NSX Manager upgrade, but after logging in to the vSphere Client, I noted the Networking and Security extension failed to display.
A while back I was welcomed to the office by a vCenter Server Appliance critical health alert, specifically, ‘The /storage/log filesystem is out of disk space or inodes’. This error is usually due to a failed automated log clean-up process, so in this article I detail how to implement a temporary ‘get out of jail’ fix, followed by a more permanent fix: identifying the offending files and tidying them up.
Firstly, let’s take a look at the file system itself to confirm the UI’s findings. SSH onto the VCSA appliance and enter bash, then list all available file systems via the df -h command. The screenshot below confirms the UI warning: the file system in question has been completely consumed.
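For reference, the steps look something like the below, where the appliance FQDN is a placeholder:
ssh root@vcsa.lab.local    # connect to the appliance
shell    # drop from the appliance shell into bash (run shell.set --enabled True first if bash is disabled)
df -h    # list the file systems; /storage/log is the one to watch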
The ‘Get Out of Jail’ Temporary Fix
In the unfortunate event that this issue is preventing you from accessing vCenter, we can implement a quick fix by extending the affected disk. Note, this is intended solely to restore vCenter access and should not be relied on as a permanent resolution.
As we have already identified the problematic disk, jump over to the vSphere Client and extend the disk in question (how much is your call, but in my environment I’ve added an additional 5 GB). This leaves us with the final task of initiating the extension and enabling the VCSA to see the additional space. Depending on your VCSA version, there are two options:
VCSA v6.0
vpxd_servicecfg storage lvm autogrow
VCSA v6.5 and 6.7
/usr/lib/applmgmt/support/scripts/autogrow.sh
Lastly, list all file systems to confirm the extension has been realised.
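In my case that’s simply another df -h, optionally scoped to the file system in question:
df -h /storage/log    # the additional 5 GB should now be reflected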
Permanent Fix
So, we’re out of jail, but we still have an offending consumer. In my instance, checking within the file system identified a number of large log files which hadn’t been cleared automatically by the VCSA, so manual intervention was required; specifically, the removal of the localhost_access_log, vmware-identity-sts, and vmware-identity-sts-perf logs. These can be removed via the below command.
rm log-file-name.*
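For completeness, this is roughly how the largest consumers can be surfaced in the first place (sizes are reported in 1K blocks):
du -a /storage/log | sort -rn | head -20    # list the twenty largest files and directories under /storage/log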
Following the removal, another df -h shows we’re back in business.
Lastly, and in this instance, restart the Security Token Service to initiate the creation of new log files.
service vmware-stsd restart
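To confirm the service has come back up, the same wrapper can report its status (on newer appliance builds, service-control --status vmware-stsd is the equivalent):
service vmware-stsd status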
Further Reading
For this specific issue, please see VMware KB article 2143565; however, if in doubt, do call upon VMware Support. The team will be able to assist you in identifying the offending files/directories which can be safely removed.
With the release of vSphere 6.7 back in April 2018, a host of new enhancements, features, and goodies had the vCommunity going wild. With enhanced feature parity between the legacy vSphere Web Client and the new HTML 5 vSphere Client, as well as the vCenter Server Appliance boasting ~2X faster vCenter operations per second, a ~3X reduction in memory usage, and ~3X faster DRS-related operations (e.g. powering on a virtual machine), these two areas alone made most of us want to upgrade. Nice.
vSphere 6.7 also boasts the new Quick Boot feature for hosts running the ESXi 6.7 hypervisor and above. Alongside the Single Reboot upgrade enhancement, this allows users to a) reduce maintenance time by cutting the number of reboots required during major version upgrades (Single Reboot), and b) restart the ESXi hypervisor without having to reboot the physical host, essentially skipping the time-consuming hardware initialisation (Quick Boot). Very nice!
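As an aside, a host’s Quick Boot compatibility can be checked from the ESXi shell via the bundled script below (present on ESXi 6.7 hosts, to the best of my knowledge):
/usr/lib/vmware/loadesx/bin/loadESXCheckCompat.py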
By design, there are certain virtual machines and/or appliances within vSphere which are protected to prevent editing (these can include NSX Controllers, Edges, Logical Routers, etc.). In a live/production environment, you’d not normally care about editing these appliances; however, in a lab environment (especially one where resource is tight), reducing their memory and/or CPU allocation can help a lot. As such, this article will cover the process of removing the lock on a protected VM in vSphere in order to enable editing.
The scenario: a customer needs to reduce the resource allocation of an NSX Controller; however, because the VM in question is protected/locked, editing its resources is not possible via the UI or PowerCLI.
The process of removing this lock is quick and easy; however, we first need to identify the virtual machine’s Managed Object Reference (moRef ID). Please note, VMware do not support or recommend this procedure in any way, and as such it should not be implemented in a production environment.
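For reference, the moRef ID is easy to pull via PowerCLI; the controller name below is a placeholder, and the returned Id property (e.g. VirtualMachine-vm-123) is the value we’re after:
Get-VM -Name 'NSX_Controller_1' | Select-Object Name, Id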