I recently ran into a small issue applying a patch to one of my lab’s vCenter Servers. Specifically, attempts to patch the vCenter Server failed with the error ‘Exception occurred in install precheck phase’.
With the announcement and general availability of VMware NSX-T Data Center 3.1 on Friday 30th October 2020 come a number of enhancements and new features. These are covered in a previous post (VMware NSX-T 3.1.0 Release Announcement); however, I realise I’ve never discussed the upgrade procedure itself.
Upgrading NSX-T Data Center couldn’t be easier. Yes, there are some disruptive elements; however, if your NSX-T design has redundancy built in, the disruption is minimal. Upgrading the edge and transport nodes is as straightforward as upgrading the NSX Managers themselves and, in this article, I cover the process from start to finish.
VMware confirmed general availability of NSX-T Data Center 3.1.0 on Friday 30th October, bringing many great new features and functionality. Continue reading to review the latest enhancements, or jump over to the upgrade procedure, available in a separate article – Upgrading NSX-T Data Center to 3.1.0.
In this article, we take a look at VMware NSX-T Data Center Multisite and, more specifically, the failover/recovery procedure for an NSX-T environment spanning two sites in an active/standby deployment. This is also known as the NSX-T Multisite disaster recovery use case.
Backing up NSX-T Data Center is a simple process; however, at the time of writing, automating the retention period for the backup files requires a few additional tasks and is not configurable via the NSX Manager UI. These additional steps are quick to implement and will ensure your SFTP server does not run out of storage.
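Because the NSX Manager UI offers no retention setting, one of those additional tasks is a small pruning job scheduled on the SFTP server itself. The sketch below is a minimal example; the backup path and retention period are placeholders you would adjust for your own environment.

```python
#!/usr/bin/env python3
"""Prune NSX-T backup files/directories older than a retention period.
BACKUP_ROOT and RETENTION_DAYS are example values, not NSX defaults."""
import os
import shutil
import time

BACKUP_ROOT = "/srv/sftp/nsx-backups"   # hypothetical SFTP backup target
RETENTION_DAYS = 14                      # keep two weeks of backups

def prune_old_backups(root: str, retention_days: int) -> list:
    """Delete top-level entries under root whose modification time is
    older than retention_days; return the paths that were removed."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        if os.path.getmtime(path) < cutoff:
            if os.path.isdir(path):
                shutil.rmtree(path)
            else:
                os.remove(path)
            removed.append(path)
    return removed

if __name__ == "__main__":
    for path in prune_old_backups(BACKUP_ROOT, RETENTION_DAYS):
        print(f"removed {path}")
```

Scheduled via cron (for example, nightly at 02:00: `0 2 * * * /usr/local/bin/prune_backups.py`), this keeps the backup target from filling up.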
In my previous articles, we installed (VMware vRealize Network Insight (vRNI) – Part 1 – Installation) and configured (VMware vRealize Network Insight (vRNI) – Part 2 – Configuration) our VMware vRealize Network Insight infrastructure.
Now that we have the vRNI components in place and happily collecting data, we’re going to take a quick detour and configure LDAP, enabling our users to log in using their domain credentials instead of the single local admin@local account.
In my last vRealize Network Insight article (VMware vRealize Network Insight (vRNI) – Part 1 – Installation) we covered the initial installation of the on-premises Platform and Proxy/Collector appliances.
Following on from the installation we will begin looking at how we actually add data sources to vRNI in readiness for application discovery and data flow analysis.
In this article, we will add a vCenter Server and an NSX-T Manager.
In a nutshell, vRealize Network Insight delivers intelligent operations for software-defined networking and security. It enables customers to build an optimised, highly-available, and secure network infrastructure across multi-cloud environments. It accelerates micro-segmentation planning and deployment, enables visibility across virtual and physical networks, and provides operational views to manage and scale the VMware NSX deployments.
By default, NSX-T transport nodes access NSX-T Manager nodes via their IP address; however, changing this behaviour so that the NSX-T Manager FQDN is used instead is an easy change, implemented with a single REST API call.
FQDN registration is a requirement for NSX-T Multisite; it is not required for single-site deployments.
In the scenario where a customer needs to fail over NSX-T operations to a secondary site (by deploying a new NSX-T Manager and restoring from backup), the NSX-T Manager and cluster VIP addresses will likely change unless stretched L2 has been implemented. As such, the NSX-T Manager/cluster FQDN needs to be registered with all NSX-T transport nodes; once a new NSX-T Manager is deployed to the secondary site and restored from backup, DNS can be amended and management operations restored.
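The REST API call in question toggles the `publish_fqdns` flag on the manager’s `/api/v1/configs/management` endpoint. Below is a minimal Python sketch; the manager hostname and credentials are placeholders, and certificate verification is disabled purely for lab use.

```python
#!/usr/bin/env python3
"""Enable FQDN registration (publish_fqdns) on an NSX-T Manager.
NSX_MANAGER and CREDENTIALS are placeholders for your environment."""
import base64
import json
import ssl
import urllib.request

NSX_MANAGER = "nsxmanager.example.com"   # placeholder manager FQDN
CREDENTIALS = "admin:VMware1!VMware1!"   # placeholder credentials

def build_payload(current_config: dict, publish: bool = True) -> dict:
    """Return the PUT body for /api/v1/configs/management: the config
    returned by GET (including its _revision) with publish_fqdns set."""
    payload = dict(current_config)
    payload["publish_fqdns"] = publish
    return payload

def set_publish_fqdns(publish: bool = True) -> dict:
    url = f"https://{NSX_MANAGER}/api/v1/configs/management"
    token = base64.b64encode(CREDENTIALS.encode()).decode()
    headers = {"Authorization": f"Basic {token}",
               "Content-Type": "application/json"}
    ctx = ssl._create_unverified_context()  # lab only: skip cert verification
    # GET the current config first; the API requires its _revision echoed back.
    current = json.load(urllib.request.urlopen(
        urllib.request.Request(url, headers=headers), context=ctx))
    req = urllib.request.Request(
        url, method="PUT", headers=headers,
        data=json.dumps(build_payload(current, publish)).encode())
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(set_publish_fqdns(True), indent=2))
```

Setting `publish_fqdns` back to `false` with the same call reverts the transport nodes to IP-based registration.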
In NSX-T, the Admin and Audit user passwords for both the NSX Manager and NSX Edge appliances expire, by default, after 90 days. When these passwords expire, you will not be able to log in and manage your NSX-T components. This includes any API calls where administrative credentials are required.
In this article, I detail the simple process of amending the expiration period or, if required, removing password expiration altogether (the latter being perfect for POC and/or lab environments).
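One way to amend the expiration period is via the NSX-T node API, where each local user carries a `password_change_frequency` value in days and 0 disables expiry. The sketch below is illustrative; the manager address, credentials, and user ID are placeholders, and certificate verification is disabled for lab use only.

```python
#!/usr/bin/env python3
"""Amend (or disable) NSX-T local-user password expiry via the node API.
NSX_MANAGER and CREDENTIALS are placeholders; a password_change_frequency
of 0 means the password never expires."""
import base64
import json
import ssl
import urllib.request

NSX_MANAGER = "nsxmanager.example.com"   # placeholder manager FQDN
CREDENTIALS = "admin:VMware1!VMware1!"   # placeholder credentials

def build_payload(user: dict, days: int) -> dict:
    """Return the PUT body for /api/v1/node/users/<userid>: the user
    record with password_change_frequency set (0 disables expiry)."""
    payload = dict(user)
    payload["password_change_frequency"] = days
    return payload

def set_password_expiry(userid: int, days: int) -> dict:
    url = f"https://{NSX_MANAGER}/api/v1/node/users/{userid}"
    token = base64.b64encode(CREDENTIALS.encode()).decode()
    headers = {"Authorization": f"Basic {token}",
               "Content-Type": "application/json"}
    ctx = ssl._create_unverified_context()  # lab only: skip cert verification
    # Fetch the current user record, then PUT it back with the new value.
    current = json.load(urllib.request.urlopen(
        urllib.request.Request(url, headers=headers), context=ctx))
    req = urllib.request.Request(
        url, method="PUT", headers=headers,
        data=json.dumps(build_payload(current, days)).encode())
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # userid 10000 is typically the admin account on NSX appliances.
    set_password_expiry(10000, 0)
```

The same change must be applied to each NSX Manager and NSX Edge node, since the admin and audit accounts are local to every appliance.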