Backing up NSX-T Data Center is a simple process; however, at the time of writing, automating the retention period for the backup files requires a few additional tasks and is not configurable via the NSX Manager UI. These additional steps are quick to implement and will ensure your SFTP server does not run out of storage.
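Since the retention logic lives on the SFTP server rather than in NSX, one way to implement it is a small pruning script run from cron. The below is a minimal sketch, assuming a hypothetical backup directory (`/srv/sftp/nsx-backups`) and a 14-day retention window; adjust both to suit your environment.

```python
#!/usr/bin/env python3
"""Prune aged NSX-T backup files on the SFTP server.

A minimal sketch: anything under BACKUP_ROOT older than
RETENTION_DAYS is deleted. Schedule via cron on the SFTP server.
"""
import os
import time

BACKUP_ROOT = "/srv/sftp/nsx-backups"   # hypothetical backup directory
RETENTION_DAYS = 14                     # keep two weeks of backups


def prune_old_backups(root: str, retention_days: int) -> list:
    """Delete files under root whose mtime is older than the cutoff.

    Returns the list of paths removed, for logging."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for dirpath, _dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed.append(path)
        # remove directories emptied by the pruning pass
        if dirpath != root and not os.listdir(dirpath):
            os.rmdir(dirpath)
    return removed


if __name__ == "__main__":
    for path in prune_old_backups(BACKUP_ROOT, RETENTION_DAYS):
        print(f"removed {path}")
```

A crontab entry such as `0 2 * * * /usr/local/bin/prune_nsx_backups.py` would then run the pruning pass nightly.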
In my previous articles, we installed (VMware vRealize Network Insight (vRNI) – Part 1 – Installation) and configured (VMware vRealize Network Insight (vRNI) – Part 2 – Configuration) our VMware vRealize Network Insight infrastructure.
Now that we have the vRNI components in place and happily collecting data, we’re going to take a quick detour and configure LDAP, enabling our users to log in using their domain credentials instead of the single local admin@local account.
In my last vRealize Network Insight article (VMware vRealize Network Insight (vRNI) – Part 1 – Installation) we covered the initial installation of the on-premises Platform and Proxy/Collector appliances.
Following on from the installation, we will begin looking at how we add data sources to vRNI in readiness for application discovery and data-flow analysis.
In this article, we will add a vCenter Server and an NSX-T Manager.
In a nutshell, vRealize Network Insight delivers intelligent operations for software-defined networking and security. It enables customers to build an optimised, highly-available, and secure network infrastructure across multi-cloud environments. It accelerates micro-segmentation planning and deployment, enables visibility across virtual and physical networks, and provides operational views to manage and scale the VMware NSX deployments.
By default, NSX-T transport nodes access NSX-T Manager nodes via their IP address. Changing this behaviour so that the NSX-T Manager FQDN is used instead is an easy fix, implemented via a single REST API call.
FQDN registration is an NSX-T Multisite requirement; it is not required for single-site deployments.
In a scenario where a customer needs to fail over NSX-T operations to a secondary site (by deploying a new NSX-T Manager and restoring from backup), the NSX-T Manager and Cluster VIP addresses will likely change unless stretched L2 has been implemented. The NSX-T Manager/Cluster FQDN therefore needs to be registered with all NSX-T transport nodes; once a new NSX-T Manager is deployed to the secondary site and restored from backup, DNS can be amended and management operations restored.
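The REST API call in question targets the NSX-T `/api/v1/configs/management` endpoint: you GET the current management config and PUT it back with `publish_fqdns` set to `true`, echoing the current `_revision`. A minimal sketch follows; the Manager FQDN and the disabled certificate verification are assumptions for a lab environment.

```python
#!/usr/bin/env python3
"""Enable FQDN publication on the NSX-T Manager.

Sketch of the GET-then-PUT against /api/v1/configs/management.
Host, credentials, and certificate handling are lab assumptions.
"""
import base64
import json
import ssl
import urllib.request

NSX_MANAGER = "nsxmanager.corp.local"   # hypothetical Manager FQDN
API = f"https://{NSX_MANAGER}/api/v1/configs/management"


def build_fqdn_payload(current: dict) -> dict:
    """NSX-T expects the current _revision echoed back on the PUT."""
    return {"publish_fqdns": True, "_revision": current["_revision"]}


def enable_publish_fqdns(username: str, password: str) -> dict:
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    headers = {"Authorization": f"Basic {token}",
               "Content-Type": "application/json"}
    ctx = ssl._create_unverified_context()  # lab only: skips cert checks

    # Fetch the current management config (for its _revision)...
    current = json.load(urllib.request.urlopen(
        urllib.request.Request(API, headers=headers), context=ctx))

    # ...then PUT it back with publish_fqdns enabled.
    req = urllib.request.Request(
        API, data=json.dumps(build_fqdn_payload(current)).encode(),
        headers=headers, method="PUT")
    return json.load(urllib.request.urlopen(req, context=ctx))
```

To revert to IP-based registration, the same PUT can be issued with `publish_fqdns` set to `false`.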
In NSX-T, the Admin and Audit user passwords for both the NSX Manager and NSX Edge appliances expire, by default, after 90 days. When these passwords expire, you will not be able to log in and manage your NSX-T components. This includes any API calls where administrative credentials are required.
In this article, I detail the simple process of amending the expiration period or, if required, removing password expiration altogether (the latter being perfect for POC and/or lab environments).
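For reference, the expiration period can be viewed and amended from the NSX CLI on each Manager and Edge appliance. The commands below are a sketch for the admin user; repeat per user and per appliance as needed.

```shell
# Check the current expiration setting for the admin user
get user admin password-expiration

# Extend the expiration period (days)
set user admin password-expiration 9999

# Or remove password expiration entirely (handy for POC/lab use)
clear user admin password-expiration
```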
For those coming from an NSX-V background, you’ll remember how we enabled east-west traffic by deploying Distributed Logical Routers (DLR). This has changed ever so slightly in NSX-T, with earlier versions using Tier-1 Logical Routers, and in 2.4, Tier-1 Gateways.
That didn’t disappoint! I’ve wanted to visit the North East England VMUG for some time, so being asked to present at the user group made it all the more special. As I sit here in Newcastle International Airport waiting for my flight home, I thought I’d summarise the event for those who’ve never been to a VMUG, are thinking of attending in the future, or are thinking of speaking at their local user group.
vSAN deployments in brownfield environments are simple. New hosts are configured based on projected workloads (plus points for utilising vSAN Ready Nodes), they’re purchased, racked, built, and absorbed into an existing vCenter workload domain before vSAN is finally enabled and configured. But how would we deploy vSAN into a greenfield environment? An environment with no vCenter, no shared storage, but only brand new ESXi hosts with valid (yet unconfigured) cache and capacity vSAN disks? As vSAN is reliant on vCenter for its operations, we seemingly have a chicken-and-egg scenario.
In this article, I detail the process of deploying (Stage 1) and configuring (Stage 2) a vCenter Server Appliance into a greenfield environment and, more specifically, onto a single-node vSAN cluster in hybrid-mode (Note – this is in no way supported by VMware for anything other than deploying vCenter and vSAN into a greenfield environment). I then add additional hosts to the cluster and configure vSAN storage and networking via the brilliant Cluster Quickstart tool (Stage 3), before applying a vSAN VM Storage policy to the vCenter Server Appliance (Stage 4). Once complete, our vSAN cluster will be ready to host live workloads.
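As an aside, the single-node bootstrap that gets us past the chicken-and-egg problem in Stage 1 can also be performed by hand from the ESXi shell. A hedged sketch, with hypothetical device identifiers (confirm yours with `vdq -q` first):

```shell
# Create a one-node vSAN cluster on this host
esxcli vsan cluster new

# Claim the cache (SSD) and capacity disks; device names are hypothetical
esxcli vsan storage add -s naa.cache_device -d naa.capacity_device

# Verify the host has joined its own cluster
esxcli vsan cluster get
```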
The On-Demand Library for this year’s VMworld in San Francisco is now live with all sessions available to stream. Simply visit the VMworld 2019 On-Demand Library and enjoy!