In a nutshell, vRealize Network Insight delivers intelligent operations for software-defined networking and security. It helps customers build an optimised, highly-available, and secure network infrastructure across multi-cloud environments. It accelerates micro-segmentation planning and deployment, enables visibility across virtual and physical networks, and provides operational views to manage and scale VMware NSX deployments.
By default, NSX-T transport nodes access NSX-T Manager nodes via their IP addresses. However, changing this behaviour so that the NSX-T Manager FQDN is used instead is an easy fix, implemented via a single REST API call.
FQDN registration is an NSX-T Multisite requirement; it is not required for single-site deployments.
Should a customer need to fail over NSX-T operations to a secondary site (by deploying a new NSX-T Manager and restoring from backup), the NSX-T Manager and Cluster VIP addresses will likely change unless stretched L2 has been implemented. As such, the NSX-T Manager/Cluster FQDN needs to be registered with all NSX-T transport nodes. Then, once a new NSX-T Manager is deployed to the secondary site and restored from backup, DNS can simply be amended and management operations restored.
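As a sketch, the REST API call in question is a PUT to the manager's /api/v1/configs/management endpoint with publish_fqdns set to true. The manager FQDN below (nsxmgr-01.lab.local) is a placeholder, and the live curl calls are commented out so the snippet is safe to run as-is:

```shell
# Hypothetical NSX-T Manager FQDN - substitute your own.
NSX_MANAGER="nsxmgr-01.lab.local"

# First GET the current config to confirm its _revision value
# (uncomment to run against a live manager):
# curl -k -u admin "https://${NSX_MANAGER}/api/v1/configs/management"

# PUT the config back with publish_fqdns enabled; _revision must match
# the value returned by the GET above (0 on a fresh deployment).
BODY='{"publish_fqdns": true, "_revision": 0}'
# curl -k -u admin -X PUT "https://${NSX_MANAGER}/api/v1/configs/management" \
#      -H "Content-Type: application/json" -d "${BODY}"
echo "${BODY}"
```

Setting publish_fqdns back to false reverts transport nodes to IP-based manager access.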
In NSX-T, the Admin and Audit user passwords for both the NSX Manager and NSX Edge appliances expire, by default, after 90 days. When these passwords expire, you will not be able to log in and manage your NSX-T components. This includes any API calls where administrative credentials are required.
In this article I detail the simple process of amending the expiration period or, if required, removing the password expiration altogether (the latter being perfect for POC and/or lab environments).
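For reference, the changes are made via the NSX-T CLI (nsxcli) on each Manager and Edge appliance. A minimal sketch, assuming the admin user and a 9999-day expiry as illustrative values:

```
get user admin password-expiration          # show the current expiry (90 days by default)
set user admin password-expiration 9999     # extend the expiry period
clear user admin password-expiration        # remove expiry altogether (labs/POCs)
```

The same commands apply to the audit user by substituting the username.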
For those coming from an NSX-V background, you’ll remember how we enabled east-west traffic by deploying Distributed Logical Routers (DLR). This has changed ever so slightly in NSX-T, with earlier versions using Tier-1 Logical Routers, and in 2.4, Tier-1 Gateways.
That didn’t disappoint! I’ve wanted to visit the North East England VMUG for some time, so being asked to present at the user group made it all the more special. As I sit here in Newcastle International Airport waiting for my flight home, I thought I’d summarise the event for those who’ve never been to a VMUG event, are thinking of doing so in the future, or are thinking of speaking at a local VMUG.
vSAN deployments in brownfield environments are simple. New hosts are configured based on projected workloads (plus points for utilising vSAN Ready Nodes), they’re purchased, racked, built, and absorbed into an existing vCenter workload domain before vSAN is finally enabled and configured. But how would we deploy vSAN into a greenfield environment? An environment with no vCenter, no shared storage, but only brand new ESXi hosts with valid (yet unconfigured) cache and capacity vSAN disks? As vSAN is reliant on vCenter for its operations, we seemingly have a chicken-and-egg scenario.
In this article, I detail the process of deploying (Stage 1) and configuring (Stage 2) a vCenter Server Appliance into a greenfield environment and, more specifically, onto a single-node vSAN cluster in hybrid-mode (Note – this is in no way supported by VMware for anything other than deploying vCenter and vSAN into a greenfield environment). I then add additional hosts to the cluster and configure vSAN storage and networking via the brilliant Cluster Quickstart tool (Stage 3), before applying a vSAN VM Storage policy to the vCenter Server Appliance (Stage 4). Once complete, our vSAN cluster will be ready to host live workloads.
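Under the hood, the single-node bootstrap performed in Stage 1 can also be sketched manually with esxcli on the first (unconfigured) ESXi host; the device identifiers below are placeholders for your cache and capacity disks:

```
esxcli vsan cluster new                     # create a single-node vSAN cluster
esxcli vsan storage add \
    -s mpx.vmhba0:C0:T1:L0 \                # cache-tier device (SSD)
    -d mpx.vmhba0:C0:T2:L0                  # capacity-tier device
esxcli vsan cluster get                     # confirm cluster membership/health
```

With a datastore now available on the lone host, the vCenter Server Appliance has somewhere to land, breaking the chicken-and-egg cycle.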
The On-Demand Library for this year’s VMworld in San Francisco is now live with all sessions available to stream. Simply visit the VMworld 2019 On-Demand Library and enjoy!
After almost six years at the Royal College of Nursing (the last two and a half as Senior Infrastructure Architect) the time has come to move on to pastures new. I’ve loved working with such a talented team of professionals (David Collins, Wayne Shadrach, Richard Thompson, Mark Whalley, and Darren Latter), and the leadership of Geoff Lewis (Technical Operations Manager) and Huw Bevan (IT Operations Manager) has been an inspiration since day one.
I’ve been lucky enough to have led on a number of exciting projects over the years, and my sincerest thanks to all at the RCN for the opportunities, the friendships (which don’t end here), and the amazing memories. The RCN will always hold a special place in my heart.
The next North East England VMUG will be taking place on Thursday 26th September at the Royal Station Hotel, Newcastle, and I’m excited to be presenting alongside so many fantastic individuals from throughout the vCommunity.
My session will be covering VMware NSX Data Centre for vSphere (NSX-V) and, more specifically, a real-world look at micro-segmentation and the implementation of a zero-trust environment. NSX makes this fairly easy thanks to a number of built-in tools, and we’ll explore how we can use the NSX Application Rule Manager to visualise application dependencies in order to start fleshing out our Distributed Firewall rules.
Patching my lab’s vCenter Server Appliance this evening raised an issue whereby the root password had expired. Although unable to log in as root, I could still administer the appliance via a vCenter SSO domain account (email@example.com, for instance); however, no updates can be performed until the appliance’s root account password is reset. This is an easy exercise, but it cannot be done via the vSphere UI or appliance console, only via the Bash shell.
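The reset itself boils down to standard Linux commands once you reach the Bash shell (type shell at the appliancesh prompt after logging on to the appliance console); a sketch:

```
passwd root          # set a new root password
chage -l root        # confirm the account's new expiry settings
chage -M -1 root     # optional: disable root password expiry (labs only)
```

With the root password reset, appliance updates can proceed as normal.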