Twice each year VMware’s vExpert program opens its doors to applications from the IT and tech community. The second of those doors opened just recently, on June 7th 2019.
The vExpert community is a group of like-minded enthusiasts, bloggers, book authors, VMUG leaders, speakers, tool builders, and community leaders. If you are already busy in the community and are contributing in some way, this will without doubt open doors for you, give you priority access to VMware information and, of course, there are the usual vExpert licensing benefits.
If you are already consuming Microsoft Office 365, you will undoubtedly (to some degree) be utilising Azure Active Directory. Azure AD comes with an array of tools, some of which aren’t confined to the public cloud; some can even aid and strengthen your on-premises applications. One such tool is the Azure Multi-Factor Authentication Server, an on-premises two-factor authentication mechanism which can integrate with on-prem VMware Horizon environments.
The Azure MFA Server enables us to further enhance the security of numerous applications capable of integrating with 2FA, and VMware Horizon has been able to integrate with such solutions for some time. This additional layer of security is a much sought-after function which serves to further secure public access to internal desktop pools.
Today saw the release of VMware NSX-T 2.4, the latest and greatest, lauded as a ‘landmark release’ for the product.
Since its initial release in February 2017, NSX-T has focused on addressing organisational requirements to support cloud-native applications, bare metal workloads, multi-hypervisor environments, and public clouds. With the release of NSX-T 2.4, we can now add multi-clouds to the list.
NSX-T delivers security to diverse endpoints such as VMs, containers, and bare metal, as well as a range of cloud platforms and cloud native projects including Kubernetes, VMware PKS, Pivotal Application Service (PAS), and Red Hat OpenShift.
With NSX-T 2.4, VMware delivers further advancements in networking, security, automation, and ‘operational simplicity for everyone’ – including IT admins, DevOps teams, and developers. As such, NSX-T is an enabler for customers embracing cloud-native application development, expanding their use of public cloud, and requiring automation to drive agility.
The first South West UK VMUG will be taking place on Wednesday 20th March 2019 at the Bristol and Bath Science Park, an event which also marks my first time presenting at a VMUG. No pressure, but I will be following a session by fellow vExpert, Chris Lewis (no relation).
My session will be covering VMware NSX Data Center for vSphere (NSX-V) and, more specifically, the reality of managing a zero-trust environment for true micro-segmentation of services. NSX itself makes this fairly easy thanks to a number of tools (Application Rule Manager being just one); however, there are always a number of human variables which need to be acknowledged and identified along the way.
In what was a slightly quiet announcement, VMware NSX Data Center for vSphere 6.4.4 was released just two days ago, on Thursday 13th December 2018. I’m unsure why this was such a hushed release, as there are a number of cool items to shout about.
Other than the usual resolved issues, 6.4.4 has had a much awaited functionality update. Specifically, we are now able to manage Logical Switches, perform Edge Appliance management, Edge Services (DHCP, NAT), Edge Certificates, and Edge Grouping Objects, all from the HTML 5 vSphere Client. Until 6.4.4, these features were only available via the legacy Flex vSphere Web Client, forcing NSX administrators to jump between the two different consoles.
With the release of VMware NSX 6.3.0 back in February 2017, we saw the introduction of the Application Rule Manager (ARM). The Application Rule Manager allows us to a) simplify the process of creating grouping objects and distributed firewall rules for the micro-segmentation of existing workloads, and b) deploy applications within a zero-trust environment with greater speed and efficiency.
In this second post we take a look at the alternative load balancer mode – In-Line/Transparent mode. First of all, unlike the One-Armed/Proxy mode, In-Line load balancers require two logical interfaces (LIFs); one Uplink LIF (connected to either a DLR or upstream Edge) and one Internal LIF. The Internal LIF is directly connected to the network segment housing the back-end servers requiring load-balancing. In addition to this (and unlike the One-Armed/Proxy load balancer), In-Line load balancers are required to act as the default gateway for all back-end servers.
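As a rough sketch, that two-LIF layout can be expressed as an NSX-V style interface payload. This is an illustration only – the IP addresses are hypothetical lab values and the element names should be validated against the NSX API guide for your version:

```python
import xml.etree.ElementTree as ET

# Sketch of the two-LIF layout an In-Line/Transparent load balancer needs,
# in contrast to the single internal LIF of One-Armed mode.
# All addresses below are hypothetical lab values.
vnics = ET.Element("vnics")
for index, vnic_type, ip in (
    ("0", "uplink", "10.101.1.10"),      # Uplink LIF towards the DLR/upstream Edge
    ("1", "internal", "10.101.20.254"),  # Internal LIF: default gateway for the back-end servers
):
    vnic = ET.SubElement(vnics, "vnic")
    ET.SubElement(vnic, "index").text = index
    ET.SubElement(vnic, "type").text = vnic_type
    ag = ET.SubElement(ET.SubElement(vnic, "addressGroups"), "addressGroup")
    ET.SubElement(ag, "primaryAddress").text = ip
    ET.SubElement(vnic, "isConnected").text = "true"

payload = ET.tostring(vnics, encoding="unicode")
print(payload)
```

The key difference from the One-Armed payload is the addition of the uplink vNIC at index 0; the internal LIF address then doubles as the default gateway for every member server.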
Welcome to the first in a series of posts covering VMware NSX Edge load balancers. These posts will dive into the two main flavours – ‘One-Armed’ and ‘In-Line’ – and will cover use-cases for each option.
NSX Edge load balancers allow us to distribute incoming requests across a number of servers (aka – members) in order to achieve optimal resource utilisation, maximise throughput, minimise response time, and avoid application overload. NSX Edges allow load balancing up to Layer 7.
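As a quick illustration of that distribution behaviour, here is a minimal round-robin sketch in Python (the member names are hypothetical; round-robin is just one of the algorithms an NSX Edge pool can use):

```python
from itertools import cycle

# Hypothetical pool of back-end members (mirrors the NSX 'Pool' concept).
members = ["web01", "web02", "web03"]

# Round-robin is the simplest distribution algorithm:
# each incoming request is handed to the next member in turn.
rotation = cycle(members)

def route_request():
    """Return the member that will serve the next request."""
    return next(rotation)

# Six requests are spread evenly across the three members.
served = [route_request() for _ in range(6)]
print(served)  # ['web01', 'web02', 'web03', 'web01', 'web02', 'web03']
```

The NSX Edge makes this decision per connection (or per request when operating at Layer 7), which is what spreads load and lets the pool survive the loss of a member.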
In this first post, we deploy an NSX Edge, enable the load balancer feature, and configure it in One-Armed mode (aka – Proxy, SNAT, non-transparent mode). This One-Armed/Proxy mode is the simplest of the two deployments, and utilises a single internal Logical Interface (LIF) (i.e. – it’s ‘one arm’).
This flavour of NSX Edge load balancer utilises its own IP address as the source address to send requests to back-end/member servers. The member servers see this traffic as originating from the load balancer and not the client and, as a result, all responses are sent directly to the load balancer. So, nice and simple, and this is usually my go-to solution where I have a requirement to load balance across a number of member servers for resilience.
In this article we have a basic topology consisting of an NSX Edge load balancer (LB-101-10-WEB / 10.101.10.100) and two back-end/Member servers (101-10-WEB01 / 10.101.10.11 and 101-10-WEB02 / 10.101.10.12), all of which are housed on the same Logical Switch (10.101.10.0/24).
Note, this article assumes your Logical Switches are already in play, traffic is able to route directly to each of the back-end servers, and you have created the necessary NSX Distributed Firewall rules. In this example, I will be configuring the NSX Edge load balancer to pass HTTP traffic to the back-end/Member servers.
NSX Edge – Deployment
1. Create a new NSX Edge Services Gateway. Note, for my lab environment I will not enable High Availability. When ready, click Next.
2. Configure the CLI credentials and click Next.
3. Configure the appliance size and resources. Again, for lab purposes, the Compact appliance size is appropriate. When ready, click Next.
4. Next up, we need to configure a single (one-armed) interface. Click the + button to begin.
5. Give the interface a name, select Internal, and connect it to the same Logical Switch which houses both back-end web servers. Assign a primary IP address (this will be used as the load balancer’s virtual IP address) and, when ready, click OK.
Note – 10.101.10.100 has been assigned to the internal LIF and will be utilised in a future step as the virtual IP address of our new application pool. Additional/secondary IP addresses can be added and assigned to additional application pools (more on this in a later step), meaning one load balancer is capable of load balancing multiple applications.
6. Confirm the configuration and click Next.
7. As the NSX Edge will not have an Uplink LIF, we will not be able to configure a default gateway. Click Next.
8. For lab purposes, I will not configure any firewall policies. Also, as we are not deploying the appliance in HA mode, all HA parameters will be greyed-out. Click Next.
9. Confirm the NSX Edge configuration, and click Finish to deploy.
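For those who prefer automation over the UI wizard, the same deployment can be sketched against the NSX-V REST API. This is an illustrative, trimmed-down payload only – a real deployment also needs datastore/resource-pool references, and the element names should be validated against the NSX API guide for your version:

```python
import xml.etree.ElementTree as ET

# Sketch of the XML payload the NSX-V REST API expects when deploying an
# Edge Services Gateway (POSTed to the NSX Manager's edges endpoint).
# Names and IPs are the lab values from this post; everything else is a
# placeholder to be replaced with your own environment's references.
edge = ET.Element("edge")
ET.SubElement(edge, "name").text = "LB-101-10-WEB"
ET.SubElement(edge, "type").text = "gatewayServices"

appliances = ET.SubElement(edge, "appliances")
ET.SubElement(appliances, "applianceSize").text = "compact"  # lab-sized appliance

# One-armed: a single internal LIF on the web server Logical Switch.
vnics = ET.SubElement(edge, "vnics")
vnic = ET.SubElement(vnics, "vnic")
ET.SubElement(vnic, "index").text = "0"
ET.SubElement(vnic, "type").text = "internal"
ag = ET.SubElement(ET.SubElement(vnic, "addressGroups"), "addressGroup")
ET.SubElement(ag, "primaryAddress").text = "10.101.10.100"
ET.SubElement(ag, "subnetMask").text = "255.255.255.0"
ET.SubElement(vnic, "isConnected").text = "true"

payload = ET.tostring(edge, encoding="unicode")
print(payload)
# The payload would then be POSTed to the NSX Manager, e.g.:
# requests.post("https://nsx-manager/api/4.0/edges", data=payload,
#               auth=("admin", password),
#               headers={"Content-Type": "application/xml"})
```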
NSX Edge – Routing
Here in Lab World, I don’t have OSPF/BGP configured, so we’ll create a static route to enable traffic to flow upstream. Looking at the topology a little more closely, you’ll note the NSX Edge load balancer has a next hop of 10.101.10.254 (the internal LIF of the DLR).
To configure the static route, simply jump into the configuration console of the newly created NSX Edge, browse to Manage > Routing > Static Routes, and click +. Configure accordingly and click OK.
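The same static route can be sketched as an API payload. As a hedged illustration, the edge ID "edge-1" and the routing endpoint path are assumptions for this lab – check your environment and the NSX API guide before use:

```python
import xml.etree.ElementTree as ET

# Sketch of a static routing payload for the NSX-V REST API.
# A catch-all route sends upstream traffic via the DLR's internal LIF,
# since this one-armed Edge has no uplink LIF or default gateway.
static_routing = ET.Element("staticRouting")
routes = ET.SubElement(static_routing, "staticRoutes")
route = ET.SubElement(routes, "route")
ET.SubElement(route, "network").text = "0.0.0.0/0"       # catch-all upstream route
ET.SubElement(route, "nextHop").text = "10.101.10.254"   # internal LIF of the DLR

payload = ET.tostring(static_routing, encoding="unicode")
print(payload)
# Applied with something along the lines of (edge ID assumed):
# requests.put("https://nsx-manager/api/4.0/edges/edge-1/routing/config/static",
#              data=payload, auth=("admin", password),
#              headers={"Content-Type": "application/xml"})
```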
NSX Edge – One-Armed Load Balancer Configuration
Now that our new NSX Edge has been deployed, we will enable the load balancer feature and configure it in One-Armed/Proxy mode.
1. Browse to Manage > Load Balancer > Global Configuration and click Edit.
2. Ensure Enable Load Balancer is ticked, and click OK.
3. Browse to Manage > Load Balancer > Application Profiles and click +.
Application Profiles – An Application Profile is used to define the behaviour of a particular type of network traffic, and is associated with a virtual server (virtual IP address). The virtual server then processes traffic according to the values specified in the Application Profile. This allows us to perform traffic management tasks with greater ease and efficiency.
4. As mentioned at the start of this post, we are only interested in load balancing for resilience. As such (and as detailed below), we will set the Application Profile Type to TCP.
5. Confirm creation of the new Application Profile.
6. Browse to Manage > Load Balancer > Pools and click +.
Pools – A Pool is simply a group of back-end servers (aka, Members), and is configured with a load-balancing distribution method/algorithm. A service monitor (optional) can also be configured and, as this suggests, is used to perform health checks on its Members.
7. Give your new Pool a Name and Description, and choose its distribution method/Algorithm and Monitors.
8. When ready, click + to add your back-end/member servers. To do this, either click Select to choose a vSphere Object, or simply type the destination’s IP address.
9. Define the Port (in this instance I am load-balancing HTTP/80 traffic), as well as the Monitor Port (here I use port 80 again). When done, click OK.
10. Confirm your configuration by clicking OK.
11. Confirm creation of the new Pool.
12. Check your newly created Pool’s health status by clicking Show Pool Statistics. The Status of both the Pool and its Members should show UP.
13. Browse to Virtual Servers and click +.
14. From the Application Profile drop-down menu, select the recently created Application Profile, give the Virtual Server a Name and Description, and click Select IP Address to select the IP address which we allocated to the internal LIF when we created the load balancer.
15. Lastly, set the Protocol to TCP, the Port/Port Range to 80, and the Default Pool to the pool we created in step 6.
16. Confirm creation of the new Virtual Server.
17. Finally, browse to the Virtual Server IP address to confirm load-balancing to each of the Pool Members is successful. In the below screenshot, traffic is routed to the VM, 101-10-WEB01.
18. After refreshing the browser, I am directed to 101-10-WEB02.
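For completeness, the UI steps above can also be expressed as a single load balancer configuration payload. Treat this as a sketch: the element names follow the NSX-V API guide, but both they and the endpoint should be validated against your own NSX version before use. The names and addresses are the lab values from this post:

```python
import xml.etree.ElementTree as ET

# Sketch of a one-armed load balancer configuration for the NSX-V REST API,
# combining the application profile, pool, and virtual server built in the
# UI steps above. Object names below are illustrative lab values.
lb = ET.Element("loadBalancer")
ET.SubElement(lb, "enabled").text = "true"   # step 2: Enable Load Balancer

# Application profile: plain TCP, as we only need resilience (step 4).
profile = ET.SubElement(lb, "applicationProfile")
ET.SubElement(profile, "name").text = "AP-101-10-WEB"
ET.SubElement(profile, "template").text = "TCP"

# Pool of back-end members with a round-robin algorithm (steps 6-9).
pool = ET.SubElement(lb, "pool")
ET.SubElement(pool, "name").text = "PL-101-10-WEB"
ET.SubElement(pool, "algorithm").text = "round-robin"
for name, ip in (("101-10-WEB01", "10.101.10.11"),
                 ("101-10-WEB02", "10.101.10.12")):
    member = ET.SubElement(pool, "member")
    ET.SubElement(member, "name").text = name
    ET.SubElement(member, "ipAddress").text = ip
    ET.SubElement(member, "port").text = "80"
    ET.SubElement(member, "monitorPort").text = "80"

# Virtual server tying the profile and pool to the LIF address (steps 13-15).
vs = ET.SubElement(lb, "virtualServer")
ET.SubElement(vs, "name").text = "VS-101-10-WEB"
ET.SubElement(vs, "ipAddress").text = "10.101.10.100"
ET.SubElement(vs, "protocol").text = "tcp"
ET.SubElement(vs, "port").text = "80"

payload = ET.tostring(lb, encoding="unicode")
print(payload)
# Applied with something along the lines of (edge ID assumed):
# requests.put("https://nsx-manager/api/4.0/edges/edge-1/loadbalancer/config",
#              data=payload, auth=("admin", password),
#              headers={"Content-Type": "application/xml"})
```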
In the next post we’ll cover the second flavour of NSX Edge load balancer, In-Line mode (aka, Transparent mode) and, in future posts, we’ll look at use cases for both, as well as troubleshooting tips.
Yesterday, Tuesday 16th October, saw the much-anticipated release of VMware’s vSphere 6.7 Update 1. However, shortly after the announcement, a number of Veeam users decried the release due to compatibility issues with Veeam’s Backup & Replication suite. None other than Veeam’s Anton Gostev first announced the issue with the below tweet:
Looks like vSphere 6.7 Update 1 completely breaks backups, so please avoid updating until further notice. I must say I really miss those times when we didn’t even have to test vSphere updates, and literally supported them by default because they never broke anything – for years!
The very next day, the Veeam team announced a workaround in the form of Veeam KB2784, as well as ‘out-of-the-box’ support being included in the highly awaited (and much-delayed) next release, Update 4.
vSphere 6.7 U1 compatibility issue has been researched, and the simple workaround is now available for use in test labs. Official out-of-the-box support for vSphere 6.7 U1 will be included in Update 4. See this Veeam forums topic for more details > https://t.co/BNgMNWDOmS
Where the fault lies with such release/compatibility issues is not the goal of this post (though Twitter seems far more focused on that). However, with a high number of pros likely to be raising internal changes to upgrade their vCenter(s) and ESXi hosts, you’ll want to implement the Veeam workaround in line with this upgrade, along with a number of solid backup/restore tests.
Not only will this year mark my first ever visit to VMworld Europe, I’ll also be taking part in a Customer Panel session.
If you are interested in hearing about my VMware NSX Data Center journey – how we implemented and operationalised NSX, and how NSX continues to increase security and application performance while simplifying troubleshooting and improving network provisioning time – then join me on Thursday 8th November at 12:00-13:00 to hear more.