

South West UK VMUG – March 2019 – VMware NSX and Micro-Segmentation from the Field

Reading Time: 3 minutes

That was a blast! On Wednesday 20th March I had the pleasure of speaking at the South West UK VMUG, held at the Bristol and Bath Science Park. My biggest thanks to VMUG Leaders Jeremy Bowman, Simon Eady, Barry Coombs, and Megan Warren for such a great opportunity, and to all who attended my session. This was my first time speaking at a VMUG, and despite the nerves, I really enjoyed it.

My session focused on VMware NSX Data Centre for vSphere and, more specifically, the micro-segmentation of applications with the aid of the NSX Application Rule Manager (based around my previous article). I opted not to perform a live demo during my very first speaking slot, but instead produced a recording in which I walked the group through how to utilise the NSX Application Rule Manager to identify application dependencies, endpoints, and services/ports/protocols when implementing a zero-trust environment.


VMware NSX-T 2.4 – ‘A Landmark Release’

Reading Time: 3 minutes

Today saw the release of VMware NSX-T 2.4, the latest and greatest, lauded as a ‘landmark release’ for the product.

Since its initial release in February 2017, NSX-T has focused on addressing organisational requirements to support cloud-native applications, bare metal workloads, multi-hypervisor environments, and public clouds. With the release of NSX-T 2.4, we can now add multi-clouds to the list.

NSX-T delivers security to diverse endpoints such as VMs, containers, and bare metal, as well as a range of cloud platforms and cloud-native projects including Kubernetes, VMware PKS, Pivotal Application Service (PAS), and Red Hat OpenShift.

With NSX-T 2.4, VMware are able to deliver further advancements in networking, security, automation, and ‘operational simplicity for everyone’, including IT admins, DevOps teams, and developers. As such, NSX-T is an enabler for customers embracing cloud-native application development, expanding use of public cloud, and those who require automation to drive agility.


South West UK VMUG – 20th March 2019

Reading Time: 2 minutes

The first South West UK VMUG will be taking place on Wednesday 20th March 2019 at the Bristol and Bath Science Park, an event which also marks my first time presenting at a VMUG. No pressure, but I will be following a session by fellow vExpert, Chris Lewis (no relation).

My session will be covering VMware NSX Data Centre for vSphere (NSX-V) and, more specifically, the reality of managing a zero-trust environment for true micro-segmentation of services. NSX itself makes this fairly easy thanks to a number of tools (Application Rule Manager being just one), however, there are always a number of human variables which need to be acknowledged and identified along the way.



VMware NSX Data Center for vSphere 6.4.4 Released

Reading Time: 3 minutes

In what was a slightly quiet announcement, VMware NSX Data Center for vSphere 6.4.4 was released just two days ago, on Thursday 13th December 2018. I’m unsure why this was such a hushed release, as there are a number of cool items to shout about.

Other than the usual resolved issues, 6.4.4 brings a much-awaited functionality update. Specifically, we can now manage Logical Switches, Edge appliances, Edge services (DHCP, NAT), Edge certificates, and Edge grouping objects, all from the HTML5 vSphere Client. Until 6.4.4, these features were only available via the legacy Flex vSphere Web Client, forcing NSX administrators to jump between the two different consoles.


VMware NSX Data Centre – Application Rule Manager

Reading Time: 7 minutes

With the release of VMware NSX 6.3.0 back in February 2017, we saw the introduction of the Application Rule Manager (ARM). The Application Rule Manager allows us to a) simplify the process of creating grouping objects and distributed firewall rules for the micro-segmentation of existing workloads, and b) deploy applications within a zero-trust environment with greater speed and efficiency.


VMware NSX Edge Load Balancers: Part 2 – In-Line/Transparent Mode

Reading Time: 7 minutes

In Part 1 we looked at the deployment of the NSX Edge load balancer in One-Armed/Proxy mode. As detailed, this flavour of NSX Edge load balancer requires nothing from its back-end server pool members, and enables us to quickly and easily add a load balancer to an existing network segment which houses a number of proposed back-end servers.

In-Line/Transparent Mode

In this second post we take a look at the alternative load balancer mode – In-Line/Transparent mode. First of all, unlike the One-Armed/Proxy mode, In-Line load balancers require two logical interfaces (LIFs); one Uplink LIF (connected to either a DLR or upstream Edge) and one Internal LIF. The Internal LIF is directly connected to the network segment housing the back-end servers requiring load-balancing. In addition to this (and unlike the One-Armed/Proxy load balancer), In-Line load balancers are required to act as the default gateway for all back-end servers.


VMworld 2018 Europe – Customer Panel on NSX Data Center (NET3042PE)

Reading Time: 2 minutes

At this year’s VMworld Europe in Barcelona I was invited to take part in a Customer Panel session which saw me detailing my story with VMware NSX Data Centre. Two other panellists and I covered three very different use cases and stories, and I loved hearing their challenges, why they chose to implement NSX Data Centre and, of course, their implementation road-maps, hurdles, and successes.

Despite a few nerves, I’m quite happy with my story, and it was a load of fun! A huge thanks to all who attended the session, as well as to those who stayed behind afterwards to discuss their own NSX challenges with me (next time I might need a whiteboard). It was great speaking to you all, and I hope you all took something away from the session.

That was a blast! Thanks for having me, VMworld!

To replay the session, simply visit Customer Panel on NSX Data Center (NET3042PE). To view the full range of on-demand videos, visit https://videos.vmworld.com/global/2018.

VMworld 2018 NSX Data Centre Panel - Members

VMworld Europe 2018 - Gareth Lewis

VMware NSX Edge Load Balancers: Part 1 – One-Armed/Proxy Mode

Reading Time: 7 minutes

Welcome to the first in a series of posts covering VMware NSX Edge load balancers. These posts will dive into the two main flavours – ‘One-Armed’ and ‘In-Line’ – and cover use cases for each.

NSX Edge load balancers allow us to distribute incoming requests across a number of servers (aka – members) in order to achieve optimal resource utilisation, maximise throughput, minimise response time, and avoid application overload. NSX Edges allow load balancing up to Layer 7.

One-Armed/Proxy Mode

In this first post, we deploy an NSX Edge, enable the load balancer feature, and configure it in One-Armed mode (aka – Proxy, SNAT, non-transparent mode). This One-Armed/Proxy mode is the simplest of the two deployments, and utilises a single internal Logical Interface (LIF) (i.e. – it’s ‘one arm’).

This flavour of NSX Edge load balancer utilises its own IP address as the source address to send requests to back-end/member servers. The member servers see this traffic as originating from the load balancer and not the client and, as a result, all responses are sent directly to the load balancer. So, nice and simple, and this is usually my go-to solution where I have a requirement to load balance across a number of member servers for resilience.
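For readers more at home with Linux networking, this proxy-mode behaviour is conceptually the same source-NAT you would get from an iptables rule. The sketch below is purely illustrative (the interface name and address are assumptions from this lab's topology) and is not how NSX implements it:

```shell
# Illustrative only: traffic leaving towards the member servers has its
# source rewritten to the load balancer's own address (10.101.10.100),
# so the members reply to the load balancer rather than to the client.
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 10.101.10.100
```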

Topology

In this article we have a basic topology consisting of an NSX Edge load balancer (LB-101-10-WEB / 10.101.10.100) and two back-end/Member servers (101-10-WEB01 / 10.101.10.11 and 101-10-WEB02 / 10.101.10.12), all of which are housed on the same Logical Switch (10.101.10.0/24).

Configure NSX One-Armed Load Balancer
NSX Edge Load Balancers: Part 1 – One-Armed/Proxy Mode

Note, this article assumes your Logical Switches are already in play, traffic is able to route directly to each of the back-end servers, and you have created the necessary NSX Distributed Firewall rules. In this example, I will be configuring the NSX Edge load balancer to pass HTTP traffic to the back-end/Member servers.

NSX Edge – Deployment

1. Create a new NSX Edge Services Gateway. Note, for my lab environment I will not enable High Availability. When ready, click Next.

Configure VMware NSX One-Armed Load Balancer

2. Configure the CLI credentials and click Next.

Configure VMware NSX One-Armed Load Balancer

3. Configure the appliance size and resources. Again, for lab purposes, the Compact appliance size is appropriate. When ready, click Next.

Configure VMware NSX One-Armed Load Balancer

4. Next up, we need to configure a single (one-armed) interface. Click the + button to begin.

Configure VMware NSX One-Armed Load Balancer

5. Give the interface a name, select Internal, and connect it to the same Logical Switch which houses both back-end web servers. Assign a primary IP address (this will be used as the load balancer’s virtual IP address) and, when ready, click OK.

Note – 10.101.10.100 has been assigned to the internal LIF and will be utilised in a future step as the virtual IP address of our new application pool. Additional/secondary IP addresses can be added and assigned to additional application pools (more on this in a later step), meaning one load balancer is capable of load balancing multiple applications.

Configure VMware NSX One-Armed Load Balancer

6. Confirm the configuration and click Next.

Configure VMware NSX One-Armed Load Balancer

7. As the NSX Edge will not have an Uplink LIF, we will not be able to configure a default gateway. Click Next.

Configure VMware NSX One-Armed Load Balancer

8. For lab purposes, I will not configure any firewall policies. Also, as we are not deploying the appliance in HA mode, all HA parameters will be greyed-out. Click Next.

Configure VMware NSX One-Armed Load Balancer

9. Confirm the NSX Edge configuration, and click Finish to deploy.

Configure VMware NSX One-Armed Load Balancer

NSX Edge – Routing

Here in Lab World, I don’t have OSPF/BGP configured, so we’ll create a static route to enable traffic to flow upstream. Looking at the topology a little more closely, you’ll note the NSX Edge load balancer has a next hop of 10.101.10.254 (the internal LIF of the DLR).

Configure VMware NSX One-Armed Load Balancer
Configure a VMware NSX Edge Static Route

To configure the static route, simply jump into the configuration console of the newly created NSX Edge, browse to Manage > Routing > Static Routes, and click +. Configure accordingly and click OK.

Configure VMware NSX One-Armed Load Balancer

NSX Edge – One-Armed Load Balancer Configuration

Now that our new NSX Edge has been deployed, we will enable the load balancer feature and configure it in One-Armed/Proxy mode.

1. Browse to Manage > Load Balancer > Global Configuration and click Edit.

Configure VMware NSX One-Armed Load Balancer

2. Ensure Enable Load Balancer is ticked, and click OK.

Configure VMware NSX One-Armed Load Balancer

3. Browse to Manage > Load Balancer > Application Profiles and click +.

Application Profiles – An Application Profile is used to define the behaviour of a particular type of network traffic, and is associated with a virtual server (virtual IP address). The virtual server then processes traffic according to the values specified in the Application Profile. This allows us to perform traffic management tasks with greater ease and efficiency.

Configure VMware NSX One-Armed Load Balancer

4. As mentioned at the start of this post, we are only interested in load balancing for resilience. As such (and as detailed below), we will set the Application Profile Type to TCP.

Configure VMware NSX One-Armed Load Balancer

5. Confirm creation of the new Application Profile.

Configure VMware NSX One-Armed Load Balancer

6. Browse to Manage > Load Balancer > Pools and click +.

Pools – A Pool is simply a group of back-end servers (aka, Members), and is configured with a load-balancing distribution method/algorithm. A service monitor (optional) can also be configured and, as this suggests, is used to perform health checks on its Members.
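The default distribution method, Round Robin, simply cycles incoming requests across the Pool Members in order. A minimal shell sketch of the idea, using this lab's member IPs (`pick_member` is a hypothetical helper, not an NSX command):

```shell
# Round robin: each call returns the next member in the list, wrapping
# around once the end is reached.
members="10.101.10.11 10.101.10.12"
i=0
pick_member() {
  set -- $members            # split the member list into $1, $2, ...
  idx=$(( i % $# + 1 ))      # 1-based index of the next member
  i=$(( i + 1 ))
  eval "echo \${$idx}"
}
pick_member   # 10.101.10.11
pick_member   # 10.101.10.12
pick_member   # 10.101.10.11 (wrapped around)
```

The real Edge additionally consults the configured Monitor, skipping any Member whose health check has marked it DOWN.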

Configure VMware NSX One-Armed Load Balancer

7. Give your new Pool a Name and Description, and choose its distribution method/Algorithm and Monitors.

Configure VMware NSX One-Armed Load Balancer

8. When ready, click + to add your back-end/member servers. For this, either click Select to choose a vSphere Object, or simply type the destination’s IP address.

Configure VMware NSX One-Armed Load Balancer

9. Define the Port (in this instance I am load-balancing HTTP/80 traffic), as well as the Monitor Port (here I use port 80 again). When done, click OK.

Configure VMware NSX One-Armed Load Balancer

10. Confirm your configuration by clicking OK.

Configure VMware NSX One-Armed Load Balancer

11. Confirm creation of the new Pool.

Configure VMware NSX One-Armed Load Balancer

12. Check your newly created Pool’s health status by clicking Show Pool Statistics. The Status of both the Pool and its Members should show UP.

Configure VMware NSX One-Armed Load Balancer

13. Browse to Virtual Servers and click +.

Configure VMware NSX One-Armed Load Balancer

14. From the Application Profile drop-down menu, select the recently created Application Profile, give the Virtual Server a Name and Description, and click Select IP Address to choose the IP address we allocated to the internal LIF when we deployed the Edge.

Configure VMware NSX One-Armed Load Balancer

15. Lastly, set the Protocol to TCP, Port/Port Range to 80, and set the Default Pool to the pool we created in step 6.

Configure VMware NSX One-Armed Load Balancer

16. Confirm creation of the new Virtual Server.

Configure VMware NSX One-Armed Load Balancer

17. Finally, browse to the Virtual Server IP address to confirm load-balancing to each of the Pool Members is successful. In the below screenshot, traffic is routed to the VM, 101-10-WEB01.

Configure VMware NSX One-Armed Load Balancer

18. After refreshing the browser, I am directed to 101-10-WEB02.

Configure VMware NSX One-Armed Load Balancer
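The same round-robin check can be done from the CLI rather than the browser. Assuming (as in this lab) each back-end's test page contains its own server name, a small hypothetical helper can extract which Member served each response; the VIP and server names below are this lab's values:

```shell
# Extract the serving member's name from a response body.
served_by() {
  grep -o '101-10-WEB0[12]' | head -n 1
}

# Fire a few requests at the virtual server and watch the members alternate:
# for i in 1 2 3 4; do curl -s http://10.101.10.100/ | served_by; done
```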

Conclusion

In the next post we’ll cover the second flavour of NSX Edge load balancer, In-Line mode (aka, Transparent mode) and, in future posts, we’ll look at use cases for both, as well as troubleshooting tips.


VMworld 2018 Europe – Customer Panel on NSX Data Center (NET3042PE)

Reading Time: < 1 minute

Not only will this year mark my first ever visit to VMworld Europe, I’ll also be taking part in a Customer Panel session.

If you are interested in hearing my VMware NSX Data Center journey – how we implemented and operationalised NSX, and how NSX continues to increase security and application performance while simplifying troubleshooting and improving network provisioning time – then join me on Thursday, 8th November at 12:00-13:00 to hear more.

To register for the session, simply visit the VMworld 2018 Europe Content Catalogue – Customer Panel on NSX Data Center (NET3042PE).

VMworld Europe 2018

Deploying ‘Lab-Friendly’ NSX Controllers

Reading Time: 5 minutes

You don’t have to have an enterprise-grade lab environment to run VMware NSX Data Center for vSphere. For those who neither wish to house half a rack of servers, storage, and enterprise networking kit at home, nor wish to incur the wrath of their energy company for the privilege, a single desktop/laptop with appropriate compute and storage is more than capable of handling NSX.

However, there are obvious limitations to this style of lab environment and, as you’re reading this, I’m guessing you’ve been unable to deploy an NSX Controller (likely due to its CPU requirements). By default, NSX Controllers are deployed with 4 vCPU and 4 GB memory. This is likely too high a requirement to be accommodated in smaller lab environments and, as a result, NSX Controller deployments will fail.

Deploying Lean NSX Controllers in a Lab Environment Controller Deployment Fail
NSX Controller Deployment Failed – No host is compatible with the virtual machine.

Challenge – We are unable to specify NSX Controller resources during deployment via the UI. As NSX Controllers are also Protected VMs, we are equally unable to alter their resources via the UI in the short window between the template being deployed and the VM being deleted following the ‘No host is compatible with the virtual machine’ error.

Solution – We will a) remove the lock from the protected NSX Controller, and b) apply a more ‘lab-friendly’ resource configuration to the NSX Controller via PowerCLI.

Okay, so I’m guessing you’ve already attempted to deploy an NSX Controller. No problem, we just need to identify the failed entity’s Managed Object Reference ID (moRef ID). For more information, see my previous post regarding moRef IDs.

vCenter is quite predictable in that all newly created entities are assigned moRef IDs incrementally. Identifying the moRef ID of the previously failed NSX Controller (Stage 1) will allow us to delete the next moRef ID (which will remove the Protected VM lock) and, subsequently, enable us to reconfigure the NSX Controller’s resources via PowerCLI before the VM is powered on (Stage 2).
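Since moRef IDs are assigned incrementally, the ID to watch for is simply the failed Controller's ID plus one. A throwaway helper (hypothetical, shown here only to make the arithmetic explicit):

```shell
# Given the failed Controller's moRef ID, predict the ID the next
# deployment attempt will receive (e.g. vm-41 -> vm-42).
next_moref() {
  prefix=${1%-*}             # everything before the last '-', e.g. 'vm'
  num=${1##*-}               # the numeric suffix, e.g. '41'
  echo "${prefix}-$(( num + 1 ))"
}
next_moref vm-41   # vm-42
```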

Please note, as stated in my previous post, VMware do not support or recommend this procedure in any way. As such, this procedure should not be implemented in a production environment.

Stage 1 – Identify Failed NSX Controller moRef ID

1. Connect to your vCenter Server via SSH.

2. Enable and enter Bash.

shell.set --enable True
shell

3. Connect to the vCenter Postgres Database via PSQL.

/opt/vmware/vpostgres/current/bin/psql -U postgres

4. Connect to the VCDB.

\connect VCDB

5. Identify and note the moRef ID of the failed NSX Controller. In my case, this is ‘vm-41’ (see below screenshot).

select * from VPX_DISABLED_METHODS;
Deploying Lean NSX Controllers in a Lab Environment Shell 01
The previous NSX Controller creation attempt. Note the MO_ID (vm-41).

Stage 2 – Remove Future NSX Controller Protected VM Lock and Reconfigure VM Resource via PowerCLI

Identification done, we now need to prepare for the next stage – the deployment of a new NSX Controller, the removal of the Protected VM lock via SSH, and the reconfiguration of its resources via PowerCLI.

For this we’ll need to set up two commands in readiness, both of which must be run at specific stages of the NSX Controller ‘Deploy OVF template’ task.

1. Remove Protected VM Lock via SSH – Jump back into your previous SSH session and ready the below command (but don’t run it yet), configured with the ‘next in line’ moRef ID (in my case ‘vm-42’). This will remove the Protected VM lock at the end of the OVF template deployment, and will allow PowerCLI to jump in and reconfigure the VM just before it is powered on.

delete from VPX_DISABLED_METHODS where entity_mo_id_val = 'vm-42';

2. Reconfigure NSX Controller Resources via PowerCLI – Launch PowerCLI, connect to your VCSA, and ready the below command (but don’t run it yet). This is the NSX Controller resource configuration change. For my lab environment, 1x vCPU and 1 GB of memory is fine. Note, ‘NSXCV0’ is the start of my NSX Controller name. Configure yours accordingly.

Get-VM -Name NSXCV0* | Set-VM -NumCPU 1 -MemoryMB 1024

3. With both commands prepared, do not run them yet. They will be run AFTER the NSX Controller deployment has started.

Deploying Lab Friendly NSX Controllers Command Preparation
Protected VM Lock Removal and PowerCLI Resource Configuration commands prepared.

4. Jump back into your vSphere Client and create a new NSX Controller.

5. At specific stages of the Deploy OVF Template task, run the prepared commands detailed above.

  • ~60% – Reconfigure NSX Controller Resources via PowerCLI. This will queue until it’s able to run (i.e. following the removal of the Protected VM lock).
  • ~98% – Remove Protected VM Lock via SSH. Run this command repeatedly from 98%, and until you receive the ‘DELETE 1’ feedback. Don’t hold back on this step! Repeat the command as you would mash your keyboard to enter BIOS.

And, hey presto! From the below screenshots we can see the Protected VM lock has been removed successfully, allowing the PowerCLI command to complete, resulting in a reconfigured NSX Controller.

Deploying Lean NSX Controllers in a Lab Environment Shell Remove VM Lock
Protected VM Lock Removed after repeatedly running the delete command via SSH at ~98%.
Deploying Lean NSX Controllers in a Lab Environment PowerCLI Set-VM
Allocating a more ‘lab-friendly’ resource configuration via PowerCLI.

Via the vSphere Client, we can see the Deploy OVF template, Reconfigure virtual machine, and Power On virtual machine tasks were able to complete successfully.

Deploying Lean NSX Controllers in a Lab Environment Tasks Complete

And below, our shiny, new, ‘lab-friendly’ NSX Controller.

Deploying Lean NSX Controllers in a Lab Environment NSX Controller
The ‘Lab-Friendly’ NSX Controller.
Deploying Lean NSX Controllers in a Lab Environment NSX Controller VM
…and confirmation of its more appropriate resource allocation.

References

During my research for this article, I came across the below guides, without which the above would not have been possible. Props.