

VMware vSphere 6.5: Migration from Windows vCenter to vCenter Server Appliance

Following on from my previous posts (What’s New in vSphere 6.5 and VMware VCSA 6.5: Installation & Configuration), a major area of discussion (and excitement) is the VMware Migration Assistant which, should you wish, can easily migrate you away from the Windows-based vCenter Server to the Linux-based vCenter Server Appliance (VCSA).

There are pros and cons to the vCenter appliance of course, as well as a healthy number of supporters in each camp. However, if you fancy shaving some licensing costs (Windows Server and SQL Server), would like to enjoy a faster vSphere experience (since 6.0), or would simply like to be able to take a quick backup of vCenter without having to snapshot both the Windows and SQL Server elements (or utilise your backup product of choice to take a full image of your environment), you might just want to take the VCSA for a spin.

This post will detail the migration process of a Windows-based vCenter 6.0.0 U2 to vCenter Server Appliance 6.5.


Migration Process

1. Via the Windows Server hosting vCenter Server, mount the VCSA installation media, and launch the VMware Migration Assistant (\migration-assistant\VMware-Migration-Assistant.exe). It is imperative that the Migration Assistant is left running throughout the entire migration process; if it is stopped at any stage, the migration will need to be restarted from scratch.

2. Leave the assistant running and, via a management workstation, server, etc., mount the VCSA installation media and launch the vCenter Server Appliance Installer (path). Click Migrate to start the process.

3. Click Next.

4. Accept the EULA and click Next.

5. Enter the details and SSO credentials for the source Windows vCenter Server (i.e. – the one which is currently running the Migration Assistant…it is still running, right?) Once complete, click Next.

6. Verify the certificate thumbprint and accept by clicking Yes.

7. Specify a target ESXi host or vCenter Server and SSO credentials. Here, I have specified my vCenter Server, still managing my lab environment. Once complete, click Next.

8. Verify the certificate thumbprint and accept by clicking Yes.

9. Specify a destination VM Folder where your new vCenter Server Appliance will be created.

10. Specify the compute resource destination. Here, I have chosen a generic compute cluster, and I’ll leave the rest to DRS.

11. Configure the new target appliance with a VM name and root credentials.

12. Choose your deployment size. For my lab environment, and for this article in particular, I’ve opted for a ‘Tiny’ deployment.

13. Specify a target datastore to house the appliance, and choose whether to enable thin disk provisioning.

14. Configure the network settings accordingly. Here, my VCSA will be housed on a vSphere Distributed Switch port group (vDS_VL11_Servers). The temporary TCP/IP configuration will be removed during the finalisation of the migration process, as the original IP configuration will follow the migrated appliance.

15. Review your configuration and click Finish.

16. The migration will now begin and you will be able to track the process via a number of updates.

17. Throughout the migration process, you will note the new appliance being deployed via vSphere, as per the below screenshots.

18. Stage 1 is now complete. To start Stage 2, click Continue.

19. Click Next.

20. Following pre-migration checks, you will be prompted to specify AD user credentials. Once complete, click Next.

21. Choose what data you wish to migrate, and click Next.

22. Opt in/out of the CEIP and click Next.

23. Review your configuration and click Finish, but ensure you have a backup of your vCenter server and its database before proceeding. You have been warned!
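If your vCenter database runs on SQL Server, a quick ad-hoc backup can also be taken from a command prompt on the database server. A minimal sketch, assuming the default VCDB database name and the bundled VIM_SQLEXP instance (the instance, database name, and path are all examples to adjust for your environment):

sqlcmd -S .\VIM_SQLEXP -Q "BACKUP DATABASE VCDB TO DISK='C:\Backups\VCDB.bak' WITH INIT"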

24. Click OK to acknowledge the Shutdown Warning.

25. Migration of the Windows Server-based vCenter Server to vCenter Server Appliance will now begin.

26. The transfer process will now begin and will progress through the below three steps. You might want to grab a cup of coffee (or three) at this stage while the migration progresses.

27. Once complete, we’re done. Log in to the vCenter Server Appliance and away you go.

VMware vCNS 5.5.4 to NSX 6.2.5 Upgrade


I’m a fan of upgrades that ‘just work’, but rarely do they run without a few unforeseens jumping out at you. Reading the VMware KB Upgrading VMware vCNS 5.5.x to NSX 6.2.x (2144620), I was surprised to see just five upgrade areas. Five? Really?? As this is a business-critical system (and one with the potential to turn a long day into an even longer day were things to go awry), I was a little sceptical. However, the vCNS to NSX upgrade process really is that easy.

VMware recommend the below implementation path when upgrading to NSX from vCNS, and if you’re not utilising any advanced features such as the vCNS Edges, you can cut this process down to just the first three steps.

  1. vCNS/NSX Manager
  2. Host Clusters and Virtual switches/wires
  3. vShield App (replaced by NSX Distributed Firewall)
  4. vShield Edge
  5. vShield Endpoint

Stick with me, I know you think I’m lying…

Scenario

So, a requirement exists whereby I need to replace a VMware vCNS 5.5.4 environment with VMware NSX 6.2.5 due to the former going end-of-life in Q4 2016. As I see it, I have two options: a) install NSX and migrate the vCNS workload to new compute hardware, or b) upgrade vCNS in-place. As there aren’t any spare hosts lying around, we’ll be progressing with option b), the in-place upgrade.

Note, configuration of NSX, as well as integration with AD Security Groups, will be covered in a future post.

Prerequisites

Okay, so there are some prerequisites (when would there not be?). Before initiating the upgrade process, you will need to ensure the below checklist has been completed:

  1. The physical network must be configured with a minimum MTU of 1600 due to the VXLAN overlay (a quick verification sketch follows this list).
  2. As the NSX vSwitch is based upon the vSphere Distributed Switch (vDS), if you’re currently running standard virtual switches, you’ll need to migrate to vDS first.
  3. Ensure your backups have run successfully.
  4. Take snapshots of both the vCenter and vCNS Manager VMs.
  5. vCNS Manager – replace the e1000 network adapter with a VMXNET3 adapter.
  6. vCNS Manager – configure with at least 16GB RAM.
  7. vCNS Manager – create a Tech Support Bundle.
  8. Download the relevant vCNS to NSX Upgrade Bundle.
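Prerequisite 1 can be validated before you begin: from an ESXi shell, vmkping can send a non-fragmenting frame sized to the target MTU. A minimal sketch, assuming an example VMkernel interface (vmk1) and target IP; 1572 bytes of ICMP payload plus 28 bytes of headers equals 1600, and -d sets don’t-fragment:

vmkping -I vmk1 -d -s 1572 192.168.30.102

If this fails while a standard vmkping succeeds, the physical path is dropping the larger frames, and the MTU needs addressing before the upgrade.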

Upgrade vCNS 5.5.4 to NSX 6.2.5

1. First of all, we will need to download the Upgrade Bundle from VMware. Log in to your MyVMware account and download it.

2. Next, log in to vCNS Manager, and browse to Settings & Reports > Updates.

3. Click Upload Upgrade Bundle, and upload the bundle we downloaded in Step 1.

4. Once uploaded, review the version details, and click Install.

5. When requested, click Confirm Install, and monitor the progress as per the below screenshots.

6. Monitor the reboot process via the appliance’s console and, once complete, we can proceed.

7. Following the reboot, browse to the previous vCNS Manager FQDN (https://server_name.domain.local), and you will be presented with the new NSX Manager. Note, the default admin credentials will have changed as part of the upgrade process:

  • Username – admin
  • Password – default

8. Log in using the new credentials and ensure the NSX Management Service is running before proceeding. Note, this is a lab environment, hence the 4GB RAM.

9. Browse to Manage > NSX Management Service. In the Lookup Server URL section, click Edit to configure.

10. For this lab environment, I am configuring the lookup service to utilise vSphere SSO which, in this instance, integrates with my vCenter Server.

11. When prompted, accept the SSL certificate.

12. Ensure the Status for both Lookup Server URL and vCenter Server shows Connected.
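If you prefer to sanity-check the registration outside the UI, NSX Manager also exposes its vCenter configuration via the REST API. A minimal sketch using curl, assuming the NSX-v /api/2.0/services/vcconfig endpoint (the hostname is an example; -u admin will prompt for the password, and -k skips validation of the self-signed certificate):

curl -k -u admin https://nsxmanager.domain.local/api/2.0/services/vcconfig

A 200 response containing your vCenter details is a good indication that the Manager-to-vCenter link is healthy.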

13. After logging in to the vSphere Web Client as administrator@vsphere.local (we’ll configure NSX users and groups via Active Directory in a later post), you’ll now be able to see the new Networking & Security tab.

14. As this procedure details an upgrade from vCNS to NSX, browse to Networking & Security > Firewall and you will happily see that all vCNS Firewall rules have been retained.

At this point we will need to apply licensing, upgrade the ESXi host VIBs, and upgrade the vCNS Firewall to the new NSX Distributed Firewall. Until this takes place, no firewall amendments will be seen by the ESXi hosts.

Licensing

1. Using the vSphere Web Client, browse to Administration > Licensing > Licenses, and click Add (+).

2. When prompted, enter your license keys, and click Next. 

3. Confirm your license key information, amend the names where required, and click Next.

4. Review your license information and click Finish.

5. Browse to Administration > Licenses > Assets > Solutions, and assign the new license by clicking the Assign icon.

6. Select the newly added license, and click OK.

Host Preparation

1. Browse to Networking & Security > Installation > Host Preparation.

2. Select the cluster you wish to upgrade, and click Actions > Upgrade.

3. As part of the upgrade process, note the below tasks as hosts and VMs are reconfigured.
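If you’d like to confirm that the new kernel modules have actually landed on a host, they can be listed from an ESXi shell. A minimal sketch, assuming the NSX 6.2 VIB naming (esx-vxlan, esx-vsip, esx-dvfilter-switch-security); adjust the pattern if your build differs:

esxcli software vib list | grep -E 'esx-vxlan|esx-vsip|esx-dvfilter'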

4. Once the Host Preparation is complete, you will be requested to finalise the upgrade from vShield App Firewall to NSX Distributed Firewall. When prompted, click Upgrade.

5. After the migration has finished, browse to Networking & Security > Service Definitions, and remove the now legacy vShield-App-Service.

6. If you have any Edges in play, simply browse to NSX Edges, right-click the Edge in question, and choose Upgrade Version.

This concludes the upgrade of VMware vCloud Networking & Security 5.5.4 to VMware NSX 6.2.5. In a future post, we will cover the configuration of NSX itself, as well as the management of NSX via AD Groups.

VMware Product Walkthroughs

A great new range of informational overviews is available via the VMware Product Walkthroughs website, covering everything from product overviews (vSphere 6.5, vRealize Network Insight, and more) to specifics such as vSphere 6.5 Encrypted vMotion, NSX VXLAN Configuration, and Virtual SAN Fault Domains. Great on so many levels, enabling us to up-skill and dry-run new products, demonstrate solutions to management and technical teams, etc.

Visit the parent website at https://featurewalkthrough.vmware.com.

Good job VMware.

VMware vSphere: Locked Disks, Snapshot Consolidation Errors, and ‘msg.fileio.lock’

A recurring issue, this one, and usually due to a failed backup. In my case, it was down to a failed Veeam Backup & Replication disk backup job which had, effectively, failed to remove its delta disks following a backup run. As a result, a number of virtual machines reported disk consolidation alerts and, due to the locked vmdks, I was unable to consolidate the snapshots or Storage vMotion the VMs to a different datastore. A larger and slightly more pressing concern (due to the size and number of delta disks being held) was that the underlying datastore had blown its capacity, taking a number of VMs offline.

So, how do we a) identify the locked file, b) identify the source of the lock, and c) resolve the locked vmdks and consolidate the disks?

Disk consolidation required.
Manual attempts at consolidating snapshots fail with either DISKLOCKED errors…
…and/or ‘msg.fileio.lock’ errors.
Storage vMotion attempts fail, identifying the locked file.

Identify the Locked File

As a first step, we’ll need to check the hostd.log to try and identify what is happening during the above tasks. To do this, SSH to the ESXi host running the VM in question, and tail the hostd.log:

tail -f /var/log/hostd.log

While the log is being displayed, jump back to either the vSphere Client for Windows (C#) or vSphere Web Client and re-run a snapshot consolidation (Virtual Machine > Snapshot > Consolidate). Keep an eye on the hostd.log output while the snapshot consolidation task attempts to run, as any/all file lock errors will be displayed. In my instance, the file-lock error detailed in the Storage vMotion screenshot above is confirmed via the hostd.log output (below), and clearly shows the locked disk in question.

File lock errors, detailed via the hostd.log, should be fairly easy to spot, and will enable you to identify the locked vmdk.
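Rather than waiting for a live repeat of the error, the same information can usually be pulled from the existing log. A minimal sketch (the search strings are simply examples of what to look for):

grep -iE 'lock|fileio' /var/log/hostd.log | tail -n 20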

Identify the Source of the Locked File

Next, we need to identify which ESXi host is holding the lock on the vmdk by using vmkfstools.

vmkfstools -D /vmfs/volumes/volume-name/vm-name/locked-vm-disk-name.vmdk

We are specifically interested in the ‘RO Owner’ line which, in my case, shows both the lock itself and the MAC address of the offending ESXi host (ending ‘f1:64:09’).


This MAC address can then be used to identify the offending ESXi host via vSphere.

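Alternatively, if trawling vSphere for the MAC address is a chore, each host’s VMkernel adapters can be listed from an ESXi shell (the lock owner is usually the management interface, typically vmk0):

esxcfg-vmknic -l

Run this on each suspect host until the listed MAC matches the one reported by vmkfstools.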

Resolve the Locked VMDKs and Consolidate the Disks

Now the host has been identified, place it in Maintenance Mode and restart its Management Agent/host daemon (hostd) via the below command.

/etc/init.d/hostd restart
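On the ESXi builds I’ve used, the same init script also supports a status check, which is handy for confirming the daemon state either side of the restart. Note that bouncing hostd briefly disconnects the host from vCenter, but running VMs are unaffected:

/etc/init.d/hostd status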


Following a successful restart of the hostd service, re-run the snapshot consolidation. This should now complete without any further errors and, once complete, any underlying datastore capacity issues (such as in my case) should be cleared.


For more information, an official VMware KB article is available.

VMware vCenter Server Appliance 6.5: Installation & Configuration

Following the general release of VMware vSphere 6.5 last month (see my What’s New in VMware vSphere 6.5 post), I’ll be covering a number of technical run-throughs already in discussion throughout the virtual infrastructure community.

We’ll be starting with a fresh installation of the new and highly improved vCenter Server Appliance (VCSA), followed by a migration from the Windows-based vCenter Server 6.0; the latter task made all the easier thanks to the vSphere Migration Assistant. More on this to come. Lastly, I’ll be looking at a fresh installation of the Windows-based product. Whichever route you take, the experience throughout all of these installation/migration scenarios has been vastly improved.

So, first up then, let’s take a quick look at a fresh installation of the new vCenter Server Appliance, the installation and configuration of which can take just 20 minutes.

1. Log on to a domain-joined server, mount the VCSA installation media, and click Install (more on the Upgrade, Migrate, and Restore options in future posts).

2. Click Next at the Introduction screen.

3. Accept the EULA and click Next.

4. For this installation, we will be deploying the vCenter Server with an Embedded Platform Services Controller. Once done, click Next.

5. Configure the Appliance Deployment Target by entering the target ESXi host, HTTPS port, and user credentials, and click Next.

6. Configure the appliance virtual machine by specifying a VM name and root credentials.

7. Select your deployment size; for this example, Tiny will suffice. Once done, click Next.

8. Select a suitable datastore for the new VM, and click Next.

9. Configure the network settings accordingly, and click Next.

10. Confirm the configuration and click Finish once happy.

11. Stage 1 of the installation (appliance deployment) will now begin.

12. Once installation is complete, click Continue to configure the appliance.

13. Click Next at the Introduction screen.

14. Configure NTP settings and click Next.

15. Complete the vCenter SSO configuration, and click Next.

16. Opt in/out of the VMware Customer Experience Improvement Program and click Next.

17. Review the Summary and click Finish.

18. Stage 2 of the installation (vCenter Server Appliance setup) will now begin.

19. Once complete, you will be presented with the FQDN of your new vCenter Server.

Looking at the console of the VCSA, we are presented with a very familiar grey and blue (instead of grey and yellow) interface. Appliance URLs are visible here, as well as basic management/configuration tasks.


The new vCenter Server Appliance can now be accessed via the default URLs and, depending on your choice of interface (either the new vSphere Client or older vSphere Web Client), there are now two URLs to remember:

  • vSphere Web Client – http://<vcenter_fqdn>/vsphere-client
  • vSphere Client – http://<vcenter_fqdn>/ui
A warm welcome to the fast and sleek HTML 5 vSphere Client.
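If you’d like to confirm the endpoints are responding without opening a browser, a quick curl will do. A minimal sketch (the FQDN is an example; -k skips validation of the appliance’s self-signed certificate, and -I requests headers only):

curl -k -I https://vcsa.lab.local/ui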

Both clients will run in parallel until further notice, but do remember that the new vSphere Client is yet to offer full functionality; VMware state they are working on this area with priority, and I’ll be interested to see how quickly the day-to-day management functionality is added.

Integrating Active Directory with VMware vSphere SSO

One item I see mentioned fairly often, either in relation to personal labs or production environments, is the integration of vSphere SSO with Active Directory. Configuring vSphere’s SSO/AD integration via LDAP is a simple process, more so thanks to vSphere 6.5.
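Before diving into the Web Client, it can be worth confirming that a domain controller is reachable and answering LDAP queries. A minimal sketch using ldapsearch from any Linux host on the same network (the DC hostname, bind account, and base DN are all examples; -W prompts for the bind password):

ldapsearch -H ldap://dc01.domain.local -D 'svc_vsphere@domain.local' -W -b 'dc=domain,dc=local' '(sAMAccountName=administrator)' dn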

1. Log in to the VMware vSphere Web Client using the vCenter Single Sign-On user credentials configured as part of the VMware vCenter Server installation.


2. Browse to Administration > Single Sign-On > Configuration and click the Identity Services tab.


3. Click the Add Identity Source icon, select Active Directory as an LDAP Server, and click Next.


4. Configure the new identity source accordingly and click Next.


5. Confirm the summary and click Finish.


6. Select your new identity source and click the Set as Default Domain icon.


Next, we’ll add an Active Directory Security Group to the vSphere Global Permissions, enabling us to test SSO functionality.

7. Browse to Administration > Access Control > Global Permissions, and click the Add Permission icon.


8. Via the Add Permission wizard, click Add.


9. Select your domain, recently added via the LDAP identity source, and add the required security group.


10. Your added security group will now display, allowing you to log out and back in utilising your domain credentials.



What’s New in VMware vSphere 6.5


With the release of vSphere 6.5 back in October, VMware have finally been able to offer a true HTML 5-based experience via their new vSphere Client (something that has been on the cards for quite a number of years), and I must say, I’m rather (very) pleased. Add to this the fact that the older C# Client has been pushed even closer to the Decommission Bin due to the release of the new ESXi Embedded Host Client (more on this in a future post), things are looking very good indeed.

The brand new ESXi Embedded Host Client offers a much welcomed move away from the legacy C# Client thanks to the new HTML5 and JavaScript UI.

The new vSphere Client will run alongside the older vSphere Web Client and is an inbuilt feature of both Windows and Appliance versions of vCenter Server 6.5. Don’t jump out of your seats just yet, however, as the reason for running the two interfaces in parallel is that the new vSphere Client does not yet offer full functionality. VMware state that their teams are looking to flesh out the new Client with priority, so we hopefully won’t have to wait long. For full functionality, you’ll still be able to access the vSphere Web Client via standard means (http://<vcenter_fqdn>/vsphere-client), with the new vSphere Client accessible via http://<vcenter_fqdn>/ui.

Like the new ESXi Embedded Host Client, the new vSphere Client offers a fantastic HTML 5/JavaScript experience, but is lacking in some functionality at time of writing.

Other features of vSphere 6.5 and the vCenter Server Appliance include a fully integrated vSphere Update Manager; file-based backup and recovery; native VCSA high availability; performance improvements of up to 3x; the HTML5-based web clients outlined above; and security enhancements including VM disk-level encryption and vMotion encryption, as well as the addition of a secure boot model (enabling VMware to now offer ‘Secure Data, Secure Infrastructure, and Secure Access’).

For further details regarding vSphere 6.5, and a full list of the improvements and new functionality, simply visit https://blogs.vmware.com/vsphere/2016/10/whats-new-in-vsphere-6-5-vcenter-server.html.

Testing Network Connectivity Between VMkernel Ports

Configuring VLANs within vSphere is a simple enough task, however, testing outgoing ICMP traffic between hosts is a must when you find yourself unable to communicate with another VMkernel port on another host. Using the vmkping CLI command, we are able to test outgoing traffic via specific VMkernel ports, perfect for those attempting to troubleshoot connectivity issues on different subnets and/or vSwitches.

Testing Basic Network Connectivity

  1. Connect to an ESXi host via SSH.
  2. Via command shell, run the below command (where x.x.x.x is the hostname or IP address of the server that you wish to ping):
    # vmkping x.x.x.x

In my example below, I test connectivity between the Management Networks on two ESXi hosts in my lab. Specifically, I connect to Host A (192.168.20.101) via SSH, and ping Host B (192.168.20.102):

# vmkping 192.168.20.102

Testing Network Connectivity via a Specific VMkernel Port

ESXi 5.1 and later allow us to test outgoing ICMP traffic via a specific VMkernel port by adding the -I switch, followed by vmkX (where X is the VMkernel number):

# vmkping -I vmkX x.x.x.x

In my example below, I test ICMP traffic between two VMkernel ports which have been configured for iSCSI traffic (vmk1 on both hosts). Specifically, I SSH to Host A and test ICMP traffic between the specific VMkernel ports (Host A = 192.168.25.101, Host B = 192.168.25.103):
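# vmkping -I vmk1 192.168.25.103

The -I switch can also be combined with -d (don’t fragment) and -s (payload size) to validate MTU end-to-end on a specific interface, which is handy for iSCSI networks running jumbo frames. A minimal sketch for a 9000-byte MTU path, reusing the interface and IP from the example above (8972 bytes of payload plus 28 bytes of headers):

# vmkping -I vmk1 -d -s 8972 192.168.25.103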
