Sunday, December 1, 2024

RAV migration cutover was failing after sync

 Issue:

HCX RAV migration was failing after the base sync, while cutting over the VM.

Noticed the below error in the HCX Connector app logs.

[... TxId: 4cda256a-ddff-4eb1-816a-aec4ecf15ec7] ERROR c.v.h.s.rav.jobs.RAVSwitchoverJob- [migId=e686435c-2e95-424b-8da5-1eba6a7d1cf8] Error while executing RAVSwitchoverJob state 'DISABLE_REPLICATION'.

2025-01-04 19:15:54.709 UTC [RAVService_SvcThread-12725, Ent: HybridityAdmin, , TxId: 4cda256a-ddff-4eb1-816a-aec4ecf15ec7] ERROR c.v.v.h.m.common.MigrationUtil- [migId=e686435c-2e95-424b-8da5-1eba6a7d1cf8] Job (d288d46b-fdc0-4c9d-9c2c-bfd08f2900d9) failed with exception Method vim.HbrManager.retrieveReplicationConfig threw undeclared fault of type vim.fault.TaskInProgress


The issue occurred for a 14 TB VM: users kept updating files on the source side almost every minute, and with that constant churn HCX disabled the replication and the migration job failed.


Workaround:

Initiated a cold migration and was able to migrate the VM to the cloud.

Wednesday, September 11, 2024

No connection to VR server or not responding - HCX migration base sync not started

Issue:

 The "No Connection to VR Server: Not Responding" indicates a network communication issue between the ESXi host and the vSphere Replication VR Server.






Workaround:

Migrate the VM to another ESXi host to continue the migration.

After moving the VM to another host, the base sync started.
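If you prefer to script the host move, here is a minimal PowerCLI sketch; the vCenter, VM, and host names below are placeholders.

Connect-VIServer -Server "your-vcenter-server"

# Placeholders: pick a host whose connection to the VR server is healthy.
$vm         = Get-VM -Name "your-vm-name"
$targetHost = Get-VMHost -Name "another-esxi-host"

# vMotion the VM to the selected host.
Move-VM -VM $vm -Destination $targetHost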


Tuesday, August 13, 2024

Script: Create affinity and anti-affinity rules using the Azure CLI for Azure VMware Solution

The command below creates a VM placement policy in an Azure VMware Solution private cloud through the Azure CLI. With --affinity-type AntiAffinity the listed VMs are kept on separate hosts; use Affinity instead to keep them together.


az vmware placement-policy vm create \
  --resource-group <resource-group-name> \
  --private-cloud <private-cloud-name> \
  --cluster-name <cluster-name> \
  --placement-policy-name <policy-name> \
  --display-name <policy-display-name> \
  --state Enabled \
  --affinity-type AntiAffinity \
  --vm-members \
    /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.AVS/privateClouds/<private-cloud-name>/clusters/<cluster-name>/virtualMachines/<vm-id> \
    /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.AVS/privateClouds/<private-cloud-name>/clusters/<cluster-name>/virtualMachines/<vm-id> \
    /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.AVS/privateClouds/<private-cloud-name>/clusters/<cluster-name>/virtualMachines/<vm-id>
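To verify the policy afterwards, you can list the placement policies for the cluster (the same placeholder resource names apply):

az vmware placement-policy list --resource-group <resource-group-name> --private-cloud <private-cloud-name> --cluster-name <cluster-name>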


Sunday, June 30, 2024

Terraform script to add a new vmxnet3 network adapter to a VM in vCenter

Use this script to attach a vmxnet3 network adapter to a VM in vCenter.

Replace "your-vcenter-username", "your-vcenter-password", "your-vcenter-server", "your-datacenter-name", "your-cluster-name", "your-network-name", "your-datastore-name", and "your-vm-name" with your actual vCenter details.

Note that the vSphere Terraform provider manages network adapters as network_interface blocks inside the vsphere_virtual_machine resource, so the VM itself must be managed by Terraform; if the VM already exists in vCenter, import it into the state first (see the import command after the script).

Make sure you have the required Terraform provider for vSphere installed.

You can run the script using the terraform plan and terraform apply commands once everything is set up.


provider "vsphere" {

  user           = "your-vcenter-username"

  password       = "your-vcenter-password"

  vsphere_server = "your-vcenter-server"


  # If you have a self-signed cert

  allow_unverified_ssl = true

}


data "vsphere_datacenter" "dc" {

  name = "your-datacenter-name"

}


data "vsphere_compute_cluster" "cluster" {

  name          = "your-cluster-name"

  datacenter_id = data.vsphere_datacenter.dc.id

}


data "vsphere_virtual_machine" "vm" {

  name          = "your-vm-name"

  datacenter_id = data.vsphere_datacenter.dc.id

}


resource "vsphere_virtual_machine_network_interface" "vmxnet3" {

  type         = "vmxnet3"

  network_id   = "network-id"  # Replace with your network id

  virtual_machine_id = data.vsphere_virtual_machine.vm.id

}
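If the VM already exists in vCenter, import it into the Terraform state before running apply, so Terraform manages the existing machine instead of trying to create a new one. The import ID is the VM's inventory path; the path below is a placeholder.

terraform import vsphere_virtual_machine.vm "/your-datacenter-name/vm/your-vm-name"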


Wednesday, June 5, 2024

How to check that MTU settings are consistent and properly configured across your network

 You can use the Path Maximum Transmission Unit (PMTU) command on the HCX connector to verify the MTU configuration.


Log in to the HCX Connector appliance.

Connect to the IX (Interconnect) appliance.

Run the PMTU command, as in the example session below.
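An illustrative Central CLI session; the node index and exact command syntax can vary by HCX version.

ccli   # enter the HCX Central CLI from the HCX Manager admin shell
list   # list the Service Mesh appliances and their node indexes
go 0   # connect to the IX appliance (node 0 in this example)
pmtu   # run the Path MTU discovery test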


The PMTU command helps you determine the maximum packet size that can be transmitted without fragmentation across the network path.


This command will provide you with the Path MTU for different paths, helping you ensure that the MTU settings are consistent and properly configured across your network.

Thursday, May 30, 2024

Automate HCX VM migration using PowerShell

To automate HCX VM migration using PowerShell, you can use VMware PowerCLI cmdlets. 


Install VMware PowerCLI: Ensure you have VMware PowerCLI installed on your machine. 

You can install it from the PowerShell Gallery; see VMware's PowerCLI documentation for details.
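A typical install from a PowerShell session with access to the PowerShell Gallery (the scope shown is a preference, not a requirement):

Install-Module -Name VMware.PowerCLI -Scope CurrentUser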


Connect to HCX: Use the following command to connect to your HCX server:


Connect-HCXServer -Server <HCX_Server_Name> -User <Username> -Password <Password>



Define variables: define the source and destination sites, the VM, the target folder, compute container, datastore, and network mapping, and then create the migration:
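A sketch of those definitions follows; every name passed to the Get-HCX* cmdlets below is a placeholder, and exact parameters can vary by PowerCLI version.

$HCXSRC   = Get-HCXSite -Source
$HCXDEST  = Get-HCXSite -Destination
$VM       = Get-HCXVM -Name "your-vm-name"
$HCXCTR   = Get-HCXContainer -Type Folder -Name "your-target-folder" -Site $HCXDEST
$COMPCTR  = Get-HCXContainer -Type Cluster -Name "your-target-cluster" -Site $HCXDEST
$TargetDS = Get-HCXDatastore -Name "your-target-datastore" -Site $HCXDEST
$SrcNW    = Get-HCXNetwork -Name "your-source-portgroup" -Site $HCXSRC
$DstNW    = Get-HCXNetwork -Name "your-target-segment" -Site $HCXDEST
$TargetNW = New-HCXNetworkMapping -SourceNetwork $SrcNW -DestinationNetwork $DstNW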


$migration = New-HCXMigration -SourceSite $HCXSRC -DestinationSite $HCXDEST `
  -Folder $HCXCTR -TargetComputeContainer $COMPCTR -NetworkMapping $TargetNW `
  -TargetDatastore $TargetDS -VM $VM `
  -ScheduleStartTime '03/08/2019 06:36:00 AM' -ScheduleEndTime '03/18/2019 07:36:00' `
  -MigrationType Bulk
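New-HCXMigration builds the migration request; submit it with Start-HCXMigration (a sketch, assuming the $migration assignment above):

Start-HCXMigration -Migration $migration -Confirm:$false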

Monday, May 13, 2024

HCX Service Mesh deployment fails with "Could not find an OVERLAY transport zone"

Issue:

The affected environment has the following configuration:
- NSX is deployed on-premises and no overlay transport zone is configured in the HCX Network Profile
- WAN Optimization (WANOPT) is selected during the Service Mesh deployment


Cause:
When using a WAN Optimization appliance, an overlay transport zone is required for communication with the IX appliance. This issue can occur when NSX is deployed on-premises and no overlay segment is configured in the HCX network profile.


Resolution:
- Create a new overlay transport zone for the WANOPT appliance, or
- Deploy the Service Mesh without WANOPT


