CloudStack// by PJ
Azure VMware Solution

Azure VMware Solution: Gen 1 vs Gen 2 - What's Actually Changed?

Azure VMware Solution Generation 2 now deploys inside an Azure Virtual Network. Here's what's changed, what to watch out for, and whether you should move.

PJ

Cloud Stack

Mar 19, 2026 · 7 min read


A Bit of Background

Azure VMware Solution (AVS) has always sat in an interesting place — it lets you run VMware workloads natively on Azure dedicated hardware, bridging the gap between on-premises VMware environments and the Azure cloud. Gen 1 worked well, but it came with a networking model that felt distinctly un-Azure. ExpressRoute circuits, seed cluster requirements, and additional networking setup were the price of admission.

It has been particularly popular because leveraging VMware HCX allows customers to extend their on-premises IP address space directly into AVS using Layer 2 network extensions. This means workloads can be migrated without re-IP-ing — a huge operational win that removes one of the biggest friction points in any data centre migration project.

Gen 2 changes that. Microsoft has re-engineered AVS to deploy directly inside an Azure Virtual Network, bringing it in line with how everything else in Azure works. That single shift has a cascade of positive effects — fewer moving parts, better performance, and a dramatically simpler path to Azure-native integration.

Gen 1 vs Gen 2: Side by Side

The diagrams below show the most fundamental change — the shift from a dedicated ExpressRoute circuit inside AVS in Gen 1, to native Virtual Network integration in Gen 2:

// Gen 1 — Dedicated AVS ExpressRoute Circuit

// Gen 2 — Native Azure VNet Integration · Source: Microsoft Learn

Here's how the two generations compare across the key dimensions:

| Feature | Gen 1 | Gen 2 |
| --- | --- | --- |
| Supported SKU Types | AV36, AV36P, AV52, AV48 + AV64 (requires seed cluster) | AV64 only (min. 3-host cluster) |
| Network Attach Model | ExpressRoute | Virtual Network (VNet) |
| vSAN Architecture | OSA (Original Storage Architecture) | ESA (Express Storage Architecture) |
| Seed Cluster Required | Yes (for AV64 deployments) | No — deploy AV64 directly |
| VNet Peering | Not available | Works out of the box |
| NSG Support | Not available | Fully supported |
| Availability Zone Selection | Not available | Supported |
| Private DNS Resolution | Not available | Supported |

⚠️ Gotchas to Watch Out For

Before you commit to a deployment or migration, here are some real-world considerations worth keeping front of mind:

Low latency between AVS and Azure native services?
Gen 2 is the answer. Native VNet integration eliminates the ExpressRoute hop, delivering significantly lower latency for workloads that depend on tight integration with Azure PaaS services.
🔄 Hot migrating VMs from on-premises to AVS?
Always check EVC (Enhanced vMotion Compatibility) first. CPU instruction set mismatches between source and destination hosts can cause migrations to fail or VMs to behave unexpectedly post-migration.
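The EVC rule of thumb can be sketched as a simple ordering check. The baseline names and the one-directional rule below are illustrative only, not an exhaustive list of EVC modes; confirm the actual modes in vCenter before migrating:

```python
# Illustrative subset of Intel EVC baselines, ordered oldest to newest.
# Check vCenter for the real modes available on your clusters.
INTEL_EVC_BASELINES = [
    "merom", "penryn", "nehalem", "westmere", "sandybridge",
    "ivybridge", "haswell", "broadwell", "skylake", "cascadelake",
]

def can_hot_migrate(source_evc: str, destination_evc: str) -> bool:
    """A hot migration is safe when the destination cluster's EVC baseline
    is at or above the source's: EVC masks newer CPU features down to the
    baseline, so a running VM never sees instructions the destination lacks."""
    src = INTEL_EVC_BASELINES.index(source_evc.lower())
    dst = INTEL_EVC_BASELINES.index(destination_evc.lower())
    return dst >= src

print(can_hot_migrate("haswell", "cascadelake"))   # True - moving to newer hosts
print(can_hot_migrate("cascadelake", "haswell"))   # False - destination too old
```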
💾 Keep at least 30% free on your vSAN datastore
vSAN requires headroom for rebalancing, rebuilds, and slack space operations. Letting utilisation creep above 70% can trigger performance degradation and, in worst cases, place the datastore in a read-only state.
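The 30% rule is easy to wire into monitoring. A minimal sketch of the check, assuming you can already pull used and total capacity from your own tooling:

```python
def vsan_headroom_ok(used_tib: float, capacity_tib: float,
                     max_utilisation: float = 0.70) -> bool:
    """Returns True while the datastore keeps the ~30% slack space vSAN
    needs for rebalancing, rebuilds and component moves."""
    return (used_tib / capacity_tib) <= max_utilisation

# A 100 TiB datastore with 75 TiB used has crossed the line:
print(vsan_headroom_ok(75, 100))   # False - expand the cluster or clean up
print(vsan_headroom_ok(60, 100))   # True
```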
🐢 HCX RAV and Bulk migrations on Gen 2 — expect slower performance
HCX Replication Assisted vMotion (RAV) and Bulk migrations on Gen 2 can experience significantly slower throughput due to stalls during the Base Sync and Online Sync phases. Plan migration windows accordingly and test with a small batch before committing to a large-scale cutover.
📋 VMware licensing is no longer bundled with AVS
VMware licensing is no longer included in your AVS costs from Microsoft. You now need to request a VCF (VMware Cloud Foundation) subscription directly from Broadcom to cover your licensing entitlements. Factor this into your cost planning before deployment.
🌍 Global Reach is required for on-premises to AVS connectivity in Gen 1
In Gen 1, connecting your on-premises environment to AVS via ExpressRoute requires ExpressRoute Global Reach. It is not included in your AVS costs, carries its own pricing, and is not available in every Azure region. Check availability early in your planning process.
🖥️ HCX appliances consume your AVS cluster capacity
HCX appliances run directly on your AVS cluster nodes, not on separate infrastructure. They consume compute and memory resources that would otherwise be available to your workloads, so make sure to factor them into your initial sizing calculations.
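When sizing, it helps to subtract the appliance overhead up front. The per-appliance figures and cluster totals below are placeholder values for illustration only; take the real requirements for your HCX version from the VMware documentation:

```python
# Placeholder appliance footprints - NOT official figures. Check the HCX
# documentation for the requirements of the version you deploy.
HCX_APPLIANCE_FOOTPRINT = {
    "interconnect": {"vcpu": 8, "ram_gb": 3},
    "network_extension": {"vcpu": 8, "ram_gb": 3},
    "wan_optimizer": {"vcpu": 8, "ram_gb": 14},
}

def capacity_after_hcx(cluster_vcpu: int, cluster_ram_gb: int,
                       appliances: list[str]) -> tuple[int, int]:
    """Compute and memory left for workloads once the HCX appliances are
    placed on the cluster nodes themselves."""
    used_vcpu = sum(HCX_APPLIANCE_FOOTPRINT[a]["vcpu"] for a in appliances)
    used_ram = sum(HCX_APPLIANCE_FOOTPRINT[a]["ram_gb"] for a in appliances)
    return cluster_vcpu - used_vcpu, cluster_ram_gb - used_ram

# Example 3-node cluster totals (also placeholders):
print(capacity_after_hcx(384, 4608, ["interconnect", "network_extension"]))
# (368, 4602)
```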
💸 Stretched clusters cost double — fast
If you need availability zone redundancy via stretched clusters, you will need a minimum of 6 nodes split across two zones (3 per zone). That is double the node count before you have even deployed a single workload, so model the cost carefully before committing to this architecture.
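The doubling is easy to model before committing. The node counts come from the minimums above; the per-node rate below is a hypothetical figure, not Azure pricing:

```python
def minimum_monthly_cost(monthly_node_cost: float, stretched: bool) -> float:
    """Minimum cluster cost: 3 nodes for a standard cluster, 6 nodes
    (3 per zone) for a stretched cluster, before a single workload runs."""
    nodes = 6 if stretched else 3
    return nodes * monthly_node_cost

# With a hypothetical 10,000/month node rate:
print(minimum_monthly_cost(10_000, stretched=False))  # 30000
print(minimum_monthly_cost(10_000, stretched=True))   # 60000
```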
💰 Buy Reserved Instances as soon as possible
PAYG pricing for AVS nodes is significantly more expensive than Reserved Instances. If you know your environment is going to be running for any meaningful length of time, get onto 1 or 3 year reservations early. The savings are substantial and this is one of the easiest cost wins available.
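The saving is simple to estimate. Both hourly rates below are placeholders; pull real figures for your region and SKU from the Azure pricing calculator:

```python
def reservation_saving(payg_hourly: float, reserved_hourly: float,
                       nodes: int, months: int) -> float:
    """Rough saving from a 1 or 3 year reservation versus pay-as-you-go,
    using Azure's standard 730-hour billing month."""
    hours = months * 730
    return (payg_hourly - reserved_hourly) * nodes * hours

# Hypothetical rates for a 3-node cluster over a 1-year term:
print(reservation_saving(payg_hourly=20.0, reserved_hourly=13.0,
                         nodes=3, months=12))  # 183960.0
```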
🔐 You do not get full admin access to vSphere or the underlying ESXi hosts
In AVS, Microsoft manages the underlying infrastructure. The highest level of access you get is the built-in CloudAdmin account, which has a restricted set of vSphere permissions. You cannot access ESXi hosts directly, and certain low-level operations simply are not available to you. This is a managed service trade-off, but it can make deep troubleshooting difficult and in some cases leaves you reliant on Microsoft Support to investigate issues at the infrastructure layer. Make sure your team understands this boundary before go-live.
🔌 ExpressRoute is required if you want to migrate via HCX
While you can connect to AVS over a VPN, HCX migrations require an ExpressRoute connection to be in place. If you are planning to use HCX to move workloads from on-premises, make sure ExpressRoute is part of your design from day one. ExpressRoute with Global Reach provides the fastest and most reliable on-premises connection to AVS and should be the default choice for any serious migration project.
🗺️ Gen 2 has strict route limits — plan your network segments carefully
Gen 2 imposes a hard limit of 1000 prefixes on the virtual network address space. This includes NSX segment routes, service routes, and HCX MON host routes all counting toward the same limit. On a 3-node cluster you get 4096 /28s worth of capacity, and a 4-node cluster gives you 6144. If you are planning a large number of network segments or heavy use of HCX Mobility Optimised Networking (MON), you can burn through this budget faster than you expect. The fix is to use fewer, larger prefixes where possible, summarise routes, and do the maths before you deploy.
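Doing the maths up front can be as simple as summing the three consumers against the 1000-prefix budget. A sketch, where the example numbers are made up:

```python
ROUTE_PREFIX_LIMIT = 1000  # Gen 2 hard limit on the VNet address space

def route_budget(nsx_segments: int, service_routes: int,
                 mon_host_routes: int) -> dict:
    """NSX segment routes, service routes and HCX MON host routes all
    draw from the same 1000-prefix budget."""
    total = nsx_segments + service_routes + mon_host_routes
    return {
        "total": total,
        "remaining": ROUTE_PREFIX_LIMIT - total,
        "over_limit": total > ROUTE_PREFIX_LIMIT,
    }

# 200 segments, 50 service routes and 800 MON host routes blows the budget
# even though each number looks harmless on its own:
print(route_budget(200, 50, 800))  # total 1050, i.e. 50 over the limit
```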
🌐 Public IP down to the NSX Edge is not supported in Gen 2
In Gen 2, you cannot assign a Public IP directly to the NSX Edge for internet connectivity. Instead, route your egress internet traffic through your hub VNet using Azure Firewall or a third-party Network Virtual Appliance (NVA). This is the recommended pattern anyway as it gives you centralised inspection, logging, and control over all outbound traffic leaving AVS.
⚖️ Azure native load balancers cannot load balance workloads inside AVS
Azure Load Balancer does not support backends inside AVS — it has no visibility into the VMware network. For L7 load balancing, Azure Application Gateway can be used as it operates at the HTTP/HTTPS layer and can route to AVS workloads via the VNet. For L4 load balancing inside AVS, you'll need a third-party solution such as F5 BIG-IP or VMware AVI (NSX Advanced Load Balancer). Both options work well but come with significant licensing costs — get a quote early and factor it into your design before you commit to an architecture.

Networking: ExpressRoute Out, VNet In

If there's one thing that defines Gen 2, it's the shift from ExpressRoute to Virtual Network connectivity. In Gen 1, your AVS private cloud connected to Azure via an ExpressRoute circuit — a dedicated private link that, while reliable, added complexity, cost, and friction when integrating with other Azure services.

Gen 2 private clouds deploy inside an Azure Virtual Network by default. That means you get instant connectivity to other Azure services the moment your cloud is provisioned. No extra networking setup. VNet peering just works. You can attach Network Security Groups directly to control traffic. For architects who've spent time wrestling with AVS Gen 1 network topology, this is a significant quality-of-life improvement.

Worth noting: This isn't just convenience — deploying inside a VNet also reduces latency for workloads that talk to other Azure services, and improves data transfer speeds. For latency-sensitive applications that rely on services like Azure SQL or Blob Storage, this is a meaningful performance gain.

SKU Changes: AV64 Takes Centre Stage

Gen 2 exclusively supports the AV64 SKU, with a minimum of 3 hosts required. In Gen 1, deploying AV64 meant you first had to provision a seed cluster of at least three nodes using an older SKU (AV36, AV36P, AV48, or AV52), then add AV64 on top.

Gen 2 eliminates that step entirely — you go straight to AV64. The AV64 node is Microsoft's latest-generation VMware host, offering significantly more compute and memory than its predecessors. Removing the seed cluster requirement also reduces the minimum entry cost for new deployments.

Keep in mind: If you're already running a Gen 1 deployment on AV36, AV36P, AV48, or AV52 SKUs, those aren't supported in Gen 2. This matters for planning migrations or expansions — Gen 2 is an AV64-only environment.

vSAN: OSA Out, ESA In

Under the hood, Gen 2 moves from vSAN's Original Storage Architecture (OSA) to the Express Storage Architecture (ESA). ESA is VMware's newer storage model, designed to take advantage of NVMe-based storage more efficiently than OSA. You get better throughput, lower latency storage operations, and a more modern foundation for running I/O-intensive workloads.

For most workloads this will be transparent — but if you're running database servers, analytics platforms, or anything else with demanding storage requirements, the ESA upgrade is a genuine win.

Regional Availability

Gen 2 is currently available in the following Azure public regions. Microsoft has noted that additional regions may be available — contact your Microsoft account team to confirm coverage elsewhere.

Australia East, East US, Canada Central, Canada East, Central US, Malaysia West, North Europe, Norway East, Switzerland North, UK West, West US 2

Who is Responsible for What? Does it Change in Gen 2?

Short answer — no, it does not. The shared responsibility model remains the same in Gen 2 as it does in Gen 1. Despite the architectural shift to native VNet integration, Microsoft still owns the infrastructure layer and you still own everything above it. Gen 2 does not change the boundary, it just makes the networking around it simpler.

One of the most common sources of confusion with AVS is understanding where Microsoft's responsibility ends and yours begins. Unlike a traditional IaaS VM where you own almost everything above the hypervisor, AVS is a managed service — Microsoft takes on a significant portion of the operational burden, but that does not mean you can switch off entirely.

Microsoft handles the physical infrastructure, physical security, hardware failures, ESXi host patching, VMware NSX, vSAN, vCenter Server, and HCX Manager. You are responsible for things like your VMs, Guest OS, applications, identity management, and connecting to your VNet and the internet. The matrix below makes this clear:

Azure VMware Solution — Shared Responsibility Matrix · Source: Microsoft Learn

The practical takeaway here ties directly to the CloudAdmin gotcha above — because Microsoft owns the infrastructure layer, you cannot always get to the root cause of an issue yourself. Understanding this matrix before you go live helps set the right expectations with your team and your stakeholders.

Should You Move to Gen 2?

If you're planning a new AVS deployment, Gen 2 is the clear choice. The simplified networking, direct VNet integration, and removal of the seed cluster requirement make it easier to deploy and cheaper to get started. You're also building on a more modern foundation that will be better aligned with future Azure VMware Solution capabilities.

If you have an existing Gen 1 deployment, the picture is more nuanced. There's no in-place migration path between Gen 1 and Gen 2 — they're architecturally different enough that moving requires planning. If your Gen 1 environment is stable and serving its purpose, there's no immediate pressure to migrate. But for organisations looking to expand their AVS footprint, new clusters should absolutely be built on Gen 2.

Gen 2 feels like Microsoft finally listened. AVS has always been a solid platform, but it always had that slightly bolted-on feeling when it came to networking. Gen 2 fixes that. It sits inside your VNet, plays nicely with the rest of Azure, and removes a lot of the head-scratching that came with Gen 1. If you are starting fresh, there is really no reason to look at Gen 1. And if you are already running Gen 1, it is worth keeping an eye on the migration path as it matures.

Next steps: The network design considerations guide is also worth reading before you start.

And one final thought — AVS has a new look, a new networking model, and a new storage architecture... but why can I still not rename the cluster? 😂

👉 Coming next: HCX — a deep dive into L2 network extensions and migrations.

Azure VMware Solution