AHV Hypervisor – How It Works and Comparison with VMware ESXi

Last updated: March 4, 2026


My First Encounter with AHV

I came from a VMware background. For years, ESXi was the default answer whenever someone said "virtualization." vCenter, vSAN, NSX — I knew the stack well. So when I first spun up a Nutanix CE (Community Edition) node at home and saw that the default hypervisor was something called AHV, I was skeptical.

My first instinct was to install ESXi on top of it instead. Nutanix actually supports that. But I decided to give AHV a proper chance first — and I ended up keeping it.

This article documents what I learned about AHV: how it works, what it actually offers, and an honest comparison with ESXi from someone who has used both.


What Is AHV?

AHV (Acropolis Hypervisor) is Nutanix's native, built-in Type 1 hypervisor. It is based on KVM (Kernel-based Virtual Machine) — the same Linux kernel virtualization technology that underpins most of the major public cloud hypervisors today.

AHV is not a fork or a thin wrapper. Nutanix builds on upstream KVM and QEMU but layers its own management plane, storage integration, and operations tooling on top via AOS.

Key attributes:

  • Type 1 hypervisor — runs directly on bare metal

  • KVM-based — leverages a well-understood, battle-tested open-source foundation

  • Included at no extra cost — no separate hypervisor licensing like ESXi

  • Tightly integrated with AOS — storage, networking, and compute management go through Prism, not separate tools


AHV Architecture Deep Dive

Understanding AHV requires understanding the full Nutanix node architecture, because AHV does not operate in isolation.


The Controller VM (CVM)

The most important thing to understand about Nutanix architecture is the CVM. Every Nutanix node runs exactly one CVM — a special VM that is protected and managed by AHV itself.

The CVM is not optional and not user-managed. It is the node's storage controller, and it:

  • Handles all disk I/O for VMs on that node (via an iSCSI loopback)

  • Runs the Stargate process — the primary I/O path handler

  • Participates in the Cassandra metadata ring that spans all CVMs in the cluster

  • Executes Curator jobs — background MapReduce tasks like compression, dedup, and rebalancing

  • Coordinates distributed state via Zookeeper

Why the CVM Design Matters

In traditional SAN/NAS architectures, storage controllers are separate hardware appliances. Nutanix moves this intelligence into software running alongside the hypervisor. Each CVM owns the local disks of its node and serves storage to all VMs on that node from a low-latency local iSCSI path.

The result: every node contributes to the cluster's total storage capacity and performance. There is no single storage controller that becomes a bottleneck.


How NDFS Works

NDFS (Nutanix Distributed File System) — called DSF (Distributed Storage Fabric) or ADSF (Acropolis Distributed Storage Fabric) in newer documentation — is the distributed filesystem that presents a unified storage pool to all VMs across the cluster.

From a VM's perspective, it just sees a vDisk (virtual disk) mounted via iSCSI from its local CVM. From the cluster's perspective, NDFS handles:

Replication Factor (RF)

NDFS writes data with a configurable Replication Factor:

  • RF2: Two copies of data exist across at least two different nodes

  • RF3: Three copies exist across at least three different nodes

RF is set at the container level (a Nutanix storage container is roughly analogous to a datastore in VMware terms).
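The capacity cost of RF is simple arithmetic: every logical byte is stored RF times across the cluster. A minimal sketch (illustrative only — it ignores metadata overhead, compression, dedup, and capacity reserved for rebuilds):

```python
def usable_capacity_tib(raw_tib: float, rf: int) -> float:
    """Rough usable capacity after replication: each logical byte
    is stored rf times across the cluster. Ignores metadata,
    compression, dedup, and reserved rebuild capacity."""
    if rf not in (2, 3):
        raise ValueError("Nutanix supports RF2 and RF3")
    return raw_tib / rf

# A hypothetical 4-node cluster with 20 TiB raw per node:
raw = 4 * 20.0
print(usable_capacity_tib(raw, rf=2))            # 40.0
print(round(usable_capacity_tib(raw, rf=3), 2))  # 26.67
```

In practice you would also keep enough free space to re-replicate after a node failure, so plan for less than this.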

Storage Tiering

AHV nodes typically have multiple tiers of storage:

| Tier | Media | Usage |
|------|-------|-------|
| Hot | NVMe / SSD | Active working data, frequently accessed |
| Warm | SSD / Flash | Less active data |
| Cold | HDD (hybrid nodes only) | Infrequently accessed, archival |

The Intelligent Tiering (ILM — Information Lifecycle Management) feature automatically migrates cold data down to HDD and promotes hot data to NVMe without any manual intervention.

In my CE home-lab, which runs on a single node with only SSDs, ILM is essentially a no-op — everything stays on the single tier. But in a multi-node cluster with hybrid disk configurations, this becomes genuinely useful.


Core AHV Features

Live Migration

AHV supports live VM migration between nodes in the cluster, called AHV Live Migration (analogous to vMotion in VMware). It uses pre-copy memory migration — memory pages are iteratively copied while the VM keeps running, and the final cutover is a very brief pause.

From the CLI:
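A minimal sketch using acli (VM and host names are placeholders, and exact flags may vary by AOS version — check acli's built-in help):

```shell
# Migrate a running VM to a specific target host
acli vm.migrate my-vm host=target-host

# Or omit the host and let the scheduler pick the destination
acli vm.migrate my-vm
```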

Or from Prism Element UI: VM → Actions → Migrate.

High Availability (HA)

If a node fails, AHV HA automatically restarts affected VMs on surviving nodes. Unlike VMware HA, which requires a dedicated admission control configuration, Nutanix HA is on by default and tied to the cluster's replication factor.

HA respects RF2/RF3 — if you have RF2 and one node fails, VMs can restart because replicas still exist on other nodes.

Dynamic Scheduling (ADS)

Nutanix ADS (Acropolis Dynamic Scheduler) monitors VM workloads and can trigger live migrations automatically to balance CPU and memory usage across nodes — similar conceptually to VMware DRS, but without requiring a separate license.

ADS uses a scoring algorithm considering:

  • CPU utilization per host

  • Memory pressure per host

  • Network bandwidth consumption

  • Storage I/O contention
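Nutanix does not publish the exact algorithm, but the idea can be sketched as a weighted contention score per host — this toy version (weights and threshold are invented for illustration) shows how a scheduler would pick a migration source:

```python
def host_score(cpu_util, mem_util, net_util, io_util,
               weights=(0.4, 0.4, 0.1, 0.1)):
    """Toy contention score: weighted sum of the four dimensions ADS
    considers, each normalized to 0.0-1.0. Higher = more contended.
    Not Nutanix's actual algorithm — illustrative only."""
    metrics = (cpu_util, mem_util, net_util, io_util)
    return sum(w * m for w, m in zip(weights, metrics))

hosts = {
    "node-a": host_score(0.9, 0.8, 0.3, 0.2),  # busy host
    "node-b": host_score(0.3, 0.4, 0.2, 0.1),  # idle host
}

# The most contended host becomes the migration source;
# ADS would then live-migrate a VM toward the least loaded node.
source = max(hosts, key=hosts.get)
```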

VM Snapshots and Cloning

AHV supports crash-consistent and application-consistent VM snapshots through Nutanix's Protection Domains. Snapshots are stored on NDFS using redirect-on-write semantics — no performance cliff as snapshot trees grow.

Guest Customization

AHV supports cloud-init for Linux VMs and Sysprep for Windows VMs, allowing template-based deployments with hostname, SSH key, and network configuration injected at boot.
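For the Linux case, the customization script is a standard cloud-init user-data document passed to the VM at creation time. A minimal sketch (hostname, user name, key, and packages are placeholders):

```yaml
#cloud-config
hostname: web-01                # placeholder hostname
users:
  - name: admin                 # placeholder user
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... admin@laptop   # placeholder public key
package_update: true
packages:
  - nginx
```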


AHV vs VMware ESXi – Feature Comparison

This comparison is based on my personal working knowledge of both platforms, not marketing materials.

| Feature | AHV | VMware ESXi |
|---------|-----|-------------|
| Hypervisor Type | Type 1, KVM-based | Type 1, proprietary |
| Hypervisor Cost | Included with AOS license | Separate license required |
| Management UI | Prism Element / Prism Central | vCenter |
| Live Migration | AHV Live Migration (built-in) | vMotion (requires vSphere Standard+) |
| HA | AHV HA (built-in, no extra license) | vSphere HA (part of vSphere) |
| Dynamic Scheduling | ADS (built-in) | DRS (requires vSphere Enterprise+) |
| Storage Clustering | NDFS (built-in) | vSAN (separate license) |
| Network Microsegmentation | Flow Network Security (add-on) | NSX-T (separate license) |
| VM Backup | Protection Domains, 3rd-party via APIs | VADP-based 3rd-party tools |
| Nested Virtualization | Limited (Windows-specific use cases) | Supported for dev/test |
| API Coverage | REST API v3 + Prism Central API | vSphere API + REST |
| CLI | acli, ncli, nuclei | esxcli, PowerCLI |
| Community Edition | Yes — free CE available | vSphere Free (ESXi only, no vCenter) |
| GPU Passthrough | Supported (NVIDIA vGPU, PCIe passthrough) | Supported |

Honest Assessment

Where AHV wins:

  • Total cost of ownership for the full stack is typically lower because HA, live migration, and storage clustering are included without separate add-on licenses

  • Prism is genuinely a better management UI than vCenter for day-to-day tasks, especially for non-experts

  • The CVM+NDFS design means storage and compute scaling is linear — add a node and you add both

Where ESXi still leads:

  • Ecosystem maturity — more third-party tool integrations, more documentation, larger community

  • Nested virtualization support is much more robust — important for running K8s training environments or home-lab setups that need VMs inside VMs

  • vSphere APIs and PowerCLI have years more tooling depth

  • If you already have VMware licenses and tools in place, the switching cost is real

My take: For a net-new Nutanix deployment, AHV is the right choice — there's no reason to pay for a separate ESXi license when AHV handles the workloads well. If you're migrating an existing VMware environment to Nutanix, whether to switch to AHV or bring ESXi along depends on how deeply your operations depend on VMware-specific tooling.


Prism Element and Prism Central

Prism Element (PE)

Prism Element is the per-cluster management interface, running on the CVMs. It handles:

  • VM lifecycle (create, power on/off, migrate, snapshot)

  • Storage container management

  • Network configuration (virtual switches, VLANs)

  • Cluster health and performance dashboards

  • Cluster upgrade management (LCM — Life Cycle Manager)

Prism Central (PC)

Prism Central is the multi-cluster management layer, deployed as a separate VM (or scale-out VM set). It:

  • Provides a single pane of glass across multiple Nutanix clusters

  • Hosts NCM Self-Service (Blueprints)

  • Runs Nutanix Flow policies (microsegmentation rules)

  • Exposes the v3 REST API — the primary programmable API for automation

  • Provides category-based VM tagging (used for Flow policies and licensing)
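The v3 API is POST-based even for listing resources: you send a filter body rather than a GET with query parameters. A minimal stdlib-only sketch (the Prism Central address and credentials are placeholders; basic auth and the /api/nutanix/v3/vms/list endpoint are my understanding of the v3 convention):

```python
import base64
import json
import urllib.request

PC = "https://prism-central.example.com:9440"  # placeholder address

def vms_list_request(pc_url, username, password, length=20):
    """Build the POST request for the v3 'list VMs' call.
    The caller would pass the result to urllib.request.urlopen()."""
    body = {"kind": "vm", "length": length, "offset": 0}
    req = urllib.request.Request(
        f"{pc_url}/api/nutanix/v3/vms/list",
        data=json.dumps(body).encode(),
        method="POST",
        headers={"Content-Type": "application/json"},
    )
    # v3 accepts HTTP basic auth
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

if __name__ == "__main__":
    req = vms_list_request(PC, "admin", "secret")
    # urllib.request.urlopen(req) would return JSON with an
    # "entities" array, one entry per VM across registered clusters.
```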


Flow Networking and Microsegmentation

Nutanix Flow (available as Flow Network Security in Prism Central) provides hypervisor-based microsegmentation — network policies enforced at the vNIC level, before traffic leaves the host.

Flow Security Policy Concepts

  • Categories: VM tags in Prism Central. Security policies use categories as selectors, not IP addresses

  • Security Policies: Define which categories can communicate with which, and on which ports

  • Default Deny: Once a policy is applied to a VM category, only explicitly allowed flows pass

For example, in a personal project I used Flow to isolate a development VLAN's VMs from production VMs on the same AHV cluster. Rather than managing VLAN ACLs, I tagged VMs with Environment:Dev and Environment:Prod categories and wrote a Flow policy allowing only specific ports between them.
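The evaluation model behind that policy can be sketched as a lookup over (source category, destination category, port) with default deny — illustrative only, since Flow enforces this at the vNIC level rather than in guest code:

```python
# Toy model of category-based policy evaluation.
POLICY = {
    # (source category, destination category): allowed destination ports
    ("Environment:Dev", "Environment:Prod"): {443},
}

def allowed(src_cat, dst_cat, port):
    """Default deny: traffic passes only if an explicit rule matches
    both categories and the destination port."""
    return port in POLICY.get((src_cat, dst_cat), set())

allowed("Environment:Dev", "Environment:Prod", 443)   # True
allowed("Environment:Dev", "Environment:Prod", 22)    # False
allowed("Environment:Prod", "Environment:Dev", 443)   # False (no rule)
```

Note the selector is the category pair, never an IP address — VMs can move between hosts or change IPs and the policy still applies.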

Flow Virtual Networking

A separate component, Flow Virtual Networking, provides software-defined networking with:

  • Virtual Private Clouds (VPCs) within Nutanix

  • Overlay networks (VXLAN-based)

  • External connectivity via NAT or gateway VMs

For most simple setups, standard VLAN-based networking with a managed switch is sufficient. Flow Virtual Networking becomes relevant when you need multi-tenant network isolation without physical VLAN changes.


AHV in a Home-Lab Setup

I run Nutanix Community Edition on a small Intel NUC cluster (two nodes). Key observations from this setup:

AHV CE specifics:

  • CE supports AHV only — you cannot install ESXi on CE

  • CE is limited to specific hardware profiles but works fine on NUCs with NVMe SSDs

  • Prism Central on CE is slightly limited compared to commercial PC — Self-Service is enabled but some NCM features are restricted

My typical workflow:

  1. Spin up CE cluster via Foundation (Nutanix's cluster installer)

  2. Deploy Prism Central as a VM via Prism Element

  3. Register the cluster with Prism Central

  4. Enable Self-Service (Calm) for Blueprint access

  5. Use nutanix.ncp Ansible collection for automation tasks


What I Think About AHV After Using Both

I still have VMware knowledge and use it professionally where it exists. But for any new setup where the choice is open, I choose Nutanix/AHV because:

  1. The single-platform story is real. Having storage, compute, HA, and DR all managed from one UI and one API removes a lot of operational complexity

  2. The API is clean. The Prism Central v3 REST API is well-documented and consistent — significantly nicer than the vSphere API to work with programmatically

  3. The cost model makes more sense. Paying for separate vCenter, vSAN, and DRS licenses on top of ESXi adds up. With Nutanix, the stack is included.

  4. Prism Central scales well. Managing 3 clusters from one PC instance is the same UX as managing 1 cluster. That's good design.

The main reason I would hesitate to recommend AHV in an enterprise context is the ecosystem gap — if your operations team is VMware-native and you're already paying for NSX-T and vRO, switching to AHV has real migration costs. But for greenfield or personal projects, AHV is a capable and cost-effective hypervisor.


Next Steps

For a deep-dive side-by-side reference of the full VMware and Nutanix product stacks, see VMware to Nutanix – Complete Feature Mapping — covering compute, storage, networking, automation, DR, Kubernetes, licensing, CLI/API equivalents, and migration tooling.

Or continue to Nutanix Blueprint 101 — where I cover Self-Service blueprints, how to model multi-tier applications, and how to use Day 2 actions.
