Intro

I had a vendor-provided VHD that looked fine on the surface, but every time I tried to upload it to Azure and spin up a VM, it failed.

The process of getting it working wasn’t simple. I went through a lot of dead ends before finally landing on a solution that actually worked.

Here’s a full breakdown of what I tried, what failed, and what finally got the VHD fixed and deployed.

The Problem

  • qemu-img reported it as RAW, not a proper VHD.
  • It was missing the VHD footer (the Conectix signature Azure expects).
  • The disk type was dynamic, not fixed.
  • The virtual size wasn’t a multiple of 1MB, which Azure requires.
  • It had Linux partitions and an EFI system partition.

Because of this, Azure kept rejecting it, either during upload or during managed disk creation.
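These checks can all be done offline before touching Azure. Here is a sketch in Python that inspects the last 512 bytes of the file for the issues listed above; the field offsets come from the VHD footer layout (cookie at 0, current size at 48, disk type at 60, all big-endian), and the function name is mine:

```python
import struct

MB = 1024 * 1024
FOOTER_SIZE = 512
DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def check_vhd(path):
    """Report the issues Azure cares about for a fixed-format VHD."""
    with open(path, "rb") as f:
        f.seek(0, 2)
        file_size = f.tell()
        f.seek(-FOOTER_SIZE, 2)          # footer is the last 512 bytes
        footer = f.read(FOOTER_SIZE)

    issues = []
    if footer[0:8] != b"conectix":
        issues.append("missing conectix footer")
        return issues  # nothing else in the footer is trustworthy

    # Disk type: 4-byte big-endian field at offset 60 (2 = fixed).
    disk_type = struct.unpack(">I", footer[60:64])[0]
    if disk_type != 2:
        issues.append(f"disk type is {DISK_TYPES.get(disk_type, disk_type)}, not fixed")

    # Current (virtual) size: 8-byte big-endian field at offset 48.
    virtual_size = struct.unpack(">Q", footer[48:56])[0]
    if virtual_size % MB != 0:
        issues.append("virtual size is not a multiple of 1 MB")

    # For a fixed VHD, the file is exactly the raw data plus the footer.
    if disk_type == 2 and file_size != virtual_size + FOOTER_SIZE:
        issues.append("file size does not match virtual size + footer")
    return issues
```

Running this against the vendor file would have flagged the missing footer immediately, before any upload attempt.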

What I Tried First (And Why It Didn’t Work)

1. File-Based Fixes: Conversion and Resizing

At first, I treated it like a typical VHD problem.

Converting the Disk

Convert-VHD -Path "C:\broken.vhd" -DestinationPath "C:\broken-fixed.vhd" -VHDType Fixed

Or using qemu-img:

qemu-img convert -f raw -O vpc -o subformat=fixed "broken.vhd" "broken-fixed.vhd"

Resize to 1MB Alignment:

Resize-VHD -Path "C:\broken-fixed.vhd" -SizeBytes ([math]::Ceiling((Get-VHD -Path "C:\broken-fixed.vhd").Size / 1MB) * 1MB)
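The [math]::Ceiling call above is just rounding the virtual size up to the next 1 MB boundary. The same arithmetic, sketched in Python (the function name is mine):

```python
MB = 1024 * 1024

def align_to_mb(size_bytes: int) -> int:
    """Round a size up to the next 1 MB boundary (no-op if already aligned)."""
    return ((size_bytes + MB - 1) // MB) * MB
```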

Result:
The file was technically converted and resized, but the EFI partition was corrupted, the partition table shifted, and the appliance wouldn’t boot.

Lesson:
Converting or resizing a broken VHD just moves the problem around. It doesn’t fix it.

Trying Disk2VHD

I mounted the broken VHD and tried to re-capture it using Disk2VHD.

Mount the VHD:

Mount-VHD -Path "C:\broken-fixed.vhd"

Disk2VHD failed because Windows couldn’t assign drive letters to Linux or EFI partitions. No volumes showed up, so there was nothing to capture.

Lesson:
Disk2VHD only works if the disk has Windows volumes with assigned drive letters.

What Actually Worked

At this point, I stopped treating it like a file and treated it like a physical disk.

Mount the broken VHD (read-only):

Mount-VHD -Path "C:\broken-fixed.vhd" -ReadOnly

Create a blank fixed VHD large enough to hold the source disk:

New-VHD -Path "C:\recovered.vhd" -SizeBytes 32GB -Fixed

Mount the blank VHD:

Mount-VHD -Path "C:\recovered.vhd"

Block-level clone the mounted disks using PowerShell (run from an elevated session, and confirm the disk numbers with Get-Disk first):

$sourceDiskNumber = 2   # Source (broken VHD)
$targetDiskNumber = 3   # Destination (blank VHD)

$sourceDevice = "\\.\PhysicalDrive$sourceDiskNumber"
$targetDevice = "\\.\PhysicalDrive$targetDiskNumber"

$bufferSize = 1MB
# $input and $output are reserved automatic variables in PowerShell, so use explicit names
$sourceStream = [System.IO.File]::Open($sourceDevice, 'Open', 'Read', 'ReadWrite')
$targetStream = [System.IO.File]::Open($targetDevice, 'Open', 'Write', 'ReadWrite')

$buffer = New-Object byte[] $bufferSize
$totalBytes = 0
$totalSize = (Get-Disk -Number $sourceDiskNumber).Size

while ($true) {
	$bytesRead = $sourceStream.Read($buffer, 0, $bufferSize)
	if ($bytesRead -le 0) { break }
	$targetStream.Write($buffer, 0, $bytesRead)

	$totalBytes += $bytesRead
	$percent = [math]::Min(100, (($totalBytes / $totalSize) * 100))
	Write-Progress -Activity "Cloning Disk" -Status "$([math]::Round($totalBytes / 1MB)) MB copied" -PercentComplete $percent
}

$sourceStream.Close()
$targetStream.Close()
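The same chunked copy loop, sketched in Python for illustration (the paths, size argument, and 1 MiB buffer are assumptions; on Windows the source and target would be the same \\.\PhysicalDriveN device paths, and raw-device writes must be sector-aligned):

```python
import sys

def clone_device(source_path, target_path, total_size, buffer_size=1024 * 1024):
    """Copy total_size bytes from source to target in buffer_size chunks."""
    copied = 0
    # "r+b" because a raw device (or pre-created target) already exists.
    with open(source_path, "rb") as src, open(target_path, "r+b") as dst:
        while copied < total_size:
            chunk = src.read(min(buffer_size, total_size - copied))
            if not chunk:
                break
            dst.write(chunk)
            copied += len(chunk)
            pct = min(100, copied * 100 // total_size)
            sys.stderr.write(f"\rCloning: {copied // (1024 * 1024)} MB ({pct}%)")
    sys.stderr.write("\n")
    return copied
```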

Validate the recovered VHD:

Get-VHD -Path "C:\recovered.vhd"

Make sure it has:

  • A fixed disk type
  • A proper Conectix footer
  • A size aligned to a multiple of 1MB

Upload to Azure

Upload the recovered VHD to Azure Storage:

azcopy copy "C:\recovered.vhd" "https://yourstorage.blob.core.windows.net/vhds/recovered.vhd?<sas_token>"

Create a Managed Disk from the uploaded blob:

az disk create \
  --resource-group your-rg-name \
  --name recovered-disk \
  --source "https://yourstorage.blob.core.windows.net/vhds/recovered.vhd" \
  --os-type Linux \
  --hyper-v-generation V2

Create a VM from the Managed Disk:

az vm create \
  --resource-group your-rg-name \
  --name your-vm-name \
  --attach-os-disk recovered-disk \
  --os-type Linux \
  --size Standard_DS1_v2

Why This Worked

Block-copying the mounted disk preserved everything exactly: EFI partitions, the Linux rootfs, reserved sectors, bootloaders, and so on.
It avoided file-level corruption and allowed Azure to accept and boot the appliance.


What I Learned

  • Validate VHDs immediately: format, type, footer, size.
  • Don’t resize or convert if the disk isn’t verified first.
  • Block copy anything with Linux or EFI partitions.
  • Azure requires:
    • Fixed-size VHDs
    • Conectix VHD footer
    • Size aligned to a multiple of 1MB
  • Gen2 VMs are required for EFI boot setups.
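As a concrete illustration of the footer requirement, here is a sketch of a minimal fixed-disk footer per the published VHD format (field layout from the spec; the creator string and CHS geometry values are placeholders I chose, not anything Azure mandates):

```python
import struct
import uuid

def make_fixed_vhd_footer(virtual_size: int) -> bytes:
    """Build a 512-byte 'conectix' footer for a fixed VHD."""
    f = bytearray(512)
    f[0:8] = b"conectix"                               # cookie
    struct.pack_into(">I", f, 8, 2)                    # features (reserved bit set)
    struct.pack_into(">I", f, 12, 0x00010000)          # file format version 1.0
    struct.pack_into(">Q", f, 16, 0xFFFFFFFFFFFFFFFF)  # data offset: none for fixed disks
    f[28:32] = b"py  "                                 # creator application (placeholder)
    struct.pack_into(">Q", f, 40, virtual_size)        # original size
    struct.pack_into(">Q", f, 48, virtual_size)        # current size
    struct.pack_into(">HBB", f, 56, 65535, 16, 255)    # CHS geometry (placeholder)
    struct.pack_into(">I", f, 60, 2)                   # disk type: 2 = fixed
    f[68:84] = uuid.uuid4().bytes                      # unique disk id
    # Checksum: one's complement of the byte sum, computed with the field zeroed.
    struct.pack_into(">I", f, 64, ~sum(f) & 0xFFFFFFFF)
    return bytes(f)
```

Appending a footer like this to a raw image is essentially what a proper raw-to-fixed-VHD conversion does; the point here is just to show what the "Conectix footer" Azure checks for actually contains.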

Final Recovery Flow

Broken VHD -> Mount as Disk -> Blank Fixed VHD -> Block Copy -> Validate -> Upload -> Managed Disk -> VM Boot

Final Thoughts

At first, I assumed standard conversion and resizing would fix the VHD.
It didn’t. It just made it worse.
Once I treated the VHD like a raw disk and cloned it sector-by-sector into a clean blank VHD, it uploaded cleanly, deployed, and booted successfully on Azure.
Treat questionable VHDs like raw physical devices, not like regular files. It’ll save you a lot of wasted time.