Azure disk I/O performance is extremely low compared to other cloud providers

I am running some benchmarks comparing Azure (using the free 1-month trial) and Linode; here are the results:

Linode: [benchmark screenshot]

Azure: [benchmark screenshot]

Both are SSDs. Any idea what might be going on here?

Azure disks are network-attached and have per-disk IOPS and throughput limits.

See the managed disks pricing page: https://azure.microsoft.com/en-au/pricing/details/managed-disks/
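
The caps scale with the disk tier and size; a P10 Premium SSD, for example, tops out at 500 IOPS. If you're not sure which tier your VM got, something like this should show it (a sketch; the resource group and disk names here are hypothetical):

```
# Show the SKU and size that determine a managed disk's IOPS/throughput caps
az disk show --resource-group myResourceGroup --name myOsDisk \
  --query "{sku:sku.name, sizeGb:diskSizeGb}" --output table
```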

Azure is… not great for small-business use. :slight_smile:

3 Likes

You mean their higher enterprise-level plans perform better?

Depending on your requirements, Azure at large scale can be useful.

When you just need a couple of VMs, Azure Compute performance and cost are abysmal.

DigitalOcean/Vultr/Linode/etc will beat them out for this.

2 Likes

I have just created a basic VM on Amazon’s AWS; same low disk performance as Azure.

What did you use for benchmarking?

Could you try something like ioping -RD -w 10 . (the trailing dot targets the current directory)?
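
For reference, here is what those flags do, per the ioping man page:

```
# -R    : seek-rate test (small random reads, as many as possible)
# -D    : use direct I/O, bypassing the page cache
# -w 10 : stop after a 10-second deadline
# .     : run against the filesystem under the current directory
ioping -RD -w 10 .
```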

Here are the results:

Linode ($20/mo machine):

--- /dev/sdc (block device 40.4 GiB) ioping statistics ---
13.2 k requests completed in 2.95 s, 51.4 MiB read, 4.46 k iops, 17.4 MiB/s
generated 13.2 k requests in 3.00 s, 51.4 MiB, 4.39 k iops, 17.1 MiB/s
min/avg/max/mdev = 84.4 us / 224.2 us / 3.11 ms / 101.4 us

4390 iops

Azure ($83/mo machine):

--- /dev/sda4 (block device 28.5 GiB) ioping statistics ---
374 requests completed in 2.99 s, 1.46 MiB read, 125 iops, 500.2 KiB/s
generated 375 requests in 3.00 s, 1.46 MiB, 124 iops, 499.9 KiB/s
min/avg/max/mdev = 124 us / 8.00 ms / 31.2 ms / 6.67 ms

125 iops

I really cannot see what the advantage of using Azure is, if any.

2 Likes

Clearly, a Microsoft salesman has not yet talked to your VP :wink:

2 Likes

Here are the results on this server (copied from our 2019 Update thread):

121.6 k requests completed in 9.63 s, 474.9 MiB read, 12.6 k iops, 49.3 MiB/s
generated 121.6 k requests in 10.0 s, 474.9 MiB, 12.2 k iops, 47.5 MiB/s
min/avg/max/mdev = 53.3 us / 79.2 us / 5.77 ms / 39.9 us

I would get in touch with Azure and ask them for an explanation :lol:

And just for fun…

My desktop boot drive (512 GB):

15.2 k requests completed in 3.00 s, 5.35 k iops, 20.9 MiB/s
min/avg/max/mdev = 81 us / 186 us / 5.58 ms / 149 us

My desktop main storage drive (4 TB):

425 requests completed in 3.00 s, 142 iops, 568.1 KiB/s
min/avg/max/mdev = 74 us / 7.04 ms / 53.7 ms / 6.79 ms

My big processing server (not for file hosting, 128 GB):

1.77 k requests completed in 2.94 s, 6.91 MiB read, 602 iops, 2.35 MiB/s
generated 1.77 k requests in 3.03 s, 6.91 MiB, 583 iops, 2.28 MiB/s
min/avg/max/mdev = 206.4 us / 1.66 ms / 96.8 ms / 6.61 ms

And I can’t really run code easily on my file server, so maybe later. ^.^;

But it looks to me like Linode is either on SSDs or an object-store server, while Azure looks like standard magnetic platter drives. I wouldn’t actually think that badly of it, to be honest, though Azure isn’t something I’m using (way, way too expensive, like wow expensive…).

1 Like

Azure was tested with what they call Premium SSDs :slight_smile:

I have just tested a $260/month machine; same terrible results.

Even Linode’s $20/month machine with 2 vCPU cores outperforms Azure’s $260/month machine with 4 vCPU cores (using sysbench).
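
For anyone who wants to reproduce that comparison, a minimal sysbench CPU run looks something like this (sysbench 1.0 syntax; the prime ceiling and thread count are just example values, matched here to the Linode VM’s 2 cores):

```
# CPU benchmark: compute primes up to 20000 and report events/sec
sysbench cpu --cpu-max-prime=20000 --threads=2 run
```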

Their VM admin interface is amazing, with hundreds of configuration options, but the hardware is bad.

They make a lot of money off way-below-average VM performance.

I still cannot understand how they are still around with such slow services.

Anyway, I still have a $200 trial budget to spend on Azure :slightly_smiling_face:

Sounds like a lie; that’s definitely not SSD performance (and my SSD is ~5 years old, operating over SATA), lol. ^.^;

1 Like

NVMe can make a difference:

As SSDs become more common, you’ll also hear more about Non-Volatile Memory Express, aka NVM Express, or more commonly—NVMe. NVMe is a communications interface/protocol developed specially for SSDs by a consortium of vendors including Intel, Samsung, Sandisk, Dell, and Seagate.

Unlike SCSI and SATA, NVMe is designed to take advantage of the unique properties of pipeline-rich, random-access, memory-based storage. The spec also reflects improvements in methods to lower data latency since SATA and AHCI were introduced.

Advances include requiring only a single message for 4KB transfers as opposed to two, and the ability to process multiple queues—a whopping 65,536 of them—instead of only one. That’s going to speed things up for servers processing lots of simultaneous disk I/O requests, though it’ll be of less benefit to consumer PCs.

Source: https://www.pcworld.com/article/2899351/storage/everything-you-need-to-know-about-nvme.html
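
If you want to see what interface and media type a Linux guest reports for its disks, lsblk can show the transport and rotational flag (with the caveat that inside a VM these reflect whatever virtual device the hypervisor exposes, not necessarily the physical hardware underneath):

```
# NAME: device, TRAN: transport (sata/nvme/...), ROTA: 1 = rotational (platter)
lsblk -d -o NAME,TRAN,ROTA
```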

2 Likes

Yep, my wife has that connection on her boot SSD; it’s very nice. ^.^

2 Likes

Half-sarcasm: Using Azure is really great when management tells you to use it because Microsoft has paid for it for you.

2 Likes

There’s some good information published on Azure disk performance. I think the most relevant portion is probably the bit about queue depths, here: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage-performance#queue-depth

And there’s a page here on how to use popular disk benchmarking tools to run tests with the necessary queue depth to get high throughput: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disks-benchmarks

In my experience, as long as you achieve an appropriate queue depth (typically through multi-threading, but not necessarily), Azure Disks perform exactly as specified. [Disclaimer: I work for Microsoft in Azure Storage, but not on the Disks team.]
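
That queue-depth point lines up with the numbers above: at queue depth 1, an 8 ms average latency caps you at roughly 1 / 0.008 s ≈ 125 IOPS, which is exactly what the Azure ioping run reported. A quick way to retest at a higher queue depth, in the spirit of the linked benchmarking docs (a sketch; the target path, size, and depth are example values to adjust for your setup):

```
# Random 4 KiB reads at queue depth 64, bypassing the page cache
# (the filename path is just an example target on the data disk)
fio --name=randread --rw=randread --bs=4k --iodepth=64 \
    --direct=1 --ioengine=libaio --size=1G --runtime=30 \
    --time_based --filename=/mnt/azure-disk/testfile
```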