SPECstorage(TM) Solution 2020_eda_blended Result
Microsoft and NetApp Inc. : Azure NetApp Files large volume scale
SPECstorage Solution 2020_eda_blended = 10560 Job_Sets (Overall Response Time = 0.64 msec)
===============================================================================
Performance
===========
   Business      Average
    Metric       Latency     Job_Sets     Job_Sets
  (Job_Sets)     (msec)      Ops/Sec       MB/Sec
------------ ------------ ------------ ------------
         704          0.3       316817         5111
        1408          0.3       633634        10223
        2112          0.3       950453        15336
        2816          0.3      1267250        20448
        3520          0.4      1584076        25558
        4224          0.4      1900875        30672
        4928          0.4      2217683        35784
        5632          0.4      2534490        40893
        6336          0.4      2851300        46005
        7040          0.5      3168158        51118
        7744          0.5      3484890        56230
        8448          0.6      3801696        61344
        9152          0.8      4118539        66453
        9856          1.5      4435382        71570
       10560          3.8      4745453        76487
===============================================================================
Product and Test Information
============================
+---------------------------------------------------------------+
| Azure NetApp Files large volume scale |
+---------------------------------------------------------------+
Tested by              Microsoft and NetApp Inc.
Hardware Available     May 2024
Software Available     May 2024
Date Tested            August 2025
License Number         33
Licensee Locations     San Jose, CA USA

Azure NetApp Files is an Azure-native, first-party, enterprise-class,
high-performance file storage service. It provides volumes as a service, which
you can create within a NetApp account and a capacity pool and share to
clients using SMB and NFS. You can also select service and performance levels
and manage data protection. You can create and manage high-performance, highly
available, and scalable file shares by using the same protocols and tools that
you're familiar with and rely on on-premises.
Solution Under Test Bill of Materials
=====================================
Item
 No   Qty  Type     Vendor     Model/Name       Description
----  ---- -------- ---------- ---------------  ---------------------------------
 1    6    Storage  Microsoft  Azure volume     Azure NetApp Files large volumes
                                                can support from 50 TiB to 2 PiB
                                                in size, with a maximum
                                                throughput of up to 12800 MiB/s.
                                                Volumes can be resized up or
                                                down on demand, and throughput
                                                can be adjusted automatically
                                                (based on volume size) or
                                                manually, depending on the
                                                capacity pool QoS type.
 2    60   Azure    Microsoft  Standard_D32_v5  Red Hat Enterprise Linux running
           Virtual                              on Azure D32s_v5 virtual
           Machine                              machines (32 vCPU, 128 GB
                                                memory, 16 Gbps networking). The
                                                Dsv5-series virtual machines
                                                offer a combination of vCPUs and
                                                memory to meet the requirements
                                                associated with most enterprise
                                                workloads.
Configuration Diagrams
======================
1) storage2020-20250929-00142.config1.png (see SPECstorage Solution 2020 results webpage)
Component Software
==================
Item                               Name and
 No   Component  Type              Version                        Description
----  ---------  ----------------  -----------------------------  -------------------------------
 1    RHEL 9.5   Operating System  RHEL 9.5 (Kernel               Operating System (OS) for the
                                   5.14.0-503.38.1.el9_5.x86_64)  workload clients
Hardware Configuration and Tuning - Virtual
===========================================
+----------------------------------------------------------------------+
| Client Network Settings |
+----------------------------------------------------------------------+
Parameter Name Value Description
--------------- --------------- ----------------------------------------
Accelerated     Enabled         Accelerated Networking enables single
Networking                      root I/O virtualization (SR-IOV) on
                                supported virtual machine (VM) types.
+----------------------------------------------------------------------+
| Storage Network Settings |
+----------------------------------------------------------------------+
Parameter Name Value Description
--------------- --------------- ----------------------------------------
Network         Standard        Standard network features enable Azure
features                        VNet features such as network security
                                groups, user-defined routes, and others.
Hardware Configuration and Tuning Notes
---------------------------------------
None
Software Configuration and Tuning - Virtual
===========================================
+----------------------------------------------------------------------+
| Clients |
+----------------------------------------------------------------------+
Parameter Name                      Value                     Description
----------------------------------  ------------------------  ----------------------------------------
rsize,wsize                         262144                    NFS mount options for data block size
protocol                            tcp                       NFS mount option for protocol
nfsvers                             3                         NFS mount option for NFS version
nconnect                            8                         NFS mount option for multiple TCP
                                                              connections
actimeo                             600                       NFS mount option to modify the timeouts
                                                              for attribute caching
nocto                               present (boolean)         NFS mount option to turn off
                                                              close-to-open consistency
noatime                             present (boolean)         NFS mount option to turn off access
                                                              time updates
nofile                              102400                    Maximum number of open files per user
nproc                               10240                     Maximum number of processes per user
sunrpc.tcp_slot_table_entries       128                       Sets the number of (TCP) RPC entries to
                                                              pre-allocate for in-flight RPC requests
net.core.wmem_max                   16777216                  Maximum size of the socket send buffer
net.core.rmem_max                   16777216                  Maximum size of the socket receive
                                                              buffer
net.core.wmem_default               1048576                   Default size in bytes of the socket
                                                              send buffer
net.core.rmem_default               1048576                   Default size in bytes of the socket
                                                              receive buffer
net.ipv4.tcp_rmem                   1048576 8388608 33554432  Minimum, default, and maximum size of
                                                              the TCP receive buffer
net.ipv4.tcp_wmem                   1048576 8388608 33554432  Minimum, default, and maximum size of
                                                              the TCP send buffer
net.core.optmem_max                 4194304                   Maximum ancillary buffer size allowed
                                                              per socket
net.core.somaxconn                  65535                     Maximum TCP backlog an application can
                                                              request
net.ipv4.tcp_mem                    4096 89600 8388608        Maximum memory in 4096-byte pages
                                                              across all TCP applications; contains
                                                              minimum, pressure, and maximum
net.ipv4.tcp_window_scaling         1                         Enable TCP window scaling
net.ipv4.tcp_timestamps             0                         Turn off timestamps to reduce
                                                              performance spikes related to
                                                              timestamp generation
net.ipv4.tcp_no_metrics_save        1                         Prevent TCP from caching connection
                                                              metrics on closing connections
net.ipv4.route.flush                1                         Flush the routing cache
net.ipv4.tcp_low_latency            1                         Allows TCP to prefer lower latency
                                                              over maximizing network throughput
net.ipv4.ip_local_port_range        1024 65000                Defines the local port range used by
                                                              TCP and UDP traffic to choose the
                                                              local port
net.ipv4.tcp_slow_start_after_idle  0                         Congestion window will not be timed
                                                              out after an idle period
net.core.netdev_max_backlog         300000                    Maximum number of packets queued on
                                                              the input side when the interface
                                                              receives packets faster than the
                                                              kernel can process them
net.ipv4.tcp_sack                   0                         Disable TCP selective acknowledgements
net.ipv4.tcp_dsack                  0                         Disable duplicate SACKs
net.ipv4.tcp_fack                   0                         Disable forward acknowledgement
vm.dirty_expire_centisecs           30000                     Defines when dirty data is old enough
                                                              to be eligible for writeout by the
                                                              kernel flusher threads; unit is
                                                              hundredths of a second
vm.dirty_writeback_centisecs        30000                     Defines the interval between periodic
                                                              wake-ups of the kernel threads that
                                                              write dirty data to disk; unit is
                                                              hundredths of a second
Software Configuration and Tuning Notes
---------------------------------------
The client parameters shown above were tuned to optimize data transfer and
minimize overhead for communication between the clients and storage over Azure
virtual networking.
Service SLA Notes
-----------------
Service Level Agreement (SLA) for Azure NetApp Files
Storage and Filesystems
=======================
Item                                                                Stable
 No  Description                          Data Protection            Storage  Qty
---- ------------------------------------ -------------------------- -------- -----
 1   Azure NetApp Files large volume,     Azure NetApp Files         Stable   6
     Flexible Service Level, 50 TiB,      Flexible, Standard,
     12800 MiB/s                          Premium and Ultra service
                                          levels are built on a
                                          fault-tolerant bare-metal
                                          fleet powered by ONTAP,
                                          delivering enterprise-
                                          grade resilience, and use
                                          RAID-DP (Double Parity
                                          RAID) to safeguard data
                                          against disk failures.
                                          This mechanism distributes
                                          parity across multiple
                                          disks, enabling seamless
                                          data recovery even if two
                                          disks fail simultaneously.
                                          RAID-DP has a long-
                                          standing presence in the
                                          enterprise storage
                                          industry and is recognized
                                          for its proven reliability
                                          and fault tolerance.

Number of Filesystems    6
Total Capacity           300 TiB
Filesystem Type          Azure NetApp Files large volume
Filesystem Creation Notes
-------------------------
Large volumes were created via the public Azure API using the Azure CLI tool.
The creation commands are documented here:
https://learn.microsoft.com/en-us/cli/azure/netappfiles/volume?view=azure-cli-latest#az-netappfiles-volume-create

Creating the Azure NetApp Files account:

  az netappfiles account create --account-name [account-name] \
    --resource-group [resource-group] --location [location]

Creating the Azure NetApp Files capacity pool:

  az netappfiles pool create --account-name [account-name] \
    --resource-group [resource-group] --location [location] \
    --pool-name [pool-name] --service-level Flexible \
    --size 54975581388800 --CustomThroughputMibps 12800

Creating the Azure NetApp Files volume:

  az netappfiles volume create --resource-group [resource-group] \
    --account-name [account-name] --location [location] \
    --pool-name [pool-name] --name [volume-name] \
    --usage-threshold 51200 --file-path [mount-point] \
    --protocol-types NFSv3 --vnet [vnet-id] --zones 1 \
    --throughput-mibps 12800
Storage and Filesystem Notes
----------------------------
n/a
Transport Configuration - Virtual
=================================
Item                  Number of
 No   Transport Type  Ports Used  Notes
----  --------------- ----------  -----------------------------------------------
 1    16 Gbps virtual 60          Each Linux virtual machine has a single 16 Gbps
      NIC                         network adapter with Accelerated Networking
                                  enabled.
Transport Configuration Notes
-----------------------------
The bandwidth allocated to an Azure virtual machine limits egress (outbound)
traffic from that virtual machine. A virtual machine's ingress bandwidth rate
may exceed 16 Gbps depending on other resources available to the virtual
machine
(https://learn.microsoft.com/en-us/azure/virtual-network/virtual-machine-network-throughput).
Switches - Virtual
==================
                                           Total  Used
Item                                       Port   Port
 No   Switch Name           Switch Type    Count  Count  Notes
----  --------------------  -------------  -----  -----  ------------------------
 1    Azure Virtual         Virtual        11     11     Each Azure virtual
      Network - Qatar       Network                      network had one
      Central AZ3                                        connection for the
                                                         Azure NetApp Files
                                                         storage endpoint and
                                                         ten connections (one
                                                         per RHEL client). Azure
                                                         virtual networks allow
                                                         up to 65,536 network
                                                         interface cards and
                                                         private IP addresses
                                                         per virtual network.
                                                         This Azure VNet was
                                                         peered to the VNet in
                                                         Canada Central AZ1.
 2    Azure Virtual         Virtual        11     11     This Azure VNet was
      Network - South       Network                      peered to the VNet in
      Africa North AZ2                                   Canada Central AZ1.
 3    Azure Virtual         Virtual        11     11     This Azure VNet was
      Network - South       Network                      peered to the VNet in
      Africa North AZ3                                   Canada Central AZ1.
 4    Azure Virtual         Virtual        11     11     This Azure VNet was
      Network - Germany     Network                      peered to the VNet in
      West Central AZ1                                   Canada Central AZ1.
 5    Azure Virtual         Virtual        11     11     This Azure VNet was
      Network - Canada      Network                      peered to all other
      Central AZ1                                        VNets to allow Prime
                                                         client communication
                                                         between itself and the
                                                         workload clients.
 6    Azure Virtual         Virtual        11     11     This Azure VNet was
      Network - Canada      Network                      peered to the VNet in
      Central AZ2                                        Canada Central AZ1.
Processing Elements - Virtual
=============================
Item
 No   Qty   Type  Location     Description                Processing Function
----  ----  ----  -----------  -------------------------  -------------------
 1    1920  vCPU  Azure Cloud  Intel(R) Xeon(R) Platinum  Client Workload
                               8370C CPU @ 2.80GHz (32    Generator
                               cores allocated to each
                               VM)
Processing Element Notes
------------------------
n/a
Memory - Virtual
================
                           Size in  Number of
Description                GiB      Instances  Nonvolatile  Total GiB
-------------------------  -------  ---------  -----------  ---------
Client Workload Generator  128      60         V            7680

Grand Total Memory Gibibytes                                7680
Memory Notes
------------
None
Stable Storage
==============
Azure NetApp Files uses the non-volatile, battery-backed memory of two
independent nodes as a write cache prior to write acknowledgement. This
protects the filesystem from any single point of failure until the data is
destaged to disk. In the event of an abrupt failure, pending data in the
non-volatile battery-backed memory is replayed to disk upon restoration.
Solution Under Test Configuration Notes
=======================================
Clients accessed one storage endpoint for the Azure NetApp Files large volume
within the same availability zone/virtual network.

Unlike a general-purpose operating system, Azure NetApp Files does not provide
mechanisms for customers to run third-party code
(https://learn.microsoft.com/en-us/security/benchmark/azure/baselines/azure-netapp-files-security-baseline?toc=/azure/azure-netapp-files/TOC.json#security-profile).
Azure Resource Manager allows only an allow-listed set of operations to be
executed via the Azure APIs
(https://learn.microsoft.com/en-us/azure/azure-netapp-files/control-plane-security).
Underlying Azure infrastructure was patched for Spectre/Meltdown on or prior
to January 2018
(https://azure.microsoft.com/en-us/blog/securing-azure-customers-from-cpu-vulnerability/
and https://learn.microsoft.com/en-us/azure/virtual-machines/mitigate-se).
Other Solution Notes
====================
None
Dataflow
========
60 clients were used to generate the workload: one client acted as both the
Prime client and a workload client, driving itself and the 59 other workload
clients. Each workload client used one 16 Gbps virtual network adapter,
through a single VNet connected to one Azure NetApp Files endpoint. Each
client mounted one ANF large volume as an NFSv3 filesystem.

The Prime client communicated with workload clients outside its own virtual
network using virtual network peering. Client-to-storage traffic was contained
within each virtual network created per region/availability zone.
Other Notes
===========
There is one mount per client. Example mount configuration from one client is
shown below.

/etc/fstab entry:

  10.254.121.4:/canada-az1-vol /mnt/eda nfs hard,proto=tcp,vers=3,rsize=262144,wsize=262144,nconnect=8,nocto,noatime,actimeo=600 0 0

Output of "mount | grep eda":

  10.254.121.4:/canada-az1-vol on /mnt/eda type nfs (rw,noatime,vers=3,rsize=262144,wsize=262144,namlen=255,acregmin=600,acregmax=600,acdirmin=600,acdirmax=600,hard,nocto,proto=tcp,nconnect=8,timeo=600,retrans=2,sec=sys,mountaddr=10.254.121.4,mountvers=3,mountport=635,mountproto=tcp,local_lock=none,addr=10.254.121.4)
Other Report Notes
==================
None
===============================================================================
Generated on Mon Oct 6 12:41:33 2025 by SpecReport
Copyright (C) 2016-2025 Standard Performance Evaluation Corporation