SPECstorage(TM) Solution 2020_eda_blended Result
Microsoft and NetApp Inc. : Azure NetApp Files large volume
SPECstorage Solution 2020_eda_blended = 1760 Job_Sets (Overall Response Time = 0.48 msec)
===============================================================================
Performance
===========
  Business      Average
   Metric       Latency      Job_Sets     Job_Sets
 (Job_Sets)     (msec)       Ops/Sec       MB/Sec
------------ ------------ ------------ ------------
         110          0.3        49502          798
         220          0.3        99005         1597
         330          0.3       148508         2396
         440          0.3       198011         3195
         550          0.3       247513         3993
         660          0.4       297016         4792
         770          0.4       346519         5591
         880          0.4       396022         6390
         990          0.4       445524         7188
        1100          0.4       495022         7987
        1210          0.4       544530         8785
        1320          0.4       594033         9586
        1430          0.5       643536        10383
        1540          0.7       693038        11181
        1650          1.0       742541        11980
        1760          2.3       792046        12780
===============================================================================
Product and Test Information
============================
+---------------------------------------------------------------+
| Azure NetApp Files large volume |
+---------------------------------------------------------------+
Tested by Microsoft and NetApp Inc.
Hardware Available May 2024
Software Available May 2024
Date Tested August 2025
License Number 33
Licensee Locations San Jose, CA USA
Azure NetApp Files is an Azure-native, first-party, enterprise-class,
high-performance file storage service. It provides volumes as a service, which
you can create within a NetApp account and a capacity pool and share to
clients using SMB and NFS. You can also select service and performance levels
and manage data protection. You can create and manage high-performance, highly
available, and scalable file shares using the same protocols and tools that
you are familiar with and rely on on-premises.
Solution Under Test Bill of Materials
=====================================
Item
No Qty Type Vendor Model/Name Description
---- ---- ---------- ---------- ---------- -----------------------------------
1 1 Storage Microsoft Azure Azure NetApp Files large volumes can
volume support from 50 TiB to 2 PiB in
size, with a maximum throughput of
up to 12800 MiB/s. Volumes can be
resized up or down on demand, and
throughput can be adjusted
automatically (based on volume
size) or manually depending on the
capacity pool
QoS type.
2 10 Azure Microsoft Standard_D Red Hat Enterprise Linux running on
Virtual 32_v5 Azure D32s_v5 Virtual Machines (32
Machine vCPU, 128 GB Memory, 16 Gbps
Networking). The Dsv5-series
virtual machines offer a
combination of vCPUs and memory to
meet the requirements associated
with most enterprise workloads.
Configuration Diagrams
======================
1) storage2020-20250929-00143.config1.png (see SPECstorage Solution 2020 results webpage)
Component Software
==================
Item Name and
No Component Type Version Description
---- ------------ ------------ ------------ -----------------------------------
1    RHEL 9.5     Operating    RHEL 9.5     Operating System (OS) for the
                  System       (Kernel      workload clients
                               5.14.0-
                               503.38.1.
                               el9_5.
                               x86_64)
Hardware Configuration and Tuning - Virtual
===========================================
+----------------------------------------------------------------------+
| Client Network Settings |
+----------------------------------------------------------------------+
Parameter Name Value Description
--------------- --------------- ----------------------------------------
Accelerated Enabled Accelerated Networking enables single
Networking root I/O virtualization (SR-IOV) on
supported virtual machine (VM) types
+----------------------------------------------------------------------+
| Storage Network Settings |
+----------------------------------------------------------------------+
Parameter Name Value Description
--------------- --------------- ----------------------------------------
Network Standard Standard network features allow Azure
features VNet features such as network security
groups, user-defined routes, and others.
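As a hedged illustration of how the Accelerated Networking setting above is typically applied, the Azure CLI can enable it on an existing VM NIC. The resource-group and NIC names below are placeholders, not values from this test:

```shell
# Hypothetical sketch (resource names are placeholders): enable Accelerated
# Networking (SR-IOV) on an existing VM NIC via the Azure CLI. Depending on
# the VM size, the VM may need to be deallocated for the change to apply.
az network nic update \
    --resource-group [resource-group] \
    --name [nic-name] \
    --accelerated-networking true
```

The same setting can also be requested at VM creation time with `az vm create --accelerated-networking true`.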
Hardware Configuration and Tuning Notes
---------------------------------------
None
Software Configuration and Tuning - Virtual
===========================================
+----------------------------------------------------------------------+
| Clients |
+----------------------------------------------------------------------+
Parameter Name Value Description
--------------- --------------- ----------------------------------------
rsize,wsize 262144 NFS mount options for data block size
protocol tcp NFS mount options for protocol
nfsvers 3 NFS mount options for NFS version
nconnect 8 NFS mount options for multiple TCP
connections
actimeo 600 NFS mount option to modify the timeouts
for attribute caching
nocto present NFS mount option to turn off close-to-
(boolean) open consistency
noatime present NFS mount option to turn off access time
(boolean) updates
nofile 102400 Maximum number of open files per user
nproc 10240 Maximum number of processes per user
sunrpc.tcp_slot 128 Sets the number of (TCP) RPC entries to
_table_entries pre-allocate for in-flight RPC requests
net.core.wmem_m 16777216 Maximum size of the socket send buffer
ax
net.core.rmem_m 16777216 Maximum size of the socket receive
ax buffer
net.core.wmem_d 1048576 Default setting in bytes of the socket
efault send buffer
net.core.rmem_d 1048576 Default setting in bytes of the socket
efault receive buffer
net.ipv4.tcp_rm 1048576 8388608 Minimum, default and maximum size of the
em 33554432 TCP receive buffer
net.ipv4.tcp_wm 1048576 8388608 Minimum, default and maximum size of the
em 33554432 TCP send buffer
net.core.optmem 4194304 Maximum ancillary buffer size allowed
_max per socket
net.core.somaxc 65535 Maximum TCP backlog an application can
onn request
net.ipv4.tcp_me 4096 89600 Memory limits in 4096-byte pages across
m 8388608 all TCP sockets. Contains minimum,
pressure, and maximum.
net.ipv4.tcp_wi 1 Enable TCP window scaling
ndow_scaling
net.ipv4.tcp_ti 0 Turn off timestamps to reduce
mestamps performance spikes related to timestamp
generation
net.ipv4.tcp_no 1 Prevent TCP from caching connection
_metrics_save metrics on closing connections
net.ipv4.route. 1 Flush the routing cache
flush
net.ipv4.tcp_lo 1 Allows TCP to make decisions to prefer
w_latency lower latency instead of maximizing
network throughput
net.ipv4.ip_loc 1024 65000 Defines the local port range that is
al_port_range used by TCP and UDP traffic to choose
the local port.
net.ipv4.tcp_sl 0 Congestion window will not be timed out
ow_start_after_ after an idle period
idle
net.core.netdev 300000 Sets maximum number of packets, queued
_max_backlog on the input side, when the interface
receives packets faster than kernel can
process
net.ipv4.tcp_sa 0 Disable TCP selective acknowledgements
ck
net.ipv4.tcp_ds 0 Disable duplicate SACKs
ack
net.ipv4.tcp_fa 0 Disable forward acknowledgement
ck
vm.dirty_expire 30000 Defines when dirty data is old enough to
_centisecs be eligible for writeout by the kernel
flusher threads. Unit is 100ths of a
second.
vm.dirty_writeb 30000 Defines a time interval between periodic
ack_centisecs wake-ups of the kernel threads
responsible for writing dirty data to
hard-disk. Unit is 100ths of a second.
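As one hedged way to apply the kernel tunings listed above, the sysctl settings can be collected into a drop-in configuration file. The file name and path below are illustrative, not taken from this test; the values are exactly those in the table:

```shell
# Hypothetical sketch: persist the sysctl tunings above in a drop-in file.
# On a real client this would live in /etc/sysctl.d/ and be applied with
# `sudo sysctl --system`; net.ipv4.route.flush is a one-shot action and is
# therefore not persisted here. nofile/nproc limits belong in
# /etc/security/limits.conf, not sysctl.
conf="${TMPDIR:-/tmp}/90-specstorage-tuning.conf"
cat > "$conf" <<'EOF'
sunrpc.tcp_slot_table_entries = 128
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.core.wmem_default = 1048576
net.core.rmem_default = 1048576
net.ipv4.tcp_rmem = 1048576 8388608 33554432
net.ipv4.tcp_wmem = 1048576 8388608 33554432
net.core.optmem_max = 4194304
net.core.somaxconn = 65535
net.ipv4.tcp_mem = 4096 89600 8388608
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_low_latency = 1
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_slow_start_after_idle = 0
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_sack = 0
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_fack = 0
vm.dirty_expire_centisecs = 30000
vm.dirty_writeback_centisecs = 30000
EOF
echo "wrote $conf"
```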
Software Configuration and Tuning Notes
---------------------------------------
The client parameters shown above were tuned to optimize data transfer and
minimize overhead for communication between the clients and storage over Azure
Virtual Networking.
Service SLA Notes
-----------------
Service Level Agreement (SLA) for Azure NetApp Files
Storage and Filesystems
=======================
Item Stable
No Description Data Protection Storage Qty
---- ------------------------------------- ------------------ -------- -----
1 Azure NetApp Files large volume, Azure NetApp Files Stable 1
Flexible Service Level, 50 TiB, 12800 Flexible, Storage
MiB/s Standard, Premium
and Ultra service
levels are built
on a fault-
tolerant bare-
metal fleet
powered by ONTAP,
delivering
enterprise-grade
resilience, and
use RAID-DP
(Double Parity
RAID) to safeguard
data against disk
failures. This
mechanism
distributes parity
across multiple
disks, enabling
seamless data
recovery even if
two disks fail
simultaneously.
RAID-DP has a
long-standing
presence in the
enterprise storage
industry and is
recognized for its
proven reliability
and fault
tolerance.
Number of Filesystems 1
Total Capacity 50 TiB
Filesystem Type Azure NetApp Files large volume
Filesystem Creation Notes
-------------------------
Large volumes were created via the public Azure API using the Azure CLI tool.
Creation commands are documented here: https://learn.microsoft.com/en-us/cli/azure/netappfiles/volume?view=azure-cli-latest#az-netappfiles-volume-create
Creating the Azure NetApp Files account:
az netappfiles account create --account-name [account-name]
--resource-group [resource-group] --location [location]
Creating the Azure NetApp Files capacity pool:
az netappfiles pool create --account-name [account-name]
--resource-group [resource-group] --location [location] --pool-name [pool-name]
--service-level Flexible --size 54975581388800
--custom-throughput-mibps 12800
Creating the Azure NetApp Files volume:
az netappfiles volume create --resource-group [resource-group]
--account-name [account-name] --location [location] --pool-name [pool-name]
--name [volume-name] --usage-threshold 51200 --file-path [mount-point]
--protocol-types NFSv3 --vnet [vnet-id] --zones 1 --throughput-mibps 12800
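As a hedged follow-up to the creation commands above, the provisioned throughput of the volume can be read back with the Azure CLI; the `throughputMibps` property name in the JMESPath query is an assumption about the returned volume resource, and the bracketed names are placeholders as in the commands above:

```shell
# Hypothetical verification sketch (resource names are placeholders): read
# back the provisioned throughput of the large volume after creation.
az netappfiles volume show \
    --resource-group [resource-group] \
    --account-name [account-name] \
    --pool-name [pool-name] \
    --name [volume-name] \
    --query "throughputMibps"
```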
Storage and Filesystem Notes
----------------------------
n/a
Transport Configuration - Virtual
=================================
Item Number of
No Transport Type Ports Used Notes
---- --------------- ---------- -----------------------------------------------
1 16 Gbps Virtual 10 Each Linux Virtual Machine has a single 16 Gbps
NIC Network Adapter with Accelerated Networking
enabled.
Transport Configuration Notes
-----------------------------
An Azure virtual machine's allocated bandwidth limits egress (outbound) traffic
from the virtual machine. Ingress bandwidth rates may exceed 16 Gbps depending
on other resources available to the virtual machine
(https://learn.microsoft.com/en-us/azure/virtual-network/virtual-machine-network-throughput).
Switches - Virtual
==================
Total Used
Item Port Port
No Switch Name Switch Type Count Count Notes
---- -------------------- --------------- ------ ----- ------------------------
1 Azure Virtual Virtual Network 11 11 The Azure virtual
Network network had 1 connection
for the Azure NetApp
Files storage endpoint
and 10 (1 per RHEL
client). Azure virtual
networks allow up to
65,536 Network interface
cards and Private IP
addresses per virtual
network
Processing Elements - Virtual
=============================
Item
No Qty Type Location Description Processing Function
---- ---- -------- -------------- ------------------------- -------------------
1 320 vCPU Azure Cloud Intel(R) Xeon(R) Platinum Client Workload
8370C CPU @ 2.80GHz (32 Generator
cores allocated to each
VM)
Processing Element Notes
------------------------
n/a
Memory - Virtual
================
Size in Number of
Description GiB Instances Nonvolatile Total GiB
------------------------- ---------- ---------- ------------ ------------
Client Workload Generator 128 10 V 1280
Grand Total Memory Gibibytes 1280
Memory Notes
------------
None
Stable Storage
==============
Azure NetApp Files utilizes the non-volatile battery-backed memory of two
independent nodes as a write cache prior to write acknowledgment. This
protects the filesystem from any single point of failure until the data is
de-staged to disk. In the event of an abrupt failure, pending data in the
non-volatile battery-backed memory is replayed to disk upon restoration.
Solution Under Test Configuration Notes
=======================================
All clients accessed the Azure NetApp Files large volume over a single storage
endpoint.
Unlike a general-purpose operating system, Azure NetApp Files
does not provide mechanisms for customers to run third-party code (https://learn.microsoft.com/en-us/security/benchmark/azure/baselines/azure-netapp-files-security-baseline?toc=/azure/azure-netapp-files/TOC.json#security-profile).
Azure Resource Manager allows only an allow-listed set of operations to be
executed via the Azure APIs (https://learn.microsoft.com/en-us/azure/azure-netapp-files/control-plane-security).
Underlying Azure infrastructure was patched for Spectre/Meltdown on or prior
to January 2018. (https://azure.microsoft.com/en-us/blog/securing-azure-customers-from-cpu-vulnerability/
and https://learn.microsoft.com/en-us/azure/virtual-machines/mitigate-se).
Other Solution Notes
====================
None
Dataflow
========
Please reference the configuration diagram. 10 clients were used to generate
the workload: 1 client acted as both the Prime Client and a workload client,
alongside the 9 other workload clients. Each client used one 16 Gbps virtual
network adapter, connected through a single VNet to one Azure NetApp Files
endpoint. The clients mounted the ANF large volume as an NFSv3 filesystem.
Other Notes
===========
There is 1 mount per client. Example mount entries from one client are shown
below. /etc/fstab entry:
10.254.121.4:/canada-az1-vol /mnt/eda nfs
hard,proto=tcp,vers=3,rsize=262144,wsize=262144,nconnect=8,nocto,noatime,actimeo=600
0 0
mount | grep eda
10.254.121.4:/canada-az1-vol on
/mnt/eda type nfs
(rw,noatime,vers=3,rsize=262144,wsize=262144,namlen=255,acregmin=600,acregmax=600,acdirmin=600,acdirmax=600,hard,nocto,proto=tcp,nconnect=8,timeo=600,retrans=2,sec=sys,mountaddr=10.254.121.4,mountvers=3,mountport=635,mountproto=tcp,local_lock=none,addr=10.254.121.4)
Other Report Notes
==================
None
===============================================================================
Generated on Mon Oct 6 12:44:23 2025 by SpecReport
Copyright (C) 2016-2025 Standard Performance Evaluation Corporation