SPEC SFS(R)2014_swbuild Result

NetApp, Inc.              :  NetApp 12-node AFF A800 with FlexGroup
SPEC SFS2014_swbuild      =  6200 Builds (Overall Response Time = 0.83 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency       Builds        Builds
   (Builds)       (msec)       Ops/Sec       MB/Sec
------------  ------------  ------------  ------------
         310           0.3        155007          2005
         620           0.3        310014          4011
         930           0.3        465021          6017
        1240           0.3        620026          8025
        1550           0.3        775035         10023
        1860           0.3        930041         12030
        2170           0.7       1085048         14038
        2480           0.4       1240056         16042
        2790           0.5       1395058         18047
        3100           0.6       1550069         20052
        3410           0.8       1705076         22059
        3720           0.8       1860049         24070
        4030           0.9       2015062         26063
        4340           0.9       2170081         28078
        4650           1.1       2325089         30087
        4960           1.2       2480100         32072
        5270           1.4       2635089         34089
        5580           1.6       2790099         36101
        5890           2.0       2945042         38103
        6200           2.8       3100060         40117

===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
|             NetApp 12-node AFF A800 with FlexGroup             |
+---------------------------------------------------------------+
Tested by                     NetApp, Inc.
Hardware Available            May 2018
Software Available            January 2019
Date Tested                   November 2018
License Number                33
Licensee Locations            Sunnyvale, CA USA

With the industry's first support for NVMe inside and out, combined with
NetApp ONTAP data management software, AFF A800 all-flash systems accelerate,
manage, and protect your business-critical data with the industry's highest
performance, superior flexibility, and best-in-class data management and
cloud integration. By combining low-latency NVMe solid-state drives (SSDs)
with the first NVMe over Fibre Channel (NVMe/FC) connectivity, the AFF A800
delivers ultra-low latency and massive throughput, and scales up to 24 nodes
in a cluster. The FlexGroup feature of ONTAP 9 enables massive scaling in a
single namespace to over 20PB with over 400 billion files while spreading
performance evenly across the cluster. This makes the AFF A800 a strong fit
for engineering and design applications as well as DevOps; it is particularly
well suited to chip development and software builds, which are typically
high-file-count environments with heavy metadata traffic.

Solution Under Test Bill of Materials
=====================================

Item
 No    Qty  Type          Vendor      Model/Name        Description
----  ----  ------------  ----------  ----------------  -------------------------------
  1      6  Storage       NetApp      AFF A800 Flash    A single NetApp AFF A800 system
            System                    System (HA Pair,  is a single chassis with 2
                                      Active-Active     controllers and 48 drive slots.
                                      Dual Controller)  Each set of 2 controllers
                                                        comprises a High-Availability
                                                        (HA) Pair. The words
                                                        "controller" and "node" are
                                                        used interchangeably in this
                                                        document. Each AFF A800 HA
                                                        Pair includes 1280GB of ECC
                                                        memory, 128GB of NVRAM, 8 PCIe
                                                        expansion slots, and a set of
                                                        included I/O ports:
                                                        * 4x 40/100 GbE ports, in the
                                                          slots numbered 1 in the
                                                          controllers, configured as
                                                          100 GbE, used for cluster
                                                          interconnect and HA
                                                          connections;
                                                        * 4x 40/100 GbE ports, in the
                                                          mezzanine location in the
                                                          controllers, configured as
                                                          100 GbE, used for cluster
                                                          interconnect and HA
                                                          connections.
                                                        Includes the Premium Bundle,
                                                        which provides All Protocols,
                                                        SnapRestore, SnapMirror,
                                                        SnapVault, FlexClone,
                                                        SnapManager Suite, Single
                                                        Mailbox Recovery (SMBR), and
                                                        SnapCenter Foundation. Only
                                                        the NFS protocol license,
                                                        which is also available in the
                                                        BASE bundle, was active during
                                                        the test.
  2     12  Network       NetApp      2-Port 40/100     1 card in Slot 3 of each
            Interface                 GbE QSFP28        controller; 2 cards per HA
            Card                      X1146A            pair; each card has 2 ports;
                                                        used for data connections
  3    288  Solid-State   NetApp      3.84TB NVMe SSD   NVMe Solid-State Drives (NVMe
            Drive                     X4002A            SSDs) installed in the chassis,
                                                        48 per HA pair
  4     72  Network       Intel       XXV710-DA2        2-port 25 GbE NIC, one
            Interface                                   installed per client. Only one
            Card                                        25 GbE port per client was
                                                        active during the test.
  5      1  Switch        Cisco       Cisco Nexus       Used for Ethernet data
                                      C9516             connections between clients
                                                        and storage systems. A large
                                                        switch was in use because the
                                                        test was performed in a large
                                                        shared-infrastructure lab.
                                                        Only the ports used for this
                                                        test are listed in this
                                                        report. See the 'Transport
                                                        Configuration - Physical'
                                                        section for connectivity
                                                        details.
  6     12  Linecard      Cisco       N9K-X97160YC-EX   Cisco Nexus 9500 48p 1/10/25G
                                                        SFP+ plus 4p 100G QSFP
                                                        cloud-scale line card. The
                                                        clients connected to the
                                                        10/25G ports on these cards.
  7      4  Linecard      Cisco       N9K-X9732C-EX     Cisco Nexus 9500 32x100G
                                                        Ethernet Module. The AFF A800
                                                        connected to these cards.
  8     24  Fibre         Emulex      Quad Port 32 Gb   Located in Slots 2 and 5 of
            Channel                   FC X1135A         each controller; there were 2
            Interface                                   such cards in each controller;
            Card                                        they were not used for this
                                                        test. They were in place
                                                        because this is a large
                                                        shared-infrastructure lab
                                                        environment; no I/O was
                                                        directed through these cards
                                                        during this test.
  9      1  Switch        Cisco       Cisco Nexus       Used for 100 GbE cluster
                                      C3232C            interconnections
 10     72  Client        SuperMicro  Custom build      Clients are a custom build
                                      from Superserver  ordered from Supermicro. The
                                      5018R-W           base build was a Superserver
                                                        5018R-W. The custom build
                                                        includes a X10SRW-F base
                                                        motherboard, a CSE-815TQ-600WB
                                                        chassis, a RSC-R1UW-2E16-O-P
                                                        Dual PCI-Express 3.0 x16
                                                        Riser, a RSC-R1UW-E8R-O-P
                                                        Single PCI-Express 3.0 x8
                                                        Riser Card, an Intel Xeon
                                                        E5-1630 v4 quad-core
                                                        (Broadwell) processor, and 4x
                                                        DDR4-2400 16GB ECC/REG DIMMs.
                                                        1 was used as the Prime
                                                        Client; 71 were used to
                                                        generate the workload.
 11  10944  Software      NetApp      OS-ONTAP1-CAP1-   Capacity-based license, per
            Enablement/               PREM-2P           0.1TB
            License

Configuration Diagrams
======================

1) sfs2014-20190125-00061.config1.jpg (see SPEC SFS2014 results webpage)

Component Software
==================

Item               Name and
 No   Component    Type          Version           Description
----  -----------  ------------  ----------------  --------------------------------
  1   Linux        OS            CentOS Linux      Operating System (OS) for the 72
                                 6.10 (Kernel      clients
                                 4.18.12)
  2   ONTAP        Storage OS    9.5               Storage Operating System
  3   Data Switch  Operating     7.0(3)I6(1)       Cisco switch NX-OS (system
                   System                          software)
  4   Cluster      Operating     7.0(3)I6(1)       Cisco switch NX-OS (system
      Switch       System                          software)

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                                Storage                                |
+----------------------------------------------------------------------+

Parameter Name   Value    Description
---------------  -------  ----------------------------------------
MTU              9000     Jumbo frames configured for cluster
                          interconnect ports
MTU              9000     Jumbo frames configured for data ports

Hardware Configuration and Tuning Notes
---------------------------------------

The NetApp AFF A800 storage controller 100 GbE ports used for cluster
interconnect and HA connections (8 per HA pair) were configured with an MTU
of 9000. The data network was also configured with an MTU of 9000.
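As an illustration of the jumbo-frame settings described above, the following
is a minimal ONTAP CLI sketch. It is not the exact procedure used for this
submission; the broadcast-domain names are assumed placeholders.

    # Set MTU 9000 on the broadcast domains carrying cluster-interconnect
    # and data traffic (the domain named "Data" here is a placeholder).
    network port broadcast-domain modify -ipspace Cluster -broadcast-domain Cluster -mtu 9000
    network port broadcast-domain modify -ipspace Default -broadcast-domain Data -mtu 9000

    # Verify the resulting per-port MTU values.
    network port show -fields mtu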
Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                                Clients                                |
+----------------------------------------------------------------------+

Parameter Name   Value    Description
---------------  -------  ----------------------------------------
rsize, wsize     65536    NFS mount options for data block size
protocol         tcp      NFS mount option for protocol
nfsvers          3        NFS mount option for NFS version
somaxconn        65536    Maximum TCP backlog an application can
                          request

Software Configuration and Tuning Notes
---------------------------------------

The client parameters shown above were tuned for communication between the
clients and the storage controllers over Ethernet, to optimize data transfer
and minimize overhead. An illustrative mount command appears in the sketch
following the Storage and Filesystem Notes below.

Service SLA Notes
-----------------

None

Storage and Filesystems
=======================

Item                                                            Stable
 No   Description                            Data Protection    Storage   Qty
----  -------------------------------------  -----------------  --------  -----
  1   3.84 TB NVMe SSDs used for data and    RAID-DP            Yes        288
      the storage operating system; used to
      build three RAID-DP RAID groups per
      storage controller node in the
      cluster
  2   960GB NVMe M.2 devices, 2 per HA       none               Yes         12
      pair; used as boot media

Number of Filesystems      1
Total Capacity             671.5 TiB
Filesystem Type            NetApp FlexGroup

Filesystem Creation Notes
-------------------------

The single FlexGroup consumed all data volumes from all of the aggregates
across all of the nodes. To support the high number of files required by the
benchmark, the default inode density of the FlexGroup was increased
immediately after creation. At the same time, the FlexGroup's snapshot
reserve percentage was set to zero. Both actions were accomplished with a
single 'volume modify' command using the options '-files 12000000000' and
'-percent-snapshot-space 0'.

Storage and Filesystem Notes
----------------------------

The storage configuration consisted of 6 AFF A800 HA pairs (12 controller
nodes in total). The two controllers in each HA pair were connected in a
storage failover (SFO) configuration. Together, all 12 controllers
(configured as 6 HA pairs, each consisting of 2 controllers, also referred to
as nodes) comprise the tested AFF A800 cluster.

Each storage controller was connected to its own and its partner's NVMe
drives in a multi-path HA configuration. All NVMe SSDs were in active use
during the test.

In addition to the factory-configured RAID group housing its root aggregate,
each storage controller was configured with two 21+2 RAID-DP RAID groups.
There were 2 data aggregates on each node, each of which consumed one of the
node's two 21+2 RAID-DP RAID groups. 8 volumes holding benchmark data were
created within each aggregate. "Root aggregates" hold ONTAP operating
system-related files. Note that spare (unused) drive partitions are not
included in the "Storage and Filesystems" table because they held no data
during the benchmark execution.

A storage virtual machine ("SVM") was created on the cluster, spanning all
storage controller nodes. Within the SVM, a single FlexGroup volume was
created using the two data aggregates on each controller. A FlexGroup volume
is a scale-out NAS single-namespace container that provides high performance
along with automatic load distribution and scalability.
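For readers reconstructing a similar environment, the following is a minimal
command sketch combining the FlexGroup adjustments described above with a
client mount that uses the options listed in the 'Software Configuration and
Tuning' section. It is illustrative only: the SVM, volume, and aggregate
names, the junction path, the volume size, and the LIF address are assumed
placeholders, and long ONTAP commands are wrapped here for readability.

    # ONTAP CLI (storage side): create a FlexGroup across the data aggregates
    # (abbreviated here -- in practice all 24 data aggregates, 2 per node,
    # would be listed), with 8 constituent volumes per aggregate.
    volume create -vserver svm1 -volume fg1
        -aggr-list aggr1_n01,aggr2_n01,aggr1_n02,aggr2_n02
        -aggr-list-multiplier 8 -junction-path /fg1 -size 650TB

    # Raise the inode limit and remove the snapshot reserve in a single
    # 'volume modify', as described in the Filesystem Creation Notes.
    volume modify -vserver svm1 -volume fg1 -files 12000000000
        -percent-snapshot-space 0

    # Linux client side: NFSv3 mount with the tuned block size and protocol
    # options, plus the somaxconn setting from the client tuning table.
    sysctl -w net.core.somaxconn=65536
    mount -t nfs -o nfsvers=3,proto=tcp,rsize=65536,wsize=65536 \
        192.0.2.11:/fg1 /mnt/fg1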
Transport Configuration - Physical
==================================

Item                    Number of
 No   Transport Type    Ports Used  Notes
----  ----------------  ----------  ---------------------------------------------
  1   25 GbE and            96      For the client-to-storage network, the AFF
      100 GbE                       A800 cluster used a total of 24x 100 GbE
                                    connections from storage to the switch,
                                    communicating via NFSv3 over TCP/IP with 72
                                    clients, each of which used one 25 GbE
                                    connection to the switch. MTU=9000 was used
                                    for the data switch ports. The benchmark was
                                    conducted in a large shared-infrastructure
                                    lab; only the ports shown and documented here
                                    were used on the Cisco Nexus C9516 switch for
                                    this benchmark test.
  2   100 GbE               24      The cluster interconnect network is connected
                                    via 100 GbE to a Cisco Nexus C3232C switch,
                                    with 4 connections to each HA pair.

Transport Configuration Notes
-----------------------------

Each NetApp AFF A800 HA pair used 4x 100 GbE ports for data transport
connectivity to clients (through the Cisco Nexus C9516 switch), Item 1 above.
Each of the clients driving the workload used one 25 GbE port for data
transport. All ports on the Item 1 network used MTU=9000. The cluster
interconnect network, Item 2 above, also used MTU=9000. All interfaces
associated with dataflow are visible to all other interfaces associated with
dataflow.

Switches - Physical
===================

                                             Total   Used
Item                                         Port    Port
 No   Switch Name          Switch Type       Count   Count  Notes
----  -------------------  ----------------  ------  -----  ------------------------
  1   Cisco Nexus C9516    25 GbE and          96      96   72 client-side 25 GbE
                           100 GbE Switch                   data connections; 24
                                                             storage-side 100 GbE
                                                             data connections. Only
                                                             the ports on the Cisco
                                                             Nexus C9516 used for the
                                                             solution under test are
                                                             included in the total
                                                             port count.
  2   Cisco Nexus C3232C   100 GbE Switch      32      24   For cluster interconnect

Processing Elements - Physical
==============================

Item
 No    Qty  Type   Location        Description                Processing Function
----  ----  -----  --------------  -------------------------  --------------------
  1     24  CPU    Storage         2.10 GHz Intel Xeon        NFS, TCP/IP, RAID,
                   Controller      Platinum 8160              and storage
                                                              controller functions
  2     72  CPU    Client          3.70 GHz Intel Xeon        NFS client, Linux OS
                                   E5-1630 v4

Processing Element Notes
------------------------

Each of the 12 NetApp AFF A800 storage controllers contains 2 Intel Xeon
Platinum 8160 processors with 24 cores each, running at 2.10 GHz with
hyper-threading disabled. Each client contains 1 Intel Xeon E5-1630 v4
processor with 4 cores at 3.70 GHz, with hyper-threading enabled.

Memory - Physical
=================

                             Size in   Number of
Description                    GiB     Instances  Nonvolatile   Total GiB
--------------------------  --------  ----------  ------------  ------------
Main memory for each NetApp    1280         6          V            7680
AFF A800 HA pair
NVDIMM (NVRAM) memory for       128         6          NV            768
each NetApp AFF A800 HA
pair
Memory for each client; 71       64        72          V            4608
of these drove the workload

Grand Total Memory Gibibytes                                        13056

Memory Notes
------------

Each storage controller has main memory that is used for the operating system
and for caching filesystem data. Each controller also has NVRAM; see "Stable
Storage" for more information.

Stable Storage
==============

The AFF A800 utilizes non-volatile battery-backed memory (NVRAM) for write
caching. When a file-modifying operation is processed by the filesystem
(WAFL), it is written to system memory and journaled into a non-volatile
memory region backed by the NVRAM. This memory region is often referred to as
the WAFL NVLog (non-volatile log).
The NVLog is mirrored between the nodes in an HA pair and protects the
filesystem from any single point of failure (SPOF) until the data is
de-staged to disk via a WAFL consistency point (CP). In the event of an
abrupt failure, data that was committed to the NVLog but had not yet reached
its final destination (disk) is read back from the NVLog and subsequently
written to disk via a CP.

Solution Under Test Configuration Notes
=======================================

All clients accessed the FlexGroup from all of the available network
interfaces.

Unlike a general-purpose operating system, ONTAP does not provide mechanisms
for non-administrative users to run third-party code. Because of this, ONTAP
is not affected by either the Spectre or Meltdown vulnerabilities. The same
is true of all ONTAP variants, including ONTAP running on FAS/AFF hardware as
well as virtualized ONTAP products such as ONTAP Select and ONTAP Cloud. In
addition, FAS/AFF BIOS firmware does not provide a mechanism to run arbitrary
code and thus is not susceptible to either the Spectre or Meltdown attacks.
More information is available from
https://security.netapp.com/advisory/ntap-20180104-0001/. None of the
components used to perform the test were patched with Spectre or Meltdown
patches (CVE-2017-5754, CVE-2017-5753, CVE-2017-5715).

Other Solution Notes
====================

ONTAP storage efficiency techniques, including inline compression and inline
deduplication, are enabled by default and were active during this test.
Standard data protection features, including background RAID and media error
scrubbing, software-validated RAID checksums, and double-disk-failure
protection via double-parity RAID (RAID-DP), were enabled during the test.

WARMUP_TIME was set to a value of 600 seconds.

Dataflow
========

Please reference the configuration diagram. 72 clients were used to generate
the workload; 1 client acted as the Prime Client and controlled the other 71
clients. Each client used one 25 GbE connection, through a Cisco Nexus C9516
switch. Each storage HA pair had 4x 100 GbE connections to the data switch.

The filesystem consisted of one ONTAP FlexGroup volume, which the clients
mounted as an NFSv3 filesystem. The ONTAP cluster provided access to the
FlexGroup volume on every 100 GbE port connected to the data switch (24 ports
total). Each cluster node had 2 logical interfaces (LIFs) per 100 GbE port,
for a total of 4 LIFs per node and 48 LIFs for the AFF A800 cluster. Each
client created mount points across those 48 LIFs symmetrically.

Other Notes
===========

None

Other Report Notes
==================

NetApp is a registered trademark and "Data ONTAP", "FlexGroup", and "WAFL"
are trademarks of NetApp, Inc. in the United States and other countries. All
other trademarks belong to their respective owners and should be treated as
such.

===============================================================================

Generated on Wed Mar 13 16:19:32 2019 by SpecReport
Copyright (C) 2016-2019 Standard Performance Evaluation Corporation