SPEC SFS®2014_vda Result

Copyright © 2016-2021 Standard Performance Evaluation Corporation

Quantum Corporation | SPEC SFS2014_vda = 7450 Streams
Quantum StorNext 7.0.1 with F-Series Storage Nodes | Overall Response Time = 0.90 msec


Performance

Business Metric (Streams) | Average Latency (msec) | Streams Ops/Sec | Streams MB/Sec
745 | 0.803 | 7455 | 3440
1490 | 0.786 | 14910 | 6870
2235 | 0.776 | 22365 | 10320
2980 | 0.782 | 29819 | 13758
3725 | 0.786 | 37275 | 17192
4470 | 0.845 | 44730 | 20643
5215 | 0.821 | 52185 | 24077
5960 | 0.908 | 59640 | 27539
6705 | 1.113 | 67095 | 30947
7450 | 1.733 | 74550 | 34391
Performance Graph
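
For reference, the short sketch below (Python, illustrative only and not part of the SPEC submission) derives per-stream rates from the first and peak load points in the table above.

# Illustrative only: per-stream rates derived from the reported load points.
# Tuples are (streams, average latency in msec, ops/sec, MB/sec).
load_points = [
    (745, 0.803, 7455, 3440),      # first load point
    (7450, 1.733, 74550, 34391),   # peak load point
]

for streams, latency, ops, mbps in load_points:
    print(f"{streams} streams: {ops / streams:.1f} ops/sec and "
          f"{mbps / streams:.2f} MB/sec per stream at {latency} msec average latency")

# At the peak load of 7450 streams this works out to roughly 10 ops/sec and
# about 4.6 MB/sec per stream.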


Product and Test Information

Quantum StorNext 7.0.1 with F-Series Storage Nodes
Tested by | Quantum Corporation
Hardware Available | January 2021
Software Available | January 2021
Date Tested | January 2021
License Number | 4761
Licensee Locations | Mendota Heights, Minnesota

StorNext File System (SNFS) is a software platform designed to manage massive amounts of data throughout its lifecycle, delivering the required balance of high performance, data protection and preservation, scalability, and cost. StorNext was designed specifically for large unstructured data sets, including video workloads, where low-latency, predictable performance is required. SNFS is a scale-out parallel file system that is POSIX compliant and supports hundreds of petabytes and billions of files in a single namespace. Clients connect to the front end using NAS protocols or directly to the back-end storage network with a dedicated client. Value-add data services enable integrated data protection and policy-based movement of files between multiple tiers of primary and secondary storage, including cloud. Storage- and server-agnostic, SNFS may be run on customer-supplied hardware or on Quantum server and storage nodes.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 1 | Parallel File System | Quantum | StorNext v7.0.1 | High-performance parallel file system that scales across storage nodes in capacity and performance; multiple OS support
2 | 10 | F1000 Storage Node | Quantum | F-Series NVMe Storage | Single node, F1000; each node has 10 Micron 9300 MTFDHAL15T3TDP 15.36TB NVMe SSDs, a single AMD EPYC processor (7261, 8-core @ 2.5GHz), 64GB memory, and 2 x dual-port Mellanox ConnectX-5 100GbE HBAs (MCX518A-CCAT); 2 x 100GbE connections to the switch fabric, 1 per Ethernet adapter
3 | 14 | Clients | Quantum | Xcellis Workflow Extender (XWE) Gen2 | Quantum XWE; each single 1U server has 192GB memory, dual Intel(R) Xeon(R) Silver 4110 CPUs @ 2.10GHz (8 cores), and 2 x dual-port 100GbE Mellanox MT28800 [ConnectX-5 Ex]; 2 x 100GbE connections to the switch fabric, 1 per Ethernet adapter
4 | 1 | Metadata Controller | Quantum | Xcellis Workflow Director (XWD) Gen2 | Dual 1U servers with HA (high availability); each server has 64GB memory, dual Intel(R) Xeon(R) Silver 4110 CPUs @ 2.10GHz (8 cores), and 1 x dual-port 100GbE Mellanox MT28800 [ConnectX-5]; each ConnectX-5 card connects with a single DAC connection to the switch infrastructure, for administrative purposes only. Note: the secondary node is also used as the "SPEC Prime"
5 | 2 | 100GbE switch | Arista | Arista DCS-7060CX2-32S-F | 32-port 100GbE Ethernet switch
6 | 1 | 1GbE switch | Netgear | Netgear ProSAFE GS752TP | 48-port 1GbE Ethernet switch
7 | 1 | 1GbE switch | Dell | Dell PowerConnect 6248 | 48-port 1GbE Ethernet switch

Configuration Diagrams

  1. Quantum_Design.pdf

Component Software

Item No | Component | Type | Name and Version | Description
1 | Storage Node | Operating System | Quantum CSP 1.2.0 | Storage node Cloud Storage Platform (CSP)
2 | Client | Operating System | CentOS 7.7 | Operating system on load generators (clients)
3 | StorNext Metadata Controllers | Operating System | CentOS 7.7 | Operating system on metadata controllers
4 | SPEC SFS Prime | Operating System | CentOS 7.7 | Operating system on SPEC SFS Prime

Hardware Configuration and Tuning - Physical

Component Name
Parameter Name | Value | Description
Tuning Param Name | Tuning Param Value | Tuning Param Description

Hardware Configuration and Tuning Notes

F1000 nodes were all stock installations; no other hardware alterations were needed.

Software Configuration and Tuning - Physical

F1000 Storage Node
Parameter Name | Value | Description
Jumbo Frames | 9000 | Set jumbo frames (MTU) to 9000

Client
Parameter Name | Value | Description
Jumbo Frames | 9000 | Set jumbo frames (MTU) to 9000
nr_requests | 512 | Maximum number of read and write requests that can be queued at one time
scheduler | noop | Linux I/O scheduler

Software Configuration and Tuning Notes

Each client used the following mount options: cachebufsize=128k,buffercachecap=16384,dircachesize=32m,buffercache_iods=32,bufferlowdirty=6144,bufferhighdirty=12288. Both scheduler and nr_requests were set in /usr/cvfs/config/deviceparams on each client.
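
As a rough way to confirm these settings on a client, the sketch below (Python) reads the generic Linux view of the same tunings; it is illustrative only and not part of the submission. The device and interface names (nvme0n1, ens1f0) are placeholders, and the report applied scheduler and nr_requests through /usr/cvfs/config/deviceparams rather than directly through sysfs.

# Illustrative verification sketch; device/interface names are placeholders.
from pathlib import Path

def read(path):
    return Path(path).read_text().strip()

print("MTU:", read("/sys/class/net/ens1f0/mtu"))                     # expect 9000
print("scheduler:", read("/sys/block/nvme0n1/queue/scheduler"))      # expect noop selected
print("nr_requests:", read("/sys/block/nvme0n1/queue/nr_requests"))  # expect 512

# List StorNext mounts and their options as recorded by the kernel
# (StorNext mounts normally appear with the cvfs filesystem type).
for line in Path("/proc/mounts").read_text().splitlines():
    if "cvfs" in line:
        print(line)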

Service SLA Notes

None

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | Micron 9300 15.36TB NVMe SSD, 10 per storage node | RAID 10 | Yes | 100
Number of Filesystems | 1
Total Capacity | 431 TiB
Filesystem Type | StorNext

Filesystem Creation Notes

Created a StorNext file system across all 10 storage nodes, using a total of 20 LUNs over 100 NVMe disks. A single stripe group was used, with stripebreadth=2.5MB and a round-robin pattern. Metadata and user data are combined into the single stripe group and striped across all LUNs in the stripe group.
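
As a rough illustration of this layout (not StorNext's actual allocator), the sketch below (Python) models how a round-robin pattern with a 2.5MB stripe breadth spreads sequential file offsets across the 20 LUNs of the single stripe group; the function name is ours.

# Rough model of round-robin striping across the single stripe group:
# 20 LUNs, stripebreadth = 2.5 MB. Illustration only, not StorNext code.
STRIPE_BREADTH = int(2.5 * 1024 * 1024)   # bytes written to one LUN before moving on
NUM_LUNS = 20

def lun_for_offset(offset_bytes):
    """Return the LUN index a file offset lands on under round-robin striping."""
    return (offset_bytes // STRIPE_BREADTH) % NUM_LUNS

# A 100 MB sequential write touches every LUN in the stripe group twice.
touched = {lun_for_offset(off) for off in range(0, 100 * 1024 * 1024, STRIPE_BREADTH)}
print(sorted(touched))   # [0, 1, 2, ..., 19]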

Storage and Filesystem Notes

Each F1000 contains 10 NVMe devices configured in RAID 10, which is then sliced into two LUNs per node.

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 100GbE | 20 | Storage nodes used a total of 20 100GbE ports, 2 per node
2 | 100GbE | 28 | Load generators (clients) used a total of 28 100GbE ports, 2 per client
3 | 100GbE | 1 | Xcellis Workflow Director Gen 2 used 1 100GbE port per metadata controller, for administration purposes only
4 | 100GbE | 1 | Xcellis Workflow Director Gen 2 secondary metadata controller/Prime used 1 100GbE port, for administration purposes only

Transport Configuration Notes

The core switch configuration consisted of two independent 100GbE subnets; each storage node and client had dual 100GbE connections, one per switch.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Arista DCS-7060CX2-32S-F (total of 2 switches) | 100GbE | 64 | 50 | 48 ports for storage and clients, 1 port for the primary metadata controller, and 1 port for the Prime/secondary metadata controller
2 | Netgear ProSAFE GS752TP | 1GbE | 48 | 47 | Management switch for metadata traffic and administrative access to the SUT; includes an additional 6 ports for the Xcellis Workflow Director
3 | Dell PowerConnect 6248 | 1GbE | 48 | 41 | Management switch for metadata traffic and administrative access to the SUT

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 10 | CPU | Storage Node | AMD EPYC 7261 8-Core Processor @ 2.5GHz | Storage
2 | 28 | CPU | Load Generator (Client) | Intel(R) Xeon(R) Silver 4110 CPU, 8 cores @ 2.10GHz | StorNext Client
3 | 2 | CPU | Prime (Client) | Intel(R) Xeon(R) Silver 4110 CPU, 8 cores @ 2.10GHz | SPEC SFS2014 Prime

Processing Element Notes

None

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
F1000 Storage memory | 64 | 10 | V | 640
Client memory | 192 | 14 | V | 2688
Prime memory | 64 | 1 | V | 64
Grand Total Memory Gibibytes | 3392

Memory Notes

Storage nodes have 64GB of memory each, for a total of 640GB. Clients have 192GB of memory each, for a total of 2,688GB. The Prime has 64GB of memory.
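
A quick arithmetic check of the memory table above (Python, illustrative only):

# Recompute the grand total from the memory table.
storage_nodes = 64 * 10    # 10 F1000 storage nodes, 64 GiB each
clients       = 192 * 14   # 14 clients, 192 GiB each
prime         = 64 * 1     # 1 Prime, 64 GiB
print(storage_nodes + clients + prime)   # 3392 GiB, matching the reported grand total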

Stable Storage

The F1000 storage node does not use a write cache to temporarily store data in flight; writes are therefore committed to disk immediately. All data in the file system, including metadata, is protected by a RAID 10 storage configuration. The entire SUT is protected with redundant power supplies, on both storage nodes and clients. The F1000 nodes and clients each have two NVMe system devices in a 1+1 configuration for OS redundancy. Metadata servers are configured in a high-availability pair, which ensures file system access in the event of a metadata server failure.

Solution Under Test Configuration Notes

The solution is a standardized configuration by Quantum, with Xcellis Workflow Directors managing metadata for the file system. F1000 storage nodes are off-the-shelf nodes designed for high-performance streaming media as well as high IOPS for highly randomized workflows. The file system was configured as described in the "Filesystem Creation Notes" above. The purpose of this configuration is to accommodate a mixed workflow of random and sequential processes, including metadata striped across the file system.

Other Solution Notes

None

Dataflow

The entire SUT is connected per the SUT diagram. Each of the 14 clients is directly connected using iSER/RDMA. No special tuning parameters were used for these connections. One file system was created and shared as a StorNext file system.

Other Notes

None

Other Report Notes

None


Generated on Thu Feb 18 15:16:38 2021 by SpecReport
Copyright © 2016-2021 Standard Performance Evaluation Corporation