SPEC SFS®2014_vda Result

Copyright © 2016-2021 Standard Performance Evaluation Corporation

DDN SPEC SFS2014_vda = 7000 Streams
DDN ES400NVX 2014 VDA Overall Response Time = 1.17 msec


Performance

Business Metric   Average Latency   Streams   Streams
   (Streams)           (msec)       Ops/Sec    MB/Sec
        700             0.998          7004      3227
       1400             1.033         14009      6460
       2100             0.982         21014      9696
       2800             1.022         28019     12905
       3500             1.068         35023     16142
       4200             1.104         42028     19360
       4900             1.181         49033     22636
       5600             1.666         56038     25826
       6300             1.327         63042     29133
       7000             1.349         70047     32300
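
The Overall Response Time of 1.17 msec quoted above summarizes the whole load/latency curve rather than any single row. As a rough, unofficial illustration of the idea, a trapezoid-rule integration of the table above divided by the maximum load lands near the published figure (the exact value is produced by SPEC's reporting tools, which may anchor the curve differently):

    # Unofficial sketch: approximate an overall-response-time style metric as
    # the area under the latency-vs-load curve divided by the maximum load.
    streams = [700, 1400, 2100, 2800, 3500, 4200, 4900, 5600, 6300, 7000]
    latency = [0.998, 1.033, 0.982, 1.022, 1.068, 1.104, 1.181, 1.666, 1.327, 1.349]

    area = 0.0
    prev_x, prev_y = 0, latency[0]  # assume flat latency below the first load point
    for x, y in zip(streams, latency):
        area += (y + prev_y) / 2 * (x - prev_x)
        prev_x, prev_y = x, y

    print(f"approx. overall response time: {area / streams[-1]:.2f} msec")  # ~1.16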
Performance Graph


Product and Test Information

DDN ES400NVX 2014 VDA
Tested by              DDN
Hardware Available     02/21
Software Available     02/21
Date Tested            03/21
License Number         4722
Licensee Locations     Colorado Springs, CO

Simple appliances for the largest data challenges. Whether you are speeding up financial service analytics, running challenging autonomous vehicle workloads, or looking to boost drug development pipelines, the ES400NVX all flash appliance is a powerful building block for application acceleration. Unlike traditional enterprise storage, DDN EXAScaler solutions offer efficient performance from a parallel filesystem to deliver an optimized data path designed to scale. Building on over a decade of experience deploying storage solutions in the most demanding environments around the world, DDN delivers unparalleled performance, capability and flexibility for users looking to manage and gain insights from massive amounts of data.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 3 | Scalable 2U NVMe Appliance | DDN | DDN ES400NVX | Dual Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
2 | 24 | Network Adapter | NVIDIA (Mellanox) | MCX653106A-ECAT, ConnectX-6 VPI | For HDR100 connection from the EXAScaler servers (OSS/MDS) to the QM8700 switch
3 | 2 | Network Switch | NVIDIA (Mellanox) | QM8700 | 40-port HDR200 network switch
4 | 16 | EXAScaler Clients | SuperMicro | SYS-1028TP-DC0R | Dual Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz, 128GB memory
5 | 16 | Network Adapter | NVIDIA (Mellanox) | MCX653106A-ECAT, ConnectX-6 VPI | For HDR100 connection from the EXAScaler clients
6 | 72 | NVMe Drive | Samsung | MZWLL3T2HAJQ-00005 | 3.2TB Samsung NVMe drives for storage and the Lustre filesystem
7 | 16 | 10,000rpm SAS 12Gbps HDD | Toshiba | AL14SEB030N | 300GB Toshiba drives for client node OS boot

Configuration Diagrams

  1. DDN SUT Solution

Component Software

Item No | Component | Type | Name and Version | Description
1 | Storage Appliance Software | DDN SFAOS | 11.8.3 | Storage operating system designed for scalable performance
2 | EXAScaler Parallel Filesystem Software | Parallel Filesystem Software | 5.2.1 | Distributed/parallel filesystem software that runs on the embedded/physical servers
3 | CentOS | Linux OS | CentOS 8.1 | 64-bit operating system for the client nodes
4 | EXAScaler Client Software | Parallel Filesystem Client Node Software | lustre-client-2.12.6_ddn3-1.el8.x86_64 | Distributed/parallel filesystem software that runs on the client/compute nodes

Hardware Configuration and Tuning - Physical

CPU Performance Setting
Parameter Name | Value | Description
Scaling Governor | Performance | Runs the CPU at the maximum frequency
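
For illustration only (the report does not state how the governor was set), the performance governor is commonly applied through the Linux cpufreq sysfs interface:

    # Sketch: select the "performance" scaling governor on every CPU via sysfs.
    # Assumes a Linux host whose cpufreq driver exposes scaling_governor.
    import glob

    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write("performance")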

Hardware Configuration and Tuning Notes

No other hardware tunings or configuration changes were applied to the solution under test.

Software Configuration and Tuning - Physical

EXAScaler Component Tunings
Parameter Name | Value | Description
osd-ldiskfs.*.writethrough_cache_enable, osd-ldiskfs.*.read_cache_enable | 0 | Disables writethrough cache and read cache on the OSSs/OSTs
osc.*.max_pages_per_rpc | 1m | Maximum number of pages per RPC
osc.*.max_rpcs_in_flight | 8 | Maximum number of RPCs in flight
osc.*.max_dirty_mb | 1024 | Maximum amount of outstanding dirty data
llite.*.max_read_ahead_mb | 2048 | Maximum amount of cache reserved for readahead
osc.*.checksums | 0 | Disables network/wire checksums
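
For illustration, Lustre tunables such as these are normally applied with lctl set_param: the osd-ldiskfs parameters on the servers (OSS), the osc/llite parameters on the clients. The sketch below shows the general shape, not the exact procedure used in this submission:

    # Sketch: apply the tunings above with `lctl set_param`. Run the server
    # dictionary on each OSS and the client dictionary on each client node.
    import subprocess

    SERVER_TUNINGS = {
        "osd-ldiskfs.*.writethrough_cache_enable": "0",
        "osd-ldiskfs.*.read_cache_enable": "0",
    }
    CLIENT_TUNINGS = {
        "osc.*.max_pages_per_rpc": "1m",
        "osc.*.max_rpcs_in_flight": "8",
        "osc.*.max_dirty_mb": "1024",
        "llite.*.max_read_ahead_mb": "2048",
        "osc.*.checksums": "0",
    }

    def apply_tunings(tunings):
        for param, value in tunings.items():
            subprocess.run(["lctl", "set_param", f"{param}={value}"], check=True)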

Software Configuration and Tuning Notes

All of the tunings applied are listed in the previous section.

Service SLA Notes

None

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 24 NVMe drives in a single DCR pool per ES400NVX appliance. 54.33 TiB per ES400NVX set aside for data, for a total of 163 TiB of data capacity across the 3 ES400NVX appliances. Data and metadata in the same pool. | DCR RAID6 [8+2p] | Yes | 72
2 | 24 NVMe drives in a single DCR pool per ES400NVX appliance. 1 TiB per ES400NVX set aside for metadata, for a total of 3 TiB of metadata capacity across the 3 ES400NVX appliances. Data and metadata in the same pool. | DCR RAID6 [8+2p] | Yes | 72
3 | 16 drives in total for the client OS boot | None | No | 16
Number of Filesystems | 1
Total Capacity | 163 TiB
Filesystem Type | EXAScaler File System

Filesystem Creation Notes

8 virtual disks/RAID objects for OST usage and 4 virtual disks/RAID objects for MDT usage were created from a single DCR pool consisting of 24 NVMe drives per ES400NVX appliance. The complete solution consisted of 24 OSTs and 12 MDTs.
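
As a hypothetical illustration of the underlying Lustre step (EXAScaler tooling automates target creation, and the device paths, filesystem name, and MGS NID below are invented), each virtual disk would be formatted as an MDT or OST with mkfs.lustre:

    # Hypothetical sketch of formatting the DCR virtual disks as Lustre targets.
    import subprocess

    FSNAME = "exafs"            # placeholder filesystem name
    MGS_NID = "10.0.0.1@o2ib"   # placeholder MGS node identifier

    def format_target(kind, index, device):
        """Format one virtual disk as a Lustre target; kind is 'mdt' or 'ost'."""
        subprocess.run(
            ["mkfs.lustre", f"--{kind}", f"--fsname={FSNAME}",
             f"--index={index}", f"--mgsnode={MGS_NID}", device],
            check=True,
        )

    # Per appliance, matching the note above: 8 OSTs and 4 MDTs, e.g.
    # format_target("ost", 0, "/dev/ddn/vd_ost0"), format_target("mdt", 0, "/dev/ddn/vd_mdt0")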

Storage and Filesystem Notes

Declustered RAID (DCR) is a RAID system in which physical disks (PDs) are logically broken into smaller pieces known as physical disk extents (PDEs). A RAIDset is allocated from PDEs across a large number of drives in a random fashion, so as to distribute the RAIDset across as many physical drives as possible. The Configurable RAID Group Sizes feature lets users configure the system with the desired RAID layout (in this case a 10-disk RAID6 group) and redundancy level based on data requirements. Each RAID group is configured independently, and users can select any valid combination of the number of disks for the respective RAID group.
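
The toy sketch below (not SFAOS's actual placement algorithm) shows why declustered placement speeds rebuilds: the extents of a failed drive are reconstructed by reading from nearly every other drive in the pool in parallel, rather than from the members of a single RAID group:

    # Toy model of declustered RAID: each 10-extent RAIDset (8 data + 2 parity)
    # lands on 10 distinct drives chosen at random from the 24-drive pool.
    import random

    DRIVES, RAIDSET_WIDTH, NUM_RAIDSETS = 24, 10, 200
    placement = [random.sample(range(DRIVES), RAIDSET_WIDTH) for _ in range(NUM_RAIDSETS)]

    failed = 0  # find the drives holding surviving extents of RAIDsets hit by the failure
    helpers = {d for rs in placement if failed in rs for d in rs if d != failed}
    print(f"rebuild of drive {failed} reads from {len(helpers)} of {DRIVES - 1} other drives")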

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 100Gb/s HDR100 | 24 | LNETs (Lustre network communication protocol) for EXAScaler MDS/OSS
2 | 100Gb/s HDR100 | 16 | LNETs for EXAScaler clients
3 | 100Gb/s HDR100 | 8 | ISL between switches
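
For illustration (the interface and network names are hypothetical; the submission does not document its LNET setup), an InfiniBand LNET of this kind is typically configured on each node with lnetctl:

    # Sketch: bring up an o2ib LNET on one node. "ib0" and "o2ib0" are placeholders.
    import subprocess

    subprocess.run(["modprobe", "lnet"], check=True)
    subprocess.run(["lnetctl", "lnet", "configure"], check=True)
    subprocess.run(["lnetctl", "net", "add", "--net", "o2ib0", "--if", "ib0"], check=True)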

Transport Configuration Notes

None

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | QM8700 | HDR200/HDR100 | 40 | 16 | EXAScaler clients
2 | QM8700 | HDR200/HDR100 | 40 | 24 | EXAScaler MDS/OSS

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 6 | CPU | ES400NVX | Dual Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz | Storage platform
2 | 16 | CPU | Clients | Dual Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz | EXAScaler client nodes

Processing Element Notes

None

Processing Elements - Virtual

Item No | Qty | Type | Location | Description | Processing Function
1 | 120 | Virtual Cores | EXAScaler VMs/OSS | Virtual cores from the ES400NVX appliances assigned to the EXAScaler MDS/OSS | EXAScaler MDS/OSS

Processing Element Notes

None

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Memory for ES400NVX SFAOS and cache | 90 | 6 | NV | 540
Memory set aside for each ES400NVX VM, which acts as an EXAScaler filesystem OSS/MDS | 150 | 12 | V | 1800
Memory per EXAScaler client node | 128 | 16 | V | 2048
Grand Total Memory Gibibytes | | | | 4388
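
The grand total follows from the three rows above; a quick arithmetic check:

    # Verify the reported memory grand total (GiB).
    rows = [("SFAOS and cache", 90, 6), ("EXAScaler OSS/MDS VMs", 150, 12), ("client nodes", 128, 16)]
    print(sum(size * count for _, size, count in rows))  # 4388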

Memory Notes

150 GiB of memory is assigned to each of the 12 VMs in the ES400NVX platform. These VMs function as OSS/MDS within the EXAScaler filesystem.

Stable Storage

SFAOS DCR (declustered RAID) utilizes a large number of drives within a single configuration for fast rebuild times, spreading the rebuild load across many drives. SFAOS can also use DCR to set aside drives for extra redundancy on top of the RAID architecture. Other features include redundant power supplies, battery backups, partial rebuilds, dual active-active controllers, and online upgrades of drives, SFAOS, and BIOS. SFAOS also provides transfer of ownership, including of RAID devices, between the controllers as well as between the VMs within the SFA platform and the EXAScaler filesystem.

Solution Under Test Configuration Notes

The SUT configuration utilized DDN's fastest NVMe storage offering, the ES400NVX, combined with the EXAScaler parallel filesystem. 4 VMs residing in each ES400NVX (12 in total) acted as both metadata servers and object storage servers, and utilized the 24 NVMe drives per appliance (72 in total) to create the Metadata Targets (MDTs), Object Storage Targets (OSTs), and the EXAScaler filesystem. 16 physical servers acted as client nodes, participating in the SPEC SFS benchmark.

Other Solution Notes

None

Dataflow

DCR RAID objects (pools and virtual disks) were utilized by the ES400NVX VMs (OSS and MDS) to create a single shared namespace (the EXAScaler filesystem). Network and bulk data communication between the nodes within the filesystem (via LNET, the Lustre network communication protocol) was carried out over HDR100 interfaces. Client nodes shared the same namespace/filesystem via their own HDR100 connections to the NVIDIA (Mellanox) QM8700 switches.

Other Notes

Spectre mitigations were turned off via the kernel boot parameters spectre_v2=off nopti.

Other Report Notes

None


Generated on Tue Jun 29 17:09:52 2021 by SpecReport
Copyright © 2016-2021 Standard Performance Evaluation Corporation