SPEC SFS(R)2014_vda Result

DDN : DDN ES400NVX 2014 VDA
SPEC SFS2014_vda = 7000 Streams (Overall Response Time = 1.17 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency      Streams      Streams
   (Streams)      (msec)      Ops/Sec       MB/Sec
   -----------  -----------  -----------  -----------
        700         1.0          7004         3227
       1400         1.0         14009         6460
       2100         1.0         21014         9696
       2800         1.0         28019        12905
       3500         1.1         35023        16142
       4200         1.1         42028        19360
       4900         1.2         49033        22636
       5600         1.7         56038        25826
       6300         1.3         63042        29133
       7000         1.3         70047        32300
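The table shows throughput scaling essentially linearly with the requested
stream count. As a quick, purely illustrative check (not part of the SPEC
SFS2014 tooling), the Python sketch below recomputes the per-stream op rate
and bandwidth from the published rows; both stay close to 10 ops/s and
4.6 MB/s per stream across the whole load range.

    # Illustrative only: recompute per-stream rates from the published table.
    # Column order: requested streams, average latency (msec), ops/sec, MB/sec.
    results = [
        (700,  1.0,  7004,  3227),
        (1400, 1.0, 14009,  6460),
        (2100, 1.0, 21014,  9696),
        (2800, 1.0, 28019, 12905),
        (3500, 1.1, 35023, 16142),
        (4200, 1.1, 42028, 19360),
        (4900, 1.2, 49033, 22636),
        (5600, 1.7, 56038, 25826),
        (6300, 1.3, 63042, 29133),
        (7000, 1.3, 70047, 32300),
    ]

    for streams, latency, ops, mbs in results:
        # Each VDA stream contributes a roughly fixed op rate and bandwidth,
        # so these two ratios should be nearly identical on every row.
        print(f"{streams:5d} streams: {ops / streams:5.2f} ops/s/stream, "
              f"{mbs / streams:5.2f} MB/s/stream, {latency:.1f} msec")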
===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
|                     DDN ES400NVX 2014 VDA                      |
+---------------------------------------------------------------+
Tested by                     DDN
Hardware Available            02 21
Software Available            02 21
Date Tested                   03 21
License Number                4722
Licensee Locations            Colorado Springs, CO

Simple appliances for the largest data challenges. Whether you are speeding
up financial service analytics, running challenging autonomous vehicle
workloads, or looking to boost drug development pipelines, the ES400NVX
all-flash appliance is a powerful building block for application
acceleration. Unlike traditional enterprise storage, DDN EXAScaler solutions
offer efficient performance from a parallel filesystem, delivering an
optimized data path designed to scale. Building on over a decade of
experience deploying storage solutions in the most demanding environments
around the world, DDN delivers unparalleled performance, capability and
flexibility for users looking to manage and gain insights from massive
amounts of data.

Solution Under Test Bill of Materials
=====================================

Item
 No   Qty   Type          Vendor       Model/Name           Description
----  ----  ------------  -----------  -------------------  -------------------------------
 1      3   Scalable 2U   DDN          DDN ES400NVX         Dual Intel(R) Xeon(R) Gold 6230
            NVMe                                            CPU @ 2.10GHz
            Appliance
 2     24   Network       NVIDIA       MCX653106A-ECAT,     For HDR100 connection from the
            Adapter       (Mellanox)   ConnectX-6 VPI       EXAScaler Servers/OSS/MDS to
                                                            the QM8700 Switch
 3      2   Network       NVIDIA       QM8700               40 Port HDR200 Network Switch
            Switch        (Mellanox)
 4     16   EXAScaler     SuperMicro   SYS-1028TP-DC0R      Dual Intel Xeon(R) CPU E5-2650
            Clients                                         v2 @ 2.60GHz, 128GB memory
 5     16   Network       NVIDIA       MCX653106A-ECAT,     For HDR100 connection from the
            Adapter       (Mellanox)   ConnectX-6 VPI       EXAScaler Clients
 6     72   NVMe Drives   Samsung      MZWLL3T2HAJQ-00005   3.2TB Samsung NVMe Drives for
                                                            Storage and Lustre Filesystem
 7     16   10000rpm      Toshiba      AL14SEB030N          300GB Toshiba Drives for Client
            SAS-12Gbps                                      Nodes OS Boot

Configuration Diagrams
======================

1) sfs2014-20210608-00077.config1.png (see SPEC SFS2014 results webpage)

Component Software
==================

Item                Name and
 No   Component     Type           Version        Description
----  ------------  -------------  -------------  ----------------------------------
 1    Storage       DDN SFAOS      11.8.3         Storage Operating System designed
      Appliance                                   for scalable performance
      Software
 2    EXAScaler     Parallel       5.2.1          Distributed/Parallel file system
      Parallel      Filesystem                    software that runs on
      Filesystem    Software                      Embedded/Physical Servers
      Software
 3    CentOS        Linux OS       CentOS 8.1     64-bit Operating System for the
                                                  Client Nodes
 4    EXAScaler     Parallel       lustre-client  Distributed/Parallel file system
      Client        Filesystem     -2.12.6_ddn3-  software that runs on
      Software      Client Node    1.el8.x86_64   Client/Compute Nodes
                    Software

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                        CPU Performance Setting                        |
+----------------------------------------------------------------------+
Parameter Name    Value          Description
----------------  -------------  ----------------------------------------
Scaling Governor  Performance    Runs the CPU at the Maximum Frequency

Hardware Configuration and Tuning Notes
---------------------------------------

No other hardware tunings or configurations were applied to the solution
offered.

Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                      EXAScaler Component Tunings                      |
+----------------------------------------------------------------------+
Parameter Name                     Value   Description
---------------------------------  ------  -------------------------------------
osd-ldiskfs.*.writethrough_cache_  0       Disables writethrough cache and read
enable,                                    cache on the OSSs/OSTs
osd-ldiskfs.*.read_cache_enable
osc.*.max_pages_per_rpc            1m      Maximum number of pages per RPC
osc.*.max_rpcs_in_flight           8       Maximum number of RPCs in flight
osc.*.max_dirty_mb                 1024    Maximum amount of outstanding dirty
                                           data
llite.*.max_read_ahead_mb          2048    Maximum amount of cache reserved for
                                           readahead
osc.*.checksums                    0       Disables network/wire checksums

Software Configuration and Tuning Notes
---------------------------------------

All of the tunings applied are listed in the table above.
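The tunings above are standard Lustre parameters. Purely as an illustration
(this is not part of the tested configuration, and how the submission
applied or persisted these values beyond the table above is not disclosed),
the Python sketch below shows one way the client-side parameters could be
set at runtime with the Lustre lctl utility; the osd-ldiskfs.* settings
would be applied analogously on the OSS nodes.

    # Illustrative sketch only: apply the client-side Lustre tunings listed
    # above with "lctl set_param". Run as root on a Lustre client node.
    # Not a copy of the actual deployment scripts used for this submission.
    import subprocess

    CLIENT_TUNINGS = {
        "osc.*.max_pages_per_rpc": "1m",
        "osc.*.max_rpcs_in_flight": "8",
        "osc.*.max_dirty_mb": "1024",
        "llite.*.max_read_ahead_mb": "2048",
        "osc.*.checksums": "0",
    }

    for param, value in CLIENT_TUNINGS.items():
        # lctl set_param changes the running value; it does not persist
        # across remounts unless made permanent by other means.
        subprocess.run(["lctl", "set_param", f"{param}={value}"], check=True)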
Service SLA Notes
-----------------

None

Storage and Filesystems
=======================

Item                                                            Stable
 No   Description                            Data Protection   Storage   Qty
----  -------------------------------------  ----------------  -------  -----
 1    24 NVMe drives in a single DCR pool    DCR RAID6 [8+2p]    Yes     72
      per ES400NVX appliance. 54.33 TiB per
      ES400NVX set aside for data, for a
      total of 163 TiB of data capacity
      across the 3 ES400NVX appliances.
      Data and metadata in the same pool.
 2    24 NVMe drives in a single DCR pool    DCR RAID6 [8+2p]    Yes     72
      per ES400NVX appliance. 1 TiB per
      ES400NVX set aside for metadata, for
      a total of 3 TiB of metadata capacity
      across the 3 ES400NVX appliances.
      Data and metadata in the same pool.
 3    16 in total for the client OS boot     None                No      16

Number of Filesystems       1
Total Capacity              163 TiB
Filesystem Type             EXAScaler File System

Filesystem Creation Notes
-------------------------

8 Virtual Disks/RAID objects for OST usage and 4 Virtual Disks/RAID objects
for MDT usage were created from a single DCR pool consisting of 24 NVMe
drives per ES400NVX appliance. The complete solution consisted of 24 OSTs
and 12 MDTs.

Storage and Filesystem Notes
----------------------------

Declustered RAID (DCR) is a RAID system in which physical disks (PDs) are
logically broken into smaller pieces known as physical disk extents (PDEs).
A RAIDset is allocated from the PDEs across a large number of drives in a
random fashion so as to distribute the RAIDset across as many physical
drives as possible. The Configurable RAID Group Sizes feature allows users
to configure the system with the desired RAID layout [in our case a
10-disk RAID6 group] and redundancy levels based on data requirements. Each
RAID group is configured independently, and users can select any valid
combination of the number of disks for the respective RAID group.

Transport Configuration - Physical
==================================

Item                   Number of
 No   Transport Type   Ports Used   Notes
----  ---------------  ----------  -----------------------------------------------
 1    100Gb/s HDR100       24      LNETs [Lustre Network Communication Protocol]
                                   for EXAScaler MDS/OSS
 2    100Gb/s HDR100       16      LNETs for EXAScaler Clients
 3    100Gb/s HDR100        8      ISL between Switches

Transport Configuration Notes
-----------------------------

None

Switches - Physical
===================

                                    Total   Used
Item                                Port    Port
 No   Switch Name   Switch Type     Count   Count   Notes
----  ------------  --------------  ------  ------  ------------------------
 1    QM8700        HDR200/HDR100     40      16    EXAScaler Clients
 2    QM8700        HDR200/HDR100     40      24    EXAScaler MDS/OSS

Processing Elements - Physical
==============================

Item
 No   Qty   Type   Location   Description                  Processing Function
----  ----  -----  ---------  ---------------------------  --------------------
 1    6     CPU    ES400NVX   Dual Intel(R) Xeon(R) Gold   Storage Platform
                              6230 CPU @ 2.10GHz
 2    16    CPU    Clients    Dual Intel(R) Xeon(R) CPU    EXAScaler Client
                              E5-2660 v4 @ 2.00GHz         Nodes

Processing Element Notes
------------------------

None

Processing Elements - Virtual
=============================

Item
 No   Qty   Type      Location    Description                 Processing Function
----  ----  --------  ----------  --------------------------  -------------------
 1    120   Virtual   EXAScaler   Virtual cores from the      EXAScaler MDS/OSS
            Cores     VMs/OSS     ES400NVX assigned to the
                                  EXAScaler MDS/OSS

Processing Element Notes
------------------------

None

Memory - Physical
=================

                             Size in   Number of
Description                    GiB     Instances   Nonvolatile   Total GiB
--------------------------  --------  ----------  ------------  -----------
Memory for ES400NVX SFAOS       90         6           NV
and cache
Memory set aside for each      150        12            V
ES400NVX VM, which acts as
an EXAScaler Filesystem
OSS/MDS
Memory per EXAScaler           128        16            V
Client Node

Grand Total Memory Gibibytes                                       4388

Memory Notes
------------

150 GiB of memory is assigned to each of the VMs in the ES400NVX platform.
These VMs function as OSS/MDS within the EXAScaler filesystem.
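For context, the headline capacity and memory figures follow from simple
arithmetic over the tables above. The Python sketch below is illustrative
only: it assumes decimal 3.2 TB drives and ignores spare extents and
filesystem overhead, so the capacity it prints is an upper bound on the
published 163 TiB (data) plus 3 TiB (metadata); the memory grand total
matches the table exactly.

    # Illustrative arithmetic only; real SFAOS DCR pools reserve spare
    # extents and incur filesystem overhead, so the published capacities
    # are somewhat lower than this simple upper bound.
    TIB = 2**40

    drives_per_appliance = 24
    drive_size_bytes = 3.2e12          # 3.2 TB (decimal), per the BOM
    raid_efficiency = 8 / 10           # RAID6 8+2p: 8 data strips out of 10

    raw_tib = drives_per_appliance * drive_size_bytes / TIB
    usable_tib = raw_tib * raid_efficiency
    print(f"per appliance: raw {raw_tib:.1f} TiB, "
          f"RAID6 8+2p upper bound {usable_tib:.1f} TiB")
    print(f"3 appliances, upper bound: {3 * usable_tib:.0f} TiB "
          f"(published: 163 TiB data + 3 TiB metadata)")

    # Memory grand total, straight from the Memory - Physical table.
    memory_gib = 90 * 6 + 150 * 12 + 128 * 16
    print(f"grand total memory: {memory_gib} GiB")   # 4388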
Stable Storage
==============

SFAOS DCR [declustered RAID] utilizes a large number of drives within a
single configuration for fast rebuild times, spreading the rebuild load
across multiple drives. SFAOS can also use DCR to set drives aside for
extra redundancy on top of the RAID architecture. Other features include
redundant power supplies, battery backup, partial rebuilds, dual
active-active controllers, and online upgrades of drives, SFAOS and BIOS.
SFAOS also provides transfer of ownership, including of RAID devices,
between the controllers as well as between the VMs within the SFA platform
and the EXAScaler Filesystem.

Solution Under Test Configuration Notes
=======================================

The SUT configuration utilized DDN's fastest NVMe storage offering, the
ES400NVX, combined with the EXAScaler parallel filesystem. 4 VMs residing
in each ES400NVX [12 in total] acted as both Metadata Servers and Object
Storage Servers, and utilized 24 NVMe drives [72 in total] for the creation
of Metadata Targets [MDTs], Object Storage Targets [OSTs] and the EXAScaler
Filesystem. 16 physical servers acted as client nodes, participating in the
SPEC SFS benchmark.

Other Solution Notes
====================

None

Dataflow
========

DCR RAID objects [Pools and Virtual Disks] were utilized by the ES400NVX
VMs [OSS and MDS] to create a single shared namespace [EXAScaler
Filesystem]. Network and bulk data communication between the nodes within
the filesystem [Lustre Network Communication Protocol] was carried out via
HDR100 interfaces. Client nodes shared the same namespace/filesystem via
their own HDR100 connections to the NVIDIA (Mellanox) QM8700 switch.

Other Notes
===========

Spectre mitigations were turned off via the kernel boot parameters:
spectre_v2=off nopti.

Other Report Notes
==================

None

===============================================================================

Generated on Tue Jun 29 17:09:52 2021 by SpecReport
Copyright (C) 2016-2021 Standard Performance Evaluation Corporation