                             SPEC SFS(R)2014_vda Result

   CeresData Co., Ltd.          :  CeresData Prodigy Distributed Storage System
   SPEC SFS2014_vda             =  9600 Streams (Overall Response Time = 1.62 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency      Streams      Streams
   (Streams)     (msec)       Ops/Sec      MB/Sec
   ------------  -----------  -----------  -----------
         960          1.4          9606         4431
        1920          1.4         19213         8869
        2880          1.5         28819        13289
        3840          1.5         38426        17723
        4800          1.6         48032        22146
        5760          1.6         57639        26582
        6720          1.7         67245        31042
        7680          1.7         76852        35411
        8640          1.9         86459        39873
        9600          2.0         96065        44303

===============================================================================

Product and Test Information
============================

   +---------------------------------------------------------------+
   |          CeresData Prodigy Distributed Storage System          |
   +---------------------------------------------------------------+

   Tested by            CeresData Co., Ltd.
   Hardware Available   March 2021
   Software Available   December 2020
   Date Tested          May 2021
   License Number       6255
   Licensee Locations   Beijing, P.R. China

   The CeresData Prodigy System is an enterprise distributed storage system
   that delivers predictable high performance and scalability for both file
   and block storage. It supports a wide range of hardware architectures,
   including x86 and ARM. Its rich feature set and scalability make it a
   strong contender for various enterprise applications, including
   traditional NAS/SAN, real-time workloads, and distributed databases.

Solution Under Test Bill of Materials
=====================================

   Item
    No   Qty  Type         Vendor     Model/Name       Description
   ----  ---  -----------  ---------  ---------------  -----------------------------------
     1     1  Distributed  CeresData  CeresData        CeresData Prodigy OS is a
              Storage OS              Prodigy OS V9    feature-rich, standards-compliant,
                                                       high-performance enterprise
                                                       distributed storage system.
     2     1  Storage      CeresData  D-Fusion 5000    10-node storage cluster. Each node
              server                  SOC              has 2 Intel Xeon E5-2650 v4
                                                       (12-core CPU with hyperthreading),
                                                       256GiB memory (8 X 32GiB), 1
                                                       Mellanox ConnectX-3 56Gb/s
                                                       InfiniBand HCA (1 port connected
                                                       to the 100Gbps IB switch), 1
                                                       external dual-port 10GbE adapter
                                                       (2 ports connected to the 10G
                                                       Ethernet switch for cluster
                                                       internal communication), and 1 4GB
                                                       SATADOM to hold Prodigy OS.
     3    11  Storage      AIC        HA401-LB2        Dual-node 4U chassis, 22 nodes in
              Client                                   total: 20 nodes configured as
              Chassis                                  clients, 1 node as prime server,
                                                       1 node unused. Each client node
                                                       has 2 Intel Xeon E5-2620 v4
                                                       (8-core CPU with hyperthreading),
                                                       384GiB memory (12 X 32GiB), and 1
                                                       Mellanox ConnectX-4 100Gb/s
                                                       InfiniBand HCA (1 port connected
                                                       to the 100Gbps IB switch).
     4    15  JBOD         CeresData  B702             2U 24-bay 12G SAS dual-expander
              enclosure                                JBOD enclosure, holding most of
                                                       the SSDs deployed to the storage
                                                       cluster.
     5     1  100Gbps IB   Mellanox   SB7790           36-port 100Gbps InfiniBand switch.
              Switch
     6     1  10GbE        Maipu      MyPower S5820    48-port 10Gbps Ethernet switch.
              Switch
     7     1  1GbE         TP-LINK    TL-SF1048S       48-port 1Gbps Ethernet switch.
              Switch
     8    30  SAS HBA      Broadcom   SAS9300-8e       SAS 3008 Fusion MPT2.5, 8-port
                                                       12Gb/s HBA. Each storage node had
                                                       3 external SAS9300-8e HBAs
                                                       installed; each HBA used 1 cable
                                                       to connect to a JBOD enclosure.
     9    21  100G IB HCA  Mellanox   CX456A           ConnectX-4 EDR + 100GbE InfiniBand
                                                       HCA. Each client had 1 IB HCA
                                                       installed; the prime node had 1 IB
                                                       HCA installed.
    10    10  56G IB HCA   Mellanox   CX354A           ConnectX-3 FDR + 40GigE InfiniBand
                                                       HCA. Each storage node had 1 IB
                                                       HCA installed.
    11    10  10G NIC      Silicom    PE210G2SPI9-XR   10GbE dual-port NIC. Each storage
                                                       node had 1 external 10GbE
                                                       dual-port NIC installed.
    12   450  SSD          WDC        WUSTR6480ASS200  800GB WDC WUSTR6480ASS200 SSD. 360
                                                       were in the 15 JBOD enclosures;
                                                       the remaining 90 were installed in
                                                       the front bays of the storage
                                                       nodes.

Configuration Diagrams
======================

   1) sfs2014-20210524-00076.config1.png (see SPEC SFS2014 results webpage)

Component Software
==================

   Item                Name and
    No   Component     Type          Version     Description
   ----  ------------  ------------  ----------  ---------------------------------
     1   Storage Node  Prodigy OS    Prodigy V9  All 10 storage nodes were
                                                 installed with the same Prodigy
                                                 OS and configured as a single
                                                 storage cluster.
     2   Client        Operating     RHEL 7.4    All 20 client nodes and the prime
                       System                    node were installed with the same
                                                 RHEL version 7.4.

Hardware Configuration and Tuning - Physical
============================================

   +----------------------------------------------------------------------+
   |                                 None                                  |
   +----------------------------------------------------------------------+

   Parameter Name   Value   Description
   ---------------  ------  ----------------------------------------------
   None             None    None

Hardware Configuration and Tuning Notes
---------------------------------------

   The hardware used the default configuration from CeresData, Mellanox, etc.

Software Configuration and Tuning - Physical
============================================

   +----------------------------------------------------------------------+
   |                                Client                                 |
   +----------------------------------------------------------------------+

   Parameter Name                 Value     Description
   -----------------------------  --------  ------------------------------------
   /proc/sys/vm/min_free_kbytes   10240000  Set on the client machines. A
                                             shortage of reclaimable pages
                                             encountered by the InfiniBand driver
                                             caused page allocation failures and
                                             spurious stack traces in syslog;
                                             raising this value eliminated the
                                             errors.
   proto                          rdma      NFS mount option used by the clients
                                             to mount storage directories over
                                             NFSoRDMA.
   wsize                          524288    512KiB, used as the clients' NFS
                                             write-size mount option.

Software Configuration and Tuning Notes
---------------------------------------

   Specified as above.
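   For illustration only (not part of the tested configuration), the sketch
   below shows one way a client could apply the settings listed above. The
   export name and mount point are hypothetical placeholders.

      #!/usr/bin/env python3
      # Illustrative sketch: apply the client-side tuning described above.
      # Must run as root on a client node; names below are hypothetical.
      import subprocess

      MIN_FREE_KBYTES = "10240000"               # /proc/sys/vm/min_free_kbytes
      MOUNT_OPTS = "proto=rdma,wsize=524288"     # NFSoRDMA, 512KiB write size
      SERVER_EXPORT = "prodigy-cluster:/export"  # hypothetical export
      MOUNT_POINT = "/mnt/prodigy"               # hypothetical mount point

      def main():
          # Raise min_free_kbytes so the InfiniBand driver does not hit
          # page allocation failures under load.
          with open("/proc/sys/vm/min_free_kbytes", "w") as f:
              f.write(MIN_FREE_KBYTES)

          # Mount the storage directory over NFS with the RDMA transport.
          subprocess.run(
              ["mount", "-t", "nfs", "-o", MOUNT_OPTS, SERVER_EXPORT, MOUNT_POINT],
              check=True,
          )

      if __name__ == "__main__":
          main()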
Service SLA Notes
-----------------

   None.

Storage and Filesystems
=======================

   Item                                                           Stable
    No   Description                           Data Protection    Storage   Qty
   ----  ------------------------------------  -----------------  --------  -----
     1   800GB WDC WUSTR6480ASS200 SSD. 360    8+1                yes       450
         were in the 15 JBOD enclosures; the
         remaining 90 were installed in the
         front bays of the storage nodes.

   Number of Filesystems   1
   Total Capacity          285 TiB
   Filesystem Type         ProdigyFS

Filesystem Creation Notes
-------------------------

   A single Prodigy distributed file system was created over the 450 SSD
   drives across the 10 storage nodes. Any data in the file system could be
   accessed from any node of the storage cluster. The disk protection scheme
   was configured as 8+1.

Storage and Filesystem Notes
----------------------------

   The SSDs were configured into groups of 9 disks, each group tolerating 1
   disk failure. Data and parity were striped across all disks within a
   group, and the single Prodigy distributed file system was created over
   all of these groups.
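   As a back-of-the-envelope check (illustration only), the usable capacity
   implied by 450 x 800GB SSDs under the 8+1 scheme described above can be
   estimated as follows; the gap to the reported 285 TiB is assumed to be
   filesystem metadata and reserved space.

      # Estimate usable capacity of 450 x 800GB SSDs under 8+1 protection.
      DRIVES = 450
      DRIVE_BYTES = 800 * 10**9      # 800GB, decimal as marketed
      DATA_FRACTION = 8 / 9          # 8 data disks per 9-disk group

      raw_tib = DRIVES * DRIVE_BYTES / 2**40
      usable_tib = raw_tib * DATA_FRACTION
      print(f"raw: {raw_tib:.0f} TiB, usable (8+1): {usable_tib:.0f} TiB")
      # -> raw: 327 TiB, usable (8+1): 291 TiB, in line with the 285 TiB
      #    reported above after filesystem overhead (assumption).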
Transport Configuration - Physical
==================================

   Item                    Number of
    No   Transport Type    Ports Used  Notes
   ----  ----------------  ----------  ---------------------------------------
     1   56Gbps IB         10          The solution used a total of 10 56Gbps
                                       IB ports from the storage nodes (1 port
                                       per node) to the IB switch.
     2   100Gbps IB        21          The solution used a total of 21 100Gbps
                                       IB ports from the clients (1 port from
                                       each of the 20 clients and 1 port from
                                       the prime) to the IB switch.

Transport Configuration Notes
-----------------------------

   Each storage node had a dual-port 56Gbps IB HCA, with only 1 port used
   during the test. Each client had a dual-port 100Gbps IB HCA, with only 1
   port used during the test.

Switches - Physical
===================

                                            Total  Used
   Item                                     Port   Port
    No   Switch Name       Switch Type      Count  Count  Notes
   ----  ----------------  ---------------  -----  -----  -----------------------
     1   Mellanox SB7790   100Gbps IB       36     31     21 100G IB cables from
                                                          the clients, 10 56G IB
                                                          cables from the storage
                                                          cluster nodes.

Processing Elements - Physical
==============================

   Item
    No   Qty  Type  Location         Description                 Processing Function
   ----  ---  ----  ---------------  --------------------------  -------------------
     1    20  CPU   Prodigy Storage  Intel(R) Xeon(R) E5-2650    NFS (over RDMA)
                                     v4 @ 2.20GHz, 12-core CPU   Server
                                     with hyperthreading
     2    40  CPU   Load Generator,  Intel(R) Xeon(R) E5-2620    NFS (over RDMA)
                    Client           v4 @ 2.10GHz, 8-core CPU    Client
                                     with hyperthreading
     3     2  CPU   Prime            Intel(R) Xeon(R) E5-2620    SPEC SFS2014 Prime
                                     v4 @ 2.10GHz, 8-core CPU
                                     with hyperthreading

Processing Element Notes
------------------------

   Each node had 2 CPUs.

Memory - Physical
=================

                                 Size in  Number of
   Description                   GiB      Instances  Nonvolatile  Total GiB
   ----------------------------  -------  ---------  -----------  ---------
   Prodigy storage node memory   256      10         V            2560
   Client memory                 384      20         V            7680
   Prime memory                  384      1          V            384

   Grand Total Memory Gibibytes                                    10624

Memory Notes
------------

   Each storage node had 8 X 32GiB RAM installed, each client had 12 X 32GiB
   RAM installed, and the prime node had 12 X 32GiB RAM installed.

Stable Storage
==============

   The Prodigy System uses disks to store all data writes. Once a data write
   is acknowledged, it has been committed to disk. Further, the disks under
   test were configured with "8+1" data protection.

Solution Under Test Configuration Notes
=======================================

   The 10 storage nodes were each installed with the standard Prodigy OS and
   then configured into a single storage cluster. None of the components used
   to perform the test were patched with Spectre or Meltdown patches
   (CVE-2017-5754, CVE-2017-5753, CVE-2017-5715).

Other Solution Notes
====================

   None.

Dataflow
========

   All 20 clients and 10 storage nodes were connected via a single IB switch.
   Through the IB switch, any client could directly access data from any node
   in the storage cluster.

Other Notes
===========

   None.

Other Report Notes
==================

   None.

===============================================================================

Generated on Tue Jun 22 18:49:55 2021 by SpecReport
Copyright (C) 2016-2021 Standard Performance Evaluation Corporation