SPEC SFS®2014_vda Result

Copyright © 2016-2021 Standard Performance Evaluation Corporation

CeresData Co., Ltd. | SPEC SFS2014_vda = 9600 Streams
CeresData Prodigy Distributed Storage System | Overall Response Time = 1.62 msec


Performance

Business Metric (Streams) | Average Latency (msec) | Streams Ops/Sec | Streams MB/Sec
 960 | 1.389 |  9606 |  4431
1920 | 1.437 | 19213 |  8869
2880 | 1.467 | 28819 | 13289
3840 | 1.523 | 38426 | 17723
4800 | 1.563 | 48032 | 22146
5760 | 1.640 | 57639 | 26582
6720 | 1.669 | 67245 | 31042
7680 | 1.749 | 76852 | 35411
8640 | 1.880 | 86459 | 39873
9600 | 1.962 | 96065 | 44303
Performance Graph
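As a simple arithmetic cross-check of the table above (derived only from the reported figures; the per-stream rate is not an official metric), the load scales nearly linearly at roughly 10 ops/sec and 4.6 MB/sec per stream:

\[
\frac{96065\ \text{ops/sec}}{9600\ \text{streams}} \approx 10.0\ \text{ops/sec per stream},
\qquad
\frac{44303\ \text{MB/sec}}{9600\ \text{streams}} \approx 4.6\ \text{MB/sec per stream}.
\]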


Product and Test Information

CeresData Prodigy Distributed Storage System
Tested by: CeresData Co., Ltd.
Hardware Available: March 2021
Software Available: December 2020
Date Tested: May 2021
License Number: 6255
Licensee Locations: Beijing, P.R. China

The CeresData Prodigy System is an enterprise distributed storage system that delivers predictable high performance and scalability for both file and block storage. It supports a wide range of hardware architectures, including x86 and ARM. Its rich feature set and good scalability make it a strong contender for various enterprise applications, including traditional NAS/SAN, real-time workloads, and distributed databases.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 1 | Distributed Storage OS | CeresData | CeresData Prodigy OS V9 | CeresData Prodigy OS is a feature-rich, standards-compliant, high-performance enterprise distributed storage system.
2 | 1 | Storage server | CeresData | D-Fusion 5000 SOC | 10-node storage cluster. Each node has 2 Intel Xeon E5-2650 v4 (12-core CPU with hyperthreading), 256GiB memory (8 x 32GiB), 1 Mellanox ConnectX-3 56Gb/s InfiniBand HCA (1 port connected to the 100Gbps IB switch), 1 external dual-port 10GbE adapter (2 ports connected to the 10G Ethernet switch for cluster-internal communication), and 1 4GB SATADOM to hold the Prodigy OS.
3 | 11 | Storage Client Chassis | AIC | HA401-LB2 | Dual-node 4U chassis, 22 nodes in total: 20 nodes configured as clients, 1 node as the prime server, and 1 node unused. Each client node has 2 Intel Xeon E5-2620 v4 (8-core CPU with hyperthreading), 384GiB memory (12 x 32GiB), and 1 Mellanox ConnectX-4 100Gb/s InfiniBand HCA (1 port connected to the 100Gbps IB switch).
4 | 15 | JBOD enclosure | CeresData | B702 | 2U 24-bay 12G SAS dual-expander JBOD enclosure, accommodating most of the SSDs deployed to the storage cluster.
5 | 1 | 100Gbps IB Switch | Mellanox | SB7790 | 36-port 100Gbps InfiniBand switch.
6 | 1 | 10GbE Switch | Maipu | MyPower S5820 | 48-port 10Gbps Ethernet switch.
7 | 1 | 1GbE Switch | TP-LINK | TL-SF1048S | 48-port 1Gbps Ethernet switch.
8 | 30 | SAS HBA | Broadcom | SAS9300-8e | SAS 3008 Fusion MPT2.5, 8-port 12Gb/s HBA. Each storage node had 3 external SAS9300-8e HBAs installed; each HBA used 1 cable to connect to a JBOD enclosure.
9 | 21 | 100G IB HCA | Mellanox | CX456A | ConnectX-4 EDR + 100GbE InfiniBand HCA. Each client had 1 IB HCA installed; the prime node had 1 IB HCA installed.
10 | 10 | 56G IB HCA | Mellanox | CX354A | ConnectX-3 FDR InfiniBand + 40GigE HCA. Each storage node had 1 IB HCA installed.
11 | 10 | 10G NIC | Silicom | PE210G2SPI9-XR | 10GbE dual-port NIC. Each storage node had 1 external 10GbE dual-port NIC installed.
12 | 450 | SSD | WDC | WUSTR6480ASS200 | 800GB WDC WUSTR6480ASS200 SSD. 360 were in the 15 JBOD enclosures; the remaining were installed in the front bays of the storage nodes.

Configuration Diagrams

  1. CeresData Configuration Diagram

Component Software

Item No | Component | Type | Name and Version | Description
1 | Storage Node | Prodigy OS | Prodigy V9 | All 10 storage nodes were installed with the same Prodigy OS and configured as a single storage cluster.
2 | Client | Operating System | RHEL 7.4 | All 20 client nodes and the prime node were installed with the same RHEL version 7.4.

Hardware Configuration and Tuning - Physical

None
Parameter Name | Value | Description
None | None | None

Hardware Configuration and Tuning Notes

The hardware used the default configuration from CeresData, Mellanox, etc.

Software Configuration and Tuning - Physical

Client
Parameter Name | Value | Description
/proc/sys/vm/min_free_kbytes | 10240000 | Set on client machines. A shortage of reclaimable pages encountered by the InfiniBand driver caused page allocation failures and spurious stack traces in syslog; raising this value to 10240000 made the errors go away.
proto | rdma | Used by the clients to mount the storage directories with NFS over RDMA.
wsize | 524288 | Namely 512KiB. Used as the clients' NFS mount option.

Software Configuration and Tuning Notes

Specified as above.
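For illustration only, a minimal sketch of how the client-side settings above could be applied, assuming root privileges; the export path "storage:/prodigy" and mount point "/mnt/prodigy" are hypothetical placeholders, and only min_free_kbytes=10240000, proto=rdma, and wsize=524288 come from this report:

"""Apply the disclosed client-side tuning (sketch, not the vendor's tooling)."""
import subprocess


def tune_min_free_kbytes(kbytes: int = 10240000) -> None:
    # Raise vm.min_free_kbytes so the InfiniBand driver no longer hits
    # page-allocation failures under memory pressure.
    with open("/proc/sys/vm/min_free_kbytes", "w") as f:
        f.write(str(kbytes))


def mount_nfs_over_rdma(export: str = "storage:/prodigy",
                        mountpoint: str = "/mnt/prodigy") -> None:
    # Mount the storage directory over NFS with the RDMA transport and a
    # 512KiB write size, matching the disclosed mount options.
    options = "proto=rdma,wsize=524288"
    subprocess.run(["mount", "-t", "nfs", "-o", options, export, mountpoint],
                   check=True)


if __name__ == "__main__":
    tune_min_free_kbytes()
    mount_nfs_over_rdma()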

Service SLA Notes

None.

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 800GB WDC WUSTR6480ASS200 SSD. 360 were in the 15 JBOD enclosures; the remaining were installed in the front bays of the storage nodes. | 8+1 | yes | 450
Number of Filesystems | 1
Total Capacity | 285 TiB
Filesystem Type | ProdigyFS

Filesystem Creation Notes

A single Prodigy distributed file system was created over the 450 SSD drives across the 10 storage nodes. Any data in the file system could be accessed from any node of the storage cluster. The disk protection scheme was configured as 8+1.
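As a rough cross-check (assuming the drives' nominal 800 GB label means 800 x 10^9 bytes), the raw and usable capacities work out to approximately

\[
450 \times 800\,\mathrm{GB} \approx 327\,\mathrm{TiB}\ \text{(raw)},
\qquad
327\,\mathrm{TiB} \times \tfrac{8}{9} \approx 291\,\mathrm{TiB},
\]

which is consistent with the reported usable capacity of 285 TiB once filesystem metadata and other overhead are accounted for.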

Storage and Filesystem Notes

The SSDs were configured into groups; each group had 9 disks and tolerated 1 disk failure. Data and parity were striped across all disks within a group. The single Prodigy distributed file system was created over all of these groups.
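To illustrate the 8+1 layout conceptually, the sketch below stripes a chunk into 8 data strips plus 1 parity strip per 9-disk group. The actual Prodigy encoding is not disclosed in this report; single XOR parity and the 4096-byte strip size are assumptions used only to show how a 1-disk failure can be tolerated.

from typing import List

STRIP = 4096        # hypothetical strip size in bytes
DATA_DISKS = 8      # 8 data strips + 1 parity strip = 9-disk group


def xor_parity(strips: List[bytes]) -> bytes:
    # XOR all strips byte-by-byte to produce the parity strip.
    parity = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            parity[i] ^= b
    return bytes(parity)


def encode_stripe(chunk: bytes) -> List[bytes]:
    # Split one chunk into 8 data strips, then append the parity strip.
    chunk = chunk.ljust(DATA_DISKS * STRIP, b"\0")
    strips = [chunk[i * STRIP:(i + 1) * STRIP] for i in range(DATA_DISKS)]
    strips.append(xor_parity(strips))
    return strips


def rebuild_strip(stripe: List[bytes], lost: int) -> bytes:
    # Any single lost strip (data or parity) is the XOR of the other 8.
    return xor_parity([s for i, s in enumerate(stripe) if i != lost])


if __name__ == "__main__":
    stripe = encode_stripe(b"example payload " * 2048)   # exactly 8 strips of data
    assert rebuild_strip(stripe, 3) == stripe[3]          # survives 1 disk failure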

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 56Gbps IB | 10 | The solution used a total of 10 56Gbps IB ports from the storage nodes (1 port per node) to the IB switch.
2 | 100Gbps IB | 21 | The solution used a total of 21 100Gbps IB ports from the clients (1 port from each of the 20 clients and 1 port from the prime node) to the IB switch.

Transport Configuration Notes

Each storage node has a dual-port 56Gbps IB HCA with only 1 port used during the test. Each client has a dual-port 100Gbps IB HCA with only 1 port used during the test.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Mellanox SB7790 | 100Gbps IB | 36 | 31 | 21 100G IB cables from the clients and 10 56G IB cables from the storage cluster nodes.

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 20 | CPU | Prodigy Storage | Intel(R) Xeon(R) E5-2650 v4 @ 2.20GHz, 12-core CPU with hyperthreading | NFS (over RDMA) Server
2 | 40 | CPU | Load Generator, Client | Intel(R) Xeon(R) E5-2620 v4 @ 2.10GHz, 8-core CPU with hyperthreading | NFS (over RDMA) Client
3 | 2 | CPU | Prime | Intel(R) Xeon(R) E5-2620 v4 @ 2.10GHz, 8-core CPU with hyperthreading | SPEC SFS2014 Prime

Processing Element Notes

Each node had 2 CPUs.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Prodigy storage node memory | 256 | 10 | V | 2560
Client memory | 384 | 20 | V | 7680
Prime memory | 384 | 1 | V | 384
Grand Total Memory Gibibytes | | | | 10624

Memory Notes

Each storage node had 8 x 32GiB RAM installed, each client had 12 x 32GiB RAM installed, and the prime node had 12 x 32GiB RAM installed.
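The grand total in the memory table follows directly from the per-node figures:

\[
10 \times 256\,\mathrm{GiB} + 20 \times 384\,\mathrm{GiB} + 1 \times 384\,\mathrm{GiB}
= 2560 + 7680 + 384 = 10624\ \mathrm{GiB}.
\]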

Stable Storage

The Prodigy System stores all data writes on disk; a write is acknowledged only after it has been committed to disk. Further, the disks under test were configured with "8+1" data protection.

Solution Under Test Configuration Notes

The 10 storage nodes were each installed with standard Prodigy OS and then configured into a single storage cluster. None of the components used to perform the test were patched with Spectre or Meltdown patches (CVE-2017-5754, CVE-2017-5753, CVE-2017-5715).

Other Solution Notes

None.

Dataflow

All 20 clients and 10 storage nodes were connected via a single IB switch. Through the IB switch, any client can directly access any storage data from any node in the storage cluster.

Other Notes

None.

Other Report Notes

None.


Generated on Tue Jun 22 18:49:55 2021 by SpecReport
Copyright © 2016-2021 Standard Performance Evaluation Corporation