SPEC SFS®2014_vda Result

Copyright © 2016-2021 Standard Performance Evaluation Corporation

ELEMENTS - syslink GmbH : SPEC SFS2014_vda = 11000 Streams
ELEMENTS BOLT w. BeeGFS 7.2.3 - VDA Benchmark Results : Overall Response Time = 1.08 msec


Performance

Business Metric (Streams) | Average Latency (msec) | Streams Ops/Sec | Streams MB/Sec
1000  | 0.937 | 10006  | 4605
2000  | 0.990 | 20013  | 9206
3000  | 1.028 | 30020  | 13852
4000  | 0.984 | 40026  | 18488
5000  | 1.059 | 50033  | 23056
6000  | 1.043 | 60040  | 27675
7000  | 1.077 | 70047  | 32303
8000  | 1.086 | 80053  | 36882
9000  | 1.148 | 90060  | 41561
10000 | 1.273 | 100067 | 46160
11000 | 1.321 | 110074 | 50708
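
Throughput scales linearly with load: each stream sustains roughly 4.6 MB/s at every load point (e.g. 50708 MB/s / 11000 streams at the maximum load point), while average latency stays below 1.4 msec throughout.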
Performance Graph


Product and Test Information

ELEMENTS BOLT w. BeeGFS 7.2.3 - VDA Benchmark Results
Tested by: ELEMENTS - syslink GmbH
Hardware Available: May 2021
Software Available: May 2021
Date Tested: 8th August 2021
License Number: 6311
Licensee Locations: Duesseldorf, Germany

ELEMENTS BOLT with the BeeGFS filesystem is an all-NVMe storage solution designed for media and entertainment workflows. It provides unmatched performance while remaining future-proof thanks to its open architecture and seamless integration into on-premises, cloud or hybrid media workflows. It scales in both capacity and performance, providing best-in-class throughput and latency at a very small operational footprint. ELEMENTS' unique set of media-centric workflow features (such as the integrated automation engine and web-based asset management) extends its capabilities far beyond those of common IT storage products.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 12 | Storage Node | ELEMENTS | ELEMENTS BOLT | 2U server, all-NVMe storage node with BeeGFS filesystem, scalable in capacity and performance. Each BOLT is half-populated with 12 NVMe devices (Micron 9300), 192GB RAM, 2x single-port Mellanox ConnectX-5 100GbE HBA (MCX515A-CCAT), 1x dual-port 1Gbit Intel HBA, 1x 100Gbit link from the Mellanox HBA to the switch fabric for storage RDMA traffic. 2x Micron 5300 960GB SSDs for OS / boot.
2 | 20 | Client Node | ELEMENTS | ELEMENTS Gateway | 2U server with 32GB of RAM and an Intel XEON Silver CPU at 3.2 GHz. 2x single-port Mellanox 100GbE HBA (MCX515A-CCAT) with a single 50Gbit connection to the switch fabric. 2x Micron 5300 480GB SSDs for OS / boot.
3 | 1 | Prime Client Node | ELEMENTS | ELEMENTS Worker Node | 2U server with 32GB of RAM and an AMD Threadripper CPU at 3.9 GHz. 1x dual-port Mellanox 100GbE HBA (MCX516A-CCAT) with a single 50Gbit connection to the switch fabric. 2x Micron 5300 480GB SSDs for OS / boot. The Worker Node is used as the "SPEC Prime".
4 | 1 | Switch | Mellanox | MSN2700-CS2R | Mellanox switch with 32x 100Gbit ports; client-facing 100Gbit ports are split into 2x 50Gbit. Used for storage communication via RDMA between the clients and the storage nodes. Priority Flow-Control (PFC) configured for RoCEv2.
5 | 12 | Cable | Mellanox | MCP1600-C005E26L-BL | QSFP28-to-QSFP28 connection; 100Gbit switch port to 100Gbit connector for ELEMENTS BOLT (storage node) connectivity.
6 | 11 | Cable | Mellanox | MCP7H00-G004R26L | QSFP28 to 2x QSFP28 breakout cable to fan out a 100Gbit switch port into 2x 50Gbit connectors for the 20x ELEMENTS Gateway (load generator) nodes and the 1x Prime Client Node.
7 | 1 | Switch | Arista | DCS-7050T-64-F | Arista switch with 48x 10Gbit ports, used as lab switch for administrative access. No special configuration applied; not part of the SUT.

Configuration Diagrams

  1. ELEMENTS BOLT VDA SPECSFS 2014

Component Software

Item No | Component | Type | Name and Version | Description
1 | Storage Node | Operating System | ELEMENTS Linux 7.5 | Operating system on storage nodes. CentOS 7.5 based (Linux kernel 3.10.0-862.14.4) with Mellanox OFED 4.9.
2 | Storage Node | Filesystem | BeeGFS 7.2.3 | Filesystem on storage nodes. BeeGFS 7.2.3 storage, metadata and management daemons.
3 | Client Node | Operating System | ELEMENTS Linux 7.7 | Operating system on client nodes. CentOS 7.7 based (Linux kernel 3.10.0-1062) with Mellanox OFED 4.9.
4 | Client Node | Filesystem Client | BeeGFS 7.2.3 | Filesystem client on ELEMENTS Gateway (client nodes). BeeGFS 7.2.3 client daemon.
5 | Worker Node | Operating System | ELEMENTS Linux 7.7 | Operating system on worker node (SPEC Prime). CentOS 7.7 based with Linux 5.3 kernel.

Hardware Configuration and Tuning - Physical

ELEMENTS BOLT
Parameter Name | Value | Description
C-State Power Management | C0 | Switch CPU C-state management to use state C0 only

ELEMENTS GATEWAY

Parameter Name | Value | Description
C-State Power Management | C0 | Switch CPU C-state management to use state C0 only
Jumbo Frames | 9000 | Use jumbo Ethernet frames for optimised throughput and CPU utilisation
Priority Flow-Control | enabled | Configure Priority Flow-Control / QoS VLAN tagging on all nodes to manage RoCEv2 RDMA traffic flow
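
The report does not disclose the exact commands used to apply these settings; the following is a minimal sketch of how they are commonly applied on CentOS 7 based nodes (the interface name, the PFC priority and the kernel-parameter approach are assumptions, not the tested configuration):

  # Keep cores in C0 by disabling deeper C-states at boot (illustrative parameters)
  grubby --update-kernel=ALL --args="processor.max_cstate=0 intel_idle.max_cstate=0"

  # 9000-byte jumbo frames on the storage-facing interface (name assumed)
  ip link set dev ens1f0 mtu 9000

  # Enable Priority Flow-Control on priority 3 for RoCEv2 traffic
  # (mlnx_qos ships with Mellanox OFED; priority 3 is the common convention)
  mlnx_qos -i ens1f0 --pfc 0,0,0,1,0,0,0,0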

Hardware Configuration and Tuning Notes

None

Software Configuration and Tuning - Physical

ELEMENTS BOLT
Parameter Name | Value | Description
Jumbo Frames | 9000 | Use jumbo Ethernet frames for optimised throughput and CPU utilisation
Priority Flow-Control | enabled | Configure Priority Flow-Control / QoS VLAN tagging on all nodes to manage RoCEv2 RDMA traffic flow
BeeGFS storage worker threads | 24 | Raise the storage daemon worker threads from 12 to 24
BeeGFS metadata worker threads | 24 | Raise the metadata daemon worker threads from 12 to 24

ELEMENTS GATEWAY

Parameter Name | Value | Description
Jumbo Frames | 9000 | Use jumbo Ethernet frames for optimised throughput and CPU utilisation
Priority Flow-Control | enabled | Configure Priority Flow-Control / QoS VLAN tagging on all nodes to manage RoCEv2 RDMA traffic flow. See Mellanox configuration guides for details.
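
The worker-thread increase corresponds to the tuneNumWorkers option in the BeeGFS daemon configuration files; a sketch using the BeeGFS default file locations:

  # /etc/beegfs/beegfs-storage.conf  (storage daemon)
  tuneNumWorkers = 24

  # /etc/beegfs/beegfs-meta.conf  (metadata daemon)
  tuneNumWorkers = 24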

Software Configuration and Tuning Notes

The BeeGFS client uses its default mount options. Each node has Priority Flow-Control enabled for lossless RDMA backend communication according to the Mellanox configuration guides. Aside from this, the SOMAXCONN kernel parameter was raised according to SPEC tuning guidelines to sustain the high number of TCP socket connections from the SPEC Prime client node; this is independent of the SUT.
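
A sketch of the SOMAXCONN change (the actual value used is not stated in the report):

  # Raise the TCP accept-queue limit for the prime's many connections
  sysctl -w net.core.somaxconn=4096
  # Persist across reboots (file name is illustrative)
  echo "net.core.somaxconn = 4096" >> /etc/sysctl.d/90-spec.conf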

Service SLA Notes

None

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | Micron 9300 3.84TB NVMe SSD, 12 per storage node | RAID5 (11+1) | Yes | 144
Number of Filesystems: 1
Total Capacity: 472068 GiB
Filesystem Type: BeeGFS 7.2.3

Filesystem Creation Notes

One filesystem across all 12 nodes, striping chunk size 8MB, 4 storage targets per file.
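
With BeeGFS, this striping pattern is typically set via beegfs-ctl; a sketch assuming the filesystem is mounted at /mnt/beegfs:

  # 8MB chunk size, 4 storage targets per file, applied to the filesystem root
  beegfs-ctl --setpattern --chunksize=8m --numtargets=4 /mnt/beegfs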

Storage and Filesystem Notes

Each node had one RAID5 volume (11+1, 16kb RAID stripe size) across its 12 NVMe devices and exported it as one LUN. All ELEMENTS BOLT nodes were only half populated.
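
The report does not state whether hardware or software RAID was used. If built with Linux mdadm, an equivalent layout could look like this (device names and the mdadm approach are assumptions, not the tested configuration):

  # 12-device RAID5 (11 data + 1 parity) with a 16KiB chunk, matching the
  # described 16kb RAID stripe size
  mdadm --create /dev/md0 --level=5 --raid-devices=12 --chunk=16 /dev/nvme{0..11}n1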

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 100GbE | 12 | Storage nodes were using 100Gbit link speed
2 | 50GbE | 20 | Client nodes were only using 50Gbit link speed due to the port split on the switch
3 | 50GbE | 1 | Prime Client node was connected to the storage network using a single 50Gbit link
4 | 1GbE | 32 | All storage and client nodes were connected to a 1Gbit house network for administrative and management access

Transport Configuration Notes

Actual link speeds of the client nodes were only 50Gbit due to the port split configuration on the Mellanox switch.
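
The 2x 50Gbit split is configured on the switch itself; on Mellanox Onyx (the switch OS is not stated in the report) this is typically done per port, e.g.:

  # Onyx CLI sketch; port number is illustrative
  interface ethernet 1/16 module-type qsfp-split-2 force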

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Mellanox | MSN2700-CS2R | 32 | 32 | 100Gbit ports attached to clients operate in 50Gbit split-port mode (2x 50Gbit ports per physical 100Gbit port). Priority Flow-Control enabled.
2 | Arista | DCS-7050T-64-F | 48 | 48 | 48-port 10Gbit switch, used as lab switch for administrative access. No special configuration applied; not part of the SUT.

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 24 | CPU | Storage Node | Intel XEON Gold 5222 4-core CPU, 3.8GHz | Storage Nodes
2 | 20 | CPU | Client Node | Intel XEON Silver 4215R 8-core CPU, 3.2GHz | Client Nodes
3 | 1 | CPU | Prime Client Node | AMD Threadripper PRO 3955WX 16-core, 3.9GHz | Prime Client Node

Processing Element Notes

None

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
ELEMENTS BOLT (storage) memory | 192 | 12 | V | 2304
ELEMENTS Gateway (client) memory | 32 | 20 | V | 640
Prime memory | 32 | 1 | V | 32
Grand Total Memory Gibibytes: 2976
(V = volatile)

Memory Notes

The storage nodes have a total of 2304GiB of memory (12 x 192GiB), the client nodes a total of 640GiB (20 x 32GiB), and the prime client node 32GiB of RAM. Excluding the prime client, the SUT has a total memory of 2944GiB.

Stable Storage

The ELEMENTS BOLT does not use a write cache to hold data in flight; writes are immediately committed to the NVMe storage media. FSYNC is enforced across the whole storage stack (the default BeeGFS filesystem setting). All involved components use redundant power supplies and RAID1 in write-through mode for the operating system disks.
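
The default fsync behaviour mentioned above likely corresponds to the tuneRemoteFSync client option, which forwards client fsync() calls to the storage servers; shown here with its BeeGFS default value (which specific setting the report refers to is an assumption):

  # /etc/beegfs/beegfs-client.conf (BeeGFS default)
  tuneRemoteFSync = true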

Solution Under Test Configuration Notes

The tested storage configuration is a common solution based on standard ELEMENTS hardware, designed for the highest-performance media production workflows. These include ingesting and streaming media in various formats to video production workstations, as well as high-throughput frame-based and VFX workflows. All components used to perform the test were patched against Spectre and Meltdown (CVE-2017-5754, CVE-2017-5753, CVE-2017-5715).

Other Solution Notes

None

Dataflow

All storage and client nodes are connected to the listed Mellanox switch. The storage is accessed from the client nodes via RDMA (RoCEv2) using the native BeeGFS client. The network layer has been configured to use Priority Flow-Control to manage the data flow, as per the Mellanox configuration guidelines for RoCEv2.
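
RDMA transport is selected in the BeeGFS client configuration; a sketch (connUseRDMA is the standard option name and defaults to true when OFED support is available):

  # /etc/beegfs/beegfs-client.conf
  connUseRDMA = true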

Other Notes

None

Other Report Notes

None


Generated on Tue Aug 24 07:10:13 2021 by SpecReport
Copyright © 2016-2021 Standard Performance Evaluation Corporation