SPEC SFS®2014_swbuild Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

WekaIO : SPEC SFS2014_swbuild = 1200 Builds
WekaIO Matrix 3.1 with Supermicro BigTwin Servers : Overall Response Time = 1.02 msec


Performance

Business Metric (Builds) | Average Latency (msec) | Builds Ops/Sec | Builds MB/Sec
 120 | 0.572 |  60002 |  887
 240 | 0.599 | 120004 | 1776
 360 | 0.620 | 180007 | 2663
 480 | 0.662 | 240009 | 3551
 600 | 0.740 | 300011 | 4440
 720 | 0.884 | 360013 | 5328
 840 | 1.141 | 420015 | 6216
 960 | 1.492 | 480017 | 7104
1080 | 1.738 | 540019 | 7992
1200 | 1.971 | 600015 | 8880
Performance Graph


Product and Test Information

WekaIO Matrix 3.1 with Supermicro BigTwin Servers
Tested by: WekaIO
Hardware Available: July 2017
Software Available: November 2017
Date Tested: February 2018
License Number: 4553
Licensee Locations: San Jose, California

WekaIO Matrix is a flash-native, parallel and distributed, scale-out file system designed to solve the challenges of the most demanding workloads, including AI and machine learning, genomic sequencing, real-time analytics, media rendering, EDA, software development, and technical computing. Matrix software manages and dynamically scales data stores of up to hundreds of petabytes as a single-namespace, globally shared, POSIX-compliant file system that delivers industry-leading performance and scale at a fraction of the price of traditional storage products. The software can be deployed as a dedicated storage appliance or in a hyperconverged mode with zero additional storage footprint, and can be used on-premises as well as in the public cloud. WekaIO Matrix is a software-only solution that runs on standard x86 hardware infrastructure, delivering substantial savings compared to proprietary all-flash appliances. This test platform is a dedicated storage implementation on Supermicro BigTwin servers.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1  | 1   | Parallel File System   | WekaIO     | Matrix Software V3.1        | WekaIO Matrix is a parallel and distributed POSIX file system that scales across compute nodes and distributes data and metadata across the nodes for parallel access.
2  | 4   | Storage Server Chassis | Supermicro | SYS-2029BT-HNR              | Supermicro BigTwin chassis, each with 4 nodes per 2U chassis, populated with 4 NVMe drives per node.
3  | 64  | 1.2TB U.2 NVMe SSD     | Micron     | MTFDHAL1T2MCF               | Micron 9100 U.2 NVMe enterprise-class drives.
4  | 32  | Processor              | Intel      | BX806735122                 | Intel Xeon Gold 5122 4-core 3.6GHz processor.
5  | 16  | Network Interface Card | Mellanox   | MCX456A-ECAT                | 100Gbit ConnectX-4 Ethernet dual-port PCI-E adapters, one per node.
6  | 192 | DIMM                   | Supermicro | DIMM 8192MB 2667MHz SRx4 ECC | System memory, DDR4 2667MHz ECC.
7  | 16  | Boot Drive             | Supermicro | MEM-IDSAVM8-128G            | SATA DOM boot drive, 128GB.
8  | 16  | Network Interface Card | Intel      | AOC-MGP-I2M-O               | 2-port Intel i350 1GbE RJ45 SIOM.
9  | 16  | BIOS Module            | Supermicro | SFT-OOB-LIC                 | Out-of-band firmware management BIOS flash.
10 | 1   | Switch                 | Mellanox   | MSN2700-CS2FC               | 32-port 100GbE switch.
11 | 11  | Clients                | AIC        | HP-201-AD                   | AIC chassis, each with 4 servers per 2U chassis. Each server had 2 Intel(R) Xeon(R) E5-2640 v4 CPUs and 128GB memory. A total of 11 of the 12 available servers were used in the testing.

Configuration Diagrams

  1. Solution Under Test

Component Software

Item No | Component | Type | Name and Version | Description
1 | Storage Node | MatrixFS File System | 3.1        | WekaIO Matrix is a distributed and parallel POSIX file system that runs on any NVMe, SAS or SATA enabled commodity server or cloud compute instance and forms a single storage cluster. The file system presents a POSIX compliant, high performance, scalable global namespace to the applications.
2 | Storage Node | Operating System     | CENTOS 7.3 | The operating system on each storage node was 64-bit CENTOS Version 7.3.
3 | Client       | Operating System     | CENTOS 7.3 | The operating system on the load generator client was 64-bit CENTOS Version 7.3.
4 | Client       | MatrixFS Client      | 3.1        | MatrixFS Client software is mounted on the load generator clients and presents a POSIX compliant file system.

Hardware Configuration and Tuning - Physical

Storage Node
Parameter Name | Value | Description
SR-IOV | Enabled | Enables single-root I/O virtualization (SR-IOV) for the network adapters

Hardware Configuration and Tuning Notes

SR-IOV was enabled in the node BIOS. Hyper threading was disabled. No additional hardware tuning was required.
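
The following is a minimal verification sketch (Python, standard Linux sysfs paths) for confirming that these BIOS settings took effect on a storage node; the PCI address used for the ConnectX-4 adapter is a placeholder, not a value from this report.

    #!/usr/bin/env python3
    # Sanity-check the BIOS-level tuning described above on a Linux storage node.
    # Minimal sketch using standard sysfs paths; the PCI address below is a
    # placeholder for the Mellanox ConnectX-4 adapter, not taken from this report.
    from pathlib import Path

    def hyperthreading_disabled() -> bool:
        # With hyper-threading off, each core lists only itself as a sibling.
        siblings = Path("/sys/devices/system/cpu/cpu0/topology/thread_siblings_list")
        value = siblings.read_text().strip()
        return "," not in value and "-" not in value

    def sriov_total_vfs(pci_addr: str) -> int:
        # A non-zero sriov_totalvfs means the device exposes SR-IOV virtual functions.
        path = Path(f"/sys/bus/pci/devices/{pci_addr}/sriov_totalvfs")
        return int(path.read_text()) if path.exists() else 0

    if __name__ == "__main__":
        print("Hyper-threading disabled:", hyperthreading_disabled())
        print("SR-IOV VFs supported:", sriov_total_vfs("0000:5e:00.0"))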

Software Configuration and Tuning - Physical

Storage Node
Parameter Name | Value | Description
Jumbo Frames | 4190 | Enables Ethernet frames of up to 4190 bytes

Client
Parameter Name | Value | Description
WriteAmplificationOptimizationLevel | 0 | Write amplification optimization level

Software Configuration and Tuning Notes

The 4190-byte MTU setting is required, and it is valid for all environments and workloads.
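
As an illustration, here is a minimal sketch for checking that the 4190-byte MTU is in place on a node's cluster-facing interface; the interface name is hypothetical, not taken from this report.

    #!/usr/bin/env python3
    # Verify the jumbo-frame MTU from the table above on a Linux node.
    # Minimal sketch; "ens1f0" is a hypothetical interface name.
    from pathlib import Path

    EXPECTED_MTU = 4190  # value from the Software Configuration table

    def interface_mtu(ifname: str) -> int:
        return int(Path(f"/sys/class/net/{ifname}/mtu").read_text())

    if __name__ == "__main__":
        mtu = interface_mtu("ens1f0")
        status = "OK" if mtu == EXPECTED_MTU else f"expected {EXPECTED_MTU}"
        print(f"MTU={mtu} ({status})")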

Service SLA Notes

Not applicable

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 1.2TB U.2 Micron 9100 Pro NVMe SSD in the Supermicro BigTwin node | 14+2 | Yes | 64
2 | 128G SATA DOM in the Supermicro BigTwin node to store OS          | -    | Yes | 16

Number of Filesystems: 1
Total Capacity: 48.8 TiB
Filesystem Type: MatrixFS

Filesystem Creation Notes

A single WekaIO Matrix file system was created and distributed evenly across all 64 NVMe drives in the cluster (16 storage nodes x 4 drives/node). Data was protected to a 14+2 failure level.
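
A rough capacity sketch below relates the raw drive space, the 14+2 parity efficiency, and the reported 48.8 TiB usable capacity; attributing the remaining gap to file-system metadata and reserved space is an assumption, not a figure from this report.

    # Rough capacity arithmetic for the 64-drive, 14+2 protected file system.
    # Assumes decimal drive sizing (1.2 TB = 1.2e12 bytes); the gap between the
    # parity-adjusted figure and the reported 48.8 TiB usable capacity is assumed
    # to be file-system metadata and reserved/spare space (not stated in this report).
    DRIVES = 64
    DRIVE_BYTES = 1.2e12          # 1.2 TB per Micron 9100 NVMe drive
    TIB = 2 ** 40

    raw_tib = DRIVES * DRIVE_BYTES / TIB              # ~69.8 TiB raw
    parity_efficiency = 14 / (14 + 2)                 # 87.5% with 14+2 coding
    data_space_tib = raw_tib * parity_efficiency      # ~61.1 TiB before overheads

    print(f"raw {raw_tib:.1f} TiB, after 14+2 parity {data_space_tib:.1f} TiB, "
          f"reported usable 48.8 TiB")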

Storage and Filesystem Notes

WekaIO MatrixFS was created and distributed evenly across all 16 storage nodes in the cluster. The deployment model is a dedicated storage server cluster protected with the Matrix Distributed Data Coding scheme at a 14+2 level. All data and metadata are distributed evenly across the 16 storage nodes.

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 100GbE NIC | 16 | The solution used a total of 16 100GbE ports from the storage nodes to the network switch.
2 | 50GbE NIC  | 11 | The solution used a total of 11 50GbE ports from the clients to the network switch.

Transport Configuration Notes

The solution under test utilized 16 100Gbit Ethernet ports from the storage nodes to the network switch. The clients utilized 11 50GbE connections to the network switch.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Mellanox MSN 2700 | 100Gb Ethernet | 32 | 27 | Switch has Jumbo Frames enabled

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 32 | CPU | Supermicro BigTwin node | Intel(R) Xeon(R) Gold 5122, 3.6GHz, 4-core CPU     | WekaIO MatrixFS, data protection, device driver
2 | 22 | CPU | AIC HP201-AD            | Intel(R) Xeon(R) E5-2640 v4, 2.4GHz, 10-core CPU   | WekaIO MatrixFS client

Processing Element Notes

Each storage node has 2 processors; each processor has 4 cores running at 3.6GHz. Each client has 2 processors; each processor has 10 cores. WekaIO Matrix utilized 8 of the 20 available cores on each client to run Matrix functions. The Intel Spectre and Meltdown patches were not applied to any element of the SUT, including the processors.
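
For reference, a small arithmetic cross-check of the core counts implied by the tables above (no new data, only the quantities already listed):

    # Core-count cross-check against the BOM and processing-element tables above.
    storage_nodes, cpus_per_storage_node, cores_per_storage_cpu = 16, 2, 4
    clients, cpus_per_client, cores_per_client_cpu = 11, 2, 10
    matrix_cores_per_client = 8   # 8 of the 20 client cores run Matrix functions

    storage_cores = storage_nodes * cpus_per_storage_node * cores_per_storage_cpu
    client_cores = clients * cpus_per_client * cores_per_client_cpu
    matrix_client_cores = clients * matrix_cores_per_client

    print(storage_cores)        # 128 cores across 32 Xeon Gold 5122 CPUs
    print(client_cores)         # 220 cores across 22 Xeon E5-2640 v4 CPUs
    print(matrix_client_cores)  # 88 client cores running MatrixFS client functions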

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Storage node memory | 96  | 16 | V | 1536
Client node memory  | 128 | 11 | V | 1408
Grand Total Memory Gibibytes: 2944

Memory Notes

Each storage node has 96GBytes of memory, for a total of 1,536GBytes. Each client has 128GBytes of memory; Matrix software utilized 20GBytes of memory per node.
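
The totals in the memory table follow directly from these figures; a one-line cross-check:

    # Memory totals cross-check against the Memory - Physical table above.
    storage_total = 16 * 96     # 1536 GiB across the storage nodes
    client_total = 11 * 128     # 1408 GiB across the clients
    print(storage_total, client_total, storage_total + client_total)  # 1536 1408 2944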

Stable Storage

WekaIO does not use any internal memory to temporarily cache write data before committing it to the underlying storage system. All writes are committed directly to the storage media, so there is no need for battery-backed RAM protection. Data is protected on the storage media using WekaIO Matrix Distributed Data Protection (14+2). In the event of a power failure, a write in transit would not be acknowledged.
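
As a generic POSIX illustration of this commit-before-acknowledge behaviour (a sketch of client-side semantics, not WekaIO internals; the mount path is hypothetical):

    # A write is only treated as durable once fsync() returns; this mirrors the
    # stable-storage requirement described above. Generic POSIX sketch, not
    # WekaIO-specific code; the path under /mnt/matrixfs is hypothetical.
    import os

    def stable_write(path: str, payload: bytes) -> None:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        try:
            os.write(fd, payload)
            os.fsync(fd)   # data must reach stable media before acknowledgement
        finally:
            os.close(fd)

    stable_write("/mnt/matrixfs/testfile", b"benchmark payload")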

Solution Under Test Configuration Notes

The solution under test was a standard WekaIO Matrix enabled cluster in dedicated server mode. The solution handles large-file I/O as well as small-file random I/O and metadata-intensive applications. No specialized tuning is required for different or mixed-use workloads.

Other Solution Notes

None

Dataflow

3 x AIC HP201-AD client chassis (11 client nodes) were used to generate the benchmark workload. Each client had 1 x 50GbE network connection to a Mellanox MSN 2700 switch. 4 x Supermicro BigTwin 2029BT-HNR storage chassis (16 storage nodes) were benchmarked. Each storage node had 1 x 100GbE network connection to the same Mellanox MSN 2700 switch. The clients (AIC) had the MatrixFS native NVMe POSIX client mounted and had direct and parallel access to all 16 storage nodes.

Other Notes

None

Other Report Notes

None


Generated on Wed Mar 13 16:41:47 2019 by SpecReport
Copyright © 2016-2019 Standard Performance Evaluation Corporation