SPEC SFS®2014_swbuild Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

E8 Storage SPEC SFS2014_swbuild = 600 Builds
E8 Storage D24 with IBM Spectrum Scale 5.0 Overall Response Time = 0.69 msec


Performance Graph

Product and Test Information

E8 Storage D24 with IBM Spectrum Scale 5.0
Tested by: E8 Storage
Hardware Available: December 2016
Software Available: December 2017
Date Tested: December 2017
License Number: 4847
Licensee Locations: Santa Clara, CA, USA

E8 Storage is a pioneer in shared accelerated storage for data-intensive, high-performance applications that drive business revenue. E8 Storage's affordable, reliable and scalable solution is ideally suited for the most demanding low-latency workloads, including real-time analytics, financial and trading applications, transactional processing and large-scale file systems. Driven by the company's patented architecture, E8 Storage's high-performance shared NVMe storage solution delivers 10 times the performance at half the cost of existing storage products. With E8 Storage, enterprise data centers can enjoy unprecedented storage performance density and scale, delivering NVMe performance without compromising on reliability and availability.

IBM Spectrum Scale helps solve the challenge of explosive growth of unstructured data against a flat IT budget. Spectrum Scale provides unified file and object software-defined storage for high-performance, large-scale workloads on-premises or in the cloud. Spectrum Scale includes the protocols, services and performance required across many industries, including technical computing, big data, HDFS and business-critical content repositories. IBM Spectrum Scale provides world-class storage management with extreme scalability, flash-accelerated performance, and automatic policy-based storage tiering from flash through disk to tape, reducing storage costs by up to 90% while improving security and management efficiency in cloud, big data and analytics environments.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 16 | Spectrum Scale Client | IBM | X3650-M4 | Spectrum Scale 5.0 client nodes
2 | 16 | Network Interface Card | Mellanox | ConnectX-5 VPI | Dual-port 100GbE adapter, one per Spectrum Scale client node
3 | 1 | Storage Appliance | E8 Storage | E8-D24 | Dual-controller storage appliance with 24 HGST SN200 1.6TB dual-port NVMe SSDs, 2 x Intel Xeon 2.0GHz 14-core CPUs and 128GB RAM per controller; 2 x Mellanox ConnectX-4 EN network interface cards are installed per controller
4 | 1 | Switch | Mellanox | SN2700 | 32-port 100GbE switch
5 | 1 | Switch | Juniper | EX4200 Series 8PoE | 48-port 1GbE switch

Configuration Diagrams

  1. E8 Storage with IBM Spectrum Scale

Component Software

Item No | Component | Type | Name and Version | Description
1 | Client Nodes | Spectrum Scale File System | 5.0.0 | The Spectrum Scale File System is a distributed file system that runs on both the Elastic Storage Server nodes and client nodes to form a cluster. The cluster allows for the creation and management of single-namespace file systems.
2 | Client Nodes | Operating System | RHEL 7.4 | The operating system on the client nodes was 64-bit Red Hat Enterprise Linux version 7.4.
3 | Client Nodes | E8 Storage Agent | 2.1.1 | The E8 Storage Agent is a client driver that manages communication and data transfer between the client and the storage appliance.
4 | Storage Appliance | E8 Storage Software | 2.1.1 | E8 Storage software provides centralized management and high-availability functionality for the E8 Storage solution.
5 | Storage Appliance | Operating System | RHEL 7.4 | The operating system on the storage appliance was 64-bit Red Hat Enterprise Linux version 7.4.

Hardware Configuration and Tuning - Physical

Spectrum Scale Client Nodes
Parameter Name | Value | Description
numaMemoryInterleave | yes | Enables memory interleaving on NUMA-based systems.
verbsRdma | enable | Enables Ethernet RDMA transfers between Spectrum Scale client nodes and E8 Storage controllers.
verbsRdmaSend | yes | Enables the use of Ethernet RDMA for most Spectrum Scale daemon-to-daemon communication.
verbsPorts | mlx5_1/1/1 mlx5_1/1/2 | Ethernet device names and port numbers.
txqueuelen | 10000 | Defines the transmission queue length for the Mellanox adapter.

Hardware Configuration and Tuning Notes

The Spectrum Scale configuration parameters were set using the mmchconfig command on one of the nodes in the cluster. The verbs settings in the table above allow for efficient use of the RoCE infrastructure; they determine when data are transferred over IP and when they are transferred using the verbs protocol.
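As a sketch, settings like those in the table are applied cluster-wide with mmchconfig, while txqueuelen is an OS-level setting applied per node (the interface name `ens2` below is a placeholder, not taken from the report):

```shell
# Apply the RDMA-related Spectrum Scale settings from any node in
# the cluster (values taken from the table above; some parameters
# require a GPFS daemon restart to take effect).
mmchconfig numaMemoryInterleave=yes
mmchconfig verbsRdma=enable
mmchconfig verbsRdmaSend=yes
mmchconfig verbsPorts="mlx5_1/1/1 mlx5_1/1/2"

# txqueuelen is set at the OS level on each node
# ('ens2' is a placeholder interface name):
ip link set dev ens2 txqueuelen 10000
```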

Software Configuration and Tuning - Physical

Spectrum Scale Client Nodes
Parameter Name | Value | Description
maxStatCache | 0 | Specifies the number of inodes to keep in the stat cache.
workerThreads | 128 | Controls the maximum number of concurrent file operations at any one instant, as well as the degree of concurrency for flushing dirty data and metadata in the background and for prefetching data and metadata.
maxMBpS | 10k | Specifies an estimate of how many megabytes of data can be transferred per second into or out of a single node.
pagepool | 16g | Specifies the size of the cache on each node.
maxFilesToCache | 7m | Specifies the number of inodes to cache for recently used files that have been closed.
ignorePrefetchLUNCount | yes | Specifies that only maxMBpS, and not the number of LUNs, should be used to dynamically allocate prefetch threads.
prefetchAggressiveness | 1 | Defines how aggressively to prefetch data; 1 means prefetch on the 2nd access if sequential.
prefetchPct | 5 | Specifies what percent of the pagepool (cache) can be used for prefetching.
syncInterval | 30 | Specifies the interval (in seconds) at which data that has not been explicitly committed by the client is synced systemwide.

E8 Storage Agent
Parameter Name | Value | Description
e8block threads | 3 | Specifies the number of threads used by the e8block process on the host servers.

Software Configuration and Tuning Notes

The configuration parameters for the Spectrum Scale file system were set using the mmchconfig command on one of the nodes in the cluster. The nodes used mostly default tuning parameters. A discussion of Spectrum Scale tuning can be found in the official documentation for the mmchconfig command and on the IBM developerWorks wiki (for additional information see http://files.gpfsug.org/presentations/2014/UG10_GPFS_Performance_Session_v10.pdf).
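A minimal sketch of setting the tabulated parameters with mmchconfig (it accepts a comma-separated attribute list; a daemon restart is assumed here to activate the new values):

```shell
# Cluster-wide file system tuning; values are from the table above
# (maxMBpS is written '10k' in the table, i.e. 10000).
mmchconfig maxStatCache=0,workerThreads=128,pagepool=16G
mmchconfig maxMBpS=10000,maxFilesToCache=7M
mmchconfig ignorePrefetchLUNCount=yes,prefetchAggressiveness=1
mmchconfig prefetchPct=5,syncInterval=30

# Restart the GPFS daemon on all nodes so the new values take effect
mmshutdown -a && mmstartup -a
```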

The E8 Storage controller used default parameters for all volumes.

Service SLA Notes

There were no opaque services in use.

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 24 x 1.6TB NVMe SSDs in the E8-D24 | RAID-6 | Yes | 1
2 | 2 x 300GB 10K SAS HDD internal drives per Spectrum Scale client node, used to store the OS | RAID-1 | No | 32

Number of Filesystems: 1
Total Capacity: 24TB
Filesystem Type: Spectrum Scale File System

Filesystem Creation Notes

A single Spectrum Scale file system was created with a 4 MiB block size for data and metadata, and a 4 KiB inode size. The 24TB file system had one data volume and one metadata volume (also called pools). Each client node mounted the file system.

The nodes each had an ext4 file system that hosted the operating system.
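As an illustration only (the stanza file path, NSD definitions and the file system name `gpfs1` are placeholders, not taken from the report), a file system with this geometry would be created roughly as follows:

```shell
# Register the data and metadata volumes as NSDs from a stanza file,
# then create a file system with a 4 MiB block size and 4 KiB inodes,
# and mount it on all nodes in the cluster.
mmcrnsd -F /tmp/e8_nsd.stanza
mmcrfs gpfs1 -F /tmp/e8_nsd.stanza -B 4M -i 4096
mmmount gpfs1 -a
```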

Storage and Filesystem Notes

The E8 Storage appliance has 24 1.6TB drives configured as a 22+2 RAID group for data protection, with a 16+2 stripe size. The data and metadata volumes were provisioned from this single RAID group, with the volumes spanning all drives in the RAID group. All client nodes had shared read / write access to both volumes.

The cluster used a single-tier architecture. The Spectrum Scale nodes performed both file and block level operations. Each node had access to shared volumes, so any file operation on a node was translated to a block operation and serviced on the same node.

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 1 GbE cluster network | 18 | Each Spectrum Scale node and E8 Storage controller connects to a 1 GbE administration network with MTU=1500.
2 | 100 GbE cluster network | 16 | Client nodes each have a single port connected to the switch via a 50 GbE split cable, and each E8 Storage controller has 4 x 100 GbE ports connected to a shared 100 GbE cluster network, set to MTU=4200. The ring buffers (rx and tx) were set to 8192 on the network adapters.

Transport Configuration Notes

The 1GbE network was used for administrative purposes and for Spectrum Scale inter-node communication. All benchmark traffic flowed through the Mellanox SN2700 100Gb Ethernet switch. Each client node had a single active 50Gb Ethernet port, connected to the switch with a split cable (two 50GbE clients per 100GbE switch port).
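The adapter settings described above (MTU 4200 and 8192-entry ring buffers) map to standard Linux commands; a per-node sketch, where `ens2` is a placeholder interface name:

```shell
# Per-node NIC settings for the 100 GbE data network
ip link set dev ens2 mtu 4200      # jumbo frames on the cluster network
ethtool -G ens2 rx 8192 tx 8192    # enlarge the rx/tx ring buffers
ethtool -g ens2                    # verify the new ring sizes
```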

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Mellanox SN2700 | 100Gb Ethernet | 32 | 16 | The default configuration was used on the switch.
2 | Juniper EX4200 Series 8PoE | 1Gb Ethernet | 48 | 18 | Administrative network only. The default configuration was used on the switch.

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 32 | CPU | Spectrum Scale client nodes | Intel(R) Xeon(R) CPU E5-2630 v2, 2.60GHz, 6-core | Spectrum Scale client, E8 Agent, load generator, device drivers
2 | 4 | CPU | E8-D24 controller | Intel(R) Xeon(R) CPU E5-2660 v4, 2.00GHz, 14-core | E8 Storage server, E8 Storage RAID, device drivers

Processing Element Notes

Each of the Spectrum Scale client nodes had 2 physical processors. Each processor had 6 physical cores with one thread per core by default.

The E8-D24 is a dual-controller appliance; each controller had 2 physical processors. Each processor had 14 cores with one thread per core.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Spectrum Scale client node system memory | 128 | 16 | V | 2048
E8 Storage controller system memory | 16 | 16 | V | 256
Grand Total Memory Gibibytes: 2304 (V = volatile)

Memory Notes

In the client nodes, Spectrum Scale reserves a portion of the physical memory (per the pagepool setting above) for file data and metadata caching. Additional memory is dynamically allocated for buffers used for node-to-node communication and for up to 7 million inode stat entries per node.

In the E8 Storage controller, a portion of the physical memory is reserved for block write data and system metadata caching.

Stable Storage

The E8 Storage controller uses a portion of internal memory to temporarily cache write data (as well as store modified data) before it is written to the SSDs. Writes are acknowledged as successful once they are stored in the controller write cache; a redundant copy is kept by the E8 agent on the host. In the event of a controller failure, the hosts replay the write cache to the surviving controller. In the event of a power failure, each controller has backup battery power, combined with power-fail protection on the SSDs, to ensure data is committed to the SSDs prior to shutdown.

Solution Under Test Configuration Notes

The solution under test was a Spectrum Scale cluster optimized for small file, metadata intensive environments. The Spectrum Scale nodes were also the load generators for the benchmark. The benchmark was executed from one of the nodes.

Other Solution Notes



The 16 Spectrum Scale nodes were the load generators for the benchmark. Each load generator had access to the single-namespace Spectrum Scale file system. The benchmark accessed a single mount point on each load generator; each mount point corresponded to a single shared base directory in the file system. The nodes processed the file operations, and data requests to and from the backend storage were serviced locally on each node by the E8 Storage Agent.

Other Notes

IBM and IBM Spectrum Scale are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide.

Intel and Xeon are trademarks of the Intel Corporation in the U.S. and/or other countries.

Mellanox is a registered trademark of Mellanox Ltd.

Other Report Notes


Generated on Wed Mar 13 16:56:57 2019 by SpecReport