SPEC SFS®2014_vda Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

IBM Corporation
IBM Spectrum Scale 4.2.1 with Cisco UCS and IBM FlashSystem 900
SPEC SFS2014_vda = 1720 Streams
Overall Response Time = 2.92 msec


Performance Graph

Product and Test Information

IBM Spectrum Scale 4.2.1 with Cisco UCS and IBM FlashSystem 900
Tested by: IBM Corporation
Hardware Available: February 2016
Software Available: September 2016
Date Tested: November 2016
License Number: 11
Licensee Locations: Raleigh, NC USA

IBM Spectrum Scale helps solve the challenge of explosive growth of unstructured data against a flat IT budget.

Spectrum Scale provides unified file and object software-defined storage for high-performance, large-scale workloads on-premises or in the cloud. Spectrum Scale includes the protocols, services, and performance required by many industries and workloads, including Technical Computing, Big Data, HDFS, and business-critical content repositories. IBM Spectrum Scale provides world-class storage management with extreme scalability, flash-accelerated performance, and automatic policy-based storage tiering from flash through disk to tape, reducing storage costs up to 90% while improving security and management efficiency in cloud, big data, and analytics environments.

Cisco UCS is the first truly unified data center platform that combines industry-standard, x86-architecture servers with networking and storage access into a single system. The system is intelligent infrastructure that is automatically configured through integrated, model-based management to simplify and accelerate deployment of all kinds of applications. The system's x86-architecture rack and blade servers are powered exclusively by Intel® Xeon® processors and enhanced with Cisco innovations. These innovations include the capability to abstract and automatically configure the server state, built-in virtual interface cards (VICs), and leading memory capacity. Cisco's enterprise-class servers deliver world-record performance to power mission-critical workloads. Cisco UCS is integrated with a standards-based, high-bandwidth, low-latency, virtualization-aware 10-Gbps unified fabric, with a new generation of Cisco UCS fabric enabling an upgrade to 40 Gbps.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 1 | Blade Server Chassis | Cisco | UCS 5108 | The Cisco UCS 5108 Blade Server Chassis features flexible bay configurations for blade servers. It can support up to eight half-width blades, up to four full-width blades, or up to two full-width double-height blades in a compact 6-rack-unit (6RU) form factor.
2 | 4 | Blade Server, Spectrum Scale Node | Cisco | UCS B200 M4 | UCS B200 M4 Blade Servers, each with 2x Intel Xeon E5-2680 v3 processors (24 cores per node) and 256 GB of memory.
3 | 2 | Fabric Interconnect | Cisco | UCS 6332-16UP | Cisco UCS 6300 Series Fabric Interconnects support line-rate, lossless 40 Gigabit Ethernet and FCoE connectivity.
4 | 2 | Fabric Extender | Cisco | UCS 2304 | Cisco UCS 2300 Series Fabric Extenders support up to four 40-Gbps unified fabric uplinks per fabric extender, connecting to the Fabric Interconnects.
5 | 4 | Virtual Interface Card | Cisco | UCS VIC 1340 | The Cisco UCS Virtual Interface Card (VIC) 1340 is a 2-port, 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) mezzanine adapter.
6 | 2 | FlashSystem | IBM | 9840-AE2 | Each FlashSystem was configured with 12 2.9 TB IBM MicroLatency modules (feature code AF24).

Configuration Diagrams

  1. Solution Under Test Diagram

Component Software

Item No | Component | Type | Name and Version | Description
1 | Spectrum Scale Nodes | Spectrum Scale File System | 4.2.1.1 | The Spectrum Scale File System is a distributed file system that runs on the Cisco UCS B200 M4 servers to form a cluster. The cluster allows for the creation and management of single-namespace file systems.
2 | Spectrum Scale Nodes | Operating System | Red Hat Enterprise Linux 7.2 for x86_64 | The operating system on the Spectrum Scale nodes was 64-bit Red Hat Enterprise Linux version 7.2.
3 | FlashSystem 900 | Storage System | 1.4.4.2 | The FlashSystem software covers all aspects of administering, configuring, and monitoring the FlashSystem 900.

Hardware Configuration and Tuning - Physical

Spectrum Scale Nodes
Parameter Name | Value | Description
numaMemoryInterleave | yes | Enables memory interleaving on NUMA-based systems.
multipath device: path_selector | queue-length 0 | Determines which algorithm is used when selecting paths. With this value, the path with the least outstanding I/O is selected.
multipath device: path_grouping_policy | multibus | Determines which grouping policy is used for a set of paths. The multibus value places all paths in one priority group.

Hardware Configuration and Tuning Notes

The first configuration parameter was set using the "mmchconfig" command on one of the nodes in the cluster. The multipath device parameters were set in the multipath.conf file on each node. A template multipath.conf file for the FlashSystem can be found in the "Implementing IBM FlashSystem 900" Redbook, published by IBM.
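For illustration, a multipath.conf devices stanza of the kind the Redbook template provides might look like the following. The vendor and product strings here are assumptions for a FlashSystem 900, not a copy of the tested file; the authoritative template is in the Redbook cited above.

```
# Sketch of a multipath.conf stanza for the FlashSystem 900 volumes.
# Vendor/product strings are illustrative; take the real template from
# the "Implementing IBM FlashSystem 900" Redbook.
devices {
    device {
        vendor                "IBM"
        product               "FlashSystem"
        path_selector         "queue-length 0"   # pick path with least outstanding I/O
        path_grouping_policy  multibus           # all paths in one priority group
    }
}
```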

Software Configuration and Tuning - Physical

Spectrum Scale Nodes
Parameter Name | Value | Description
ignorePrefetchLUNCount | yes | Specifies that only maxMBpS, and not the number of LUNs, should be used to dynamically allocate prefetch threads.
maxblocksize | 1M | Specifies the maximum file system block size.
maxMBpS | 10000 | Specifies an estimate of how many megabytes of data can be transferred per second into or out of a single node.
maxStatCache | 0 | Specifies the number of inodes to keep in the stat cache.
pagepoolMaxPhysMemPct | 90 | Percentage of physical memory that can be assigned to the page pool.
scatterBufferSize | 256K | Specifies the size of the scatter buffers.
workerThreads | 1024 | Controls the maximum number of concurrent file operations at any one instant, as well as the degree of concurrency for flushing dirty data and metadata in the background and for prefetching data and metadata.
maxFilesToCache | 11M | Specifies the number of inodes to cache for recently used files that have been closed.
pagepool | 96G | Specifies the size of the cache on each node.
nsdBufSpace | 70 | Sets the percentage of the pagepool that is used for NSD (Network Shared Disk) buffers.
nsdMaxWorkerThreads | 3072 | Sets the maximum number of threads to use for block-level I/O on the NSDs.
nsdMinWorkerThreads | 3072 | Sets the minimum number of threads to use for block-level I/O on the NSDs.
nsdMultiQueue | 64 | Specifies the maximum number of queues to use for NSD I/O.
nsdThreadsPerDisk | 3 | Specifies the maximum number of threads to use per NSD.
nsdThreadsPerQueue | 48 | Specifies the maximum number of threads to use per NSD I/O queue.
nsdSmallThreadRatio | 1 | Specifies the ratio of NSD I/O queues dedicated to small requests to queues dedicated to large requests.

Software Configuration and Tuning Notes

The configuration parameters were set using the "mmchconfig" command on one of the nodes in the cluster. Beyond the parameters listed above, the nodes used default tuning parameters. A discussion of Spectrum Scale tuning can be found in the official documentation for the mmchconfig command and on the IBM developerWorks wiki.
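As a sketch, parameters like those in the table are applied cluster-wide from any node with mmchconfig; the abbreviated parameter list below is illustrative, not the exact command line used in this test:

```
# Sketch: apply a subset of the tuning parameters from the table above.
# Run on any node; mmchconfig propagates the values cluster-wide.
# -i makes the change take effect immediately where supported.
mmchconfig maxMBpS=10000,workerThreads=1024,pagepool=96G,nsdBufSpace=70 -i
```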

Service SLA Notes

There were no opaque services in use.

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | FlashSystem 900 volumes, 700 GiB each, used as Network Shared Disks for Spectrum Scale. | RAID-5 | Yes | 64
2 | FlashSystem 900 volumes, 100 GiB each, used to store the operating system of each Spectrum Scale node. | RAID-5 | Yes | 4
Number of Filesystems: 1
Total Capacity: 44800 GiB
Filesystem Type: Spectrum Scale File System

Filesystem Creation Notes

A single Spectrum Scale file system was created with a 1 MiB block size for data and metadata, 4 KiB inode size, and a 128 MiB log size. The file system was spread across all of the Network Shared Disks (NSDs). Each client node mounted the file system. The file system parameters reflect values that might be used in a typical streaming environment.
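A file system with these parameters could be created along the following lines; the device name and stanza file name are hypothetical placeholders, not the values used in the test:

```
# Sketch: create a Spectrum Scale file system with a 1 MiB block size,
# 4 KiB inodes, and a 128 MiB log. "gpfs0" and "nsd.stanza" are
# illustrative names only.
mmcrfs gpfs0 -F nsd.stanza -B 1M -i 4K -L 128M
```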

The nodes each had an ext4 file system that hosted the operating system.

Storage and Filesystem Notes

Each of the FlashSystem 900 systems had twelve 2.9 TB MicroLatency flash modules. On each system, one module was used as a spare and the remaining 11 modules were configured into a RAID-5 array. Volumes were then created and mapped to the hosts, which were the Spectrum Scale nodes.

There were two sets of volumes used in the benchmark. On one of the FlashSystem 900 systems, four 100 GiB volumes were created, and each was mapped to a single host to be used for that node's operating system. On each of the two FlashSystem 900 systems, 32 volumes of 700 GiB were created, and each volume was mapped to all 4 hosts. These volumes were configured as NSDs by Spectrum Scale and used as storage for the Spectrum Scale file system.
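The data-volume arithmetic can be checked directly: two FlashSystem 900 systems, each exporting 32 volumes of 700 GiB, account for the 44800 GiB total capacity reported above.

```shell
# Verify the reported total capacity from the volume layout.
systems=2
vols_per_system=32
vol_gib=700
total=$(( systems * vols_per_system * vol_gib ))
echo "${total} GiB"   # prints "44800 GiB", matching the reported total
```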

The cluster used a single-tier architecture. The Spectrum Scale nodes performed both file and block level operations. Each node had access to all of the NSDs, so any file operation on a node was translated to a block operation and serviced on the same node.
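Configuring the multipath devices as NSDs is done with a stanza file passed to mmcrnsd; a sketch of one such stanza (out of 64) might look like the following. The device path, NSD name, and failure group are hypothetical placeholders, and because every node had direct block access, no NSD server list is needed:

```
# Sketch of one NSD stanza; names and paths are illustrative only.
%nsd: device=/dev/mapper/mpatha
      nsd=nsd001
      usage=dataAndMetadata
      failureGroup=1
```

The stanza file is then passed to the cluster with a command such as `mmcrnsd -F nsd.stanza`.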

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 40 GbE cluster network | 4 | Each node connects to a 40 GbE administration network with MTU=1500.
2 | 16 Gbps SAN | 24 | There were 16 total connections from storage and 8 total connections from servers.

Transport Configuration Notes

Each of the Cisco UCS B200 M4 blade servers comes with a Cisco UCS Virtual Interface Card 1340. The two-port card supports 40 GbE and FCoE. To the operating system on the blade servers, the card appears as a NIC for Ethernet and as an HBA for Fibre Channel connectivity. Physically, the card connects to the UCS 2304 fabric extenders via internal chassis connections. The eight total ports from the fabric extenders connect to the UCS 6332-16UP fabric interconnects. The two fabric interconnects function as both 16 Gbps FC switches and 40 Gbps Ethernet switches.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Cisco UCS 6332-16UP #1 | 40 GbE and 16 Gbps FC | 40 | 12 | The default configuration was used on the switch.
2 | Cisco UCS 6332-16UP #2 | 40 GbE and 16 Gbps FC | 40 | 12 | The default configuration was used on the switch.

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 8 | CPU | Spectrum Scale client nodes | Intel Xeon CPU E5-2680 v3 @ 2.50GHz, 12-core | Spectrum Scale nodes, load generator, device drivers

Processing Element Notes

Each of the Spectrum Scale client nodes had 2 physical processors. Each processor had 12 cores with two threads per core.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Spectrum Scale node system memory | 256 | 4 | V | 1024
Grand Total Memory Gibibytes: 1024

Memory Notes

Spectrum Scale reserves a portion of the physical memory in each node for file data and metadata caching. A portion of the memory is also reserved for buffers used for node-to-node communication.
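The tuning parameters listed earlier pin down how much of each node's 256 GiB these reservations can consume: pagepoolMaxPhysMemPct=90 caps the page pool at 90% of physical memory, the configured 96 GiB pagepool sits well under that cap, and nsdBufSpace=70 dedicates 70% of the pagepool to NSD buffers. A quick check of that arithmetic:

```shell
# Per-node memory reservation implied by the tuning parameters.
node_mem_gib=256
pagepool_gib=96
cap_gib=$(( node_mem_gib * 90 / 100 ))      # pagepoolMaxPhysMemPct=90 -> 230 GiB cap
nsd_buf_gib=$(( pagepool_gib * 70 / 100 ))  # nsdBufSpace=70 -> 67 GiB of NSD buffers
echo "pagepool cap: ${cap_gib} GiB, NSD buffers: ${nsd_buf_gib} GiB"
```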

Stable Storage

All of the storage used by the benchmark was non-volatile flash storage. Modified writes were not acknowledged as complete until the data was written to the FlashSystem 900s. Each FlashSystem has two battery modules that in the case of a power failure allow the system to remain powered long enough for all of the data in the system's write cache to be committed to the flash modules.

Solution Under Test Configuration Notes

The solution under test was a Spectrum Scale cluster optimized for streaming environments. The Spectrum Scale nodes were also the load generators for the benchmark. The benchmark was executed from one of the nodes.

Other Solution Notes

The WARMUP_TIME for the benchmark was 600 seconds.


The 4 Spectrum Scale nodes were the load generators for the benchmark. Each load generator had access to the single-namespace Spectrum Scale file system. The benchmark accessed a single mount point on each load generator. In turn, each mount point corresponded to a single shared base directory in the file system. The nodes processed the file operations, and the data requests to and from the backend storage were serviced locally on each node. Block access to each LUN on the nodes was controlled via Linux multipath.

Other Notes

IBM, IBM Spectrum Scale, IBM FlashSystem, and MicroLatency are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide.

Cisco UCS is a trademark of Cisco in the USA and certain other countries.

Intel and Xeon are trademarks of the Intel Corporation in the U.S. and/or other countries.

Other Report Notes


Generated on Wed Mar 13 16:50:35 2019 by SpecReport