SPEC SFS®2014_vda Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

Cisco Systems Inc. SPEC SFS2014_vda = 2070 Streams
Cisco UCS S3260 with MapR-XD Overall Response Time = 12.94 msec


Performance Graph

Product and Test Information

Cisco UCS S3260 with MapR-XD
Tested by: Cisco Systems Inc.
Hardware Available: November 2016
Software Available: August 2017
Date Tested: October 2017
License Number: 9019
Licensee Locations: San Jose, CA USA

Cisco UCS Integrated Infrastructure

Cisco Unified Computing System (UCS) is the first truly unified data center platform that combines industry-standard, x86-architecture servers with network and storage access into a single system. The system is intelligent infrastructure that is automatically configured through integrated, model-based management to simplify and accelerate deployment of all kinds of applications. The system's x86-architecture rack and blade servers are powered exclusively by Intel(R) Xeon(R) processors and enhanced with Cisco innovations. These innovations include built-in virtual interface cards (VICs), leading memory capacity, and the capability to abstract and automatically configure the server state. Cisco's enterprise-class servers deliver world-record performance to power mission-critical workloads. Cisco UCS is integrated with a standards-based, high-bandwidth, low-latency, virtualization-aware unified fabric, with a new generation of Cisco UCS fabric enabling 40 Gbps.

Cisco UCS S3260 Servers

The Cisco UCS S3260 Storage Server is a high-density modular storage server designed to deliver efficient, industry-leading storage for data-intensive workloads. The S3260 is a modular chassis with dual server nodes (up to two servers per chassis) and up to 60 large-form-factor (LFF) drives in a 4RU form factor.

MapR-XD is a highly reliable, globally distributed data store that creates a distributed data fabric for managing files, objects, and containers. MapR-XD supports the most stringent speed, scale, and reliability requirements within and across multiple edge and on-premises environments. MapR-XD is a complete software-defined storage solution that can run on any x86 server. In addition, MapR-XD delivers enterprise data services that enable customers to deploy quickly in production environments.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 6 | Server Chassis | Cisco | UCS S3260 Chassis | The Cisco UCS S3260 chassis can support up to two server nodes and fifty-six drives, or one server node and sixty drives, in a compact 4-rack-unit (4RU) form factor, with 4 x Cisco UCS 1050W AC power supplies
2 | 12 | Storage Server Node | Cisco | UCS S3260 M4 Server Node | Cisco UCS S3260 M4 servers, each with: 2 x Intel Xeon E5-2680 v4 processors (28 cores per node), 256 GB of memory (16 x 16GB 2400MHz DIMMs), and a Cisco UCS C3000 RAID controller with 4 GB RAID cache
3 | 12 | System IO Controller with VIC 1300 | Cisco | S3260 SIOC | Cisco UCS S3260 SIOC with integrated Cisco UCS VIC 1300, one per server node
4 | 192 | Storage HDD, 8TB, 7200 RPM | Cisco | UCS HD8TB | 8TB 7200 RPM drives for storage, sixteen per server node. Note: a fully populated chassis with two server nodes can hold up to twenty-eight drives per server node
5 | 1 | Blade Server Chassis | Cisco | UCS 5108 | The Cisco UCS 5108 Blade Server Chassis features flexible bay configurations for blade servers. It can support up to eight half-width blades, up to four full-width blades, or up to two full-width double-height blades in a compact 6-rack-unit (6RU) form factor
6 | 8 | Blade Server, Client Nodes | Cisco | UCS B200 M4 | UCS B200 M4 Blade Servers, each with: 2 x Intel Xeon E5-2660 v3 processors (20 cores per node) and 256 GB of memory
7 | 2 | Fabric Extender | Cisco | UCS 2304 | Cisco UCS 2300 Series Fabric Extenders support up to four 40-Gbps unified fabric uplinks per fabric extender, connecting to the fabric interconnects
8 | 8 | Virtual Interface Card | Cisco | UCS VIC 1340 | The Cisco UCS Virtual Interface Card (VIC) 1340 is a 2-port, 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) mezzanine adapter
9 | 2 | Fabric Interconnect | Cisco | UCS 6332 | Cisco UCS 6300 Series Fabric Interconnects support line-rate, lossless 40 Gigabit Ethernet and FCoE connectivity
10 | 1 | Cisco Nexus 40Gbps Switch | Cisco | Cisco Nexus 9332PQ | The Cisco Nexus 9332PQ Switch has 32 x 40 Gbps Quad Small Form Factor Pluggable Plus (QSFP+) ports. All ports are line rate, delivering 2.56 Tbps of throughput in a 1-rack-unit (1RU) form factor
11 | 1 | MapR-XD File System | MapR Technologies | MapR-XD | The MapR-XD file system, provided by MapR Technologies, brings analytics and enterprise applications together
12 | 8 | FUSE-based POSIX client | MapR Technologies | FUSE-based POSIX client premium | The MapR FUSE-based POSIX client allows application servers and client nodes to read and write data directly to a MapR cluster like a Linux filesystem

Configuration Diagrams

  1. Solution Under Test Diagram - Topological View
  2. Solution Under Test Diagram - Physical View

Component Software

Item No | Component | Type | Name and Version | Description
1 | Storage Server Nodes | MapR-XD Scalable Converged Data Platform | 5.2 | MapR-XD is a highly reliable, globally distributed data store creating a distributed data fabric for managing files, objects, and containers. It runs over the Cisco UCS S3260 servers to form a cluster. The cluster allows for the creation and management of single-namespace file systems
2 | Client Nodes | FUSE-based POSIX client | 5.2 | The MapR FUSE-based POSIX client allows application servers and client nodes to read and write data directly to a MapR cluster like a Linux filesystem
3 | Storage Server and Client Nodes | Operating System | Red Hat Enterprise Linux 7.2 for x86_64 | The operating system on the storage and client nodes was 64-bit Red Hat Enterprise Linux version 7.2

Hardware Configuration and Tuning - Physical

Storage Nodes
Parameter Name | Value | Description
scaling_governor | performance | Sets the CPU frequency governor to performance
Intel Turbo Boost | Enabled | Allows the processor to run above its base operating frequency
Intel Hyper-Threading | Enabled | Allows multiple threads to run on each core, improving parallelization of computations
mtu | 9000 | Sets the Maximum Transmission Unit (MTU) to 9000 for improved throughput
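The OS-level parameters above map to standard Linux controls. A minimal sketch follows; the sysfs path is standard, but the network interface name (eth0) is an assumption, since the report does not name the data interface:

```shell
# Sketch of the node-level tunings listed above.
# The interface name eth0 is an assumption, not taken from the report.

# Set the CPU frequency governor to "performance" on every core
for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$gov"
done

# Enable jumbo frames: raise the MTU to 9000 on the data interface
ip link set dev eth0 mtu 9000
```

Intel Turbo Boost and Hyper-Threading are BIOS-level settings; in this configuration they would be applied through the UCS Manager service profile rather than from the operating system.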

Hardware Configuration and Tuning Notes

The main part of the hardware configuration was handled by Cisco UCS Manager (UCSM). It supports the creation of "Service Profiles," wherein all the tuning parameters are specified with their respective values at the start. These service profiles are then replicated across servers and applied during deployment.

Software Configuration and Tuning - Physical

Storage Nodes
Parameter Name | Value | Description
Storage Pool Width | 8 | Number of drives per storage pool. To set the storage pool width during setup, use the -disk-opts W:<storage pool width> option when running the configure.sh command (the script is located in /opt/mapr/server)
Replication Factor | 1 | Only one replica is kept. To change the factor, open MCS and select Volume -> Replication Factor -> 1
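As a concrete illustration of the storage-pool setting above, a hypothetical configure.sh invocation is sketched below. Only the -disk-opts W:8 option (storage pool width of 8) comes from this report; the cluster name and the CLDB/ZooKeeper host names are placeholders:

```shell
# Hypothetical MapR setup step; only -disk-opts W:8 is taken from the
# report. Cluster and host names below are placeholders.
/opt/mapr/server/configure.sh \
    -N my.cluster.com \
    -C cldb-host1 \
    -Z zk-host1 \
    -disk-opts W:8
```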

Software Configuration and Tuning Notes

The nodes used default tuning parameters except where specified in the Hardware and Software tuning sections.

Service SLA Notes

There were no opaque services in use.

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | Two 480GB boot SSDs per server node, used to store the operating system for each storage node | RAID-1 | Yes | 24
2 | Sixteen 8TB Large Form Factor (LFF) HDDs per server node; per the design, each server node had 2 storage pools with 8 drives each | None | Yes | 192

Number of Filesystems: 1
Total Capacity: 1300 TiB
Filesystem Type: MapR-XD

Filesystem Creation Notes


Storage and Filesystem Notes

Each UCS S3260 server node in the cluster was populated with 16 x 8TB Large Form Factor (LFF) HDDs. The drives were configured in 2 storage pools per node, each with 8 drives. The cluster used a single-tier architecture, with a file system mount point exposed to the client nodes.

Other Notes (about boot SSDs): Per server node: 2 x 480GB physical drives, Protection: RAID-1, Usable: 480GB

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 40GbE Network | 48 | Each S3260 server node connects to the fabric interconnects over a 40Gb link; thus there are twelve 40Gb links to each Fabric Interconnect (configured in active-standby mode). The Cisco UCS blade chassis connects to each Fabric Interconnect with four 40Gb links, with MTU=9000

Transport Configuration Notes

The two Cisco UCS 6332 fabric interconnects function in HA mode (active-standby) as 40 Gbps Ethernet switches.

Cisco UCS S3260 Server nodes (Storage Servers): Each Cisco UCS S3260 Chassis has two server nodes. Each S3260 server node has an S3260 SIOC with an integrated VIC 1300. This provides 40G connectivity for each server node to each Fabric Interconnect (configured as active-standby).

Cisco UCS B200 M4 Blade servers (Clients Nodes): Each of the Cisco UCS B200 M4 blade servers comes with a Cisco UCS Virtual Interface Card 1340. The two port card supports 40 GbE and FCoE. Physically the card connects to the UCS 2304 fabric extenders via internal chassis connections. The eight total ports from the fabric extenders connect to the UCS 6332 fabric interconnects. The 40G links on the B200 M4 server blades were bonded in the operating system to provide enhanced throughput for the clients (the traffic across Fabric Interconnects was through Cisco Nexus 9332PQ switch).
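The client-side link bonding described above can be sketched with the standard Linux bonding driver. The bonding mode (active-backup here) and the interface names are assumptions; the report states only that the two 40G links on each blade were bonded in the operating system:

```shell
# Sketch of bonding the two 40G links on a B200 M4 client node.
# Mode and interface names (eth0/eth1) are assumptions.
modprobe bonding
ip link add bond0 type bond mode active-backup
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 mtu 9000
ip link set bond0 up
```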

Detailed Description of the ports used:

2 x {Cisco UCS 6332} in active/standby config.

total ports for each 6332 = (12 x S3260) + (4 x Blade chassis) + (4 x Uplinks) = 20 ports per 6332

For Nexus 9332 (upstream switch), 4 ports connected from each 6332. Thus total used ports = 8

Overall, total used ports = (20x2) + 8 = 48
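The port arithmetic above can be checked mechanically:

```shell
# Recompute the port counts stated above.
per_fi=$(( 12 + 4 + 4 ))   # S3260 links + blade-chassis links + uplinks per 6332
nexus=8                    # 4 ports from each of the two 6332s to the Nexus 9332
total=$(( per_fi * 2 + nexus ))
echo "per-6332=$per_fi total=$total"   # prints per-6332=20 total=48
```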

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Cisco UCS 6332 #1 | 40 GbE | 32 | 20 | The Cisco UCS 6332 Fabric Interconnect forms the management and communication backbone for the servers
2 | Cisco UCS 6332 #2 | 40 GbE | 32 | 20 | The Cisco UCS 6332 Fabric Interconnect forms the management and communication backbone for the servers
3 | Cisco Nexus 9332 | 40 GbE | 32 | 8 | Cisco Nexus 9332PQ used as an upstream switch

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 24 | CPU | File System Storage Nodes | Intel Xeon CPU E5-2680 v4 @ 2.40GHz, 14-core | File System Storage Nodes
2 | 16 | CPU | File System Client Nodes | Intel Xeon CPU E5-2660 v3 @ 2.60GHz, 10-core | File System Client Nodes, load generator

Processing Element Notes

Each node in the system (client and server nodes) had two physical processors each. Each processor had multiple cores as mentioned in the table above.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
System memory on storage node | 256 | 12 | V | 3072
System memory on client node | 256 | 8 | V | 2048
Grand Total Memory Gibibytes | | | | 5120

Memory Notes


Stable Storage

The two fabric interconnects are configured in active-standby mode, providing complete high availability (HA) to the entire cluster and ensuring stability and availability in case of link failures. The storage consists of 8TB Large Form Factor (LFF) HDDs, 16 per S3260 server node, with MapR-XD providing the file system over the underlying storage. For stable writes and commit operations, MapR-FS acknowledges a write only after all replicas have been made and the write has been acknowledged by the underlying storage system.

Solution Under Test Configuration Notes

The solution under test was a Cisco UCS S3260 with MapR-XD cluster, a solution well suited for streaming environments. The storage server nodes were S3260 servers. UCS B200 M4 blade servers (fully populated in the blade server chassis) were used as load generators for the benchmark. Each node was connected over a 40Gb link to the two fabric interconnects (configured in HA mode).

Other Solution Notes



Six Cisco UCS S3260 chassis, with two server nodes each, were used for the storage (MapR-XD servers). These server nodes were each populated with 16 x 8TB LFF HDDs. Eight Cisco UCS B200 M4 blades were the load generators for the benchmark (client nodes). Each load generator had access to the single-namespace MapR-XD file system, and the benchmark accessed a single mount point on each load generator. Data requests to and from disk were serviced by the MapR-XD server nodes. All nodes were connected with 40Gb links across the cluster.

Other Notes

Cisco UCS is a trademark of Cisco Systems Inc. in the USA and/or other countries.

MapR-XD is a trademark of MapR Data Technologies, registered in many jurisdictions worldwide.

Intel and Xeon are trademarks of the Intel Corporation in the U.S. and/or other countries.

Other Report Notes


Generated on Wed Mar 13 16:45:32 2019 by SpecReport
Copyright © 2016-2019 Standard Performance Evaluation Corporation