SPEC SFS®2014_swbuild Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

Huawei SPEC SFS2014_swbuild = 1000 Builds
Huawei OceanStor 6800F V5 Overall Response Time = 0.59 msec


Performance

Business Metric (Builds) | Average Latency (msec) | Builds Ops/Sec | Builds MB/Sec
 100 | 0.152 |  50002 |  646
 200 | 0.148 | 100004 | 1293
 300 | 0.168 | 150006 | 1941
 400 | 0.567 | 200009 | 2586
 500 | 0.411 | 250008 | 3235
 600 | 0.559 | 300012 | 3881
 700 | 0.439 | 350015 | 4528
 800 | 0.658 | 400015 | 5175
 900 | 1.279 | 450011 | 5822
1000 | 2.073 | 500009 | 6469
Performance Graph
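
Per the SPEC SFS 2014 run rules, the Overall Response Time is derived from the area under the latency-versus-load curve divided by the maximum delivered load. Below is a minimal sketch of that calculation over the table's data points, assuming trapezoidal integration with the first point's latency extended back to zero load; the exact integration details of SPEC's reporting tool may differ.

    # Approximate Overall Response Time (ORT) from the reported load points.
    # Assumes trapezoidal integration of latency over delivered ops/sec;
    # SPEC's reporting tool may differ in detail.
    points = [  # (Builds Ops/Sec, Average Latency msec)
        (50002, 0.152), (100004, 0.148), (150006, 0.168), (200009, 0.567),
        (250008, 0.411), (300012, 0.559), (350015, 0.439), (400015, 0.658),
        (450011, 1.279), (500009, 2.073),
    ]

    def overall_response_time(pts):
        area = pts[0][0] * pts[0][1]  # flat segment from zero load to the first point
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            area += (x2 - x1) * (y1 + y2) / 2  # trapezoid between load points
        return area / pts[-1][0]

    print(f"approximate ORT = {overall_response_time(points):.2f} msec")
    # prints roughly 0.55 msec, in the neighborhood of the published 0.59 msec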


Product and Test Information

Huawei OceanStor 6800F V5
Tested by | Huawei
Hardware Available | 04/2018
Software Available | 04/2018
Date Tested | 04/2018
License Number | 3175
Licensee Locations | Chengdu, China

Huawei's OceanStor 6800F V5 Storage System is a new generation of mission-critical all-flash storage, dedicated to providing the highest level of data services for enterprises' mission-critical business. Flexible scalability, flash-enabled performance, and a hybrid-cloud-ready architecture provide optimal data services for enterprises, along with simple and agile management.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 1 | Storage Array | Huawei | OceanStor 6800F V5 All Flash System (four active-active controllers) | A single Huawei OceanStor 6800F V5 engine includes 4 controllers, providing full 4-controller redundancy. Each controller includes 1TB of memory and two 4-port 10GbE Smart I/O modules, with 8 ports used for data (connections to load generators). The engine includes two 12-port SAS I/O modules; 4 ports on each SAS I/O module were used to connect to the disk enclosures. The included Premium Bundle covers NFS, CIFS, NDMP, SmartQuota, HyperClone, HyperSnap, HyperReplication, HyperMetro, SmartQoS, SmartPartition, SmartDedupe, and SmartCompression; only the NFS protocol license was used in the test.
2 | 60 | Disk drive | Huawei | SSDM-900G2S-02 | 900GB SSD SAS disk unit (2.5"); each disk enclosure used 15 SSDs in the test.
3 | 4 | Disk Enclosure | Huawei | 2U SAS disk enclosure | 2U, AC\240HVDC, 2.5", expanding module, 25 disk slots. The disks were installed in the disk enclosures, and the enclosures were connected directly to the storage controllers.
4 | 16 | 10GbE HBA card | Intel | Intel Corporation 82599ES 10-Gigabit SFI/SFP+ | Used in the clients for data connections to the storage; each client used two 10GbE cards, each card with 2 ports.
5 | 8 | Client | Huawei | Huawei FusionServer RH2288 V3 servers | Huawei servers, each with 128GB main memory. One was used as the Prime Client; all 8 (including the Prime Client) generated the workload.

Configuration Diagrams

  1. Huawei OceanStor 6800F V5 Config Diagram

Component Software

Item No | Component | Type | Name and Version | Description
1 | Linux | OS | SUSE Linux Enterprise Server 12 SP3 (x86_64), kernel 4.4.73-5-default | OS for the 8 clients
2 | OceanStor | Storage OS | V500R007 | Storage Operating System

Hardware Configuration and Tuning - Physical

Client
Parameter Name | Value | Description
MTU | 9000 | Jumbo frames configured on the 10GbE ports

Hardware Configuration and Tuning Notes

The clients' 10GbE ports were used for connections to the storage controllers. Only the clients were configured for jumbo frames; the storage used the default MTU (1500). An illustrative configuration step is sketched below.
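
A minimal sketch of enabling jumbo frames on the client side, assuming a Linux host with the iproute2 `ip` tool; the interface names are hypothetical, as the report does not disclose them.

    # Illustrative only: set MTU 9000 on the client-side 10GbE interfaces.
    # Interface names eth2/eth3 are assumptions, not taken from the report.
    # Requires root privileges and the iproute2 'ip' tool.
    import subprocess

    CLIENT_10GBE_PORTS = ["eth2", "eth3"]  # assumed names for the client data ports

    for dev in CLIENT_10GBE_PORTS:
        subprocess.run(["ip", "link", "set", "dev", dev, "mtu", "9000"], check=True)
        # The storage side kept the default MTU of 1500, per the report.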

Software Configuration and Tuning - Physical

Clients
Parameter Name | Value | Description
rsize, wsize | 1048576 | NFS mount option for data block size
protocol | tcp | NFS mount option for transport protocol
tcp_fin_timeout | 600 | TCP time to wait for the final packet before the socket is closed
nfsvers | 3 | NFS mount option for NFS version

Software Configuration and Tuning Notes

The mount command "mount -t nfs -o nfsvers=3 11.11.11.1:/fs_1 /mnt/fs_1" was used in the test. The resulting mount information was: 11.11.11.1:/fs_1 on /mnt/fs_1 type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=11.11.11.1,mountvers=3,mountport=2050,mountproto=udp,local_lock=none,addr=11.11.11.1).

Service SLA Notes

None

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 900GB SSD drives used for data; 4 storage pools of 15 drives each, organized as RAID5-9 and including the 4 coffer disks | RAID-5 | Yes | 60
2 | One 800GiB NVMe drive per controller, used for system data; 7GiB on the 4 coffer disks and the 800GiB NVMe drive form a RAID-1 group | RAID-1 | Yes | 4

Number of Filesystems | 32
Total Capacity | 28800GiB
Filesystem Type | thin
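
As a consistency check on the figures above, the total capacity follows from the filesystem layout; the 900GiB-per-filesystem figure below is inferred from the totals, not stated in the report.

    # Sanity-check the capacity figures: 4 storage pools x 8 filesystems each
    # = 32 filesystems; 28800 GiB / 32 implies 900 GiB per thin filesystem
    # (inferred, not stated in the report).
    pools, fs_per_pool, total_gib = 4, 8, 28800
    filesystems = pools * fs_per_pool
    assert filesystems == 32
    print(f"{filesystems} filesystems, {total_gib / filesystems:.0f} GiB each")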

Filesystem Creation Notes

The file system block size was 8KB.

Storage and Filesystem Notes

One engine of the OceanStor 6800F V5 was used in the test; one engine includes four controllers. Four disk enclosures were connected to the engine, and each disk enclosure held 15 900GB SSDs. The 15 disks in each disk enclosure formed one storage pool, and 8 filesystems were created in each storage pool; of those 8 filesystems, each controller owned 2. RAID5-9 was an 8+1 scheme: RAID5-9 was applied per stripe, and the stripes were distributed across all 15 drives by a specific algorithm. For example, stripe 1 was placed on disks 1 through 9, stripe 2 on disks 2 through 10, and so on for all stripes. A small illustration of this rotation follows.
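
The rotating stripe placement described above can be modeled with a short sketch. The array's actual placement algorithm is proprietary; this simple round-robin shift only mirrors the example in the text, where each stripe starts one drive later than the previous one.

    # Illustrative model of RAID5-9 (8 data + 1 parity) stripes rotating
    # across a 15-drive pool. Not the array's real algorithm; it only
    # reproduces the example given in the notes.
    DRIVES = 15
    STRIPE_WIDTH = 9  # 8 data columns + 1 parity column

    def stripe_members(stripe_no):
        """Return the drive indices (0-based) holding this stripe."""
        start = stripe_no % DRIVES
        return [(start + i) % DRIVES for i in range(STRIPE_WIDTH)]

    for n in range(3):
        drives = [d + 1 for d in stripe_members(n)]  # 1-based for readability
        print(f"stripe {n + 1}: drives {drives}")
    # stripe 1: drives 1..9, stripe 2: drives 2..10, stripe 3: drives 3..11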

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 10GbE | 32 | For the client-to-storage network, the clients connected to the storage directly; no switch was used. There were 32 10GbE connections in total, communicating with NFSv3 over TCP/IP to the 8 clients.

Transport Configuration Notes

Each controller used two 10GbE cards, and each card included 4 ports, so 8 10GbE ports per controller were used for data transport connectivity to the clients. In total, 32 ports on the 8 clients and 32 ports on the 4 storage controllers were used, with the clients connected directly to the storage. The 4 controllers were interconnected via PCIe to form HA pairs.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | None | None | None | None | None

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 8 | CPU | Storage Controller | Intel(R) Xeon(R) Gold 5120T @ 2.20GHz, 14 cores | NFS, TCP/IP, RAID, and storage controller functions
2 | 16 | CPU | Client | Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz | NFS client, SUSE Linux OS

Processing Element Notes

Each OceanStor 6800F V5 storage controller contains two Intel(R) Xeon(R) Gold 5120T @ 2.20GHz processors. Each client contains two Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz processors.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Main memory for each OceanStor 6800F V5 Storage Controller | 1024 | 4 | V | 4096
Memory for each client | 128 | 8 | V | 1024
Grand Total Memory Gibibytes | | | | 5120

Memory Notes

Main memory in each storage controller was used for the operating system and caching filesystem data including the read and write cache.

Stable Storage

1. There are three ways data is protected. Against disk failure, the OceanStor 6800F V5 uses RAID. Against controller failure, it uses a cache mirror: data written to one controller's cache is also written to another controller's cache. Against power failure, BBUs supply power so the storage can flush cached data to disks.
2. No persistent memory was used in the storage. The BBUs supply power for failure recovery, and the 1 TiB of memory in each controller includes the mirror cache; data was mirrored between controllers.
3. The write cache was smaller than 800GiB, so the 800GiB NVMe drive could hold all user write data.
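
Conceptually, this write path acknowledges a write only after the data sits in two controllers' caches, with BBU-backed flushing covering power loss. A minimal sketch of that mirroring logic follows; it is purely illustrative and not the OceanStor implementation.

    # Conceptual sketch of mirrored write caching: a write is acknowledged
    # only once it is in the local cache AND the HA partner's cache, so a
    # single controller failure loses no acknowledged data. Illustrative
    # only; not Huawei's implementation.
    class Controller:
        def __init__(self, name):
            self.name = name
            self.cache = {}      # volatile cache, flushed to disk on BBU power
            self.partner = None  # HA partner, reachable over PCIe

        def write(self, block_id, data):
            self.cache[block_id] = data          # local cache copy
            self.partner.cache[block_id] = data  # mirror to partner's cache
            return "ack"                         # safe to acknowledge now

    a, b = Controller("A"), Controller("B")
    a.partner, b.partner = b, a
    assert a.write("lba42", b"payload") == "ack"
    assert b.cache["lba42"] == b"payload"  # mirrored copy survives loss of A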

Solution Under Test Configuration Notes

None

Other Solution Notes

None

Dataflow

Please reference the configuration diagram. 8 clients were used to generate the workload; 1 client also acted as the Prime Client to control the 7 other clients. Each client had 4 ports, and each port connected to a different controller. Each port mounted one filesystem from the controller it connected to, so each client mounted 4 filesystems in total, as sketched below.
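
A sketch that generates the 32 mount commands implied by this dataflow (8 clients x 4 controllers, one filesystem per client-controller pair). Only 11.11.11.1:/fs_1 appears in the report; the other addresses and the fs_1..fs_32 numbering are assumptions for illustration.

    # Generate the 32 NFS mounts implied by the dataflow. The IP addressing
    # scheme (one subnet per controller) and the filesystem numbering are
    # illustrative assumptions; only 11.11.11.1:/fs_1 is given in the report.
    CLIENTS = 8
    CONTROLLERS = 4

    for client in range(CLIENTS):
        for ctrl in range(CONTROLLERS):
            fs = client * CONTROLLERS + ctrl + 1  # fs_1 .. fs_32 (assumed)
            server = f"11.11.{ctrl + 1}.1"        # per-controller subnet (assumed)
            print(f"client{client + 1}: mount -t nfs -o nfsvers=3 "
                  f"{server}:/fs_{fs} /mnt/fs_{fs}")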

Other Notes

There were two SAS I/O modules in the engine, with 2 controllers sharing each SAS I/O module. Each disk enclosure had 2 connections to the engine, one to each SAS I/O module.

Other Report Notes

None


Generated on Wed Mar 13 16:39:02 2019 by SpecReport
Copyright © 2016-2019 Standard Performance Evaluation Corporation