SPEC SFS(R)2014_swbuild Result

Huawei : Huawei OceanStor 5500 V5
SPEC SFS2014_swbuild = 200 Builds (Overall Response Time = 0.58 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency       Builds        Builds
   (Builds)      (msec)        Ops/Sec       MB/Sec
------------  ------------  ------------  ------------
     20            0.2         10000           129
     40            0.2         20000           258
     60            0.2         30001           388
     80            0.3         40001           517
    100            0.6         50002           646
    120            0.5         60002           776
    140            0.7         70002           905
    160            1.0         79999          1035
    180            0.6         90003          1165
    200            2.1         99994          1294

(A sketch relating this table to the published overall response time
appears after the Solution Under Test Bill of Materials below.)

===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
|                    Huawei OceanStor 5500 V5                    |
+---------------------------------------------------------------+

Tested by            Huawei
Hardware Available   04/2018
Software Available   04/2018
Date Tested          07/2018
License Number       3175
Licensee Locations   Chengdu, China

Huawei's OceanStor 5500 V5 Storage System is the new generation of
mid-range hybrid flash storage, dedicated to providing reliable and
efficient data services for enterprises. Its cloud-ready operating
system, flash-enabled performance, and intelligent management software
deliver top-of-the-line functionality, performance, efficiency,
reliability, and ease of use. It satisfies the data storage requirements
of large-database OLTP/OLAP, cloud computing, and many other
applications, making it a strong choice for sectors such as government,
finance, telecommunications, and manufacturing.

Solution Under Test Bill of Materials
=====================================

Item
 No   Qty  Type        Vendor  Model/Name    Description
----  ---- ----------  ------  ------------  ---------------------------------
 1    1    Storage     Huawei  OceanStor     A single Huawei OceanStor 5500 V5
           Array               5500 V5       engine includes 2 controllers in
                               System (two   a fully redundant active-active
                               active-       configuration. Each controller
                               active con-   includes 128 GiB of memory and
                               trollers)     one 4-port 10GbE Smart I/O
                                             module; all 4 ports were used for
                                             data (connections to load
                                             generators). Each controller also
                                             includes two 2-port onboard SAS
                                             interfaces. The included Premium
                                             Bundle provides NFS, CIFS, NDMP,
                                             SmartQuota, HyperClone,
                                             HyperSnap, HyperReplication,
                                             HyperMetro, SmartQoS,
                                             SmartPartition, SmartDedupe, and
                                             SmartCompression; only the NFS
                                             protocol license was used in the
                                             test.
 2    24   Disk drive  Huawei  SSDM-         900GB SAS SSD (2.5"); all 24 SSDs
                               900G2S-02     are installed in the engine.
 3    4    10GbE HBA   Intel   Intel         Used in the clients for data
                               Corporation   connections to storage; each
                               82599ES       client used two cards, each with
                               10-Gigabit    2 ports.
                               SFI/SFP+
 4    2    Client      Huawei  Huawei        Huawei servers, each with 128 GiB
                               FusionServer  of main memory. Both servers
                               RH2288 V3     generated the workload, and one
                                             of them also served as the Prime
                                             Client.
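The published overall response time can be roughly reproduced from the
performance table above. The sketch below assumes the SPEC SFS2014
convention that the overall response time is the area under the
load-versus-latency curve (trapezoidal rule) divided by the peak load;
the formula and the inline data points are illustrative, not taken from
the SPEC tools.

    # Approximate the overall response time from the rounded table values.
    printf '%s\n' \
      '20 0.2' '40 0.2' '60 0.2' '80 0.3' '100 0.6' \
      '120 0.5' '140 0.7' '160 1.0' '180 0.6' '200 2.1' |
    awk '{ load[NR] = $1; lat[NR] = $2 }
         END {
           for (i = 2; i <= NR; i++)
             area += (load[i] - load[i-1]) * (lat[i] + lat[i-1]) / 2
           printf "approximate ORT = %.2f msec\n", area / load[NR]
         }'

On the rounded values this prints about 0.53 msec; the published 0.58
msec is computed by the SPEC reporting tools from unrounded measurement
data, so an exact match is not expected.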
Configuration Diagrams
======================
1) sfs2014-20180730-00042.config1.jpg (see SPEC SFS2014 results webpage)

Component Software
==================

Item  Component    Name and
 No   Type         Version                   Description
----  -----------  ------------------------  -----------------------------
 1    Linux OS     SUSE Linux Enterprise     OS for the 2 clients
                   Server 12 SP3 with the
                   kernel 4.4.73-5-default
 2    OceanStor    V500R007                  Storage operating system
      Storage OS

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                                Client                                |
+----------------------------------------------------------------------+

Parameter Name   Value            Description
---------------  ---------------  ----------------------------------------
None             None             None

Hardware Configuration and Tuning Notes
---------------------------------------
None

Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                               Clients                                |
+----------------------------------------------------------------------+

Parameter Name    Value            Description
----------------  ---------------  ---------------------------------------
rsize,wsize       1048576          NFS mount options for data block size
protocol          tcp              NFS mount option for transport protocol
nfsvers           3                NFS mount option for NFS version
tcp_fin_timeout   600              TCP time to wait for the final packet
                                   before the socket is closed
somaxconn         65536            Maximum TCP backlog an application can
                                   request
tcp_fin_timeout   5                TCP time to wait for the final packet
                                   before the socket is closed
tcp_slot_table_   256              Number of simultaneous TCP Remote
entries                            Procedure Call (RPC) requests
tcp_rmem          10000000         Receive buffer size: min, default, max
                  20000000
                  40000000
tcp_wmem          10000000         Send buffer size: min, default, max
                  20000000
                  40000000
netdev_max_back   300000           Maximum number of packets allowed to
log                                queue

Software Configuration and Tuning Notes
---------------------------------------
The mount command "mount -t nfs -o nfsvers=3 31.31.31.1:/fs_1 /mnt/fs_1"
was used in the test. The resulting mount information was:

31.31.31.1:/fs_1 on /mnt/fs_1 type nfs (rw,relatime,vers=3,rsize=1048576,
wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,
mountaddr=31.31.31.1,mountvers=3,mountport=2050,mountproto=udp,
local_lock=none,addr=31.31.31.1)

(A sketch showing how the kernel tunings above could be applied appears
after the Storage and Filesystem Notes below.)

Service SLA Notes
-----------------
None

Storage and Filesystems
=======================

Item                                                      Stable
 No   Description                       Data Protection   Storage   Qty
----  --------------------------------  ----------------  --------  -----
 1    900GB SSD drives used for data:   RAID-5            Yes       24
      1x 24-drive pool of RAID5-9
      groups, including 4 coffer disks
 2    64GB 7200 RPM SATA drives used    RAID-1            Yes       2
      for system data for the engine

Number of Filesystems    8
Total Capacity           8192 GiB
Filesystem Type          thin

Filesystem Creation Notes
-------------------------
The file system block size was 8KB.

Storage and Filesystem Notes
----------------------------
One engine of the OceanStor 5500 V5 was used in the test; one engine
includes two controllers. The engine had 25 disk slots, and 24 SSDs were
installed in the enclosure for the test. All 24 disks were formed into a
single storage pool with RAID5-9 (8+1) protection. Eight filesystems were
created in the storage pool, four owned by each controller. Each RAID5-9
stripe spans 9 drives, and the stripes were distributed across all 24
drives by a placement algorithm. For example, stripe 1 spanned disks 1
through 9, stripe 2 spanned disks 2 through 10, and so on for the
remaining stripes.
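The script below is a toy model of that stripe placement: each stripe
occupies 9 consecutive drive slots, advancing by one drive per stripe and
wrapping around the 24-drive pool. It only mimics the example given
above; Huawei's actual placement algorithm is not disclosed in this
report.

    # Toy model of RAID5-9 (8+1) stripe rotation over a 24-drive pool.
    DRIVES=24
    WIDTH=9   # 8 data strips + 1 parity strip per stripe
    for stripe in $(seq 1 "$DRIVES"); do
      members=""
      for offset in $(seq 0 $((WIDTH - 1))); do
        # drive slots are numbered 1..24 and wrap around the pool
        members="$members $(( (stripe - 1 + offset) % DRIVES + 1 ))"
      done
      echo "stripe $stripe -> drives:$members"
    done

Distributing stripes this way lets all 24 drives share the I/O and
rebuild load instead of confining a RAID group to 9 fixed drives.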
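As a supplement to the Software Configuration and Tuning section above,
the sketch below shows one way the listed client kernel tunings could be
applied on SUSE Linux. The sysctl key names are assumed mappings of the
short parameter names in that table; the report does not state how the
values were set, and tcp_fin_timeout is listed twice (600 and 5), so only
the second value is shown here.

    # Assumed sysctl equivalents of the client tunings (run as root).
    sysctl -w net.ipv4.tcp_fin_timeout=5               # table also lists 600
    sysctl -w net.core.somaxconn=65536
    sysctl -w net.core.netdev_max_backlog=300000
    sysctl -w net.ipv4.tcp_rmem="10000000 20000000 40000000"
    sysctl -w net.ipv4.tcp_wmem="10000000 20000000 40000000"
    sysctl -w sunrpc.tcp_slot_table_entries=256        # RPC slot table

The NFS mount options in the table (rsize, wsize, protocol, nfsvers) are
applied through the mount command shown in the notes above rather than
through sysctl.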
Transport Configuration - Physical
==================================

Item                  Number of
 No   Transport Type  Ports Used  Notes
----  --------------  ----------  ------------------------------------------
 1    10GbE           8           For the client-to-storage network, the
                                  clients were connected directly to the
                                  storage; no switch was used. There were 8
                                  10GbE connections in total, communicating
                                  with NFSv3 over TCP/IP between the 2
                                  clients and the 2 storage controllers.

Transport Configuration Notes
-----------------------------
Each controller used one 10GbE card, and each 10GbE card included 4
ports, so 8 10GbE ports in total across the two controllers were used for
data transport connectivity to the clients. In total, 8 ports on the 2
clients and 8 ports on the 2 storage controllers were used, with the
clients connected directly to the storage. The 2 controllers were
interconnected over PCIe to form an HA pair.

Switches - Physical
===================

                                         Total  Used
Item                                     Port   Port
 No   Switch Name   Switch Type          Count  Count  Notes
----  ------------  -------------------  -----  -----  -----------------------
 1    None          None                 None   None   None

Processing Elements - Physical
==============================

Item
 No   Qty  Type  Location        Description                Processing Function
----  ---- ----  --------------  -------------------------  -------------------
 1    2    CPU   Storage         Intel(R) Xeon(R) Gold      NFS, TCP/IP, RAID,
                 Controller      4109T @ 2.0GHz, 8 cores    and storage
                                                            controller
                                                            functions
 2    4    CPU   Client          Intel(R) Xeon(R) CPU       NFS client, SUSE
                                 E5-2670 v3 @ 2.30GHz       Linux Enterprise
                                                            Server 12 SP3

Processing Element Notes
------------------------
Each OceanStor 5500 V5 storage controller contains one Intel(R) Xeon(R)
Gold 4109T @ 2.0GHz processor. Each client contains two Intel(R) Xeon(R)
CPU E5-2670 v3 @ 2.30GHz processors.

Memory - Physical
=================

                           Size in   Number of
Description                GiB       Instances   Nonvolatile   Total GiB
-------------------------  --------  ----------  ------------  ----------
Main memory for each       128       2           V             256
OceanStor 5500 V5 storage
controller
Memory for each client     128       2           V             256

Grand Total Memory Gibibytes                                   512

Memory Notes
------------
Main memory in each storage controller was used for the operating system
and for caching filesystem data, including the read and write cache.

Stable Storage
==============
1. Data is protected in three ways. Against disk failure, the OceanStor
   5500 V5 uses RAID. Against controller failure, it uses cache
   mirroring: written data is also copied to the other controller's
   cache. Against power failure, BBUs supply power so the storage can
   flush cached data to disk.
2. No persistent memory was used in the storage. The BBUs could supply
   power during failure recovery, and the 128 GiB of memory in each
   controller included the mirror cache; data was mirrored between the
   two controllers.
3. The write cache was smaller than 64GB, so the 64GB SATA drives could
   hold all of the user write data.

Solution Under Test Configuration Notes
=======================================
None

Other Solution Notes
====================
None

Dataflow
========
Please reference the configuration diagram. 2 clients were used to
generate the workload, and 1 of them also acted as the Prime Client that
controlled the load generators. Each client had 4 ports, with two ports
connected to each controller. In total there were 8 ports and 8
filesystems, and each port mounted one filesystem (see the mount sketch
after the Other Notes section below).

Other Notes
===========
There were no Spectre/Meltdown patches applied to any component in the
Solution Under Test.
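To make the dataflow concrete, the sketch below expands the single mount
example from the Software Configuration and Tuning Notes to all 8
filesystems, one per 10GbE port. Only 31.31.31.1:/fs_1 is documented in
this report; the remaining server addresses and export names are
hypothetical placeholders following the same pattern.

    # Hypothetical layout: 8 NFS filesystems, one mounted per 10GbE port.
    # Only 31.31.31.1:/fs_1 appears in the original report.
    for i in $(seq 1 8); do
      mkdir -p "/mnt/fs_$i"
      mount -t nfs -o nfsvers=3 "31.31.31.$i:/fs_$i" "/mnt/fs_$i"
    done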
Other Report Notes
==================
None

===============================================================================

Generated on Wed Mar 13 17:00:03 2019 by SpecReport
Copyright (C) 2016-2019 Standard Performance Evaluation Corporation