================================================================================
SPECsfs2008_nfs.v3 Result
================================================================================

Hitachi Data Systems  :  Hitachi NAS Platform 3090-G2, powered by BlueArc,
                         Two node cluster (with Hitachi NAS Performance
                         Accelerator feature)
SPECsfs2008_nfs.v3    =  189994 Ops/Sec (Overall Response Time = 2.08 msec)
================================================================================

Performance
===========

   Throughput   Response
    (ops/sec)     (msec)
   ----------   --------
        18800        0.8
        37650        0.9
        56462        1.0
        75338        1.2
        94141        1.4
       113055        1.7
       131962        2.1
       150864        2.7
       169899        3.9
       189994        9.5

================================================================================

Product and Test Information
============================

Tested By           Hitachi Data Systems
Product Name        Hitachi NAS Platform 3090-G2, powered by BlueArc, Two node
                    cluster (with Hitachi NAS Performance Accelerator feature)
Hardware Available  February 2011
Software Available  February 2012
Date Tested         January 2012
SFS License Number  276
Licensee Locations  Santa Clara, CA, USA

The Hitachi NAS Platform, powered by BlueArc, continues to deliver
best-in-class performance and scalability, now with a new Performance
Accelerator feature. The Hitachi NAS Performance Accelerator is an optional,
license-key-based feature that optimizes and improves the overall performance
of an HNAS 3090 server by enabling very-large-scale-integration (VLSI)
features within the server. When combined with additional storage, the
Hitachi NAS Performance Accelerator feature can increase performance levels
by up to 30%. For efficient data management, the Hitachi NAS Platform
provides Intelligent File Tiering, Clustered Namespace, large 256TB file
systems, enterprise search enhancements and integration with Hitachi storage
and management products.
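As a cross-check of the Performance table above: the reported Overall
Response Time is the area under the response-vs-throughput curve divided by
the peak throughput. Assuming the curve is anchored at the origin (0 ops/sec,
0 msec response), simple trapezoidal integration over the ten reported data
points reproduces the 2.08 msec figure. A short Python sketch:

```python
# Data points from the Performance table: (throughput ops/sec, response msec).
# The leading (0, 0.0) point is an assumption: the curve is anchored at the
# origin before integrating.
points = [
    (0, 0.0),
    (18800, 0.8), (37650, 0.9), (56462, 1.0), (75338, 1.2), (94141, 1.4),
    (113055, 1.7), (131962, 2.1), (150864, 2.7), (169899, 3.9), (189994, 9.5),
]

# Trapezoidal rule: sum of (width in ops/sec) * (mean response of the two
# endpoints) over consecutive point pairs.
area = sum((t2 - t1) * (r1 + r2) / 2
           for (t1, r1), (t2, r2) in zip(points, points[1:]))

# Overall Response Time = area under the curve / peak throughput.
overall_response_time = area / points[-1][0]
print(round(overall_response_time, 2))  # 2.08
```

This matches the published Overall Response Time of 2.08 msec.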
Hitachi NAS Platform uses a Hybrid Core Architecture that accelerates
processing to achieve the industry's best performance in both throughput and
operations per second. Availability and scalability are further enhanced by
the ability to grow to up to four nodes per cluster. The Hitachi NAS Platform
family delivers the highest scalability in the market, which enables
organizations to consolidate file servers and other NAS devices into fewer
nodes and storage arrays for simplified management, improved space efficiency
and lower energy consumption. The HNAS midrange model 3090-G2 can scale up to
8PB of usable data storage and supports simultaneous 1GbE and 10GbE LAN
access, and 4Gbps FC storage connectivity.

Configuration Bill of Materials
===============================

Item No  Qty  Type               Vendor  Model/Name          Description
-------  ---  ----               ------  ----------          -----------
1        2    Server             HDS     SX345321.P          Hitachi NAS 3090-G2 Base System
2        1    Server             HDS     SX345278.P          System Management Unit (SMU)
3        2    Software           HDS     SX365117.P          Hitachi NAS 3090 Value Cluster SW Bundle
4        2    Software           HDS     SX435074.P          Hitachi NAS SW Lic - Cluster Name Space
5        2    Software           HDS     Accelerator SW Lic  Hitachi NAS SW Lic - Hitachi Performance Accelerator
6        8    FC Interface       HDS     FTLF8524P2BNV.P     SFP 4G SWL FINISAR 1-PK
7        8    Network Interface  HDS     FTLX8511D3.P        10G 850nm XFP
8        1    Storage            HDS     VSP-A0001.S         VSP Hardware Product
9        1    Disk Controller    HDS     DKC710I-CBXA.P      Primary Controller Chassis
10       1    Disk Controller    HDS     DKC710I-CBXB.P      Second Controller Chassis
11       16   Cache              HDS     DKC-F710I-C32G.P    Cache Memory Module (32GB)
12       2    Cache              HDS     DKC-F710I-BM128.P   Cache Flash Memory Module (in use during power outage)
13       368  Disk Drives        HDS     DKC-F710I-146KCM.P  SFF 146GB Disk Drive 2.5inch
14       3    Chassis            HDS     DKC-F710I-SBX.P     SFF Drive Chassis
15       4    FC Interface       HDS     DKC-F710I-16UFC.P   Fibre 16-Port Host Adapter (8Gbps)
16       4    Disk Adapter       HDS     DKC-F710I-SCA.P     Disk Adapter
17       4    Processor Blade    HDS     DKC-F710I-MP.P      Processor Blade
18       2    Switch Adapter     HDS     DKC-F710I-ESW.P     PCI-Express Switch Adapter
19       2    Hub                HDS     DKC-F710I-HUB.P     Hub Kit
20       2    Rack               HDS     DKC-F710I-RK42.P    Rack-42U
21       1    Cables             HDS     DKC-F710I-MDEXC.P   Inter-Controller Connecting Kit
22       1    Software           HDS     044-230001-03.P     VSP Basic Operating System 20TB Base License
23       1    Software           HDS     044-230001-04B.P    VSP Basic Operating System 4-VSD Pair Base License

Server Software
===============

OS Name and Version  10.0.3067.11
Other Software       None
Filesystem Software  SiliconFS 10.0.3067.11

Server Tuning
=============

Name              Value        Description
----              -----        -----------
security-mode     UNIX         Security mode is native UNIX
cifs_auth         off          Disable CIFS security authorization
cache-bias        small-files  Set metadata cache bias to small files
fs-accessed-time  off          Accessed time management was turned off
shortname         off          Disable short name generation for CIFS clients
read-ahead        0            Disable file read-ahead

Server Tuning Notes
-------------------

None

Disks and Filesystems
=====================

                                                Number
Description                                     of Disks  Usable Size
-----------                                     --------  -----------
146GB SAS 15K RPM Disks                         368       36.1 TB
160GB SATA 5400 RPM Disks. These four drives    4         320.0 GB
(two per node) are used for storing the core
operating system and management logs. No
cache or data storage.
Total                                           372       36.4 TB

Number of Filesystems        4
Total Exported Capacity      36953.6 GB
Filesystem Type              WFS-2
Filesystem Creation Options  4K filesystem block size; dsb-count (dynamic
                             system block) set at 768
Filesystem Config            Each filesystem was striped across 23 x 3D+1P
                             RAID-5 LUNs (92 disks).
Fileset Size                 22019.9 GB

The storage configuration consisted of one Virtual Storage Platform (VSP)
storage system configured as a dual chassis with up to 512GB of allocated
cache memory. There were 368 15K RPM SAS disks in use for these tests, and 92
LUNs created using RAID-5 (3D+1P). There were sixteen 4Gbps FC ports in use
across 2 FED features located in different clusters. The FC ports were
connected to the 3090-G2 servers via a redundant pair of Brocade 5320
switches.
The 3090-G2 servers were connected to each Brocade 5320 switch via two 4Gbps
FC connections, such that a completely redundant path exists from the servers
to the storage.

Hitachi NAS Platform servers have two internal mirrored hard disk drives
which are used to store the core operating software and system logs. These
drives are not used for cache space or for storing data.

Network Configuration
=====================

                              Number of
Item No  Network Type         Ports Used  Notes
-------  ------------         ----------  -----
1        10 Gigabit Ethernet  2           Integrated 1GbE / 10GbE Ethernet
                                          controller

Network Configuration Notes
---------------------------

One 10GbE network interface from each 3090-G2 server was connected to a
Brocade TurboIron 24X switch, which provided network connectivity to the
clients. The interface was configured to use jumbo frames (MTU size of 8000
bytes).

Benchmark Network
=================

Each LG has an Intel XF SR 10GbE single-port PCIe network interface. Each LG
connects via a single 10GbE connection to the ports on the Brocade TurboIron
24X network switch.

Processing Elements
===================

Item No  Qty  Type  Description                    Processing Function
-------  ---  ----  -----------                    -------------------
1        4    FPGA  Altera Stratix III EP3SE260    Storage Interface,
                                                   Filesystem
2        4    FPGA  Altera Stratix III EP3SL340    Network Interface, NFS,
                                                   Filesystem
3        2    CPU   Intel E8400 3.0GHz, Dual Core  Management
4        8    VSD   Intel Xeon Quad-Core CPU       VSP unit

Processing Element Notes
------------------------

Each HNAS 3090-G2 server has two FPGAs of each type (four in total) used for
benchmark processing functions. The VSD is the VSP's I/O processor board.
There are two pairs of these installed per chassis. Each board includes an
Intel Xeon Quad-Core CPU.
Memory
======

                                     Size in  Number of
Description                          GB       Instances  Total GB  Nonvolatile
-----------                          -------  ---------  --------  -----------
Server Main Memory                   12       2          24        V
Server Filesystem and Storage Cache  14       2          28        V
Server Battery-backed NVRAM          2        2          4         NV
Cache Memory Module (VSP)            32       16         512       NV
Grand Total Memory Gigabytes                             568

Memory Notes
------------

Each 3090-G2 server has 12GB of main memory that is used for the operating
system and in support of the FPGA functions. 14GB of memory is dedicated to
filesystem metadata and sector caches. A separate, integrated battery-backed
NVRAM module on the filesystem board is used to provide stable storage for
writes that have not yet been written to disk. The VSP storage system was
configured with 512GB of memory.

Stable Storage
==============

The Hitachi NAS Platform server writes first to the battery-backed (72-hour)
NVRAM internal to the server. Data from NVRAM is then written to the storage
system at the earliest opportunity, but always within a few seconds of
arrival in the NVRAM. In an active-active cluster configuration, the contents
of the NVRAM are synchronously mirrored to ensure that, in the event of a
single-node failover, any pending transactions can be completed by the
remaining node. Data from the HNAS is first written to the battery-backed VSP
cache and is backed up onto the Cache Flash Memory modules in the event of a
power outage. The Cache Flash Memory modules in the VSP are part of the total
solution, but are used only during a power outage and not as cache space.

System Under Test Configuration Notes
=====================================

The system under test consisted of two Hitachi NAS Platform 3090-G2 servers,
connected to a VSP storage system via two Brocade 5320 FC switches. The
servers are configured in an active-active cluster mode, directly connected
by a redundant pair of 10GbE connections to the cluster interconnect ports.
The VSP storage system consisted of 368 15K RPM SAS drives.
All the connectivity from the servers to the storage was via a 4Gbps switched
FC fabric. For these tests, there were 2 zones created on each FC switch.
Each Hitachi NAS 3090-G2 server was connected to each zone via 2 integrated
4Gbps FC ports (corresponding to 2 H-ports). The VSP storage system was
connected to the 2 zones (corresponding to 16 FC ports), providing an I/O
path from the servers to the storage.

The System Management Unit (SMU) is part of the total system solution, but is
used for management purposes only and was not active during the test.

Other System Notes
==================

None

Test Environment Bill of Materials
==================================

Item No  Qty  Vendor   Model/Name      Description
-------  ---  ------   ----------      -----------
1        16   Oracle   Sun Fire x2200  RHEL 5 clients, two dual-core
                                       processors, 8GB RAM
2        1    Brocade  TurboIron       Brocade TurboIron 24X, 24-port 10GbE
                                       switch

Load Generators
===============

LG Type Name                  LG1
BOM Item #                    1
Processor Name                AMD Opteron
Processor Speed               2.6 GHz
Number of Processors (chips)  2
Number of Cores/Chip          2
Memory Size                   8 GB
Operating System              Red Hat Enterprise Linux 5, 2.6.18-8.el5 kernel
Network Type                  1 x Intel XF SR PCIe 10GbE

Load Generator (LG) Configuration
=================================

Benchmark Parameters
--------------------

Network Attached Storage Type  NFS V3
Number of Load Generators      16
Number of Processes per LG     128
Biod Max Read Setting          2
Biod Max Write Setting         2
Block Size                     64

Testbed Configuration
---------------------

LG No   LG Type  Network  Target Filesystems          Notes
-----   -------  -------  ------------------          -----
62..77  LG1      1        /w/d0, /w/d1, /w/d2, /w/d3  None

Load Generator Configuration Notes
----------------------------------

All clients were connected to one common namespace on the server cluster,
connected to a single 10GbE network.

Uniform Access Rule Compliance
==============================

Each load-generating client hosted 128 processes, all accessing a single
namespace on the HNAS 3090-G2 cluster through a common network connection.
There were 4 target file systems (/w/d0, /w/d1, /w/d2, /w/d3) that are
presented as a single cluster namespace through the virtual root (/w),
accessible to all clients. Each load generator mounted each filesystem target
(/w/d0, /w/d1, /w/d2, /w/d3) and cycled through all the file systems in
sequence.

Other Notes
===========

None

Hitachi NAS Platform, powered by BlueArc, and Virtual Storage Platform are
registered trademarks of Hitachi Data Systems, Inc. in the United States,
other countries, or both. All other trademarks belong to their respective
owners and should be treated as such.

================================================================================
Generated on Wed Feb 22 15:32:02 2012 by SPECsfs2008 ASCII Formatter
Copyright (C) 1997-2008 Standard Performance Evaluation Corporation