SPECsfs97_R1.v3 Result
===============================================================================
BlueArc Corporation : Titan 3110-F
SPECsfs97_R1.v3 = 96428 Ops/Sec (Overall Response Time = 2.11 msec)
===============================================================================

   Throughput   Response
      ops/sec       msec
         9433        0.5
        19140        0.8
        28751        1.0
        38165        1.2
        47903        1.7
        57563        2.1
        67371        2.6
        76937        3.5
        86852        4.2
        96428        6.8

===============================================================================

Server Configuration and Availability
   Vendor                        BlueArc Corporation
   Hardware Available            March 2008
   Software Available            May 2008
   Date Tested                   April 2008
   SFS License number            000063
   Licensee Location             San Jose, CA

CPU, Memory and Power
   Model Name                    Titan 3110-F
   Processor                     AMD Opteron 248 2.2GHz + FPGAs
   # of Processors               1 core, 1 chip, 1 core/chip + 12 FPGAs
   Primary Cache                 128KB (I+D) on chip
   Secondary Cache               1MB (I+D) on chip
   Other Cache                   8 GB
   UPS                           N/A
   Other Hardware                N/A
   Memory Size                   41 GB (incl. other cache size, NVRAM size,
                                 and RAID controller cache)
   NVRAM Size                    2 GB
   NVRAM Type                    DIMM
   NVRAM Description             72-hour battery backed

Server Software
   OS Name and Version           SU 5.2
   Other Software                N/A
   File System                   BlueArc Silicon File System with Cluster
                                 Name Space (CNS)
   NFS version                   3

Server Tuning
   Buffer Cache Size             N/A
   # NFS Processes               N/A
   Fileset Size                  918.7 GB

Network Subsystem
   Network Type                  Integrated
   Network Controller Desc.      2-port 10Gbps Ethernet (only one port used)
   Number Networks               1 (N0)
   Number Network Controllers    1
   Protocol Type                 TCP
   Switch Type                   1 Force10 S2410 10GigE
   Bridge Type                   N/A
   Hub Type                      N/A
   Other Network Hardware        N/A

Disk Subsystem and Filesystems
   Number Disk Controllers       1
   Number of Disks               320
   Number of Filesystems         1 (F1)
   File System Creation Ops      4KB block size
   File System Config            10 individual file system volumes
                                 aggregated using CNS to present a single,
                                 unified namespace
   Disk Controller               Integrated quad-port 4Gbps FC
   # of Controller Type          1
   Number of Disks               320
   Disk Type                     ST373455FC
   File Systems on Disks         F1
   Special Config Notes          see notes

Load Generator (LG) Configuration
   Number of Load Generators     6
   Number of Processes per LG    160
   Biod Max Read Setting         2
   Biod Max Write Setting        2
   LG Type                       LG0
   LG Model                      White box, Tyan S2915 motherboard
   Number and Type Processors    Dual Opteron 2218 dual-core, 2.6GHz
   Memory Size                   8 GB
   Operating System              Solaris 10 u4
   Compiler                      SFS97_R1 precompiled binaries
   Compiler Options              N/A
   Network Type                  Myricom 10GigE

Testbed Configuration
   LG #   LG Type   Network   Target File Systems                    Notes
   ----   -------   -------   -------------------                    -----
   1-6    LG0       N0        /r/f1, /r/f2, /r/f3, /r/f4, /r/f5,
                              /r/f6, /r/f7, /r/f8, /r/f9, /r/f10,
                              cycled in sequence for all 160
                              processes per LG (see the sketch below)
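   The full per-process target list is simply the 10 CNS file systems cycled
   once per process. A minimal Python sketch of that assignment, assuming
   only the figures above (6 LGs, 160 processes per LG); the names are
   illustrative and not part of the SFS tooling:

      from itertools import cycle, islice

      filesystems = [f"/r/f{i}" for i in range(1, 11)]  # the 10 CNS targets
      PROCS_PER_LG = 160

      for lg in range(1, 7):                            # load generators 1-6
          targets = list(islice(cycle(filesystems), PROCS_PER_LG))
          # cycling in sequence means each file system is used by exactly
          # 16 of the 160 processes on every load generator
          assert all(targets.count(fs) == 16 for fs in filesystems)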
===============================================================================

Notes and Tuning
<> The tested system was one BlueArc Titan 3110-F server connected via a
   Fibre Channel fabric to five (5) storage arrays. Each array consisted of
   one (1) BlueArc RC16TB (LSI 3992) dual RAID controller with 64 FC drives.
   Each dual RAID controller set has 2GB of memory, for a total of 10GB of
   cache memory across all storage controllers. RAID controller cache memory
   is included in the 41GB memory size listed above.
<> The Titan server had all standard protection services enabled, including
   RAID, NVRAM logging, and media error scrubbing.
<> Titan 3110 uses 12 Field Programmable Gate Arrays (FPGAs) to accelerate
   processing of network traffic and file system I/O.
<> Disk drives used were 73GB, 15,000 RPM, 4Gb/s FC Seagate ST373455FC
   drives.
<> Disk and FS configuration was 32 "1+1" RAID-1 LUs per RAID controller
   pair. Each RAID controller pair represented one Storage Pool created by
   striping across the 32 LUs. (Striping parameters are fixed and not under
   user control.) Two file systems were created within each Storage Pool.
   The ten (10) file systems were aggregated into a single namespace "/r"
   using BlueArc's Cluster Name Space (CNS) feature.
<> The storage arrays were connected to the Titan server using redundant
   Brocade 200E FC switches with dual redundant connections to each array.
<> The Titan 3110 server was connected to 6 load generators via 10GigE (end
   to end) through a single Force10 S2410 switch.
<> For Uniform Access Rule compliance, all LGs accessed all cluster
   namespace objects uniformly across all interfaces, as follows:
<> - There is 1 network node (i.e., Titan 3110 server cluster): T0
<> - There are 10 physical target file systems (/r/f1 ... /r/f10) presented
     as a single cluster name space (F1) with virtual root "/r" accessible
     to all clients.
<> - All file systems are collectively owned by a single Virtual Server with
     a single IP address.
<> - Each load generator (1-6) mounted each file system target (/r/f*) and
     cycled through the target file systems /r/f1, /r/f2, /r/f3, etc. in
     sequence (see the sketch under the Testbed Configuration table).
<> Titan 3110 contains four modules that perform all the storage operations,
   as follows: NIM3 = Network Interface Module (TCP/IP and UDP handling);
   FSA2 and FSB3 = File System Modules (NFS and CIFS protocol handling, plus
   the cluster interconnect on FSB3; the cluster interconnect was present
   but not used in this test run); and SIM3 = Storage Interface Module (disk
   controller / FC interface).
<> Titan 3110 has 31 gigabytes (GB) of memory, cache, and NVRAM distributed
   within the Titan modules as follows (cross-checked in the sketch after
   these notes):
<> - NIM3 - 3.5 GB memory per Titan
<> - FSA2 - 2.0 GB memory per Titan
<> - FSB3 - 14.5 GB memory per Titan, of which 2.0 GB is NVRAM and 8.0 GB is
     FS metadata cache. The remainder is used for buffering data moving
     to/from the disk drives and/or network.
<> - SIM3 - 11 GB memory per Titan, of which 8.0 GB is "sector" cache used
     for the interface with the RAID controllers and disk subsystem. This is
     the "Other Cache" size noted above.
<> To meet the "stable storage" requirement, the Titan server writes first
   to battery-backed (72-hour) NVRAM internal to the Titan. Data from NVRAM
   is then written to the drive arrays as convenient, but always within a
   few seconds of arrival in NVRAM.
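<> For illustration, the disk, file system, and memory figures in these
   notes can be cross-checked against the configuration section above. A
   minimal Python sketch; all values are taken from this report:

      arrays = 5                  # RC16TB (LSI 3992) storage arrays
      drives_per_array = 64
      lus_per_array = 32          # "1+1" RAID-1 LUs per controller pair
      drives_per_lu = 2           # one mirrored pair per LU
      fs_per_pool = 2             # two file systems per Storage Pool

      assert arrays * drives_per_array == 320            # Number of Disks
      assert lus_per_array * drives_per_lu == drives_per_array
      assert arrays * fs_per_pool == 10                  # CNS file systems

      titan_gb = 3.5 + 2.0 + 14.5 + 11.0      # NIM3 + FSA2 + FSB3 + SIM3
      raid_cache_gb = arrays * 2.0            # 2GB per dual RAID controller
      assert titan_gb == 31.0                 # memory internal to the Titan
      assert titan_gb + raid_cache_gb == 41.0 # Memory Size listed above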
<> Server tuning:
<> - Disable file read-ahead: "read ahead -- disable"
<> - Disable shortname generation for CIFS clients: "shortname -g off"
<> - Server running in "Native Unix" security mode: "security-mode set unix"
<> - Set metadata cache bias to small files: "cache-bias --small-files"
<> - File accessed-time management was turned off: "fs-accessed-time set off"
<> - Jumbo frames were enabled.
<> Two 10GbE Ethernet ports were present, but only one was used.
===============================================================================
Generated on Wed Jun 18 11:48:02 EDT 2008 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2008 Standard Performance Evaluation Corporation
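The headline result can be reproduced from the throughput/response table at
the top of this report. A minimal Python sketch, assuming SPEC's overall
response time is the trapezoidal integral of the response-time curve
(anchored at the origin) divided by the peak throughput:

    points = [(9433, 0.5), (19140, 0.8), (28751, 1.0), (38165, 1.2),
              (47903, 1.7), (57563, 2.1), (67371, 2.6), (76937, 3.5),
              (86852, 4.2), (96428, 6.8)]

    curve = [(0, 0.0)] + points            # extend the curve to zero load
    area = sum((x2 - x1) * (y1 + y2) / 2   # trapezoidal rule
               for (x1, y1), (x2, y2) in zip(curve, curve[1:]))
    ort = area / points[-1][0]             # divide by peak throughput
    print(f"Overall Response Time = {ort:.2f} msec")   # -> 2.11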