SPECsfs97_R1.v3 Result
===============================================================================
Sun Microsystems, Inc. : Sun StorageTek 5320C NAS
SPECsfs97_R1.v3 = 53548 Ops/Sec (Overall Response Time = 2.53 msec)
===============================================================================

   Throughput   Response
    ops/sec       msec
      5367         1.5
     10728         1.8
     16027         2.0
     21520         2.2
     26897         2.6
     32322         2.9
     37711         3.2
     43209         3.5
     48455         3.7
     53548         3.9

===============================================================================

Server Configuration and Availability
   Vendor                     Sun Microsystems, Inc.
   Hardware Available         July 2006
   Software Available         July 2006
   Date Tested                June 2006
   SFS License number         6
   Licensee Location          Irvine, CA

CPU, Memory and Power
   Model Name                 Sun StorageTek 5320C NAS
   Processor                  AMD 2.6GHz Opteron(TM) 252 processor
   # of Processors            2 cores, 2 chips, 1 core/chip (1 core per node)
   Primary Cache              64 KB I + 64 KB D on chip per chip
   Secondary Cache            1 MB (I+D) on chip per chip
   Other Cache                N/A
   UPS                        N/A
   Other Hardware             N/A
   Memory Size                8 GB (4 GB per node)
   NVRAM Size                 8 GB (2 GB per 5320 RAID EU controller tray)
   NVRAM Type                 Sun StorageTek 5320 RAID EU
   NVRAM Description          Write-back with battery backup

Server Software
   OS Name and Version        Sun StorageTek NAS OS Ver 4.20B63
   Other Software             N/A
   File System                N/A
   NFS version                3

Server Tuning
   Buffer Cache Size          Default
   # NFS Processes            240 (120 per node)
   Fileset Size               512.5 GB

Network Subsystem
   Network Type               Gigabit Ethernet
   Network Controller Desc.
                              Integrated Gigabit Ethernet
   Number Networks            2 (N1 - Service, N2 - Cluster)
   Number Network Controllers 4 (2 per node)
   Protocol Type              TCP
   Switch Type                Cisco Catalyst 4000 and Summit 5i (see notes)
   Bridge Type                N/A
   Hub Type                   N/A
   Other Network Hardware     N/A

Disk Subsystem and Filesystems
   Number Disk Controllers    4 (dual-port) (2 per node)
   Number of Disks            352
   Number of Filesystems      88 (Node 1: F1-F44, Node 2: F45-F88)
   File System Creation Ops   default
   File System Config         Each RAID 5 LUN houses (1) 252 GB file system
   Disk Controller            5320 RAID EU controller
   # of Controller Type       4
   Number of Disks            352
   Disk Type                  Seagate ST314680F 146 GB 10k RPM FC
   File Systems on Disks      F1-F88 (alternating across each controller)
   Special Config Notes       88 3+1 RAID 5 LUNs (see notes)

Load Generator (LG) Configuration
   Number of Load Generators  22
   Number of Processes per LG 8
   Biod Max Read Setting      2
   Biod Max Write Setting     2

   LG Type                    LG1
   LG Model                   Intel Base
   Number and Type Processors 2.4 GHz P4
   Memory Size                1 GB
   Operating System           Red Hat 9.0
   Compiler                   gcc (GCC) 3.2.2 20030222 (Red Hat Linux 3.2.2-5)
   Compiler Options           Default
   Network Type               Intel embedded Gigabit Ethernet Controller

Testbed Configuration
   LG #    LG Type  Network  Target File Systems    Notes
   ----    -------  -------  -------------------    -----
   1,12    LG1      N1       /F1-4,/F45-48          N/A
   2,13    LG1      N1       /F5-8,/F49-52          N/A
   3,14    LG1      N1       /F9-12,/F53-56         N/A
   4,15    LG1      N1       /F13-16,/F57-60        N/A
   5,16    LG1      N1       /F17-20,/F61-64        N/A
   6,17    LG1      N1       /F21-24,/F65-68        N/A
   7,18    LG1      N1       /F25-28,/F69-72        N/A
   8,19    LG1      N1       /F29-32,/F73-76        N/A
   9,20    LG1      N1       /F33-36,/F77-80        N/A
   10,21   LG1      N1       /F37-40,/F81-84        N/A
   11,22   LG1      N1       /F41-44,/F85-88        N/A

===============================================================================

Notes and Tuning
 <> OS Tuning:
 <> Checkpoint disabled on all volumes: disabled via menu
 <> NFS worker threads set to 120: set nfs.nworker 120
 <> Disable system monitor: set sysmon.test.enable yes
 <> Disable directory audit: =sfs2_no_dir_audit=1
 <> Disable file system read ahead: =sfs2_nrax=0
 <> Disable file system
 <> access time update: fsctl atime disable *
 <> Maximum queued HBA commands limited to 128: set qlogic.isp.maxthrottle 128
 <>
 <> The storage subsystem consists of (4) dual-controller Sun StorageTek
 <> NAS 5320 RAID EU controller trays with cache mirroring disabled,
 <> and (20) Sun StorageTek NAS 5320 RAID EU F expansion trays, totaling
 <> 352 drives. (88) 3+1 RAID 5 LUNs are configured.
 <> All LUNs are available to both NAS heads via the SAN switches, with
 <> each head having primary server responsibility for (44) LUNs.
 <> (2) Brocade Silkworm 4100 and 3900 switches instantiate the fabric.
 <> If a NAS head fails, the remaining head automatically assumes
 <> responsibility for the LUNs belonging to the failed head.
 <>
 <> There are (88) LUNs. Node 1: F1 is on the first port of the first
 <> controller, F2 is on the first port of the second controller, F3 is
 <> on the second port of the first controller, F4 is on the second port
 <> of the second controller, etc. Node 2: F45 is on the first port of
 <> the first controller, F46 is on the first port of the second
 <> controller, F47 is on the second port of the first controller, F48
 <> is on the second port of the second controller, etc.
 <>
 <> Each 252 GB file system is built from a single segment, with each
 <> segment residing on one RAID 5 LUN, (1) file system per LUN.
 <> The Ethernet switches are interconnected through a single Gigabit
 <> port. Eleven load-generating clients and one cluster node are
 <> attached to each switch.
 <> One port of each dual-port HBA goes to each of the FC switches.
 <> The controller arrays provide battery-backed cache memory. If line
 <> power is lost, battery power is sufficient to preserve cache
 <> contents for a minimum of 72 hours. If the batteries are depleted to
 <> the extent that they cannot maintain memory for 72 hours, the
 <> controller automatically switches to write-through mode.
 <>
 <> The tested system is a failover cluster composed of (2) nodes.
 <> There are (2) gigabit ports per node. (1) port is used for
 <> heartbeat and administration.
 <> The other port is used for file system serving. Each node owns (44)
 <> file systems and can take over its partner node's file systems.
 <>
 <> The network under test used Gigabit Ethernet with the standard frame
 <> size.
===============================================================================
Generated on Mon Aug 7 15:01:05 EDT 2006 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2004 Standard Performance Evaluation Corporation
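The headline 2.53 msec figure can be sanity-checked from the throughput table above. SPECsfs97 derives the Overall Response Time from the area under the throughput/response curve divided by the peak throughput achieved; the sketch below is a trapezoidal reconstruction under that assumption (including the segment from the origin to the first data point), not the official SPEC reporting tool, and it reproduces the reported value.

```python
# Hedged sketch (not the official SPEC tool): approximate the Overall
# Response Time as the area under the response-time curve, integrated
# from the origin, divided by the peak throughput.
points = [(5367, 1.5), (10728, 1.8), (16027, 2.0), (21520, 2.2),
          (26897, 2.6), (32322, 2.9), (37711, 3.2), (43209, 3.5),
          (48455, 3.7), (53548, 3.9)]   # (ops/sec, msec) from the table
curve = [(0, 0.0)] + points             # assume the curve starts at the origin

# Trapezoidal integration over successive points
area = sum((x1 - x0) * (y0 + y1) / 2
           for (x0, y0), (x1, y1) in zip(curve, curve[1:]))
overall_rt = area / curve[-1][0]        # divide by peak throughput

print(round(overall_rt, 2))             # -> 2.53, matching the headline result
```

The agreement with the published 2.53 msec suggests this is how the formatter computed the figure, but the run rules remain the authoritative definition.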