SPECsfs97_R1.v3 Result
===============================================================================
IBM Corporation : IBM TotalStorage NAS Gateway 500 (UDP - Failover Cluster)
SPECsfs97_R1.v3 = 68444 Ops/Sec (Overall Response Time = 1.75 msec)
===============================================================================

   Throughput    Response
   (ops/sec)     (msec)
      6942         0.5
     13886         0.8
     20869         1.0
     27926         1.3
     34881         1.6
     42102         1.9
     49096         2.2
     56165         2.6
     63336         3.4
     68444         5.9

===============================================================================

Server Configuration and Availability
   Vendor                      IBM Corporation
   Hardware Available          February 2004
   Software Available          February 2004
   Date Tested                 November 2003
   SFS License number          11
   Licensee Location           Tucson, Arizona

CPU, Memory and Power
   Model Name                  IBM TotalStorage NAS Gateway 500 (UDP - Failover Cluster)
   Processor                   1.45GHz POWER4+
   # of Processors             8 (4 per node)
   Primary Cache               64KBI+32KBD on chip per CPU
   Secondary Cache             1536KB shared, unified on chip per SCM
   Other Cache                 8MB unified off chip per SCM, 2 SCMs per node
   UPS                         N/A
   Other Hardware              N/A
   Memory Size                 64GB (32GB per node)
   NVRAM Size                  4GB
   NVRAM Type                  2GB Non-Volatile Storage (NVS) per ESS 800
   NVRAM Description           72-hour battery backup of NVS

Server Software
   OS Name and Version         NAS Gateway 500 System Software, v1.1.0
   Other Software              Cluster software feature enabled
   File System                 Enhanced JFS
   NFS version                 3

Server Tuning
   Buffer Cache Size           Dynamic
   # NFS Processes             2 (1 per node)
   Fileset Size                667.2 GB

Network Subsystem
   Network Type                Jumbo Frame Gigabit Ethernet
   Network Controller Desc.    2-Port Gigabit Ethernet-SX PCI-X Adapter
   Number Networks             1 (N1)
   Number Network Controllers  2 active, 2 standby (see other notes)
   Protocol Type               UDP
   Switch Type                 Cisco WS-C6009
   Bridge Type                 N/A
   Hub Type                    N/A
   Other Network Hardware      N/A

Disk Subsystem and Filesystems
   Number Disk Controllers     10 (5 per node)
   Number of Disks             226
   Number of Filesystems       2 (F1-F2)
   File System Creation Ops    Default
   File System Config          See Other Notes
   Disk Controller             Wide/Ultra-3 SCSI I/O Controller
      # of Controller Type     2 (1 per node)
      Number of Disks          2 (1 per node)
      Disk Type                36.4GB 16-bit SCSI
      File Systems on Disks    OS, page space
      Special Config Notes     N/A
   Disk Controller             IBM Fibre Channel Adapter (2Gbps)
      # of Controller Type     8 (4 per node)
      Number of Disks          224 (112 per node)
      Disk Type                72.8GB 10Krpm SSA
      File Systems on Disks    F1-F2
      Special Config Notes     See Other Notes

Load Generator (LG) Configuration
   Number of Load Generators   16
   Number of Processes per LG  24
   Biod Max Read Setting       2
   Biod Max Write Setting      2
   LG Type                     LG1
   LG Model                    IBM Intellistation M Pro
   Number and Type Processors  933MHz 6868 4BU
   Memory Size                 256 MB
   Operating System            Red Hat Linux 7.3
   Compiler                    N/A, SFS97_R1 pre-compiled binaries
   Compiler Options            N/A
   Network Type                Intel 10/100/1000 Base-T Ethernet PCI Adapter

Testbed Configuration
   LG #        LG Type  Network  Target File Systems (24 processes per LG)  Notes
   ----        -------  -------  -----------------------------------------  -----
   1/2/../8    LG1      N1       /fs1 (12 processes), /fs2 (12 processes)   N/A
   9/10/../16  LG1      N1       /fs2 (12 processes), /fs1 (12 processes)   N/A

===============================================================================

Notes and Tuning
 <> The filesystems were spread equally across an IBM ESS 800 and an IBM ESS 800 Turbo.
 <> The ESS 800 had 12 ranks of 8 disks. The ESS 800 Turbo had 16 ranks of 8 disks.
 <> All 28 ranks were formatted RAID 5.
 <> 28 logical volumes (1 per rank) were created on the ESS 800s.
 <> Each filesystem was spread across 14 logical volumes: 6 on the ESS 800 and 8 on the ESS 800 Turbo.
 <> The physical partition size was 32MB.
 <> One striped JFS2LOG per test filesystem.
 <>
 <> SCM stands for Single-Chip Module. There are 2 CPUs per SCM.
 <> ESS stands for Enterprise Storage Server.
 <>
 <> Server tuning: Gigabit Ethernet adapter jumbo_frames=yes (9000-byte MTU).
 <> The number of NFS daemons (nfsd threads) is dynamic.
 <> There is one NFS process per node to which these nfsd threads belong.
 <>
 <> All NFS traffic ran across 1 active Gigabit Ethernet port per node.
 <> 1 standby port per node. 2 unconfigured ports per node.
 <>
 <> Cluster feature enabled. Filesystems can fail over to either node.
 <> Cluster heartbeat via serial cable and crossover Ethernet cable.
===============================================================================
Generated on Wed Jan 7 13:28:27 EST 2004 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2002 Standard Performance Evaluation Corporation
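
The headline "Overall Response Time = 1.75 msec" can be reproduced from the throughput/response table at the top of this report. SPEC SFS97_R1 derives it as the area under the response-vs-throughput curve divided by the peak throughput; the sketch below uses the trapezoidal rule with the curve anchored at the origin, which is my reading of the reporting rules (an assumption, not taken from this disclosure) but does reproduce the published figure for these ten data points.

```python
# Hedged sketch: recompute the overall response time from the ten
# (throughput, response) points reported above. The origin anchor and
# the trapezoidal rule are assumptions about SPEC's methodology.
points = [
    (6942, 0.5), (13886, 0.8), (20869, 1.0), (27926, 1.3),
    (34881, 1.6), (42102, 1.9), (49096, 2.2), (56165, 2.6),
    (63336, 3.4), (68444, 5.9),
]

def overall_response_time(pts):
    curve = [(0, 0.0)] + pts  # anchor the curve at the origin (assumed)
    # Trapezoidal rule: sum of (x2 - x1) * (y1 + y2) / 2 over segments.
    area = sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(curve, curve[1:]))
    return area / pts[-1][0]  # normalize by peak throughput

print(round(overall_response_time(points), 2))  # -> 1.75
```

The unrounded value works out to about 1.752 msec, matching the 1.75 msec in the result banner.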