SPECsfs97_R1.v3 Result
===============================================================================
EMC Corp. : Celerra NS704G Failover Cluster / CLARiiON CX700
SPECsfs97_R1.v3 = 100150 Ops/Sec (Overall Response Time = 2.63 msec)
===============================================================================

   Throughput   Response
      ops/sec       msec
         9101        0.9
        18340        1.1
        27458        1.5
        36549        1.9
        45758        2.2
        55090        2.5
        64168        2.9
        73617        3.4
        83077        4.0
        92154        4.9
       100150        8.3
===============================================================================

Server Configuration and Availability
    Vendor                      EMC Corp.
    Hardware Available          September 2004
    Software Available          September 2004
    Date Tested                 August 2004
    SFS License number          47
    Licensee Location           Hopkinton, MA

CPU, Memory and Power
    Model Name                  Celerra NS704G Failover Cluster / CLARiiON CX700x2
    Processor                   3.06 GHz Intel Xeon P4
    # of Processors             6 cores, 6 chips, 1 core/chip with HT Technology
                                enabled (2 chips per Data Mover)
    Primary Cache               Execution Trace Cache 12K uOPs + Data Cache 8KB
    Secondary Cache             512KB (unified)
    Other Cache                 N/A
    UPS                         N/A
    Other Hardware              N/A
    Memory Size                 12 GB (4 GB per Data Mover)
    NVRAM Size                  N/A
    NVRAM Type                  N/A
    NVRAM Description           N/A

Server Software
    OS Name and Version         DART 5.3
    Other Software              Linux 2.4.9-34.5306.EMC on Control Station
    File System                 UxFS
    NFS version                 3

Server Tuning
    Buffer Cache Size           Dynamic
    # NFS Processes             4
    Fileset Size                965.3 GB

Network Subsystem
    Network Type                Jumbo Gigabit Ethernet
    Network Controller Desc.
                                Gigabit Ethernet Controller
    Number Networks             1
    Number Network Controllers  6 (2 per Data Mover)
    Protocol Type               TCP
    Switch Type                 Cisco 6509 Gbit Switch (Jumbo)
    Bridge Type                 N/A
    Hub Type                    N/A
    Other Network Hardware      N/A

Disk Subsystem and Filesystems
    Number Disk Controllers     6 [2 per active Data Mover]
    Number of Disks             298
    Number of Filesystems       3
    File System Creation Ops    N/A
    File System Config          Each filesystem is striped (8KB element size)
                                across 96 disks (32 LUNs) for fs1, fs2 and fs3
    Disk Controller             Integrated 2Gb Fibre Channel
    # of Controller Type        6 [2 per active Data Mover]
    Number of Disks             294 (dual ported)
    Disk Type                   Seagate Fibre Channel 146GB 10k-rpm
    File Systems on Disks       Filesystem fs1 (96); Filesystem fs2 (96);
                                Filesystem fs3 (96); Storage utility, OS, and
                                UxFS Log (5); Second Storage utility (5)
    Special Config Notes        4GB cache per SP on the storage arrays.
                                3072 MB of that 4GB cache is mirrored write
                                cache. See notes below.

Load Generator (LG) Configuration
    Number of Load Generators   24
    Number of Processes per LG  30
    Biod Max Read Setting       5
    Biod Max Write Setting      5

    LG Type                     LG1
    LG Model                    Sun SPARC 220R
    Number and Type Processors  2 x 450 MHz SPARC
    Memory Size                 512 MB
    Operating System            Solaris 2.8
    Compiler                    Sun Forte 6.2
    Compiler Options            None
    Network Type                Intraserve Gbe

    LG Type                     LG2
    LG Model                    Sun SPARC 420R
    Number and Type Processors  2 x 450 MHz SPARC
    Memory Size                 1 GB
    Operating System            Solaris 2.8
    Compiler                    Sun Forte 6.2
    Compiler Options            None
    Network Type                Intraserve Gbe

Testbed Configuration
    LG #    LG Type   Network   Target File Systems   Notes
    ----    -------   -------   -------------------   -----
    1..12   LG1       1         /fs1, /fs2, /fs3      N/A
    13..24  LG2       1         /fs1, /fs2, /fs3      N/A
===============================================================================

Notes and Tuning
<> Failover is supported by an additional Celerra NS704G Data Mover that
<> operates in standby mode. In the event of a Data Mover failure, this unit
<> takes over the function of the failed unit.
<> The standby Data Mover does not contribute to the performance of the
<> system and is not included in the processor, memory, or other components
<> listed above.
<>
<> Data Mover tuning:
<>  param ufs inoBlkHashSize=170669     (inode block hash size)
<>  param ufs inoHashTableSize=1218761  (inode hash table size)
<>  param ufs updateAccTime=0           (disable access-time updates)
<>  param nfs withoutCollector=1        (enable NFS-to-CPU thread affinity)
<>  param file prefetch=0               (disable DART read prefetch)
<>  param mkfsArgs dirType=DIR_COMPAT   (compatibility-mode directory style)
<>  param kernel maxStrToBeProc=16      (no. of network streams to process at once)
<>  param kernel outerLoop=8            (8 consecutive iterations of network packet processing)
<>  file initialize nodes=1000000 dnlc=1656000
<>                                      (no. of inodes; no. of directory name
<>                                      lookup cache entries)
<>  nfs start openfiles=980462 nfsd=4   (no. of open files; 4 NFS daemons)
<>  param ufs nFlushCyl=32              (no. of UxFS cylinder group block flush threads)
<>  param ufs nFlushDir=32              (no. of UxFS directory and indirect block flush threads)
<>  param ufs nFlushIno=32              (no. of UxFS inode block flush threads)
<>  ap msr write=0x880281               (disables hardware prefetch in the P4)
<>
<> Storage array notes:
<> The system under test contained two CLARiiON CX700 storage arrays. Each
<> CX700 contained 149 disks that were configured into 48 three-disk RAID-0
<> groups plus one five-disk RAID-5 group. Each group was exported as a
<> single LUN. The RAID-0 groups were used for file systems; the RAID-5
<> group was for system use.
<> The stripe size for all RAID-0 LUNs was 8KB and each LUN was 15GB.
<> Filesystem fs1 was built on a volume made by striping across 32 LUNs on
<> the first two SPs.
<> Filesystem fs2 was built on a volume made by striping across 32 LUNs on
<> the second two SPs.
<> Filesystem fs3 was built on a volume made by striping across 32 LUNs on
<> all four SPs.
<> The default UxFS log location and size (64MB) was used for all Data Movers.
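<> As an illustration of the striped-volume layout described above, the sketch
<> below maps a byte offset on one filesystem volume to a LUN and an offset
<> within that LUN, assuming the disclosed 8KB stripe element across 32 LUNs.
<> The helper name is hypothetical; DART's actual volume manager is not public.

```python
ELEMENT_SIZE = 8 * 1024   # 8KB stripe element size, per the File System Config
NUM_LUNS = 32             # LUNs striped together per filesystem (fs1/fs2/fs3)

def locate(offset: int) -> tuple[int, int]:
    """Map a byte offset on the striped volume to (lun_index, lun_offset).

    Illustrative only: consecutive 8KB elements are assigned round-robin
    across the 32 LUNs, so sequential I/O fans out over all of them.
    """
    element = offset // ELEMENT_SIZE            # which 8KB element overall
    lun_index = element % NUM_LUNS              # round-robin LUN selection
    lun_offset = (element // NUM_LUNS) * ELEMENT_SIZE + offset % ELEMENT_SIZE
    return lun_index, lun_offset

# Consecutive elements land on consecutive LUNs; after 32 elements
# (one full stripe, 256KB) the mapping wraps back to LUN 0.
```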
<> Each storage array has dual storage processor units that work as an
<> active-active failover pair. The mirrored write cache is backed up by a
<> battery unit capable of saving the write cache to disk in the event of a
<> power failure. In the event of a storage processor failure, the second
<> storage processor unit is capable of saving all state that was managed
<> by the first (and vice versa), even with a simultaneous power failure.
<> When one of the storage processors or battery units is off-line, the
<> system turns off the write cache and writes directly to disk before
<> acknowledging any write operations.
<> The battery can support the retention of data for 2 minutes, which is
<> sufficient to write all necessary data twice: storage processor A could
<> write 99% of its memory to disk and then fail, and storage processor B
<> would still have enough battery to store its copy of A's data as well as
<> its own.
===============================================================================

Generated on Tue Aug 31 17:02:37 EDT 2004 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2004 Standard Performance Evaluation Corporation
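For reference, the headline overall response time can be reproduced from the
throughput/response table near the top of this report: SPEC SFS97_R1 derives
it as the area under the load/response curve divided by the peak throughput.
The sketch below (plain Python, not SPEC tooling) assumes the curve is
anchored at the origin and integrates with the trapezoidal rule.

```python
# Throughput (ops/sec) and response time (msec) pairs from the result table.
points = [
    (9101, 0.9), (18340, 1.1), (27458, 1.5), (36549, 1.9),
    (45758, 2.2), (55090, 2.5), (64168, 2.9), (73617, 3.4),
    (83077, 4.0), (92154, 4.9), (100150, 8.3),
]

def overall_response_time(pts):
    """Area under the throughput/response curve divided by peak throughput.

    Assumes the curve starts at the origin (0 ops/sec, 0 msec); with that
    assumption the reported 2.63 msec figure is reproduced exactly.
    """
    curve = [(0.0, 0.0)] + list(pts)
    area = sum((x2 - x1) * (y1 + y2) / 2.0           # trapezoid per segment
               for (x1, y1), (x2, y2) in zip(curve, curve[1:]))
    return area / curve[-1][0]                        # divide by peak ops/sec

print(round(overall_response_time(points), 2))  # 2.63
```

Note how the weighting explains why the 8.3 msec point at peak load pulls the
overall figure only modestly above the mid-curve response times: each segment
contributes in proportion to the throughput range it spans.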