SPECsfs97_R1.v3 Result

===============================================================================
EMC Corp. : Celerra NS700 Failover Cluster (one secondary)
SPECsfs97_R1.v3 = 36335 Ops/Sec (Overall Response Time = 2.23 msec)
===============================================================================

   Throughput   Response
     ops/sec       msec
        3018        0.7
        6128        0.9
        9123        1.1
       12235        1.5
       15284        1.7
       18354        2.0
       21456        2.5
       24555        2.6
       27618        2.9
       30686        3.7
       33830        4.2
       36335        7.1
===============================================================================

Server Configuration and Availability
     Vendor                       EMC Corp.
     Hardware Available           February 2004
     Software Available           February 2004
     Date Tested                  February 2004
     SFS License number           47
     Licensee Location            Hopkinton, MA

CPU, Memory and Power
     Model Name                   Celerra NS700 Failover Cluster (one secondary)
     Processor                    3.06GHz Intel Xeon P4
     # of Processors              2
     Primary Cache                Execution Trace Cache 12K uOPs + Data Cache 8KB
     Secondary Cache              512KB (unified)
     Other Cache                  N/A
     UPS                          N/A
     Other Hardware               N/A
     Memory Size                  4 GB
     NVRAM Size                   N/A
     NVRAM Type                   N/A
     NVRAM Description            N/A

Server Software
     OS Name and Version          Dart 5.2
     Other Software               Red Hat Linux 7.2 on Control Station
     File System                  UxFS
     NFS version                  3

Server Tuning
     Buffer Cache Size            Dynamic
     # NFS Processes              4
     Fileset Size                 350.4 GB

Network Subsystem
     Network Type                 Jumbo Gigabit Ethernet
     Network Controller Desc.     Gigabit Ethernet Controller
     Number Networks              1
     Number Network Controllers   2
     Protocol Type                TCP
     Switch Type                  Cisco 6509 Gbit Switch (Jumbo)
     Bridge Type                  N/A
     Hub Type                     N/A
     Other Network Hardware       N/A

Disk Subsystem and Filesystems
     Number Disk Controllers      4 [2 per Storage Processor (SP), 2 SPs]
     Number of Disks              131
     Number of Filesystems        1
     File System Creation Ops     N/A
     File System Config           striped (8KB element size) across 126 disks (42 LUNs)
     Disk Controller              Integrated 2Gb Fibre Channel
     # of Controller Type         4 [2 per SP]
     Number of Disks              131 (dual ported)
     Disk Type                    Seagate Fibre Channel 146GB 10k-rpm
     File Systems on Disks        OS and UxFS Log (5), Filesystem fs1 (126)
     Special Config Notes         4GB cache per SP on storage array;
                                  3072 MB mirrored write cache per SP.
                                  See notes below.

Load Generator (LG) Configuration
     Number of Load Generators    7
     Number of Processes per LG   32
     Biod Max Read Setting        5
     Biod Max Write Setting       5

     LG Type                      LG1
     LG Model                     Dell 2450
     Number and Type Processors   2x 1GHz PIII
     Memory Size                  1 GB
     Operating System             Linux 2.4.20-24.7 smp
     Compiler                     GCC 2.96
     Compiler Options             None
     Network Type                 Intel 82543GC GbE

     LG Type                      LG2
     LG Model                     Dell 2450
     Number and Type Processors   2x 1GHz PIII
     Memory Size                  2 GB
     Operating System             Linux 2.4.20-24.7 smp
     Compiler                     GCC 2.96
     Compiler Options             None
     Network Type                 Intel 82543GC GbE

Testbed Configuration
     LG #   LG Type   Network   Target File Systems   Notes
     ----   -------   -------   -------------------   -----
     1..6   LG1       1         /fs1../fs1            N/A
     7      LG2       1         /fs1../fs1            N/A

===============================================================================
Notes and Tuning
  <> Failover is supported by an additional Celerra NS700 Data Mover that
  <> operates in standby mode. In the event of a Data Mover failure, this unit
  <> takes over the function of the failed unit. The standby Data Mover does
  <> not contribute to the performance of the system and is not included in
  <> the processor, memory, or other components listed above.
  <>
  <> Server tuning:
  <>   param ufs inoBlkHashSize=170669    (inode block hash size)
  <>   param ufs inoHashTableSize=1218761 (inode hash table size)
  <>   param ufs updateAccTime=0          (disable access-time updates)
  <>   param nfs withoutCollector=1       (enable NFS-to-CPU thread affinity)
  <>   param file prefetch=0              (disable Dart read prefetch)
  <>   param mkfsArgs dirType=DIR_COMPAT  (compatibility-mode directory style)
  <>   param kernel maxStrToBeProc=16     (no. of network streams to process at once)
  <>   param kernel outerLoop=8           (8 consecutive iterations of network packet processing)
  <>   param kernel heapReserve=90000     (reserve memory frames for the heap)
  <>   file initialize nodes=1000000 dnlc=1656000
  <>                                      (no. of inodes, no. of directory name lookup cache entries)
  <>   nfs start openfiles=980462 nfsd=4  (no. of open files and 4 NFS daemons)
  <>   param ufs nFlushCyl=32             (no. of UxFS cylinder group block flush threads)
  <>   param ufs nFlushDir=32             (no. of UxFS directory and indirect block flush threads)
  <>   param ufs nFlushIno=32             (no. of UxFS inode block flush threads)
  <>   camconfig nexus depth=8 order=0xa1 limit=10
  <>                                      (increase the I/O queue depth to 8)
  <>
  <> Storage array notes:
  <> The storage array was configured into 42 RAID-0 groups of 3 spindles each
  <> (126 data spindles in total). The stripe size for all 42 RAID-0 LUNs was
  <> 8KB and each LUN was 30GB. The filesystem is built on a volume made by
  <> striping across all 42 LUNs on the CX700 storage array with an 8KB stripe
  <> size. The default UFS log of 64MB was used; the UFS log and OS shared the
  <> same 4+1 RAID-5 group.
  <> The storage array has dual storage processor units that work as an
  <> active-active failover pair. The mirrored write cache is backed up by a
  <> battery unit capable of saving the write cache to disk in the event of a
  <> power failure. In the event of a storage processor failure, the second
  <> storage processor unit is capable of saving all state that was managed by
  <> the first (and vice versa), even with a simultaneous power failure.
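The capacity and striping figures in the storage array notes can be sanity-checked with a little arithmetic. The sketch below is illustrative only: the constants (42 LUNs, 3 spindles per RAID-0 group, 8KB stripe element, 30GB LUNs, 3072 MB write cache, 2-minute battery window) come from this report, but the offset-to-LUN mapping is a generic round-robin striping model, not EMC's actual volume layout.

```python
# Sanity-check arithmetic for the storage configuration described above.
# Constants are taken from the report; the layout function is a generic
# round-robin striping model (an assumption, not EMC's implementation).

LUNS = 42              # RAID-0 groups striped into the fs1 volume
SPINDLES_PER_LUN = 3   # disks per RAID-0 group
LUN_SIZE_GB = 30       # capacity per LUN
STRIPE_KB = 8          # stripe element size at both levels

# 42 groups x 3 spindles = 126 data spindles; 42 x 30GB = 1260 GB behind fs1.
data_spindles = LUNS * SPINDLES_PER_LUN
volume_capacity_gb = LUNS * LUN_SIZE_GB

def locate(offset_bytes):
    """Map a volume byte offset to (LUN index, spindle index within LUN),
    assuming simple round-robin striping at each level."""
    element = offset_bytes // (STRIPE_KB * 1024)
    lun = element % LUNS
    spindle = (element // LUNS) % SPINDLES_PER_LUN
    return lun, spindle

# Battery window: 3072 MB of mirrored write cache per SP must be written
# twice within the 2-minute battery hold-up, i.e. a sustained destage rate of
# 2 * 3072 MB / 120 s = 51.2 MB/s.
required_destage_mb_per_s = 2 * 3072 / 120

print(data_spindles)              # 126
print(volume_capacity_gb)         # 1260
print(locate(0))                  # (0, 0)  first 8 KB element
print(locate(8 * 1024))           # (1, 0)  next element lands on the next LUN
print(required_destage_mb_per_s)  # 51.2
```

The second consecutive 8KB element landing on a different LUN is the point of the 8KB element size: sequential I/O fans out across all 42 LUNs (and, through each LUN's RAID-0 group, across all 126 spindles) almost immediately.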
  <> When one of the storage processors or battery units is off-line, the
  <> system disables the write cache and writes directly to disk before
  <> acknowledging any write operation.
  <> The battery can sustain the retention of data for 2 minutes, which is
  <> sufficient to write all necessary data twice. For example, storage
  <> processor A could write 99% of its memory to disk and then fail; in that
  <> case storage processor B has enough battery time to store its copy of
  <> A's data as well as its own.
===============================================================================
Generated on Tue Mar 2 16:41:32 EST 2004 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2004 Standard Performance Evaluation Corporation