SPECsfs97_R1.v3 Result
===============================================================================
EMC Corp : Celerra NS80G Failover Cluster 4 Xblade60's (1 stdby)
SPECsfs97_R1.v3 = 123109 Ops/Sec (Overall Response Time = 1.79 msec)
===============================================================================

   Throughput   Response
      ops/sec       msec
        12714        0.6
        25295        0.8
        38139        1.0
        50876        1.4
        63850        1.6
        76532        1.9
        89398        2.2
       102328        2.7
       115242        3.8
       123109        6.1
===============================================================================

Server Configuration and Availability
    Vendor                      EMC Corp
    Hardware Available          November 2006
    Software Available          November 2006
    Date Tested                 November 2006
    SFS License number          47
    Licensee Location           Hopkinton, MA

CPU, Memory and Power
    Model Name                  Celerra NS80G Failover Cluster 4 Xblade60's
                                (1 stdby)
    Processor                   3.6 GHz Intel Pentium 4 (Irwindale)
    # of Processors             6 cores, 6 chips, 1 core/chip with HT Technology
                                enabled (2 chips per Xblade60)
    Primary Cache               16 KB Data + 12 K instruction uOPs
    Secondary Cache             2 MB (unified)
    Other Cache                 N/A
    UPS                         N/A
    Other Hardware              EMC DS-4100B 4 Gbit Fibre Channel Switch. See
                                the NS80 Datamover to CX3-80 Storage Array
                                Connectivity notes below.
    Memory Size                 12 GB (4 GB per Xblade60)
    NVRAM Size                  N/A
    NVRAM Type                  N/A
    NVRAM Description           N/A

Server Software
    OS Name and Version         DART 5.5.76.4
    Other Software              N/A
    File System                 UxFS
    NFS version                 3

Server Tuning
    Buffer Cache Size           Dynamic
    # NFS Processes             4
    Fileset Size                1218.5 GB

Network Subsystem
    Network Type                Jumbo Gigabit Ethernet
    Network Controller Desc.
                                Gigabit Ethernet Controller
    Number Networks             1
    Number Network Controllers  6 (2 of the 4 available per Xblade60 were used)
    Protocol Type               TCP
    Switch Type                 Cisco 6500 Gbit Switch (Jumbo)
    Bridge Type                 N/A
    Hub Type                    N/A
    Other Network Hardware      N/A

Disk Subsystem and Filesystems
    Number Disk Controllers     6 (2 per active Xblade60)
    Number of Disks             450
    Number of Filesystems       3
    File System Creation Ops    DIR_COMPAT
    File System Config          The file systems were each striped across 150
                                disks. These disks were bound into 90 4+1
                                RAID 5 LUNs. The first 5 disks that fs1 is
                                striped over share space with the Celerra
                                reserved LUNs that contain the file system logs.
    Disk Controller             Integrated 4 Gb Fibre Channel
    # of Controller Type        8 (2 per SP)
    Number of Disks             225 (dual ported) per CX3-80 Storage Array
    Disk Type                   15K RPM 146 GB Fibre Channel (part # 005048701)
    File Systems on Disks       File system fs1 (150 disks), file system fs2
                                (150 disks), file system fs3 (150 disks); the
                                UxFS log shares 5 disks with fs1
    Special Config Notes        8 GB cache per CX3-80 SP, 2 pairs of CX3-80 SPs
                                (configured as 3072 MB mirrored write cache per
                                SP pair, 3656 MB read cache per SP of the 2
                                CX3-80 Storage Arrays). See notes below.
Load Generator (LG) Configuration
    Number of Load Generators   18
    Number of Processes per LG  36
    Biod Max Read Setting       5
    Biod Max Write Setting      5

    LG Type                     LG1
    LG Model                    Dell PowerEdge 1750
    Number and Type Processors  2 x 3.06 GHz Intel P4 Xeon with HT Technology
                                enabled
    Memory Size                 2 GB
    Operating System            Linux 2.2.21-4smp
    Compiler                    GCC 3.2.3
    Compiler Options            -O
    Network Type                Broadcom BCM5704 NetXtreme Gigabit Ethernet

Testbed Configuration
    LG #  LG Type  Network  Target File Systems                      Notes
    ----  -------  -------  -------------------                      -----
    1     LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    2     LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    3     LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    4     LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    5     LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    6     LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    7     LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    8     LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    9     LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    10    LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    11    LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    12    LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    13    LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    14    LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    15    LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    16    LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    17    LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
    18    LG1      1        /fs1, /fs2, /fs3, ..., /fs1, /fs2, /fs3  N/A
===============================================================================

Notes and Tuning
  <> Failover is supported by an additional Xblade60 that operates in
  <> standby mode. In the event of any Xblade60 failure, this unit takes over
  <> the function of the failed unit. The standby Xblade60 does not contribute
  <> to the performance of the system and is not included in the processor,
  <> memory, or other components listed above.
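  <>
  <> The published Overall Response Time (1.79 msec) is derived from the area
  <> under the throughput/response curve reported at the top of this page. The
  <> following is a hypothetical sketch, not the official SPEC SFS tool, and
  <> the exact integration rules may differ; it approximates the metric with
  <> the trapezoidal rule over the ten reported data points, assuming a flat
  <> response from zero load up to the first measured point:

```python
# Approximate the SPEC SFS "Overall Response Time" from the reported
# throughput/response table: area under the curve divided by peak throughput.
# This is an illustrative sketch; the official tool's rules may differ.

points = [  # (throughput ops/sec, response msec) from the table above
    (12714, 0.6), (25295, 0.8), (38139, 1.0), (50876, 1.4), (63850, 1.6),
    (76532, 1.9), (89398, 2.2), (102328, 2.7), (115242, 3.8), (123109, 6.1),
]

# Assume a flat response from 0 ops/sec up to the first measured point.
area = points[0][0] * points[0][1]

# Trapezoidal rule across each pair of adjacent measured points.
for (x0, y0), (x1, y1) in zip(points, points[1:]):
    area += (x1 - x0) * (y0 + y1) / 2.0

overall_response = area / points[-1][0]  # divide by peak throughput
print(f"{overall_response:.2f} msec")    # close to the published 1.79 msec
```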
  <>
  <> Server tuning:
  <>   param ufs inoBlkHashSize=170669 (inode block hash size)
  <>   param ufs inoHashTableSize=1218761 (inode hash table size)
  <>   param ufs updateAccTime=0 (disable access-time updates)
  <>   param nfs withoutCollector=1 (enable NFS-to-CPU thread affinity)
  <>   param file prefetch=0 (disable DART read prefetch)
  <>   param mkfsArgs dirType=DIR_COMPAT (compatibility-mode directory style)
  <>   param kernel maxStrToBeProc=16 (no. of network streams to process at
  <>     once)
  <>   param kernel outerLoop=8 (8 consecutive iterations of network packet
  <>     processing)
  <>   param kernel buffersWatermarkPercentage=5 (buffer cache flushing
  <>     threshold)
  <>   file initialize nodes=1000000 dnlc=1676000 (no. of inodes and no. of
  <>     directory name lookup cache entries)
  <>   nfs start openfiles=980462 nfsd=4 (no. of open files and 4 NFS daemons)
  <>   param ufs nFlushCyl=32 (no. of UxFS cylinder group block flush threads)
  <>   param ufs nFlushDir=32 (no. of UxFS directory and indirect block flush
  <>     threads)
  <>   param ufs nFlushIno=32 (no. of UxFS inode block flush threads)
  <>
  <> Storage array notes:
  <> The CX3-80 used for storage exposes LUNs across multiple Fibre Channel
  <> ports. Each LUN is configured as 4+1 RAID 5 on 5 physical drives and is
  <> 342016 MB in size. Each of the 4 CX3-80 SPs (2 CX3-80 Storage Arrays) has
  <> 8 GB of memory. 3072 MB of write cache was configured; the write cache is
  <> mirrored between the CX3-80 SPs. 3656 MB of read cache was configured on
  <> each CX3-80 SP. The memory is backed by sufficient battery power to safely
  <> destage all data in the write cache to disk in the event of a power
  <> failure.
  <>
  <> NS80 Datamover to CX3-80 Storage Array Connectivity notes:
  <> Each NS80 Xblade60 has 2 Fibre Channel ports connected to the DS-4100B
  <> switch. The switch is zoned so that the first Fibre Channel port of the
  <> NS80 Xblade60 accesses the first port of each of the 4 CX3-80 SPs.
  <> The second Fibre Channel port of the Xblade60 accesses the second port of
  <> each of the CX3-80 SPs. Each zone on the DS-4100B switch contains one
  <> Xblade60 Fibre Channel port and one CX3-80 Fibre Channel port.
===============================================================================
Generated on Wed Dec 6 15:28:31 EST 2006 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2004 Standard Performance Evaluation Corporation