SPECsfs97_R1.v3 Result
===============================================================================
EMC Corp. : Celerra NS40 Failover Cluster 2 X-Blades (1 stdby)
SPECsfs97_R1.v3 = 37382 Ops/Sec (Overall Response Time = 2.45 msec)
===============================================================================

   Throughput   Response
   ops/sec      msec
      3719       0.8
      7463       1.0
     11182       1.4
     14946       1.8
     18692       2.3
     22524       2.7
     26330       3.4
     29996       3.6
     33742       4.2
     37382       6.8
===============================================================================

Server Configuration and Availability
   Vendor                     EMC Corp.
   Hardware Available         October 2006
   Software Available         October 2006
   Date Tested                July 2006
   SFS License number         47
   Licensee Location          Hopkinton, MA

CPU, Memory and Power
   Model Name                 Celerra NS40 Failover Cluster 2 X-Blades
                              (1 stdby)
   Processor                  2.8 GHz Intel Pentium 4 (Nocona)
   # of Processors            2 cores, 2 chips, 1 core/chip with HT Technology
                              enabled (2 chips per X-blade)
   Primary Cache              16 KB Data + 12 K instruction uOPs
   Secondary Cache            1 MB (unified)
   Other Cache                N/A
   UPS                        N/A
   Other Hardware             N/A
   Memory Size                4 GB (4 GB per X-blade)
   NVRAM Size                 N/A
   NVRAM Type                 N/A
   NVRAM Description          N/A

Server Software
   OS Name and Version        DART 5.5.22.2
   Other Software             N/A
   File System                UxFS
   NFS version                3

Server Tuning
   Buffer Cache Size          Dynamic
   # NFS Processes            4
   Fileset Size               357.8 GB

Network Subsystem
   Network Type               Jumbo Gigabit Ethernet
   Network Controller Desc.   Gigabit Ethernet Controller
   Number Networks            1
   Number Network Controllers 2 (4 available per X-blade)
   Protocol Type              TCP
   Switch Type                Cisco 6500 Gbit Switch (Jumbo)
   Bridge Type                N/A
   Hub Type                   N/A
   Other Network Hardware     N/A

Disk Subsystem and Filesystems
   Number Disk Controllers    2
   Number of Disks            120
   Number of Filesystems      1
   File System Creation Ops   DIR_COMPAT
   File System Config         The file system is striped across 120 disks.
                              These disks were bound into 24 4+1 RAID 5 LUNs.
                              The first 5 disks share space with the Celerra
                              reserved LUNs that contain the file system logs.
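The disk layout stated above can be sanity-checked with a short arithmetic sketch. The constants come from this report (24 LUNs, 4+1 RAID 5, 30720 MB per LUN from the storage array notes); the script itself is illustrative and is not part of the SPEC submission.

```python
# Sanity-check the RAID layout described in the Disk Subsystem section.
# All constants are taken from the report; the script is a hypothetical
# illustration, not part of the benchmark configuration.
RAID_GROUPS = 24       # 24 RAID 5 LUNs
DATA_DISKS = 4         # "4+1": 4 data disks per group
PARITY_DISKS = 1       # plus 1 parity disk per group
LUN_SIZE_MB = 30720    # per-LUN size from the storage array notes

total_disks = RAID_GROUPS * (DATA_DISKS + PARITY_DISKS)
total_lun_gb = RAID_GROUPS * LUN_SIZE_MB / 1024

print(total_disks)     # 120, matching "Number of Disks"
print(total_lun_gb)    # 720.0 GB of LUN capacity across the 24 LUNs
```

The 720 GB of exposed LUN capacity comfortably holds the 357.8 GB fileset listed under Server Tuning.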
   Disk Controller            Integrated 4 Gb Fibre Channel
   # of Controller Type       2 (1 per SP)
   Number of Disks            120 (dual ported)
   Disk Type                  10K RPM 300 GB Fibre Channel (part # 005048633)
   File Systems on Disks      File system fs1 (120 disks), UxFS log shares
                              5 disks
   Special Config Notes       4 GB cache per SP (configured as 2500 MB
                              mirrored write cache and 516 MB read cache per
                              SP of the CX3-40 Storage Array). See notes
                              below.

Load Generator (LG) Configuration
   Number of Load Generators  6
   Number of Processes per LG 36
   Biod Max Read Setting      5
   Biod Max Write Setting     5

   LG Type                    LG1
   LG Model                   Dell PowerEdge 1750
   Number and Type Processors 2 x 3.06 GHz Intel P4 Xeon with HT Technology
                              enabled
   Memory Size                2 GB
   Operating System           Linux 2.4.21-4smp
   Compiler                   GCC 3.2.3
   Compiler Options           -O
   Network Type               Broadcom BCM5704 NetXtreme Gigabit Ethernet

Testbed Configuration
   LG #  LG Type  Network  Target File Systems  Notes
   ----  -------  -------  -------------------  -----
   1     LG1      1        /fs1, ..., /fs1      N/A
   2     LG1      1        /fs1, ..., /fs1      N/A
   3     LG1      1        /fs1, ..., /fs1      N/A
   4     LG1      1        /fs1, ..., /fs1      N/A
   5     LG1      1        /fs1, ..., /fs1      N/A
   6     LG1      1        /fs1, ..., /fs1      N/A
===============================================================================

Notes and Tuning
   <> Failover is supported by an additional X-blade that operates in
   <> standby mode. In the event of any X-blade failure, this unit takes over
   <> the function of the failed unit. The standby X-blade does not
   <> contribute to the performance of the system and is not included in the
   <> processor, memory, or other components listed above.
   <>
   <> Server tuning:
   <>   param ufs inoBlkHashSize=170669 (inode block hash size)
   <>   param ufs inoHashTableSize=1218761 (inode hash table size)
   <>   param ufs updateAccTime=0 (disable access-time updates)
   <>   param nfs withoutCollector=1 (enable NFS-to-CPU thread affinity)
   <>   param file prefetch=0 (disable DART read prefetch)
   <>   param mkfsArgs dirType=DIR_COMPAT (compatibility-mode directory style)
   <>   param kernel maxStrToBeProc=16 (no.
        of network streams to process at once)
   <>   param kernel outerLoop=8 (8 consecutive iterations of network packet
        processing)
   <>   param kernel buffersWatermarkPercentage=5 (buffer cache flushing
        threshold)
   <>   file initialize nodes=1000000 dnlc=1656000 (no. of inodes, no. of
        directory name lookup cache entries)
   <>   nfs start openfiles=980462 nfsd=4 (no. of open files and 4 NFS
        daemons)
   <>   param ufs nFlushCyl=32 (no. of UxFS cylinder group block flush
        threads)
   <>   param ufs nFlushDir=32 (no. of UxFS directory and indirect block
        flush threads)
   <>   param ufs nFlushIno=32 (no. of UxFS inode block flush threads)
   <>
   <> Storage array notes:
   <> The integrated CX3-40 used for storage exposes LUNs across multiple
   <> Fibre ports. Each LUN is configured as 4+1 RAID 5 on 5 physical drives,
   <> and each LUN is 30720 MB in size. There are 2 Fibre ports on each NAS
   <> X-blade, connecting the X-blade to the CX3-40 SPs (1 connection per
   <> CX3-40 SP). Each of the 2 CX3-40 SPs has 4 GB of memory. 2500 MB of
   <> write cache was configured; the write cache is mirrored between the
   <> CX3-40 SPs. 516 MB of read cache was configured on each CX3-40 SP. The
   <> memory is backed by sufficient battery power to safely destage all
   <> cached data in the write cache onto the disks in the event of a power
   <> failure.
===============================================================================
Generated on Tue Nov 21 18:03:59 EST 2006 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2004 Standard Performance Evaluation Corporation