SPECsfs97_R1.v3 Result
===============================================================================
Network Appliance, Inc.                              : FAS2050
SPECsfs97_R1.v3 = 20027 Ops/Sec (Overall Response Time = 1.37 msec)
===============================================================================

   Throughput   Response
      ops/sec       msec
         1939        0.5
         3875        0.6
         5828        0.6
         7775        0.8
         9743        1.0
        11687        1.2
        13613        1.5
        15572        1.9
        17539        2.7
        19601        4.2
        20027        4.8

===============================================================================

Server Configuration and Availability

   Vendor                        Network Appliance, Inc.
   Hardware Available            June 2007
   Software Available            June 2007
   Date Tested                   August 2007
   SFS License number            33
   Licensee Location             Sunnyvale, CA

CPU, Memory and Power

   Model Name                    FAS2050
   Processor                     2.2 GHz Mobile Intel(R) Celeron(R)
   # of Processors               2 cores, 2 chips, 1 core/chip
   Primary Cache                 12 KB uops I + 8 KB D on chip
   Secondary Cache               256 KB (I+D) on chip
   Other Cache                   N/A
   UPS                           none
   Other Hardware                --
   Memory Size                   4 GB (2 GB per node)
   NVRAM Size                    512 MB (256 MB per node)
   NVRAM Type                    battery backed up DIMM on main memory card
   NVRAM Description             minimum 3-day battery-backed shelf-life

Server Software

   OS Name and Version           Data ONTAP 7.2.2L1
   Other Software                Cluster Option
   File System                   WAFL
   NFS version                   3

Server Tuning

   Buffer Cache Size             default
   # NFS Processes               N/A
   Fileset Size                  190.3 GB

Network Subsystem

   Network Type                  Jumbo Frame Gigabit Ethernet
   Network Controller Desc.
                                 integrated 10/100/1000 Ethernet controller
   Number Networks               1 (N1)
   Number Network Controllers    2 (1 per node)
   Protocol Type                 TCP
   Switch Type                   Cisco 6509 (N1,N2)
   Bridge Type                   N/A
   Hub Type                      N/A
   Other Network Hardware        N/A

Disk Subsystem and Filesystems

   Number Disk Controllers       2 (1 per node)
   Number of Disks               20
   Number of Filesystems         2
   File System Creation Ops      default
   File System Config            1 RAID-DP (Double Parity) group of 10 disks
   Disk Controller               integrated LSI Logic 1068E SAS Controller
   # of Controller Type          2 (1 per node)
   Number of Disks               10/10
   Disk Type                     X287A 300GB 15K RPM SAS
   File Systems on Disks         F1/F2
   Special Config Notes          see notes

Load Generator (LG) Configuration

   Number of Load Generators     16
   Number of Processes per LG    6
   Biod Max Read Setting         2
   Biod Max Write Setting        2
   LG Type                       LG1
   LG Model                      Supermicro SuperServer 6014H-i2
   Number and Type Processors    2 x 3.4-GHz Intel Xeon
   Memory Size                   2048 MB
   Operating System              Red Hat Enterprise Linux AS release 3
                                 (2.4.21-40.ELsmp)
   Compiler                      gcc, used SFS97_R1 Precompiled Binaries
   Compiler Options              N/A
   Network Type                  Integrated Dual Port Intel 82546GB
                                 Gigabit Ethernet

Testbed Configuration

   LG #       LG Type  Network  Target File Systems  Notes
   ----       -------  -------  -------------------  -----
   1/2/../16  LG1      N1       F1,F2,F1,F2,F1,F2    N/A

===============================================================================

Notes and Tuning

<> NetApp's embedded operating system processes NFS requests from the
   network layer without any NFS daemons, and uses non-volatile memory to
   improve performance.
<>
<> All standard data protection features, including background RAID and
   media error scrubbing, software-validated RAID checksumming, and double
   disk failure protection via double-parity RAID (RAID-DP), were enabled
   during the test.
<>
<> The tested system was an active-active failover cluster composed of two
   nodes joined by a built-in cluster interconnect. The cluster option was
   licensed and enabled.
<>
<> Each node had 1 CPU (1 core, 1 chip, 1 core/chip).
<>
<> The single disk controller on each node had a single 3 Gbit/s SAS port
   connecting it to all 20 disks in the system.
<> Each node was the owner of a single 10-disk pool, or "aggregate". Each
   aggregate was composed of one RAID-DP group of 8 data disks and 2
   parity disks.
<> Within each aggregate, a flexible volume (utilizing Data ONTAP
   FlexVol(TM) technology) was created to hold the SFS filesystem for that
   node.
<> The F1 filesystem was striped across the disks in the aggregate owned
   by the first node, using a variable-size striping mechanism. The F2
   filesystem was striped across the disks in the aggregate owned by the
   second node, using the same mechanism.
<> Each node was the owner of one filesystem, but the disks in each
   aggregate were dual-attached so that, in the event of a fault, they
   could be controlled by the other node.
<> A separate flexible volume residing on the aggregate of each node held
   the Data ONTAP operating system and system files.
<>
<> All network ports were set to use jumbo frames (MTU=9000).
<>
<> Server tunings:
<>   - vol options vol1 no_atime_on 1 -- to disable access time updates
<>
<> NetApp is a registered trademark and "Data ONTAP", "Network Appliance",
   "FlexVol", and "WAFL" are trademarks of Network Appliance, Inc. in the
   United States and other countries. All other trademarks belong to their
   respective owners and should be treated as such.

===============================================================================

Generated on Tue Sep 11 15:05:19 EDT 2007 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2007 Standard Performance Evaluation Corporation
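
The Overall Response Time figure in the report above can be reproduced from the throughput/response table: it is the area under the response-time curve divided by the peak throughput. A minimal sketch, assuming the SPEC SFS97 convention of extending the curve linearly from the origin and integrating by the trapezoidal rule:

```python
# (throughput ops/sec, response msec) pairs from the table above
points = [
    (1939, 0.5), (3875, 0.6), (5828, 0.6), (7775, 0.8), (9743, 1.0),
    (11687, 1.2), (13613, 1.5), (15572, 1.9), (17539, 2.7),
    (19601, 4.2), (20027, 4.8),
]

def overall_response_time(pts):
    # Assumption: the curve is extended linearly from the origin before
    # integrating, as SPEC's reporting methodology appears to do.
    curve = [(0, 0.0)] + pts
    # Trapezoidal area under the response-time curve
    area = sum((x1 - x0) * (y0 + y1) / 2.0
               for (x0, y0), (x1, y1) in zip(curve, curve[1:]))
    # Normalize by the peak throughput (the last point)
    return area / curve[-1][0]

print(round(overall_response_time(points), 2))  # prints 1.37
```

Rounded to two decimals, this matches the 1.37 msec Overall Response Time reported in the header.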