SPECsfs97_R1.v3 Result

===============================================================================
Exacube Systems Inc. : Netbine500
SPECsfs97_R1.v3 = 15014 Ops/Sec (Overall Response Time = 1.30 msec)
===============================================================================

   Throughput   Response
   ops/sec      msec
       1517      0.3
       3048      0.5
       4584      0.8
       6043      1.0
       7577      1.3
       9061      1.5
      10561      1.8
      12058      2.0
      13525      2.4
      15014      2.9

===============================================================================

Server Configuration and Availability
   Vendor                      Exacube Systems Inc.
   Hardware Available          Jan 2004
   Software Available          Jan 2004
   Date Tested                 Oct 2004
   SFS License number          2978
   Licensee Location           Songnam, Korea

CPU, Memory and Power
   Model Name                  Netbine500
   Processor                   Intel Xeon 2.8 GHz
   # of Processors             4 cores, 4 chips, 1 core/chip (2 cores per node)
   Primary Cache               Execution Trace Cache 12K uOPs + 8 KB Data Cache on chip
   Secondary Cache             512 KB (I+D) on chip
   Other Cache                 N/A
   UPS                         APC SmartUPS 1000
   Other Hardware              N/A
   Memory Size                 4 GB (2 GB per node)
   NVRAM Size                  5 GB (all node memory is UPS-protected, plus 512 MB per RAID controller)
   NVRAM Type                  Node: UPS-protected DRAM; RAID controller: battery-backed RAM
   NVRAM Description           See notes

Server Software
   OS Name and Version         ClusStor 3.2
   Other Software              N/A
   File System                 XFS
   NFS version                 3

Server Tuning
   Buffer Cache Size           Dynamic
   # NFS Processes             256
   Fileset Size                145.1 GB

Network Subsystem
   Network Type                Gigabit Ethernet (MTU 1500 bytes)
   Network Controller Desc.    Intel PRO/1000 1000BASE-T
   Number Networks             4 (2 per node: 1 port for service, 1 port for cluster)
   Number Network Controllers  2 (N1 - Service, N2 - Cluster)
   Protocol Type               UDP
   Switch Type                 3Com SuperStack 3 network switch
   Bridge Type                 N/A
   Hub Type                    N/A
   Other Network Hardware      N/A

Disk Subsystem and Filesystems
   Number Disk Controllers     4 (2 per node)
   Number of Disks             12
   Number of Filesystems       2
   File System Creation Ops    N/A
   File System Config          Each file system is RAID level 5 (128 KB element size) across 6 disks (1 LUN)
   Disk Controller             QLogic QLA2300
   # of Controller Type        4 (2 per node)
   Number of Disks             12
   Disk Type                   Seagate 146 GB FC disk, 10K RPM
   File Systems on Disks       F1, F2
   Special Config Notes        See notes

Load Generator (LG) Configuration
   Number of Load Generators   2
   Number of Processes per LG  20
   Biod Max Read Setting       2
   Biod Max Write Setting      2

   LG Type                     LG1
   LG Model                    Intel SR2200
   Number and Type Processors  2 x Pentium III 1.0 GHz
   Memory Size                 512 MB
   Operating System            Red Hat Linux 9.0
   Compiler                    gcc 3.2.2
   Compiler Options            default
   Network Type                Intel PRO/1000 1000BASE-T

Testbed Configuration
   LG #   LG Type   Network   Target File Systems   Notes
   ----   -------   -------   -------------------   -----
   1      LG1       N1        F1, F2                N/A
   2      LG1       N1        F1, F2                N/A

===============================================================================

Notes and Tuning
   <> NFS shares are exported with the insecure option, so client requests
      from source ports above 1024 are accepted; see the sketch at the end
      of this group of notes.
   <> Each RAID controller's disk cache is 512 MB (extensible to 1 GB).
   <> NVRAM: each RAID controller has 512 MB of write-back cache with
      72-hour battery backup.
   <> Each NAS server is connected to the RAID with 2 ports over 2 Gbps FC links.
   <> The Netbine500 has two RAID controllers, configured for active-standby
      failover.
   <> The RAID is configured as 2 LUNs. Each LUN spans 6 disks at RAID
      level 5 with a 128 KB stripe size.
   <> Each NAS server has an onboard Adaptec U320 SCSI controller for its
      OS disks.
   <> The two OS disks in each NAS server are mirrored with software RAID 1
      for failure protection.
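   <> Export sketch (illustrative only -- this assumes a Linux-style
      /etc/exports; the actual ClusStor 3.2 export syntax may differ).
      The insecure option is what permits NFS requests from client source
      ports above 1024; the paths and the rw option shown here are
      hypothetical, not taken from this report:

         /export/f1   *(rw,insecure)
         /export/f2   *(rw,insecure)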
   <> Uniform Access Rules:
   <> Each node has 1 network port (N1) for NFS service, connected to the LGs.
   <> Each node has 1 network port (N2) for cluster operation, connected to
      the other node in the cluster.
   <> There are two LUNs in the RAID; each file system consists of one LUN.
   <> Each LG runs 20 processes; 10 processes access F1 via the active
      server and the other 10 access F2.
   <> Stable Storage:
   <> Each NAS server has an integral UPS, which guarantees sufficient power
      in the event of a power failure.
   <> When the UPS detects an irrecoverable power failure, the active NAS
      server flushes all data safely and shuts down to fail over.
   <> Buffer-cache mirroring is performed between the active and standby
      nodes to prevent loss of data held in the buffer cache.
   <> If the active node suffers a hardware, OS, or media failure, the
      standby node takes over all services and data without any loss.
   <> When the cluster is recovered, all written data in the buffer cache is
      flushed and cache mirroring is restarted.

===============================================================================

Generated on Wed Nov 24 14:43:45 EST 2004 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2004 Standard Performance Evaluation Corporation