SPECsfs97_R1.v3 Result
===============================================================================
Exanet Inc. : ExaStore EX200FC
SPECsfs97_R1.v3 = 61070 Ops/Sec (Overall Response Time = 1.18 msec)
===============================================================================

Throughput   Response
   ops/sec       msec
      6064        0.7
     12134        0.8
     18177        0.9
     24299        1.0
     30398        1.1
     36563        1.1
     42643        1.3
     48929        1.6
     54911        1.9
     61070        2.8
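The overall response time reported above is derived from the throughput table:
per the SFS97_R1 reporting rules it reflects the area under the response-time
curve divided by the peak throughput. The sketch below (Python) recomputes it
from the rounded table values by trapezoidal integration between the measured
points; the exact endpoints SPEC integrates over are an assumption here, so
the result only approximates the reported 1.18 msec.

    # Approximate the overall response time from the table above.
    # Points are (throughput in ops/sec, response in msec), as reported.
    points = [
        (6064, 0.7), (12134, 0.8), (18177, 0.9), (24299, 1.0), (30398, 1.1),
        (36563, 1.1), (42643, 1.3), (48929, 1.6), (54911, 1.9), (61070, 2.8),
    ]

    # Trapezoidal area under the curve between consecutive measured points.
    area = sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

    peak = points[-1][0]  # 61070 ops/sec
    print(f"approx. overall response time: {area / peak:.2f} msec")  # ~1.15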
===============================================================================

Server Configuration and Availability
     Vendor                    Exanet Inc.
     Hardware Available        Sep 2004
     Software Available        Oct 2004
     Date Tested               Sep 2004
     SFS License number        211
     Licensee Location         Hertzlia, Israel

CPU, Memory and Power
     Model Name                ExaStore EX200FC
     Processor                 Intel(R) Xeon(TM) CPU 3.20GHz
     # of Processors           4 cores, 4 chips, 1 core/chip (2 cores per node)
     Primary Cache             12 K uops (I) + 8 KB (D) on chip
     Secondary Cache           512 KB (I+D) on chip
     Other Cache               L3: 2048 KB
     UPS                       APC SmartUPS 1000 1U
     Other Hardware            None
     Memory Size               24 GB (12 GB per node)
     NVRAM Size                28 GB (all node memory is UPS-protected,
                               plus 1 GB per 2882 storage controller)
     NVRAM Type                Node: UPS-protected DRAM;
                               storage controller: battery-backed RAM
     NVRAM Description         See notes

Server Software
     OS Name and Version       ExaOS V2.0
     Other Software            N/A
     File System               ExaFS
     NFS version               3

Server Tuning
     Buffer Cache Size         Dynamic
     # NFS Processes           8 (4 per node)
     Fileset Size              580.2 GB

Network Subsystem
     Network Type              Jumbo Frame Gigabit Ethernet (MTU 9000)
     Network Controller Desc.  Intel(R) PRO/1000 MT Dual Port Server Adapter
     Number Networks           2 (N1 - server, N2 - cluster)
     Number Network Controllers  4 (2 per node: 2 ports server, 2 ports cluster)
     Protocol Type             TCP
     Switch Type               Nortel BayStack 5510 (N1)
     Bridge Type               N/A
     Hub Type                  N/A
     Other Network Hardware    N/A

Disk Subsystem and Filesystems
     Number Disk Controllers   4 (2 per node)
     Number of Disks           116
     Number of Filesystems     1 (F1)
     File System Creation Ops  default
     File System Config        8 RAID groups of 14 disks each
     Disk Controller           Adaptec ZCR (zero-channel RAID) 2010S
     # of Controller Type      2 (1 per node)
     Number of Disks           4 (2 per node)
     Disk Type                 Fujitsu 36 GB 10K RPM
     File Systems on Disks     OS, write-cache dump files
     Special Config Notes      N/A
     Disk Controller           QLogic Corp. QLA2342 Dual Port Fibre Channel
                               Adapter
     # of Controller Type      2 (1 per node, dual-ported)
     Number of Disks           112
     Disk Type                 Seagate Cheetah ST373453 FC 73 GB 15K RPM
     File Systems on Disks     F1
     Special Config Notes      See notes

Load Generator (LG) Configuration
     Number of Load Generators 8
     Number of Processes per LG  24
     Biod Max Read Setting     2
     Biod Max Write Setting    2
     LG Type                   LG1
     LG Model                  SuperServer 5013C-T
     Number and Type Processors  1 x 2.8 GHz Pentium 4, 1 MB cache
     Memory Size               1024 MB
     Operating System          Linux 2.4
     Compiler                  gcc 3.2
     Compiler Options          -g -DNO_T_TYPES -DUSE_INTTYPES -DLinux
     Network Type              Intel e1000 Gigabit Ethernet, MTU = 9000

Testbed Configuration
     LG #       LG Type   Network   Target File Systems   Notes
     ----       -------   -------   -------------------   -----
     1/2/../8   LG1       N1        F1                    N/A
===============================================================================

Notes and Tuning
<> The system under test is an ExaStore EX200FC, a two-node cluster with two
<> 56-disk LSI 2882 FC Storage Systems.
<> Each 2882 FC Storage System contains one 2882 array module, which holds two
<> 2882 controllers.
<> Each 2882 controller has a single connection to a different FC adapter on
<> each of the EX200FC nodes.
<> Each 2882 controller has 1 GB of battery-backed RAM, for a total of 4 GB of
<> storage controller NVRAM (2 storage systems x 2 controllers x 1 GB); with
<> the 24 GB of UPS-protected node memory, this accounts for the 28 GB NVRAM
<> size listed above.
<> NVRAM cache mirroring was disabled in the 2882 controllers.
<> Read-ahead was disabled in the 2882 controllers.
<> Each 2882 array module holds 14 disks. Each 2882 FC Storage System has three
<> 2 Gb FC Drive Modules.
<> Each 2 Gb FC Drive Module holds 14 disks.
<> The 2882 array module and the 2 Gb FC Drive Modules are chained together
<> with redundant FC-ALs.
<> The single file system, F1, was composed of 8 RAID-5 sets.
<> Each RAID set had a stripe size of 256 KB.
<>
<> Uniform Access Rules (UAR):
<> Each node in the cluster has two N1 network ports connected to the server
<> network (N1) for communication with all load generators.
<> Each node in the cluster has two N2 network ports connected to the cluster
<> network (N2) for communication with the other node.
<> Each node in the cluster accesses 1/2 of the 2882 FC Storage Systems' RAID
<> sets.
<> Each load generator process accessed the cluster through a single node and
<> had full access to all the data.
<> Each of the two nodes served as the access point for 1/2 of the load
<> generator processes.
<> Each node reads and writes 1/2 of its dataset to its attached RAID sets; the
<> rest is read and written via the cluster network (N2).
<>
<> Stable Storage:
<> All memory in the ExaStore was used by the system as general-purpose memory
<> and a dynamically sized data cache.
<> Each ExaStore node has an integral UPS, which guarantees sufficient power in
<> the event of a power failure to dump the write cache to the node's local
<> SCSI RAID. In unsafe situations (e.g. a low-battery condition), the system
<> instead uses a journal mechanism to synchronously log all write-type
<> operations to the LSI 2882 FC Storage Systems, which are connected to all
<> nodes. When power is restored, the ExaStore system recovers all committed
<> data and metadata operations from the journal files or dump files,
<> as appropriate.
<>
<> Server Tunings:
<> The nodes' read-ahead option was disabled.
<> The nodes' write-cache mirroring was disabled. When enabled, this option
<> mirrors newly written data to a peer node, supporting a higher standard of
<> availability in addition to the UPS-backed memory.
<> All periodic system diagnostics and monitoring jobs were disabled for
<> reproducibility:
<>   The cluster failover manager was disabled.
<>   Logging of periodic statistics was disabled.
<>   Event log compression and mail delivery were disabled.
<>
<> Exanet and ExaMesh are trademarks of Exanet, Inc.
===============================================================================
Generated on Mon Nov 1 13:00:45 EST 2004 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2004 Standard Performance Evaluation Corporation