|Exanet Inc. :||ExaStore EX400FC|
|SPECsfs97_R1.v3 =||139061 Ops/Sec (Overall Response Time = 1.13 msec)|
|Server Configuration and Availability|
|Vendor ||Exanet Inc. |
|Hardware Available ||Sep 2004|
|Software Available||Oct 2004 |
|Date Tested||Sep 2004 |
|SFS License Number||211 |
|Licensee Locations||Hertzlia, Israel |
|CPU, Memory and Power|
|Model Name ||ExaStore EX400FC |
|Processor ||Intel(R) Xeon(TM) CPU 3.20GHz |
|# of Processors ||8 cores, 8 chips, 1 core/chip (2 cores per node) |
|Primary Cache ||12 K µops (I) + 8 KB (D) on chip |
|Secondary Cache ||512 KB (I+D) on chip |
|Other Cache ||L3 2048 KB |
|UPS ||APC SmartUPS 1000 1U|
|Other Hardware ||None |
| Memory Size ||48 GB (12 GB per node) |
|NVRAM Size ||56 GB (all node memory is UPS protected, and 1 GB per 2882 storage controller) |
|NVRAM Type ||Node: UPS-protected DRAM, Storage Controller: Battery-backed RAM |
|NVRAM Description||See notes |
|OS Name and Version||ExaOS V2.0|
|Other Software ||N/A |
|File System ||ExaFS |
|NFS version ||3 |
|Buffer Cache Size ||Dynamic|
|# NFS Processes ||16 (4 per node)|
|Fileset Size ||1353.8 GB|
|Network Type ||Jumbo Frame Gigabit Ethernet (MTU 9000) |
|Network Controller Desc. ||Intel(R) PRO/1000 MT Dual Port Server Adapter|
|Number Networks ||2 (N1 - Server, N2 - Cluster) |
|Number Network Controllers||8 (2 per node: 2 ports server, 2 ports cluster) |
|Protocol Type ||UDP |
|Switch Type ||Nortel Baystack 5510 (N1)|
|Bridge Type ||N/A |
|Hub Type ||N/A |
|Other Network Hardware ||N/A |
|Disk Subsystem and Filesystems|
|Number Disk Controllers ||8 (2 per node) |
|Number of Disks ||232 |
|Number of Filesystems ||1 (F1) |
|File System Creation Ops||default|
|File System Config ||16 RAID-Groups of 14 disks each |
|Disk Controller ||Adaptec ZCR (zero-channel RAID) 2010S |
|# of Controller Type ||4 (1 per node) |
|Number of Disks ||8 (2 per node) |
|Disk Type ||Fujitsu 36 GB 10K RPM |
|File Systems on Disks ||OS, write cache dump files |
|Special Config Notes ||N/A|
|Disk Controller ||Fibre Channel: QLogic Corp. QLA2342 Dual Port Fibre Channel Adapter |
|# of Controller Type ||4 (1 per node, dual ported) |
|Number of Disks ||224 |
|Disk Type ||Seagate Cheetah ST 373453 FC 73GB 15K RPM |
|File Systems on Disks ||F1 |
|Special Config Notes ||See notes|
|Load Generator (LG) Configuration|
|Number of Load Generators ||16 |
|Number of Processes per LG||24 |
|Biod Max Read Setting ||2 |
|Biod Max Write Setting ||2 |
|LG Type ||LG1 |
|LG Model ||SuperServer 5013C-T |
|Number and Type Processors||1x2.8 GHz Pentium 4, 1MB cache |
|Memory Size ||1024 MB |
|Operating System ||Linux 2.4 |
|Compiler ||gcc 3.2 |
|Compiler Options ||-g -DNO_T_TYPES -DUSE_INTTYPES -DLinux |
|Network Type ||Intel e1000 Gigabit Ethernet, MTU = 9000|
|LG #||LG Type||Network||Target File Systems||Notes|
- The system under test is an ExaStore EX400FC, a four-node cluster with four 56-disk LSI 2882 FC Storage Systems.
- The cluster comprises two redundant node pairs; each pair includes two nodes and two 2882 FC Storage Systems.
- Each 2882 FC Storage System contains one 2882 array module which contains two 2882 controllers.
- Each 2882 controller has a single connection to a different FC adapter on each of the pair nodes.
- Each 2882 controller has 1 GB of battery-backed RAM for a total of 1 GB of storage controller NVRAM.
- NVRAM cache mirroring was disabled in the 2882 controller.
- Read ahead was disabled in the 2882 controller.
- Each 2882 array module holds 14 disks. Each 2882 FC Storage System has three 2 Gb FC Drive Modules.
- Each 2 Gb FC Drive Module holds 14 disks.
- The 2882 array module and the 2 Gb FC Drive Modules are chained together with redundant FC-ALs.
- The single file system, F1, was composed of 16 RAID-5 sets.
- Each RAID set had a stripe size of 256 KB.
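The component counts above can be cross-checked against the disclosure table. A minimal sketch (hypothetical helper, not Exanet software) that verifies the disk and NVRAM arithmetic:

```python
# Hypothetical consistency check of the counts stated in this disclosure.
nodes = 4
storage_systems = 4            # LSI 2882 FC Storage Systems, 56 disks each
controllers_per_system = 2     # two 2882 controllers per array module
nvram_per_controller_gb = 1    # battery-backed RAM per 2882 controller
memory_per_node_gb = 12        # UPS-protected DRAM per node

# Per system: 1 array module + 3 FC drive modules, 14 disks each.
modules_per_system = 1 + 3
disks_per_module = 14
fc_disks = storage_systems * modules_per_system * disks_per_module
assert fc_disks == 224         # matches "Number of Disks: 224"

raid_groups = 16
disks_per_group = 14
assert raid_groups * disks_per_group == fc_disks  # 16 RAID groups of 14 disks

nvram_gb = (nodes * memory_per_node_gb
            + storage_systems * controllers_per_system * nvram_per_controller_gb)
assert nvram_gb == 56          # matches "NVRAM Size: 56 GB"
```

With the 8 local SCSI disks (2 per node) added, the total reaches the 232 disks listed under "Number of Disks".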
- Uniform Access Rules (UAR):
- Each node in the cluster has two N1 network ports connected to the server network (N1) for communication with all Load Generators.
- Each node in the cluster has two N2 network ports connected to the cluster network (N2) via two trunked Extreme Summit1i switches, for communication with the other nodes.
- Each node in the cluster accessed 1/4 of the 2882 FC Storage Systems' RAID sets.
- Each load generator process accessed the cluster through a single node and had full access to all the data.
- Each of the four nodes served as the access point for 1/4 of the load generator processes.
- Each node reads and writes 1/4 of its dataset to its attached RAID sets; the other 3/4 are read and written via the cluster network (N2).
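The uniform-access layout above can be sketched as follows (a hypothetical model, not Exanet software): the 16 RAID sets are spread evenly over the 4 nodes, so a load generator process entering through any one node finds 1/4 of the data on that node's attached RAID sets and reaches the remaining 3/4 over the cluster network (N2).

```python
# Hypothetical model of the uniform-access layout described above.
NODES = 4
RAID_SETS = 16
# Assign RAID sets round-robin, giving each node 4 of the 16 sets.
owner = {rs: rs % NODES for rs in range(RAID_SETS)}

def local_fraction(access_node: int) -> float:
    """Fraction of RAID sets attached to the node a process enters through."""
    local = sum(1 for rs in owner if owner[rs] == access_node)
    return local / RAID_SETS

# Every access point sees the same 1/4 local, 3/4 remote split.
for node in range(NODES):
    assert local_fraction(node) == 0.25
```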
- Stable Storage:
- All memory in the ExaStore was used by the system for general-purpose memory and dynamically sized data cache.
- There is an integral UPS for each ExaStore node, which guarantees sufficient power in the event of a power failure to dump the write cache to the node's local SCSI RAID.
- In unsafe situations (e.g. a low-battery condition), the system instead uses a journal mechanism to synchronously log all write-type operations to the LSI 2882 FC Storage Systems, which are connected to all nodes.
- When power is restored, the ExaStore system recovers all committed data and metadata operations from the journal files or dump files accordingly.
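The stable-storage policy described above can be summarized in a short sketch (hypothetical code, not the ExaOS implementation): with a healthy UPS, writes are held in UPS-protected DRAM and dumped to local SCSI RAID on power loss; in unsafe states, writes are journaled synchronously to the shared FC storage instead, and recovery replays whichever record was in use.

```python
# Hypothetical sketch of the stable-storage policy in this disclosure.
def persist_write(op: str, ups_healthy: bool) -> str:
    """Return the stable-storage path used for a write-type operation."""
    if ups_healthy:
        # Normal mode: DRAM write cache, backed by the UPS-triggered dump.
        return f"{op}: cached in UPS-protected DRAM (dump file on power loss)"
    # Degraded mode (e.g. low battery): synchronous journal on the
    # LSI 2882 FC Storage Systems, reachable from all nodes.
    return f"{op}: synchronously journaled to shared FC storage"

def recover(journal_ops, dump_ops):
    # On restart, committed operations are replayed from whichever record
    # (journal files or dump files) was in use at failure time.
    return list(journal_ops) or list(dump_ops)
```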
- Server Tunings:
- The nodes' read-ahead option was disabled.
- The nodes' write cache mirroring was disabled. When enabled, this option mirrors newly written data to a peer node, supporting a higher standard of availability in addition to the UPS-backed memory.
- All periodic system diagnostics and monitoring jobs were disabled for reproducibility:
- The cluster failover manager was disabled.
- Logging of periodic statistics was disabled.
- Event log compression and mail delivery were disabled.
- Exanet and ExaMesh are trademarks of Exanet, Inc.
Generated on Mon Nov 1 13:00:45 2004 by SPEC SFS97 HTML Formatter
Copyright © 1997-2004 Standard Performance Evaluation Corporation
First published at SPEC.org on 26-Oct-2004