|Spinnaker Networks :||SpinServer 4100 (3-node Scalable Cluster, HA Failover)|
|SPECsfs97_R1.v3 =||75735 Ops/Sec (Overall Response Time = 1.81 msec)|
|Server Configuration and Availability|
|Vendor ||Spinnaker Networks |
|Hardware Available ||June 2003|
|Software Available||June 2003 |
|Date Tested||May 2003 |
|SFS License Number||84 |
|Licensee Locations||Pittsburgh, PA |
|CPU, Memory and Power|
|Model Name ||SpinServer 4100 (3-node Scalable Cluster, HA Failover) |
|Processor ||2.8 GHz Intel Xeon |
|# of Processors ||6 (2 per node) |
|Primary Cache ||12 K uops I + 8 KB D on-chip |
|Secondary Cache ||512 KB on-chip |
|Other Cache ||N/A |
|UPS ||APC Smart-UPS 1000 (1 per node)|
|Other Hardware ||Brocade Silkworm 3800 Switch |
| Memory Size ||12 GB (4 GB per node) |
|NVRAM Size ||5.1 GB (1.7 GB per node) |
|NVRAM Type ||UPS backed write cache and mirrored local SCSI drives |
|NVRAM Description||see notes |
|OS Name and Version||SpinFS 2.1|
|Other Software ||none |
|File System ||SpinFS |
|NFS version ||3 |
|Buffer Cache Size ||default|
|# NFS Processes ||N/A|
|Fileset Size ||715.5 GB|
|Network Type ||Jumbo Frame Gigabit Ethernet |
|Network Controller Desc. ||1316-0014-01 (PCI Adapter)|
|Number Networks ||2 (N1-Client, N2-Cluster) |
|Number Network Controllers||9 (1 client and 2 cluster controllers per node) |
|Protocol Type ||TCP |
|Switch Type ||Extreme Summit 7i|
|Bridge Type ||N/A |
|Hub Type ||N/A |
|Other Network Hardware ||N/A |
|Disk Subsystem and Filesystems|
|Number Disk Controllers ||3 (1 per node) |
|Number of Disks ||144 (48 per node) |
|Number of Filesystems ||1 namespace (see notes) |
|File System Creation Ops||default|
|File System Config ||27 RAID-Groups of 5 disks each |
|Disk Controller ||1316-0015-01 (2Gb FC-AL Adapter) |
|# of Controller Type ||3 (dual-ported) |
|Number of Disks ||144 |
|Disk Type ||1320-0008-01 (146GB, 10K RPM) |
|File Systems on Disks ||F1 |
|Special Config Notes ||see notes|
|Load Generator (LG) Configuration|
|Number of Load Generators ||12 |
|Number of Processes per LG||36 |
|Biod Max Read Setting ||2 |
|Biod Max Write Setting ||2 |
|LG Type ||LG1 |
|LG Model ||Dell PowerEdge 1550 |
|Number and Type Processors||2x1.0-GHz Pentium III |
|Memory Size ||1024 MB |
|Operating System ||Linux 2.4.9-31smp (Red Hat 7.2) |
|Compiler ||gcc 2.96 |
|Compiler Options ||-O -DNO_T_TYPES -DUSE_INTTYPES |
|Network Type ||3COM 3C996-T GigE, MTU=9000|
|LG #||LG Type||Network||Target File Systems||Notes|
|1-12||LG1||N1||F1||N/A|
- SpinFS allows up to 512 SpinServers with automatic failover to work together in a single namespace.
- The system under test was a 3-node SpinServer 4100 cluster with nine 16-disk SpinStor 200 Arrays (3 RAID + 6 JBOD).
- Each RAID array had two JBOD expansion arrays.
- Each SpinServer and RAID Array had dual FC connections to the Brocade switch for performance and availability.
- The single namespace, F1, consisted of 27 RAID-5 sets. Nine additional disks were configured as hot spares.
- Each RAID set had a strip size of 32 KB.
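- (Consistency check: 27 RAID sets x 5 disks + 9 hot spares = 144 disks, matching 48 disks per node.)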
- Server Tunings:
- All network ports were set to use jumbo frames (MTU=9000).
- All scheduled SpinShot jobs were disabled for reproducibility.
- HA Details:
- This 3-node scalable cluster was configured as a failover ring for high availability.
- Server-1 was the active secondary for Server-2 and the passive secondary for Server-3.
- Server-2 was the active secondary for Server-3 and the passive secondary for Server-1.
- Server-3 was the active secondary for Server-1 and the passive secondary for Server-2.
- Upon a SpinServer failure, the following three events occur automatically to maintain high availability (sketched after this list):
- 1. The active secondary for the storage pool of the failed server becomes its primary server.
- 2. The passive secondary for the storage pool of the failed server gets promoted to active secondary.
- 3. The primary server that just lost its active secondary promotes its passive secondary to active secondary.
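The failover ring can be summarized in a short sketch. This is an illustrative model only, not Spinnaker's implementation; the role table follows the notes above, and the handling of a pool that merely loses its passive secondary is an assumption the notes do not cover.

```python
# Illustrative model of the 3-node failover ring (not vendor code).
# roles[pool] = [primary, active secondary, passive secondary]
roles = {
    "pool-1": ["Server-1", "Server-3", "Server-2"],
    "pool-2": ["Server-2", "Server-1", "Server-3"],
    "pool-3": ["Server-3", "Server-2", "Server-1"],
}

def fail(failed):
    """Apply the three automatic events after `failed` goes down."""
    for pool, (primary, active, passive) in list(roles.items()):
        if primary == failed:
            # Events 1 and 2: the active secondary becomes the pool's
            # primary, and the passive secondary is promoted to active.
            roles[pool] = [active, passive, None]
        elif active == failed:
            # Event 3: the primary that lost its active secondary
            # promotes its passive secondary to active secondary.
            roles[pool] = [primary, passive, None]
        elif passive == failed:
            # Assumption: a pool that loses only its passive secondary
            # simply drops it; the notes do not describe this case.
            roles[pool] = [primary, active, None]

fail("Server-1")
# roles now shows Server-3 serving pool-1 and pool-3, and Server-2
# serving pool-2, each pool keeping the other survivor as active secondary.
```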
- Cluster Details:
- Each server had access to the client network (N1) for communication with all Load Generators.
- Each server had access to the cluster network (N2) for communication with the other servers.
- One Storage Pool was created behind each server. One VFS was created per Storage Pool.
- Each VFS was mapped to a subdirectory of the global namespace under root (/vfs1, /vfs2, /vfs3).
- For UAR (Uniform Access Rule) compliance, the client processes uniformly mounted 3 different VFS objects from the single namespace:
- server1:/vfs1, server1:/vfs2, server1:/vfs3
- server2:/vfs1, server2:/vfs2, server2:/vfs3
- server3:/vfs1, server3:/vfs2, server3:/vfs3
- This mounting pattern ensured that 1/3 of the processes accessed data local to the server they mounted through.
- It also ensured that 2/3 of the processes crossed the cluster to access data uniformly behind the remaining servers (see the sketch below).
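A minimal sketch of this uniform assignment, assuming a simple round-robin of all 432 client processes (12 LGs x 36 processes) across the nine mount targets; the helper and names are hypothetical, not from the benchmark harness:

```python
from itertools import product

SERVERS = ["server1", "server2", "server3"]  # the 3 cluster nodes
VFS_PATHS = ["/vfs1", "/vfs2", "/vfs3"]      # /vfsN's data lives behind server N

# All 9 (server, path) mount targets in the single namespace.
TARGETS = [f"{s}:{p}" for s, p in product(SERVERS, VFS_PATHS)]

def assign_mounts(num_lgs=12, procs_per_lg=36):
    """Hypothetical round-robin spreading every process across all 9 targets."""
    mounts = {}
    for lg in range(num_lgs):
        for proc in range(procs_per_lg):
            i = lg * procs_per_lg + proc
            mounts[(lg + 1, proc + 1)] = TARGETS[i % len(TARGETS)]
    return mounts

mounts = assign_mounts()
# 432 processes land 48 per target; exactly 3 of the 9 targets
# (serverN:/vfsN) are local, giving 1/3 local and 2/3 cross-cluster access.
```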
- Each SpinStor 200 RAID Array utilized dual RAID Controllers, each with 256 MB of battery-backed RAM.
- The RAID Controller was configured with a Write-Back cache that switches to Write-Through mode when the charge level of its battery drops below 72 hours.
- The SpinServer 4100 has 1.7 GB of UPS-backed cache that will survive a power failure or an operating system crash. In the event of a power failure or low-battery condition, the cache is written to mirrored local SCSI disks, and recovery software restores the cache when power returns.
- This RAM is not used by the operating system as general-purpose memory; it is NVRAM dedicated to caching disk reads and writes.
- The UPS is guaranteed to have sufficient energy after a battery-low condition to flush the entire contents of the 1.7 GB cache to the local drives (sketched below).
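As a rough illustration of this protection sequence, here is an assumed state machine; it is not the vendor's firmware logic, and all names are hypothetical:

```python
from enum import Enum

class Power(Enum):
    OK = "ok"
    FAILED = "failed"            # power failure
    BATTERY_LOW = "battery_low"  # UPS battery-low condition

class NVCache:
    """1.7 GB UPS-backed cache, mirrored to local SCSI drives on power loss."""
    def __init__(self):
        self.ram = {}          # dedicated NVRAM: cached disk reads/writes
        self.scsi_mirror = {}  # mirrored local SCSI drives

    def on_power_event(self, power):
        if power in (Power.FAILED, Power.BATTERY_LOW):
            # The UPS guarantees enough energy to flush all 1.7 GB.
            self.scsi_mirror = dict(self.ram)
        elif power is Power.OK and self.scsi_mirror:
            # Recovery software restores the cache when power returns.
            self.ram = dict(self.scsi_mirror)
            self.scsi_mirror.clear()
```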
- Spinnaker Networks, SpinServer, SpinStor, and SpinFS are trademarks of Spinnaker Networks in the United States.
- All other trademarks belong to their respective owners and should be treated as such.
Copyright © 1997-2002 Standard Performance Evaluation Corporation
First published at SPEC.org on 24-Jun-2003