|Network Appliance, Inc. :||Data ONTAP GX System (24-node FAS6070)|
|SPECsfs97_R1.v3 =||1032461 Ops/Sec (Overall Response Time = 1.53 msec)|
|Server Configuration and Availability|
|Vendor ||Network Appliance, Inc. |
|Hardware Available ||March 2006|
|Software Available||September 2006 |
|Date Tested||May 2006 |
|SFS License Number||33 |
|Licensee Locations||Sunnyvale, CA |
|CPU, Memory and Power|
|Model Name ||Data ONTAP GX System (24-node FAS6070) |
|Processor ||2.6-GHz AMD Opteron(tm) 852 |
|# of Processors ||96 cores, 96 chips, 1 core/chip (4 chips per node) |
|Primary Cache ||64KB I + 64KB D on chip |
|Secondary Cache ||1MB (I+D) on chip |
|Other Cache ||N/A |
|Other Hardware ||X3148-R5 NVRAM/HA interconnect adapter (see notes) |
|Memory Size ||768 GB (32 GB per node) |
|NVRAM Size ||48 GB (2 GB per node) |
|NVRAM Type ||DIMMs on PCI cards |
|NVRAM Description||minimum 7-day battery-backed shelf-life |
|OS Name and Version||Data ONTAP GX 10.0.1|
|Other Software ||-- |
|File System ||WAFL |
|NFS version ||3 |
|Buffer Cache Size ||default|
|# NFS Processes ||N/A|
|Fileset Size ||1477.0 GB|
|Network Type ||Jumbo Frame Gigabit Ethernet |
|Network Controller Desc. ||integrated 10/100/1000 Ethernet controller|
|Number Networks ||2 (N1-data, N2-cluster) |
|Number Network Controllers||72 (1 data and 2 cluster per node) |
|Protocol Type ||TCP |
|Switch Type ||Cisco 7609 (N1,N2)|
|Bridge Type ||N/A |
|Hub Type ||N/A |
|Other Network Hardware ||N/A |
|Disk Subsystem and Filesystems|
|Number Disk Controllers ||48 (2 per node) |
|Number of Disks ||2016 |
|Number of Filesystems ||single namespace (see notes) |
|File System Creation Ops||default|
|File System Config ||120 RAID-DP (Double Parity) groups of 16 disks each |
|Disk Controller ||integrated dual-channel QLogic ISP-2322 FC Controller (4MB RAM standard) |
|# of Controller Type ||48 (2 per node) |
|Number of Disks ||42/42/42/.../42 |
|Disk Type ||X273B 72GB 15K RPM FC-AL |
|File Systems on Disks ||F1 |
|Special Config Notes ||see notes|
|Load Generator (LG) Configuration|
|Number of Load Generators ||216 |
|Number of Processes per LG||24 |
|Biod Max Read Setting ||2 |
|Biod Max Write Setting ||2 |
|LG Type ||LG1 |
|LG Model ||IBM HS20 884325U |
|Number and Type Processors||2 x 3.2-GHz Intel Xeon |
|Memory Size ||2048 MB |
|Operating System ||Red Hat Enterprise Linux AS release 4 (2.6.9-5.ELsmp) |
|Compiler ||cc, used SFS97_R1 Precompiled Binaries |
|Compiler Options ||N/A |
|Network Type ||Integrated Dual Channel Broadcom Gigabit Ethernet|
|LG #||LG Type||Network||Target File Systems||Notes|
- NetApp's embedded operating system processes NFS requests from the network layer without any NFS daemons, and uses non-volatile memory to improve performance.
- The tested system was a Data ONTAP GX System comprised of 24 FAS6070 nodes joined by cluster network N2.
- Cluster Details:
- Each node had access to the client network (N1) for communication with all load generators.
- Each node had access to the cluster network (N2) for communication with the other nodes.
- Each node owned a single disk pool, or "aggregate", composed of five RAID-DP groups, each with 14 data disks and 2 parity disks. A separate aggregate of 3 disks in a single RAID-DP group was created on each node to hold the Data ONTAP GX operating system files. Each node was also allocated a spare disk.
- Each flexible volume was mapped to a subdirectory of the global namespace under root (/vol1, /vol2, ... , /vol24).
- For UAR compliance, the processes on each client uniformly mounted the 24 flexible volumes from the single namespace. On each client, one process accessed a volume that resided on the node it mounted directly; the other 23 processes accessed flexible volumes that did not reside on their directly mounted node. The resulting access pattern was such that 1/24th of all accesses to each volume were made through the node hosting that volume, and the remaining 23/24ths were made through non-hosting nodes and forwarded to the hosting node via the cluster network.
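- The 1/24-local, 23/24-remote split above follows from assigning each client's processes to volumes via a permutation with exactly one fixed point. A minimal Python sketch (the rotation-based assignment is a hypothetical illustration, not the actual mount layout used in the test):

```python
from fractions import Fraction

NODES = 24   # FAS6070 nodes; assume node n hosts volume n
PROCS = 24   # load-generating processes per client

# Hypothetical permutation with exactly one fixed point: process 0 mounts
# the node hosting its volume ("local"); processes 1..23 are rotated in a
# 23-cycle, so each reaches its volume through a non-hosting node.
assignment = [0] + [1 + (p % (PROCS - 1)) for p in range(1, PROCS)]

# Count accesses served by the hosting node vs. forwarded over the
# cluster network (N2) to the hosting node.
local = sum(1 for p in range(PROCS) if assignment[p] == p)
remote = PROCS - local

print(Fraction(local, PROCS))   # fraction served by the hosting node
print(Fraction(remote, PROCS))  # fraction forwarded over the cluster network
```

With every client using such an assignment, each volume receives 1/24th of its load through its hosting node and 23/24ths through the other 23 nodes.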
- Each disk controller had two 2 Gbit/s FC-AL ESH2 (Electronically Switched Hub) loops, and both loops were active. Each disk was attached to two loops, one from each controller, so that in the event of a controller or loop fault the disk could be controlled by the surviving controller.
- All standard data protection features, including background RAID and media-error scrubbing, software-validated RAID checksumming, and double disk failure protection via double-parity RAID (RAID-DP), were enabled during the test.
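- The disk accounting implied by the notes above can be cross-checked against the table entries (2016 disks total, 120 data RAID-DP groups of 16 disks each). A minimal Python sketch of the arithmetic, using only figures stated in this report:

```python
NODES = 24
DATA_GROUPS_PER_NODE = 5   # RAID-DP groups in each node's data aggregate
DATA_PER_GROUP = 14        # data disks per group
PARITY_PER_GROUP = 2       # RAID-DP: row parity + diagonal parity
ROOT_AGGR_DISKS = 3        # per-node OS-file aggregate (one RAID-DP group)
SPARES_PER_NODE = 1

# Per-node total: data aggregate + root aggregate + spare.
disks_per_node = (DATA_GROUPS_PER_NODE * (DATA_PER_GROUP + PARITY_PER_GROUP)
                  + ROOT_AGGR_DISKS + SPARES_PER_NODE)

total_disks = NODES * disks_per_node
total_data_groups = NODES * DATA_GROUPS_PER_NODE

print(disks_per_node)      # 84 disks per node
print(total_disks)         # 2016, matching "Number of Disks"
print(total_data_groups)   # 120, matching "File System Config"
```

The per-node count (84) also matches the 42-disk-per-controller figure in the table, since each node has two disk controllers.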
- All network ports were set to use jumbo frames (MTU=9000).
- Server tunings:
- - volume modify -Min-readahead true -atime-update false
- NetApp is a registered trademark and "Data ONTAP", "Network Appliance", "FlexVol", and "WAFL" are trademarks of Network Appliance, Inc. in the United States and other countries.
- All other trademarks belong to their respective owners and should be treated as such.
Generated on Sun Jun 11 23:01:18 2006 by SPEC SFS97 HTML Formatter
Copyright © 1997-2004 Standard Performance Evaluation Corporation
First published at SPEC.org on 06-Jun-2006