|Alacritech, Inc.||:||ANX 1500-20|
|SPECsfs2008_nfs.v3||=||120954 Ops/Sec (Overall Response Time = 0.92 msec)|
|Tested By||Alacritech, Inc.|
|Product Name||ANX 1500-20|
|Hardware Available||January 2011|
|Software Available||March 2011|
|Date Tested||March 2011|
|SFS License Number||1466|
San Jose, CA
The Alacritech ANX 1500 is an NFS throughput acceleration appliance designed to improve both response times and aggregate throughput in NAS environments. The ANX 1500-20 provides 4TB of usable SSD capacity and is built around 64-bit Intel processors executing Alacritech's proprietary operating system, NFS-BridgeOS. The ANX 1500 functions as a write-through cache, satisfying NFS reads and most NFS metadata operations from its cache. Throughput and latency benefits are enhanced through the use of Alacritech 10 GbE TCP offload technology, with optimizations for NFS and tight integration with NFS-BridgeOS. This test utilized a dual-node NetApp FAS6070C system as back-end storage.
|Item No||Qty||Type||Vendor||Model/Name||Description|
|1||1||Throughput Acceleration Appliance||Alacritech, Inc.||ANX 1500-20||Alacritech caching appliance running Alacritech OS version 188.8.131.52 software. Includes 20 200GB solid state drives (10 of which were Unigen Model UGB88PGC200HF3, and 10 were SuperTalent Model FTM20FT25H).|
|2||2||NAS Appliance||NetApp, Inc.||FAS6070C||NAS Appliance used for back-end storage|
|3||12||Disk Shelf||NetApp, Inc.||DS14MK4||Disk shelf for NetApp FAS6070C|
|4||168||Disk||NetApp, Inc.||X278A-R5||144GB 15k RPM FC disks. 160 drives were used for SPEC sfs2008 data, 6 drives were used for system disks, 2 drives were reserved as spares.|
|5||4||FC-AL Adapter||NetApp, Inc.||X2055A-R6||HBA,FC,2-port,PCIe,4Gb,R6|
|6||2||10 Gigabit Ethernet Adapter||NetApp, Inc.||X1008A-R6||NIC,TOE,2-Port,10GbE Fiber, PCIe,R6|
|7||2||Software License||NetApp, Inc.||SW-T7C-NFS||NFS Software,T7C|
|OS Name and Version||Alacritech OS version 184.108.40.206|
|Other Software||FAS6070C Storage system ran NetApp Data ONTAP 7.3.3|
|Name||Value||Description|
|Read Pull-Ahead Blocks||0||The number of readahead SSD blocks|
|vol options 'volume' no_atime_update||on||Disable atime updates on FAS6070C storage system (applied to all volumes)|
|Description||Number of Disks||Usable Size|
|The ANX 1500 contains 20 200GB SSDs used for cache storage.||20||3.9 TB|
|The ANX 1500 contains 2 mirrored 500GB SATA drives. These disks are for system use.||2||1000.0 GB|
|The NetApp FAS6070C contains 160 15K RPM 144GB FCAL drives. All data filesystems reside on these disks.||160||16.0 TB|
|The NetApp FAS6070C contains 6 15K RPM 144GB FCAL drives. These disks are for system use.||6||114.0 GB|
|Number of Filesystems||2|
|Total Exported Capacity||16 TB|
|Filesystem Creation Options||Default|
|Filesystem Config||Each filesystem was striped across 80 disks|
|Fileset Size||14058.2 GB|
The storage configuration consisted of 12 shelves, each with 14 disks. Groups of 6 shelves were daisy-chained such that the outputs of each shelf were attached to the inputs of the next shelf in the group. The first shelf in each group had two 4 Gbit/s FC-AL loop connections, each connected to one of 2 FC-AL ports (on the FC HBA) on the storage controller. Each storage controller was the primary owner of 6 shelves, with 80 disks in those shelves placed into a single aggregate. Each aggregate was composed of 6 RAID 4 groups: 5 groups consisted of 13 data disks and 1 parity disk, and the remaining group consisted of 9 data disks and 1 parity disk. Within each aggregate, a flexible volume (utilizing Data ONTAP FlexVol (TM) technology) was created to hold the SFS filesystem for that controller. Each volume was striped across all disks in the aggregate where it resided. Each controller was the owner of a single volume, but the disks in each aggregate were dual-attached so that, in the event of a fault, they could be managed by the other controller via an alternate loop. A separate flexible volume, residing in a three-disk root aggregate on each controller, held the Data ONTAP operating system and system files. The remaining disk on each controller was reserved as a spare.
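The disk accounting in the paragraph above can be cross-checked with a short sketch. All counts are taken from this disclosure; the script itself is illustrative, not part of the benchmark configuration.

```python
# Sketch: verify the per-controller disk accounting described above.

# Each controller owns 6 shelves of 14 disks.
shelves_per_controller = 6
disks_per_shelf = 14
owned_disks = shelves_per_controller * disks_per_shelf  # 84 disks

# 80 of those disks form one aggregate built from 6 RAID 4 groups:
# 5 groups of 13 data + 1 parity, and 1 group of 9 data + 1 parity.
raid_groups = [(13, 1)] * 5 + [(9, 1)]
aggregate_disks = sum(data + parity for data, parity in raid_groups)
data_disks = sum(data for data, _ in raid_groups)

print(aggregate_disks)                 # 80 disks in the aggregate
print(data_disks)                      # 74 data disks striped per volume
print(owned_disks - aggregate_disks)   # 4 left over: 3-disk root aggregate + 1 spare
```

The leftover four disks per controller match the three-disk root aggregate and single spare described above.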
|Item No||Network Type||Number of Ports Used||Notes|
|1||10 Gigabit Ethernet||2||Alacritech 10 Gigabit Ethernet TCP offload adapters|
|2||10 Gigabit Ethernet||4||10 Gigabit Dual-port Ethernet PCI-X adapter|
The Alacritech ANX 1500 was connected by way of 2 Alacritech 10 Gigabit Ethernet Accelerator ports to a Brocade TurboIron 24X switch. The load-generating clients were each connected by way of a single 10 Gigabit Ethernet port to the same switch. Each FAS6070C storage controller was connected to the Brocade switch by way of 2 10 Gigabit Ethernet ports. A standard 1500-byte MTU was used for all connections throughout the network.
|Item No||Qty||Type||Description||Processing Function|
|1||2||CPU||Intel Xeon E5520 2.26 GHz Quad-Core Processor.||ANX 1500 OS, Network, NFS, Cache Subsystem, Device Drivers|
|2||8||CPU||AMD Opteron 852 2.6-GHz Single Core||NAS Storage System|
|3||2||TOE||Alacritech TCP Offload Engine||NFS/TCP/IP/Ethernet|
The ANX 1500 has two physical processors, in addition to two Alacritech ASICs found on the Alacritech 10 Gigabit Ethernet TCP Offload adapters.
|Description||Size in GB||Number of Instances||Total GB||Nonvolatile|
|ANX 1500 system memory||48||1||48||V|
|NAS Storage memory||32||2||64||V|
|NAS Storage Non-volatile memory||2||2||4||NV|
|Grand Total Memory Gigabytes||116|
The ANX 1500 contains main memory that is used by both NFS-BridgeOS and for caching filesystem data. Each NAS storage controller contains main memory used for the ONTAP operating system and for caching filesystem data. Each NAS storage controller also contains a separate battery backed RAM module used to provide stable storage for writes that have yet to be written to the disk drives.
The ANX 1500 operates as an optimized write-through cache. The ANX 1500 does not respond to a write request, or any other filesystem-modifying transaction, nor commit it to its cache, until the NAS has successfully responded to that transaction. NetApp's WAFL filesystem, executed on each NAS storage controller, logs writes and other filesystem-modifying transactions to the NVRAM adapter. Filesystem-modifying operations are not acknowledged until after the storage system has confirmed that the related data is stored in the NVRAM adapter. The battery backing the NVRAM ensures that any uncommitted transactions are preserved for at least 72 hours.
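The write-through ordering described above can be sketched in a few lines. The class and method names below are hypothetical illustrations of the general technique, not Alacritech's implementation: the key property is that the cache and the client reply are updated only after the backend acknowledges a modifying operation.

```python
# Sketch of write-through cache ordering (all names hypothetical).

class DictBackend:
    """Stand-in for the NAS storage system."""
    def __init__(self):
        self.store = {}
    def read(self, key):
        return self.store[key]
    def write(self, key, value):
        self.store[key] = value

class WriteThroughCache:
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}

    def read(self, key):
        # Reads are satisfied from cache when possible.
        if key in self.cache:
            return self.cache[key]
        value = self.backend.read(key)
        self.cache[key] = value
        return value

    def write(self, key, value):
        # The modifying operation goes to the backend first; the cache is
        # updated only after the backend acknowledges (raises on failure).
        self.backend.write(key, value)
        self.cache[key] = value
        return "acknowledged"

nas = DictBackend()
anx = WriteThroughCache(nas)
anx.write("file1", b"data")
print(anx.read("file1") == nas.store["file1"])  # True: cache and NAS agree
```

Because the cache is never updated ahead of the backend, a failure of the caching appliance cannot lose an acknowledged write.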
The system under test consisted of one ANX 1500-20. The ANX 1500 was attached to the network via two 10 Gigabit Ethernet adapters. The ANX 1500 node contains 20 200GB solid-state disks. The NAS storage system consisted of two FAS6070C storage controllers configured with 12 disk drive storage shelves, with each shelf containing 14 144GB FC-AL disk drives. The two NAS storage controllers executed the Data ONTAP 7.3.3 operating system. Each storage controller included a dual-port 10 Gigabit Ethernet adapter. The storage shelves were configured in groups of six shelves, which were connected to each other via two 4 Gbit/s FC-AL connections. Each group had four 4 Gbit/s FC-AL connections, two to each storage controller. The ANX 1500, the FAS6070C storage controllers, and the clients all connected to a Brocade TurboIron 24X 10 Gigabit Ethernet switch.
All standard data protection features were enabled on the NAS storage system, including background RAID and media error scrubbing, software validated RAID checksumming, and disk failure protections via RAID 4.
|Item No||Qty||Vendor||Model/Name||Description|
|1||7||Advanced Industrial Computer, Inc.||PSG-SB-2URGEDP0114||Workstation with 24GB of RAM running RHEL 2.6.18-128.el5|
|2||1||Brocade||TurboIron 24X||Ethernet Switch|
|LG Type Name||LG1|
|BOM Item #||1|
|Processor Name||Quad-Core Intel E5520 Processor|
|Processor Speed||2.26 GHz|
|Number of Processors (chips)||2|
|Number of Cores/Chip||4|
|Memory Size||24 GB|
|Operating System||RHEL 2.6.18-128.el5 SMP|
|Network Type||Intel X520-SR1 10 Gigabit Ethernet server adapter|
|Network Attached Storage Type||NFS V3|
|Number of Load Generators||7|
|Number of Processes per LG||96|
|Biod Max Read Setting||2|
|Biod Max Write Setting||2|
|LG No||LG Type||Network||Target Filesystems||Notes|
Both filesystems were mounted on all clients, which were connected to the same physical and logical network.
Each client hosted 96 processes. Processes were assigned to filesystems and network interfaces such that they were evenly divided across all filesystems and network paths to the ANX 1500. The filesystem data was distributed evenly across all disks and FC-AL loops on the NAS storage system.
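One way to realize the even division described above is a simple round-robin assignment. The mount points and port names below are hypothetical, and the round-robin scheme is an assumption for illustration, not the documented assignment; the counts (7 clients, 96 processes, 2 filesystems, 2 ANX ports) come from this disclosure.

```python
# Sketch: evenly divide 7 clients x 96 processes across 2 filesystems
# and 2 ANX 1500 network ports (names below are hypothetical).
from collections import Counter

clients = 7
procs_per_client = 96
filesystems = ["/fs1", "/fs2"]          # hypothetical mount points
anx_ports = ["10g-port0", "10g-port1"]  # hypothetical port names

assignments = []
for proc_id in range(clients * procs_per_client):
    fs = filesystems[proc_id % len(filesystems)]
    port = anx_ports[(proc_id // len(filesystems)) % len(anx_ports)]
    assignments.append((fs, port))

counts = Counter(assignments)
print(counts)  # each (filesystem, port) pair gets 672 / 4 = 168 processes
```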
Generated on Thu Apr 14 10:25:21 2011 by SPECsfs2008 HTML Formatter
Copyright © 1997-2008 Standard Performance Evaluation Corporation
First published at SPEC.org on 05-Apr-2011