SPECsfs2008_nfs.v3 Result
================================================================================
Avere Systems, Inc. : FXT 3800 (3 Node Cluster, Amplidata AmpliStor Cloud
                      Storage Config)
SPECsfs2008_nfs.v3 = 180229 Ops/Sec (Overall Response Time = 0.95 msec)
================================================================================

Performance
===========
   Throughput       Response
   (ops/sec)         (msec)
   ----------       --------
      17807            0.3
      35659            0.4
      53529            0.5
      71387            0.6
      89262            0.8
     107172            0.9
     125280            1.1
     143069            1.4
     160721            1.9
     180229            3.0

================================================================================

Product and Test Information
============================
Tested By            Avere Systems, Inc.
Product Name         FXT 3800 (3 Node Cluster, Amplidata AmpliStor Cloud
                     Storage Config)
Hardware Available   July 2013
Software Available   May 2014
Date Tested          February 26, 2014
SFS License Number   9020
Licensee Locations   Pittsburgh, PA USA

The Avere Systems FXT 3800 Edge filer, running AvereOS V4.0 with
FlashCloud(tm), provides Cloud NAS storage that enables performance
scalability at the edge while leveraging object-based Cloud storage capacity
at the core. The Amplidata AmpliStor object-based software-defined storage
platform delivers very high data durability, scalability from petabytes to
exabytes, and much higher efficiency than replication.

The AvereOS Hybrid NAS software dynamically organizes hot data into RAM, SSD,
and SAS tiers, retaining active data on the FXT Edge filer and placing
inactive data on the object-based Cloud Core filer. The FXT Edge filer
managed by AvereOS software provides a global namespace, clusters to as many
as 50 nodes, supports millions of I/O operations per second, and delivers
over 100 GB/s of I/O bandwidth. The FXT 3800 is built on a 64-bit
architecture that provides sub-millisecond responses to NFSv3 and CIFS client
requests consisting of read, write, and directory/metadata update operations.
The AvereOS FlashCloud functionality enables the Avere Edge filer to
immediately acknowledge all filesystem requests and to flush inactive
directory data and user data to the object-based Cloud Core filer.

The tested Edge filer configuration consisted of (3) FXT 3800 nodes backed by
object-based Cloud Core filer storage. The Cloud Core filer, an Amplidata
AmpliStor system, consisted of (3) AC4 Controller Nodes and (8) AS36 Storage
Nodes.

Configuration Bill of Materials
===============================
Item                              Model/
 No  Qty  Type        Vendor      Name      Description
---  ---  ----------  ----------  --------  ----------------------------------
  1    3  Storage     Avere       FXT 3800  Avere Systems Cloud NAS Edge filer
          Appliance   Systems,              running AvereOS V3.2 software.
                      Inc.                  Includes (13) 600 GB SAS disks and
                                            (2) 400 GB SSD drives.
  2    3  AmpliStor   Amplidata,  AC4       AmpliStor Controller Node; serves
          Controller  Inc.                  HTTP/REST requests and hosts the
          Node                              BitSpread distributed encoder.
  3    8  AmpliStor   Amplidata,  AS36      AmpliStor Storage Node; provides
          Storage     Inc.                  36 TB raw capacity as the storage
          Node                              component of the AmpliStor
                                            Optimized Object Storage (OOS)
                                            system. Includes (12) 3 TB SATA
                                            disks.

Server Software
===============
OS Name and Version  AvereOS V3.2
Other Software       The object-based Cloud Core filer runs AmpliStor V3.4.
Filesystem Software  AvereOS V3.2

Server Tuning
=============
Name                                 Value      Description
-----------------------------------  ---------  ------------------------------
Writeback Time                       12 hours   Files may be modified up to 12
                                                hours before being written
                                                back to the object-based Cloud
                                                Core filer.
buf.autoTune                         0          Statically size FXT memory
                                                caches.
buf.InitialBalance.smallFilePercent  57         Tune smallFile buffers to use
                                                57 percent of memory pages.
buf.InitialBalance.largeFilePercent  10         Tune largeFile buffers to use
                                                10 percent of memory pages.
cfs.randomWindowSize                 32         Increase the size of random
                                                IOs from disk.
cluster.dirMgrConnMult               12         Multiplex directory manager
                                                network connections.
dirmgrSettings.unflushedFDLRThresholdToStartFlushing
                                     40000000   Increase the directory log
                                                size.
dirmgrSettings.maxNumFdlrsPerNode    120000000  Increase the directory log
                                                size.
dirmgrSettings.maxNumFdlrsPerLog     1500000    Increase the directory log
                                                size.
dirmgrSettings.balanceAlgorithm      1          Balance directory manager
                                                objects across the cluster.
tokenmgrs.geoXYZ.fcrTokenSupported   no         Disable the use of full
                                                control read tokens.
tokenmgrs.geoXYZ.trackContentionOn   no         Disable token contention
                                                detection.
tokenmgrs.geoXYZ.lruTokenThreshold   15400000   Set a threshold for token
                                                recycling.
tokenmgrs.geoXYZ.maxTokenThreshold   15500000   Set maximum token count.
vcm.readdir_readahead_mask           0x3000     Optimize readdir performance.
vcm.disableAgressiveFhpRecycle       1          Disable optimistic filehandle
                                                recycling.
vcm.readdirInvokesReaddirplus        0          Disable optimistic promotion
                                                of client readdir calls to
                                                readdirplus fetches.
initcfg:cfs.num_inodes               28000000   Increase in-memory inode
                                                structures.
initcfg:vcm.fh_cache_entries         16000000   Increase in-memory filehandle
                                                cache structures.
initcfg:vcm.name_cache_entries       18000000   Increase in-memory name cache
                                                structures.

Server Tuning Notes
-------------------
None
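A note on the Writeback Time tuning: the 12-hour window above lets the Edge
filer acknowledge writes from its RAM/SSD/SAS tiers and defer flushing
modified files to the Cloud Core filer. The Python sketch below illustrates
this write-back pattern in miniature; the WriteBackCache class, the
object_store.put() interface, and the flush policy are hypothetical
stand-ins for illustration, not AvereOS code.

    import time

    WRITEBACK_SECONDS = 12 * 3600    # mirrors the 12-hour Writeback Time

    class WriteBackCache:
        """Toy write-back cache: ack writes locally, flush aged data later."""

        def __init__(self, object_store):
            self.object_store = object_store  # hypothetical put(key, data) API
            self.dirty = {}                   # path -> (data, last-modified)

        def write(self, path, data):
            # Acknowledge immediately; the flush to the Core filer is deferred.
            self.dirty[path] = (data, time.time())

        def flush_aged(self):
            # Write back only entries older than the writeback window.
            now = time.time()
            for path, (data, mtime) in list(self.dirty.items()):
                if now - mtime >= WRITEBACK_SECONDS:
                    self.object_store.put(path, data)
                    del self.dirty[path]

Deferring the write-back this way lets the Edge filer absorb the benchmark's
write load at local-tier speed while the Core filer receives data
asynchronously.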
Disks and Filesystems
=====================
                                                          Number    Usable
Description                                               of Disks  Size
--------------------------------------------------------  --------  --------
Each FXT 3800 node contains (13) 600 GB 10K RPM SAS        39        21.3 TB
disks. All FXT data resides on these disks.
Each FXT 3800 node contains (2) 400 GB eMLC SSD disks.     6         2.2 TB
All FXT data resides on these disks.
Each FXT 3800 node contains (1) 250 GB SATA disk.          3         698.9 GB
System disk.
The 8-node AmpliStor AS36 storage system contains (96)     96        259.9 TB
3 TB Variable RPM (WD Green) SATA disks.
The 3-node AmpliStor AC4 controller system contains (6)    6         2.9 TB
1 TB mirrored system disks.
Total                                                      150       286.9 TB

Number of Filesystems    1
Total Exported Capacity  186310 GiB (Object-based Cloud Core filer storage
                         system capacity)
Filesystem Type          FlashCloud Filesystem
Filesystem Creation      AvereAPI command:
Options                  corefiler.createCloudFiler('Amplidata',
                         {'cloudType': 'amplidata', 'bucket': 'behn-cloudboi',
                         'serverName': 'amplidata-ac4.cc.avere.net',
                         'cloudCredential': 'ampliCred',
                         'cryptoMode': 'CBC-AES-256-HMAC-SHA-512'})
                         The AmpliStor system was restored to factory
                         defaults, a new storage pool was configured, and a
                         new object container was configured for use by the
                         FlashCloud Core filer.
Filesystem Config        The filesystem is configured to encrypt objects as
                         they are written out to the object-based Cloud Core
                         filer. AmpliStor storage is configured with a 20/4
                         BitSpread durability policy, yielding 181.9 TiB of
                         usable capacity.
Fileset Size             20862.2 GB
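As a rough cross-check of the capacities above: under a 20/4 BitSpread policy
each object is spread across 20 parts, any 4 of which may be lost, so 16 of
every 20 stored parts carry unique data (80% storage efficiency). The Python
sketch below shows that arithmetic; treating the raw-to-usable conversion
this simply is an assumption, since the report does not break out metadata
overhead.

    # Approximate usable capacity under an n-part / s-spread erasure code
    # (illustrative arithmetic only; not Amplidata's sizing tool).
    def usable_tib(raw_tb, n_parts=20, spread=4):
        raw_tib = raw_tb * 1e12 / 2**40            # decimal TB -> binary TiB
        efficiency = (n_parts - spread) / n_parts  # 16/20 = 0.80 for 20/4
        return raw_tib * efficiency

    # 96 x 3 TB AS36 disks: 259.9 TB raw (Disks and Filesystems table above)
    print(round(usable_tib(259.9), 1))             # ~189.1 TiB

The reported 181.9 TiB usable figure is somewhat lower than this ideal 189.1
TiB, consistent with some capacity being consumed by filesystem and object
metadata.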
Network Configuration
=====================
Item                       Number of
 No  Network Type          Ports Used  Notes
---  --------------------  ----------  ------------------------------------
  1  10 Gigabit Ethernet   3           One 10 Gigabit Ethernet port used
                                       for each FXT 3800 Edge filer.
  2  10 Gigabit Ethernet   3           The Amplidata AC4 systems each have
                                       a 10 Gigabit Ethernet port.
  3  1 Gigabit Ethernet    8           The Amplidata AS36 systems each have
                                       a 1 Gigabit Ethernet port.

Network Configuration Notes
---------------------------
Each FXT 3800 was attached via a single 10 GbE port to one Gnodal GS7200
72-port 10 GbE switch. The load generator client was attached to the same
switch via a 10 GbE interface. The Amplidata AmpliStor object-based storage
system was connected to both the 10 GbE and 1 GbE switches: the AC4
Controller nodes were attached to the network via 10 Gigabit Ethernet, and
the AS36 Storage nodes via 1 Gigabit Ethernet. A 1500 byte MTU was used on
the network.

Benchmark Network
=================
An MTU size of 1500 was set for all connections to the switch. The Gnodal and
ProCurve switches were connected with a 10 GbE inter-switch link. The load
generator was connected to the network via a single 10 GbE port. The SUT was
configured with 3 separate IP addresses on one subnet; each Avere Edge filer
node was connected via a 10 GbE NIC and sponsored 1 IP address.

Processing Elements
===================
Item
 No  Qty  Type  Description                  Processing Function
---  ---  ----  ---------------------------  --------------------------------
  1    6  CPU   Intel Xeon E5645 2.40 GHz    FXT 3800: AvereOS, network,
                Hex-Core Processor           NFS/CIFS, filesystem, device
                                             drivers.
  2    6  CPU   Intel Xeon E5-2650 2.0 GHz   Amplidata AmpliStor AC4
                8-Core Processor             Controller Node: HTTP/REST server
                                             and distributed erasure coding.
  3    8  CPU   Intel Xeon E3-1220L V2       Amplidata AmpliStor AS36 Storage
                2.3 GHz Dual-Core Processor  Node: storage component of the
                                             AmpliStor Optimized Object
                                             Storage system.

Processing Element Notes
------------------------
Each Avere Edge filer node has two physical processors. Each AmpliStor AC4
Controller Node has two physical processors. Each AmpliStor AS36 Storage Node
has one physical processor.

Memory
======
                                                  Size   Number of  Total  Non-
Description                                       in GB  Instances  GB     volatile
------------------------------------------------  -----  ---------  -----  --------
FXT 3800 system memory                            144    3          432    V
FXT 3800 NVRAM                                    2      3          6      NV
Amplidata AmpliStor AS36 storage system memory    8      8          64     V
Amplidata AmpliStor AC4 controller system memory  64     3          192    V
Grand Total Memory Gigabytes                                        694

Memory Notes
------------
Each FXT node has main memory that is used for the operating system and for
caching filesystem data. Each FXT node contains two (2)
super-capacitor-backed NVRAM modules used to provide stable storage for
writes that have not yet been written to disk. Each AmpliStor node has system
memory that is used for the operating system and for managing the object
namespace.

Stable Storage
==============
The Avere filesystem logs writes and metadata updates to the NVRAM module.
Filesystem-modifying NFS operations are not acknowledged until the data has
been safely stored in NVRAM. The super-capacitor backing the NVRAM ensures
that any uncommitted transactions are committed to persistent flash memory on
the NVRAM card in the event of power loss. The AmpliStor system, configured
with a 20/4 BitSpread policy, acknowledges PUT requests once 16 out of 20
parts of the erasure-coded object data have been synchronously committed to
disk.
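The acknowledgement rule described under Stable Storage is a quorum write: a
PUT completes once 16 of the 20 erasure-coded parts are durably committed, at
which point the stored object can already survive the 4-part loss budget of
the 20/4 policy. The Python sketch below shows that pattern, assuming a
hypothetical store_part(index, data) call in place of the real storage-node
protocol; it is illustrative, not AmpliStor code.

    from concurrent.futures import ThreadPoolExecutor, as_completed

    N_PARTS = 20                 # parts per object under the 20/4 policy
    SPREAD  = 4                  # parts that may be lost without data loss
    QUORUM  = N_PARTS - SPREAD   # 16 commits required before acknowledging

    def put_object(parts, store_part):
        """Acknowledge a PUT once QUORUM of the encoded parts are durable.

        parts: list of N_PARTS encoded byte strings.
        store_part: hypothetical callable (index, data) that blocks until
        one part is committed on a storage node.
        """
        pool = ThreadPoolExecutor(max_workers=N_PARTS)
        futures = [pool.submit(store_part, i, p) for i, p in enumerate(parts)]
        committed = 0
        for future in as_completed(futures):
            future.result()          # surface any storage-node failure
            committed += 1
            if committed >= QUORUM:
                break                # 16 of 20 parts durable: safe to ack
        pool.shutdown(wait=False)    # last parts finish in the background
        return "acknowledged"

Acknowledging at the quorum rather than at all 20 parts keeps PUT latency
insulated from the slowest storage nodes while preserving the policy's
durability guarantee.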
System Under Test Configuration Notes
=====================================
The system under test consisted of (3) Avere FXT 3800 nodes, each attached to
the network via 10 Gigabit Ethernet. Each FXT 3800 node contains (13) 600 GB
SAS disks and (2) 400 GB eMLC SSD drives. The Amplidata AmpliStor
object-based Cloud Core filer storage system consists of AC4 Controller nodes
and AS36 Storage nodes; the controller nodes were attached to the network via
10 Gigabit Ethernet, and the storage nodes via 1 Gigabit Ethernet.

Other System Notes
==================
N/A

Test Environment Bill of Materials
==================================
Item No  Qty  Vendor      Model/Name       Description
-------  ---  ----------  ---------------  --------------------------------
      1    1  Supermicro  SYS-1026T-6RFT+  Supermicro server with 48 GB of
                                           RAM running CentOS 6.4 (Linux
                                           2.6.32-358.0.1.el6.x86_64)
      2    1  Gnodal      GS7200           Gnodal 72-port 10 GbE switch;
                                           72 SFP/SFP+ ports
      3    1  HP          2900-48G         HP ProCurve 48-port 1 GbE switch
                                           with 10 GbE uplink module

Load Generators
===============
LG Type Name                  LG1
BOM Item #                    1
Processor Name                Intel Xeon E5645 2.40 GHz Hex-Core Processor
Processor Speed               2.40 GHz
Number of Processors (chips)  2
Number of Cores/Chip          6
Memory Size                   48 GB
Operating System              CentOS 6.4 (Linux 2.6.32-358.0.1.el6.x86_64)
Network Type                  Intel Corporation 82599EB 10-Gigabit SFI/SFP+

Load Generator (LG) Configuration
=================================

Benchmark Parameters
--------------------
Network Attached Storage Type  NFS V3
Number of Load Generators      1
Number of Processes per LG     768
Biod Max Read Setting          2
Biod Max Write Setting         2
Block Size                     0

Testbed Configuration
---------------------
                         Target
LG No  LG Type  Network  Filesystems  Notes
-----  -------  -------  -----------  ----------------------------------
1..1   LG1      1        /Amplidata   The LG1 node is connected to the
                                      same Gnodal GS7200 network switch.

Load Generator Configuration Notes
----------------------------------
All clients mounted the single filesystem from all FXT nodes.

Uniform Access Rule Compliance
==============================
The load generator client hosted 768 processes. The 768 processes were
assigned to the 3 network interfaces such that they were evenly divided
across all network paths to the FXT appliances. The filesystem data was
evenly distributed across all disks and Avere Edge filer FXT appliances.

Other Notes
===========
N/A

================================================================================
Generated on Tue Apr 29 08:58:39 2014 by SPECsfs2008 ASCII Formatter
Copyright (C) 1997-2008 Standard Performance Evaluation Corporation