Avere Systems, Inc. | : | FXT 3800 (3 Node Cluster, Cleversafe dsNet Cloud Storage Config) |
SPECsfs2008_nfs.v3 | = | 180394 Ops/Sec (Overall Response Time = 0.89 msec) |
|
Tested By | Avere Systems, Inc. |
---|---|
Product Name | FXT 3800 (3 Node Cluster, Cleversafe dsNet Cloud Storage Config) |
Hardware Available | July 2013 |
Software Available | March 2014 |
Date Tested | February 20 2014 |
SFS License Number | 9020 |
Licensee Locations | Pittsburgh, PA USA |
The Avere Systems FXT 3800 Edge filer running AvereOS V3.2 with FlashCloud™ provides Cloud NAS storage that enables performance scalability at the edge while leveraging object-based Cloud storage capacity at the core. The Cleversafe Dispersed Storage Technology object-based storage solution uses information dispersal algorithms to expand, virtualize, transform, slice and disperse data across a network of storage nodes. The AvereOS Hybrid NAS software dynamically organizes hot data into RAM, SSD and SAS tiers, retaining active data on the FXT Edge filer and placing inactive data on the object-based Cloud Core filer.

The FXT Edge filer managed by AvereOS software provides a global namespace, scales to clusters of as many as 50 nodes, supports millions of I/O operations per second, and delivers over 100 GB/s of I/O bandwidth. The FXT 3800 is built on a 64-bit architecture that provides sub-millisecond responses to NFSv3 and CIFS client requests consisting of read, write, and directory/metadata update operations. The AvereOS FlashCloud functionality enables the Avere Edge filer to immediately acknowledge all filesystem requests and to flush inactive directory data and user data to the object-based Cloud Core filer.

The tested Edge filer configuration consisted of (3) FXT 3800 nodes backed by object-based Cloud Core filer storage. The object-based Cloud Core filer storage configuration, a Cleversafe dsNet system, consisted of (1) dsNet Manager, (2) Accesser 3100 nodes, and (9) Slicestor 2210 nodes.
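The hot-data tiering described above can be sketched as a multi-level LRU cache in which touched blocks are promoted to the fastest tier and the coldest blocks demote toward capacity tiers. This is an illustrative sketch only; AvereOS's actual placement logic is not public, and the tier names and toy capacities are assumptions.

```python
# Illustrative sketch (not AvereOS code) of hot-data tiering: recently
# accessed blocks stay in the fastest tier and cold blocks demote toward
# capacity tiers, mirroring the RAM -> SSD -> SAS ordering described above.
from collections import OrderedDict

class TieredCache:
    def __init__(self, sizes):
        # sizes, e.g. {"RAM": 2, "SSD": 4, "SAS": 8}; toy capacities in blocks
        self.tiers = [(name, cap, OrderedDict()) for name, cap in sizes.items()]

    def access(self, block):
        """Touch a block: it becomes hot and moves to the top tier."""
        for _, _, store in self.tiers:
            store.pop(block, None)
        self._insert(0, block)

    def _insert(self, level, block):
        _, cap, store = self.tiers[level]
        store[block] = True                       # newest entry is hottest
        if len(store) > cap:                      # evict coldest, demote down
            cold, _ = store.popitem(last=False)
            if level + 1 < len(self.tiers):
                self._insert(level + 1, cold)

    def locate(self, block):
        """Return the tier currently holding the block, or None."""
        for name, _, store in self.tiers:
            if block in store:
                return name
        return None
```

With the toy capacities above, accessing blocks 0 through 4 in order leaves the two most recent blocks in the "RAM" tier and demotes the oldest to "SSD".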
Item No | Qty | Type | Vendor | Model/Name | Description |
---|---|---|---|---|---|
1 | 3 | Storage Appliance | Avere Systems, Inc. | FXT 3800 | Avere Systems Cloud NAS Edge filer running AvereOS V3.2 software. Includes (13) 600 GB SAS Disks and (2) 400GB SSD Drives. |
2 | 9 | Dispersed Storage Servers | Cleversafe, Inc. | Slicestor 2210 | Storage for slices - Each Slicestor includes (12) 3TB SATA disks with 36TB total raw capacity. |
3 | 2 | Dispersed Storage Router | Cleversafe, Inc. | Accesser 3100 | Access node to slice, disperse and retrieve data |
4 | 1 | Dispersed Storage Manager | Cleversafe, Inc. | dsNet Manager 2100 | Provides comprehensive management, operation and maintenance functions. |
OS Name and Version | AvereOS V3.2 |
---|---|
Other Software | Object-based Cloud Core filer is running ClevOS 3.1.2 |
Filesystem Software | AvereOS V3.2 |
Name | Value | Description |
---|---|---|
Writeback Time | 12 hours | Files may be modified up to 12 hours before being written back to the object-based Cloud Core filer. |
buf.autoTune | 0 | Statically size FXT memory caches. |
buf.InitialBalance.smallFilePercent | 57 | Tune smallFile buffer to use 57 percent of memory pages. |
buf.InitialBalance.largeFilePercent | 10 | Tune largeFile buffers to use 10 percent of memory pages. |
cfs.randomWindowSize | 32 | Increase the size of random IOs from disk. |
cluster.dirMgrConnMult | 12 | Multiplex directory manager network connections. |
dirmgrSettings.unflushedFDLRThresholdToStartFlushing | 40000000 | Increase the directory log size. |
dirmgrSettings.maxNumFdlrsPerNode | 120000000 | Increase the directory log size. |
dirmgrSettings.maxNumFdlrsPerLog | 1500000 | Increase the directory log size. |
dirmgrSettings.balanceAlgorithm | 1 | Balance directory manager objects across the cluster. |
tokenmgrs.geoXYZ.fcrTokenSupported | no | Disable the use of full control read tokens. |
tokenmgrs.geoXYZ.trackContentionOn | no | Disable token contention detection. |
tokenmgrs.geoXYZ.lruTokenThreshold | 15400000 | Set a threshold for token recycling. |
tokenmgrs.geoXYZ.maxTokenThreshold | 15500000 | Set maximum token count. |
vcm.readdir_readahead_mask | 0x3000 | Optimize readdir performance. |
vcm.disableAgressiveFhpRecycle | 1 | Disable optimistic filehandle recycling. |
vcm.readdirInvokesReaddirplus | 0 | Disable optimistic trigger of client readdir calls to readdirplus fetches. |
initcfg:cfs.num_inodes | 28000000 | Increase in-memory inode structures. |
initcfg:vcm.fh_cache_entries | 16000000 | Increase in-memory filehandle cache structures. |
initcfg:vcm.name_cache_entries | 18000000 | Increase in-memory name cache structures. |
None
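The Writeback Time entry in the table above sets how long modified data may remain on the Edge filer before it must be flushed to the Core filer. A hedged sketch of such a policy follows; class and method names are illustrative, not AvereOS internals.

```python
# Hedged sketch of the 12-hour writeback window: a dirty file becomes
# eligible for flushing to the Core filer once its oldest unflushed write
# has aged past the window. Names here are illustrative only.
class WritebackQueue:
    def __init__(self, writeback_s=12 * 3600):
        self.writeback_s = writeback_s
        self.dirty = {}                   # path -> time of first unflushed write

    def write(self, path, now):
        self.dirty.setdefault(path, now)  # window starts at first dirtying

    def due_for_flush(self, now):
        """Paths whose oldest unflushed write has aged past the window."""
        return [p for p, t0 in sorted(self.dirty.items())
                if now - t0 >= self.writeback_s]

q = WritebackQueue()
q.write("/export/a", now=0)
q.write("/export/b", now=40_000)
```

At 43,200 seconds (12 hours), only the first file is due; the second becomes due once its own write has aged 12 hours.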
Description | Number of Disks | Usable Size |
---|---|---|
Each FXT 3800 node contains (13) 600 GB 10K RPM SAS disks. All FXT data resides on these disks. | 39 | 21.3 TB |
Each FXT 3800 node contains (2) 400 GB eMLC SSD disks. All FXT data resides on these disks. | 6 | 2.2 TB |
Each FXT 3800 node contains (1) 250 GB SATA disk. System disk. | 3 | 698.9 GB |
The 9-node Cleversafe Slicestor object storage system contains (108) 4 TB 5900 RPM SATA disks. | 108 | 389.9 TB |
The 9-node Cleversafe Slicestor system contains (9) 250GB system disks. | 9 | 2.0 TB |
The 2-node Cleversafe Accesser system contains (2) 500GB system disks. | 2 | 931.5 GB |
The Cleversafe Manager system contains (2) 1TB mirrored system disks. | 2 | 921.6 GB |
Total | 169 | 417.9 TB |
Number of Filesystems | 1 |
---|---|
Total Exported Capacity | 219972 GiB (Object-based Cloud Core filer storage system capacity) |
Filesystem Type | FlashCloud Filesystem |
Filesystem Creation Options |
AvereAPI command: corefiler.createCloudFiler('Cleversafe', {'cloudType': 'cleversafe', 'bucket': 'behn-cloudboi', 'serverName': 'cleversafe-ana.cc.avere.net', 'cloudCredential': 'cleverCred', 'cryptoMode': 'CBC-AES-256-HMAC-SHA-512'}). The Cleversafe system was restored to factory defaults, a new storage pool configured, and a new vault configured for use by the FlashCloud Core Filer. |
Filesystem Config |
Single file system exported via global name space. The filesystem is configured to encrypt objects as they are written out to the object-based Cloud Core filer. Cleversafe Vault is configured with a 5/9 protection threshold and a 214TiB quota. |
Fileset Size | 20862.2 GB |
Item No | Network Type | Number of Ports Used | Notes |
---|---|---|---|
1 | 10 Gigabit Ethernet | 3 | One 10 Gigabit Ethernet port used for each FXT 3800 Edge filer. |
2 | 10 Gigabit Ethernet | 2 | The Cleversafe Accesser systems each have a 10 Gigabit Ethernet port. |
3 | 1 Gigabit Ethernet | 10 | The Cleversafe Slicestor and dsNet Manager systems each have a 1 Gigabit Ethernet port. |
Each FXT 3800 was attached via a single 10 GbE port to one Gnodal GS7200 72 port 10 GbE switch. The load generating client was attached to the same switch via 10 GbE interface. The Cleversafe dsNet object-based storage system was connected to both 10 GbE and 1 GbE switches. The Accesser nodes were attached to the network via 10 Gigabit Ethernet. The Slicestor nodes and the dsNet Manager were attached to the network via 1 Gigabit Ethernet. A 1500 byte MTU was used on the network.
An MTU size of 1500 was set for all connections to the switch. The Gnodal and ProCurve switches were connected with a 10 GbE inter-switch link. The load generator was connected to the network via a single 10 GbE port. The SUT was configured with 3 separate IP addresses on one subnet. Each Avere Edge filer node was connected via a 10 GbE NIC and sponsored 1 IP address.
Item No | Qty | Type | Description | Processing Function |
---|---|---|---|---|
1 | 6 | CPU | Intel Xeon CPU E5645 2.40 GHz Hex-Core Processor | FXT 3800 AvereOS, Network, NFS/CIFS, Filesystem, Device Drivers |
2 | 9 | CPU | Intel Quad Core Xeon 2.66 GHz, 8 MB cache | Cleversafe Slicestor node - stores slices |
3 | 4 | CPU | Intel 8 Core Xeon 2.0 GHz, 20 MB cache | Cleversafe Accesser node - data slicing, dispersing, retrieving |
4 | 1 | CPU | Intel Quad Core Xeon 2.4 GHz, 8 MB cache | Cleversafe dsNet Manager node - storage configuration and management |
Each Avere Edge filer node has two physical processors.
Each Cleversafe Accesser node has two physical processors.
All Cleversafe Slicestor and dsNet Manager nodes have one physical processor.
Description | Size in GB | Number of Instances | Total GB | Nonvolatile |
---|---|---|---|---|
FXT 3800 System Memory | 144 | 3 | 432 | V |
FXT 3800 NVRAM | 2 | 3 | 6 | NV |
Cleversafe Slicestor storage system memory | 24 | 9 | 216 | V |
Cleversafe Accesser system memory | 128 | 2 | 256 | V |
Cleversafe dsNet Manager system memory | 16 | 1 | 16 | V |
Grand Total Memory Gigabytes | 926 |
Each FXT node has main memory that is used for the operating system and for caching filesystem data. Each FXT contains two (2) super-capacitor-backed NVRAM modules used to provide stable storage for writes that have not yet been written to disk. Each Cleversafe node has system memory that is used for the operating system and for managing the object name-space.
The Avere filesystem logs writes and metadata updates to the NVRAM module. Filesystem modifying NFS operations are not acknowledged until the data has been safely stored in NVRAM. The super-capacitor backing the NVRAM ensures that any uncommitted transactions are committed to persistent flash memory on the NVRAM card in the event of power loss. The Cleversafe dsNet system acknowledges PUT requests once 5 out of 9 dispersal blocks have been synchronously committed to disk.
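The commit rule described above (a modifying operation is acknowledged only once its record is in stable storage) can be sketched with an ordinary fsync-backed log standing in for the super-capacitor-backed NVRAM. Names and record format are illustrative, not the FXT's actual on-NVRAM layout.

```python
# Hedged sketch of stable-storage commit: acknowledge a filesystem-modifying
# operation only after its log record is durable (fsync stands in for the
# FXT's super-capacitor-backed NVRAM module).
import json, os, tempfile

class WriteAheadLog:
    def __init__(self, path):
        self.f = open(path, "ab")

    def commit(self, op):
        """Append an operation record and force it to stable storage."""
        rec = (json.dumps(op) + "\n").encode()
        self.f.write(rec)
        self.f.flush()
        os.fsync(self.f.fileno())   # record is stable: safe to ACK the client
        return "ACK"

log_path = os.path.join(tempfile.mkdtemp(), "nvlog")
wal = WriteAheadLog(log_path)
status = wal.commit({"op": "WRITE", "path": "/export/f", "len": 4096})
```

The key ordering constraint is that the `ACK` is returned only after `fsync` succeeds, so a power loss after acknowledgment can never lose the operation.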
The system under test consisted of (3) Avere FXT 3800 nodes. Each node was attached to the network via 10 Gigabit Ethernet. Each FXT 3800 node contains (13) 600 GB SAS disks and (2) 400GB eMLC SSD drives. The Cleversafe object-based Cloud Core filer storage system consists of Slicestor, Accesser and dsNet Manager systems. The Accesser nodes were attached to the network via 10 Gigabit Ethernet. The Slicestor nodes and the dsNet Manager were attached to the network via 1 Gigabit Ethernet.
N/A
Item No | Qty | Vendor | Model/Name | Description |
---|---|---|---|---|
1 | 1 | Supermicro | SYS-1026T-6RFT+ | Supermicro Server with 48GB of RAM running CentOS 6.4 (Linux 2.6.32-358.0.1.el6.x86_64) |
2 | 1 | Gnodal | GS7200 | Gnodal 72 Port 10 GbE Switch. 72 SFP/SFP+ ports |
3 | 1 | HP | 2900-48G | HP ProCurve 48 Port 1 GbE Switch. 10GbE uplink module |
LG Type Name | LG1 |
---|---|
BOM Item # | 1 |
Processor Name | Intel Xeon E5645 2.40GHz Hex-Core Processor |
Processor Speed | 2.40 GHz |
Number of Processors (chips) | 2 |
Number of Cores/Chip | 6 |
Memory Size | 48 GB |
Operating System | CentOS 6.4 (Linux 2.6.32-358.0.1.el6.x86_64) |
Network Type | Intel Corporation 82599EB 10-Gigabit SFI/SFP+ |
Network Attached Storage Type | NFS V3 |
---|---|
Number of Load Generators | 1 |
Number of Processes per LG | 768 |
Biod Max Read Setting | 2 |
Biod Max Write Setting | 2 |
Block Size | 0 |
LG No | LG Type | Network | Target Filesystems | Notes |
---|---|---|---|---|
1..1 | LG1 | 1 | /Cleversafe | LG1 node is connected to the same Gnodal GS7200 network switch. |
The load-generating client mounted the single filesystem via all FXT nodes.
The load-generating client hosted 768 processes. The assignment of 768 processes to 3 network interfaces was done such that they were evenly divided across all network paths to the FXT appliances. The filesystem data was evenly distributed across all disks and Avere Edge filer FXT appliances.
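The even division described above assigns 768 processes across 3 network paths, 256 per path. A minimal round-robin sketch follows; the node addresses are hypothetical placeholders.

```python
# Sketch of evenly dividing 768 load-generating processes across the 3
# network paths to the FXT appliances; interface names are hypothetical.
procs = list(range(768))
paths = ["fxt-node1", "fxt-node2", "fxt-node3"]
assignment = {p: [] for p in paths}
for i in procs:
    assignment[paths[i % len(paths)]].append(i)   # round-robin: 256 each
```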
N/A
Generated on Mon Apr 07 08:54:56 2014 by SPECsfs2008 HTML Formatter
Copyright © 1997-2008 Standard Performance Evaluation Corporation
First published at SPEC.org on 07-Apr-2014