                             SPECsfs97_R1.v3 Result
===============================================================================
Panasas, Inc.      : Panasas ActiveScale storage cluster (60 DirectorBlades)
SPECsfs97_R1.v3    = 305805 Ops/Sec (Overall Response Time = 1.76 msec)
===============================================================================

   Throughput   Response
     ops/sec        msec
       29960         0.5
       60864         0.8
       91010         0.9
      120848         1.1
      151840         1.2
      182446         1.3
      212602         1.6
      243676         2.1
      274040         4.0
      305805         7.8
===============================================================================

Server Configuration and Availability
   Vendor                      Panasas, Inc.
   Hardware Available          October 2003
   Software Available          January 2004
   Date Tested                 October 2003
   SFS License number          250
   Licensee Location           Fremont, CA

CPU, Memory and Power
   Model Name                  Panasas ActiveScale storage cluster
                               (60 DirectorBlades)
   Processor                   2.4 GHz Intel LV Xeon
   # of Processors             60 (1 per DirectorBlade)
   Primary Cache               12K uops I + 8KB D on-chip
   Secondary Cache             512KB on-chip
   Other Cache                 N/A
   UPS                         Integrated into Panasas Shelf
   Other Hardware              N/A
   Memory Size                 240 GB (4 GB per DirectorBlade)
   NVRAM Size                  All memory is UPS protected
   NVRAM Type                  UPS-protected DRAM
   NVRAM Description           Cache commit with UPS protection, flushed to
                               local disk on failure, with recovery software
                               (see notes)

Server Software
   OS Name and Version         Panasas ActiveScale V 1.2
   Other Software              N/A
   File System                 Panasas ActiveScale File System
   NFS version                 3

Server Tuning
   Buffer Cache Size           Dynamic
   # NFS Processes             3000 (50 per DirectorBlade)
   Fileset Size                2900.9 GB

Network Subsystem
   Network Type                Gigabit Ethernet (standard 1500 byte frames)
   Network Controller Desc.    Integrated Broadcom NetXtreme BCM5703
   Number Networks             1 (N1)
   Number Network Controllers  60 (1 per DirectorBlade)
   Protocol Type               TCP
   Switch Type                 Extreme BlackDiamond 6816
   Bridge Type                 N/A
   Hub Type                    N/A
   Other Network Hardware      N/A

Disk Subsystem and Filesystems
   Number Disk Controllers     60 (each GE NIC is also an iSCSI HBA)
   Number of Disks             540 (270 OSD StorageBlades - 2 disks per blade)
   Number of Filesystems       60 (see notes)
   File System Creation Ops    default
   File System Config          10 Bladesets, RAID 5 per file, RAID width of
                               10 blades (see notes)

   Disk Controller             Integrated Broadcom NetXtreme BCM5703
                               (as iSCSI HBA and host NIC)
   # of Controller Type        18
   Number of Disks             324 (162 OSD StorageBlades - 2 disks per blade)
   Disk Type                   100300-001 240 GB StorageBlade (two 120 GB,
                               7200 RPM, S-ATA disks per blade)
   File Systems on Disks       V1 .. V36
   Special Config Notes        see notes

   Disk Controller             Integrated Broadcom NetXtreme BCM5703
                               (as iSCSI HBA and host NIC)
   # of Controller Type        12
   Number of Disks             216 (108 OSD StorageBlades - 2 disks per blade)
   Disk Type                   100100-001 160 GB StorageBlade (two 80 GB,
                               7200 RPM, ATA/100 disks per blade)
   File Systems on Disks       V37 .. V60
   Special Config Notes        see notes

Load Generator (LG) Configuration
   Number of Load Generators   45
   Number of Processes per LG  60
   Biod Max Read Setting       2
   Biod Max Write Setting      2

   LG Type                     LG1
   LG Model                    ASA SuperMicro SuperServer
   Number and Type Processors  2.4 GHz Intel Pentium 4 Xeon
   Memory Size                 1024 MB
   Operating System            Red Hat Linux 7.3, kernel 2.4.21
   Compiler                    gcc 2.96
   Compiler Options            -O -DNO_T_TYPES -DUSE_INTTYPES
   Network Type                Intel Pro/1000 Gigabit Ethernet, MTU = 1500

Testbed Configuration
   LG #        LG Type   Network   Target File Systems   Notes
   ----        -------   -------   -------------------   -----
   1/2/../45   LG1       N1        V1, V2, .. V60        N/A
===============================================================================

Notes and Tuning

Configuration:

The Panasas ActiveScale storage cluster under test comprised 270
StorageBlades and 60 DirectorBlades joined over a Gigabit Ethernet network.

Each DirectorBlade contained one processor, 4 GB of memory, a Gigabit
Ethernet port, and a local disk, and provided metadata management for the
storage cluster and NFS access to the Panasas ActiveScale filesystem (PanFS).

Each DirectorBlade stores filesystem data and metadata on StorageBlades
using the Object-based Storage Device (OSD) protocol, transported over iSCSI
over Gigabit Ethernet.

Each StorageBlade contained one processor, 512 MB of memory, a Gigabit
Ethernet port, and two local disks (RAID 0) storing filesystem objects, and
ran the OSD protocol module over Gigabit Ethernet. In total, across the 270
StorageBlades, the storage subsystem contained 270 processors, 135 GB of
memory, and 270 Gigabit Ethernet ports.

All load generators, DirectorBlades, and StorageBlades were interconnected
via an Extreme BlackDiamond 6816 with Gigabit Ethernet and standard 1500
byte frames.

Each rack-mounted shelf contained 9 StorageBlades, 2 DirectorBlades, a UPS
(redundant power supplies and a battery), and a Gigabit Ethernet switch card
that linked the 11 blade slots to one Gigabit Ethernet uplink.

The cluster was configured with 10 Bladesets, each grouping 27 StorageBlades
and able to recover from any single StorageBlade failure within the
Bladeset. Each file is split into at most 10 objects, one object per
StorageBlade, and RAID 5 is computed over the objects that comprise the file
(a schematic sketch of this striping follows these configuration notes).

Each virtual volume stores objects on any blade in its Bladeset as needed.

The cluster was configured with 60 virtual volumes, 6 per Bladeset. Each
virtual volume was a subdirectory of the single namespace, mapped under the
system root (/V1, /V2 .. /V60).

Each DirectorBlade could serve any of the volumes over NFS and was the owner
of one virtual volume.
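The following Python sketch illustrates the per-file striping scheme just
described. It is an illustration only, not Panasas code: the stripe-unit
size, the parity rotation policy, and all names are invented for the
example. The point is that a file's data is divided among up to 10 component
objects (one per StorageBlade) with XOR parity computed across them, so any
single lost object can be rebuilt from the survivors.

    RAID_WIDTH = 10          # component objects per file, one per StorageBlade
    STRIPE_UNIT = 64 * 1024  # hypothetical stripe-unit size in bytes

    def stripe_file(data: bytes, width: int = RAID_WIDTH):
        """Split file data into width-1 data units plus one XOR parity
        unit per stripe, rotating the parity position (RAID 5 style)."""
        objects = [bytearray() for _ in range(width)]
        per_stripe = (width - 1) * STRIPE_UNIT

        for stripe, off in enumerate(range(0, len(data), per_stripe)):
            chunk = data[off:off + per_stripe].ljust(per_stripe, b"\0")
            units = [chunk[i * STRIPE_UNIT:(i + 1) * STRIPE_UNIT]
                     for i in range(width - 1)]

            # XOR all data units together to form the parity unit.
            parity = bytearray(STRIPE_UNIT)
            for unit in units:
                for i, byte in enumerate(unit):
                    parity[i] ^= byte

            # Rotate the parity unit across the component objects.
            p = stripe % width
            data_targets = [i for i in range(width) if i != p]
            for target, unit in zip(data_targets, units):
                objects[target] += unit
            objects[p] += parity

        return objects

    # Losing any one object (one StorageBlade) leaves width-1 units per
    # stripe, and XOR-ing them reconstructs the missing unit -- the
    # single-failure recovery the Bladeset configuration provides.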
Stable Storage:

All memory in the DirectorBlade was used by the system for general-purpose
memory and a dynamically sized data cache.

Each shelf contained an integral UPS that provides sufficient power, in the
event of AC power loss, to flush the DirectorBlade and StorageBlade caches
to their local disks. Recovery software in the DirectorBlade recovers all
committed data and metadata operations when power is restored. (A schematic
sketch of this commit-and-flush discipline follows the mount list below.)

Uniform Access Requirements (UAR):

Each client uniformly mounted all virtual volumes V1-V60, one process per
volume; a short script reproducing this assignment also follows the mount
list below.

db1:/V1, db2:/V2, db3:/V3, db4:/V4, db5:/V5, db6:/V6, db7:/V7, db8:/V8,
db9:/V9, db10:/V10,
db11:/V11, db12:/V12, db13:/V13, db14:/V14, db15:/V15, db16:/V16, db17:/V17,
db18:/V18, db19:/V19, db20:/V20,
db21:/V21, db22:/V22, db23:/V23, db24:/V24, db25:/V25, db26:/V26, db27:/V27,
db28:/V28, db29:/V29, db30:/V30,
db31:/V31, db32:/V32, db33:/V33, db34:/V34, db35:/V35, db36:/V36, db37:/V37,
db38:/V38, db39:/V39, db40:/V40,
db41:/V41, db42:/V42, db43:/V43, db44:/V44, db45:/V45, db46:/V46, db47:/V47,
db48:/V48, db49:/V49, db50:/V50,
db51:/V51, db52:/V52, db53:/V53, db54:/V54, db55:/V55, db56:/V56, db57:/V57,
db58:/V58, db59:/V59, db60:/V60.

This mounting pattern assured that all DirectorBlades and all StorageBlades
observed uniform load from all clients.
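As a schematic of the stable-storage discipline described above (a sketch
under assumptions, not Panasas's implementation; the flush path and function
names are invented), the sequence is: acknowledge an operation only after it
is committed to UPS-protected DRAM, flush that cache to local disk when AC
power is lost, and reload the flushed state on recovery.

    import json
    import os

    CACHE = {}                      # stands in for UPS-protected DRAM
    FLUSH_PATH = "/tmp/nvram.dump"  # hypothetical local-disk flush target

    def commit(op_id: str, payload: str) -> str:
        """An operation is stable, and may be acknowledged to the NFS
        client, once it reaches the battery-backed cache."""
        CACHE[op_id] = payload
        return "ACK"

    def on_power_loss() -> None:
        """The shelf UPS keeps the blades powered long enough to flush
        the cache contents to local disk."""
        with open(FLUSH_PATH, "w") as f:
            json.dump(CACHE, f)
            f.flush()
            os.fsync(f.fileno())

    def on_recovery() -> dict:
        """When power returns, recovery software reloads every committed
        operation, so no acknowledged write is lost."""
        with open(FLUSH_PATH) as f:
            return json.load(f)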
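The mount assignment above follows a simple rule: process j on every load
generator works against dbj:/Vj. A few lines of Python (illustrative only;
the host and volume names are those listed in the notes) reproduce the full
list and make the uniformity evident:

    NUM_CLIENTS = 45   # load generators
    NUM_VOLUMES = 60   # processes per client, one per virtual volume

    for client in range(1, NUM_CLIENTS + 1):
        for proc in range(1, NUM_VOLUMES + 1):
            print(f"client {client:2d}, process {proc:2d}"
                  f" -> db{proc}:/V{proc}")

    # Every volume is mounted by exactly one process on each of the 45
    # clients, so each DirectorBlade sees identical load from every client
    # (45 clients x 60 processes = 2700 load-generating processes).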
The single-namespace UAR requirement was not met by the configuration of
this SPECsfs97_R1 test, even though the system under test is a
single-namespace cluster. Accordingly, we report this test as a
60-filesystem cluster, consistent with other multiple-node tests of systems
without single-namespace capabilities.

Panasas, ActiveScale storage cluster, PanFS, DirectorBlade, and StorageBlade
are trademarks of Panasas, Inc.
===============================================================================
Generated on Wed Dec 10 14:32:28 EST 2003 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2002 Standard Performance Evaluation Corporation