SPECsfs97_R1.v3 Result

===============================================================================
GlueSys Co, Ltd.                              : AnyStor GW-C (failover Cluster)
SPECsfs97_R1.v3 = 66932 Ops/Sec (Overall Response Time = 0.64 msec)
===============================================================================

   Throughput     Response
   ops/sec        msec
      7531         0.3
     14126         0.3
     22137         0.4
     28983         0.5
     36135         0.5
     43183         0.7
     50987         0.8
     57201         1.1
     64115         1.6
     66932         2.1
===============================================================================

Server Configuration and Availability
   Vendor                       GlueSys Co, Ltd.
   Hardware Available           Nov 2007
   Software Available           Nov 2007
   Date Tested                  Nov 2007
   SFS License number           3473
   Licensee Location            Seoul, Korea

CPU, Memory and Power
   Model Name                   AnyStor GW-C (failover Cluster)
   Processor                    Intel Xeon Processor 5130 Dual Core 2.0GHz
   # of Processors              8 cores, 4 chips, 2 cores/chip (2 chips per node)
   Primary Cache                32KBI+32KBD on chip
   Secondary Cache              4MB(I+D) on chip
   Other Cache                  N/A
   UPS                          N/A
   Other Hardware               N/A
   Memory Size                  8 GB (4 GB per node)
   NVRAM Size                   256 GB
   NVRAM Type                   DIMM
   NVRAM Description            48 hour battery backed

Server Software
   OS Name and Version          GBS4.1
   Other Software               N/A
   File System                  GBFS
   NFS version                  3

Server Tuning
   Buffer Cache Size            Dynamic
   # NFS Processes              256 (128 per node)
   Fileset Size                 676.9 GB

Network Subsystem
   Network Type                 Gigabit Ethernet
   Network Controller Desc.     Jumbo frames (MTU 9000), 2 ports aggregated
   Number Networks              1 (N1)
   Number Network Controllers   8 (4 per node)
   Protocol Type                TCP
   Switch Type                  Cisco 3650
   Bridge Type                  N/A
   Hub Type                     N/A
   Other Network Hardware       N/A

Disk Subsystem and Filesystems
   Number Disk Controllers      6
   Number of Disks              260
   Number of Filesystems        32 (F1-F32)
   File System Creation Ops     default
   File System Config           noatime

   Disk Controller              LSI SAS controller
   # of Controller Type         2 (1 per node)
   Number of Disks              4 (2 per node)
   Disk Type                    Seagate Cheetah 73GB 10K RPM
   File Systems on Disks        Operating System
   Special Config Notes         RAID 1 mirroring

   Disk Controller              QLA2422 (dual port)
   # of Controller Type         4
   Number of Disks              256
   Disk Type                    146GB 15K RPM FC (128), 146GB 10K RPM FC (128)
   File Systems on Disks        F1-F32
   Special Config Notes         RAID Level 10 with 8 disks, total 32 LUNs

Load Generator (LG) Configuration
   Number of Load Generators    4
   Number of Processes per LG   32
   Biod Max Read Setting        2
   Biod Max Write Setting       2

   LG Type                      LG1
   LG Model                     Intel SE7520JR2
   Number and Type Processors   2.4GHz Xeon
   Memory Size                  1GB
   Operating System             RHEL 4.3
   Compiler                     precompiled binary
   Compiler Options             N/A
   Network Type                 Integrated Intel Pro/1000 Gigabit

Testbed Configuration
   LG #  LG Type  Network  Target File Systems                           Notes
   ----  -------  -------  -------------------                           -----
   1     LG1      N1       /v1, /v2, /v3, /v4, /v5, /v6, /v7, /v8,       N/A
                           /v9, /v10, /v11, /v12, /v13, /v14, /v15,
                           /v16, /v17, /v18, /v19, /v20, /v21, /v22,
                           /v23, /v24, /v25, /v26, /v27, /v28, /v29,
                           /v30, /v31, /v32
   2     LG1      N1       /v9, /v10, /v11, /v12, /v13, /v14, /v15,      N/A
                           /v16, /v17, /v18, /v19, /v20, /v21, /v22,
                           /v23, /v24, /v25, /v26, /v27, /v28, /v29,
                           /v30, /v31, /v32, /v1, /v2, /v3, /v4, /v5,
                           /v6, /v7, /v8
   3     LG1      N1       /v17, /v18, /v19, /v20, /v21, /v22, /v23,     N/A
                           /v24, /v25, /v26, /v27, /v28, /v29, /v30,
                           /v31, /v32, /v1, /v2, /v3, /v4, /v5, /v6,
                           /v7, /v8, /v9, /v10, /v11, /v12, /v13,
                           /v14, /v15, /v16
   4     LG1      N1       /v19, /v20, /v21, /v22, /v23, /v24, /v25,     N/A
                           /v26, /v27, /v28, /v29, /v30, /v31, /v32,
                           /v1, /v2, /v3, /v4, /v5, /v6, /v7, /v8,
                           /v9, /v10, /v11, /v12, /v13, /v14, /v15,
                           /v16, /v17, /v18
===============================================================================

Notes and Tuning
<> The tested system was a GlueSys AnyStor GW-C 2-node cluster with an HDS
<> Universal Storage Platform (USP1100/USP-V model) storage array.
<> The AnyStor GW-C nodes were directly connected with dual 1Gb Ethernet
<> cluster links for heartbeat.
<> AnyStor GW-C is a failover active/active cluster. The software version is
<> AS4.1.5.
<> Each node has four gigabit ports, attached to the PCI Express bus.
<> In each node, each pair of NIC ports was aggregated into a single link
<> (round-robin mode); a configuration sketch follows these notes.
<> There are six disk controllers (QLogic QLA2422 x 4, LSI SAS x 2), three in
<> each node.
<> The QLA2422 is a dual-port 4Gbps HBA in a PCI-E slot, connected to the USP
<> RAID controller.
<> The LSI SAS controller is onboard and connects to the SAS disks that hold
<> the operating system.
<> The USP1100 RAID controller has a total of 128GB of NVRAM (48 hour battery
<> backed).
<> The RAID is comprised of 32 LUNs; each LUN has 8 disks in RAID 10.
<> Each disk controller serves 8 LUNs.
<>
<> OS disks in each server are mirrored with software RAID 1 for failure
<> protection (see the mdadm sketch after these notes).
<>
<> Uniform Access Rule:
<> Each node has 2 network ports (N1) for NFS service.
<> Each client has one network connection for NFS service.
<> There are 32 LUNs in the RAID storage, with one file system on each LUN.
<> In total, 32 file systems (F1-F32) are used for NFS service. Each node
<> serves 16 file systems (F1-F16 / F17-F32).
<> Each LG has 32 processes; the 32 processes of each LG access F1-F32,
<> respectively.
<>
<> Filesystem Setting:
<> File systems were mounted with the noatime option.
<>
<> Server tuning (a command sketch follows these notes):
<> - Block read-ahead was disabled ("blockdev --setra 0").
<> - Jumbo frames were enabled ("ifconfig <interface> mtu 9000").
<>
<> For the "stable storage" requirement:
<> The USP1100 cache is mirrored and battery backed for up to 48 hours, and
<> data is written from the USP1100 cache to disk.
<> In the event of a power failure, the USP will flush all data in cache to
<> disk before the battery is exhausted.
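
Editor's note: the notes above describe round-robin link aggregation and jumbo
frames. Below is a minimal sketch of that setup using RHEL4-era tools; the
interface names (eth0, eth1, bond0) and the IP address are illustrative
assumptions, not details taken from the tested configuration.

    # Load the Linux bonding driver in round-robin (balance-rr) mode,
    # with link monitoring every 100 ms
    modprobe bonding mode=balance-rr miimon=100

    # Bring up the aggregated link and enslave two physical gigabit ports
    ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1

    # Enable jumbo frames (MTU 9000) on the aggregated link
    ifconfig bond0 mtu 9000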
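
Editor's note: a sketch of the per-file-system tuning listed under "Server
tuning" above, assuming a data LUN at /dev/sdb and a mount point /v1 (both
names are illustrative assumptions):

    # Disable block-device read-ahead on a data LUN
    blockdev --setra 0 /dev/sdb

    # Mount the data file system without access-time updates
    mount -o noatime /dev/sdb /v1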
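
Editor's note: a minimal sketch of the software RAID 1 mirroring described for
the OS disks, assuming the two SAS disks appear as /dev/sda1 and /dev/sdb1
(illustrative assumptions; in practice such a mirror is typically created at
OS install time):

    # Mirror the two OS disk partitions with Linux software RAID 1 (md)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1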

===============================================================================
Generated on Fri Jan 25 09:56:41 EST 2008 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2007 Standard Performance Evaluation Corporation