SPEC® MPIM2007 Result

Copyright 2006-2010 Standard Performance Evaluation Corporation

Linux Networx

LS-1,
Scali MPI Connect 5.6.1,
PathScale 3.0 compilers

SPECmpiM_peak2007 = Not Run

MPI2007 license: 021
Test date: Feb-2008
Test sponsor: Scali, Inc.
Hardware Availability: Sep-2007
Tested by: Scali, Inc.
Software Availability: Feb-2008
[Benchmark results graph omitted in this text version]

Results Table

Benchmark          Ranks  Seconds  Ratio  Seconds  Ratio  Seconds  Ratio
(Base results; three runs per benchmark. Peak was not run.)
Results appear in the order in which they were run. In the formatted report, bold underlined text marks the median measurement, which is the reported result.
104.milc 128 141 11.1  141 11.1  141 11.1 
107.leslie3d 128 417 12.5  417 12.5  416 12.5 
113.GemsFDTD 128 392 16.1  393 16.0  391 16.1 
115.fds4 128 160 12.2  161 12.1  160 12.2 
121.pop2 128 450 9.18 450 9.18 448 9.22
122.tachyon 128 245 11.4  246 11.4  245 11.4 
126.lammps 128 245 11.9  245 11.9  245 11.9 
127.wrf2 128 381 20.4  380 20.5  380 20.5 
128.GAPgeofem 128 146 14.1  147 14.1  146 14.1 
129.tera_tf 128 308 8.98 308 8.99 309 8.97
130.socorro 128 193 19.8  190 20.1  189 20.2 
132.zeusmp2 128 256 12.1  255 12.2  255 12.1 
137.lu 128 178 20.6  179 20.5  179 20.5 
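Each benchmark above was run three times, and the reported base result is taken from the median run. A minimal sketch of that median selection, using the (seconds, ratio) values copied from the 121.pop2 row above (SPEC derives each ratio as reference time divided by measured seconds, but the reference times themselves are not listed in this report):

```python
# Pick the median of the three timed runs for one benchmark,
# as the SPEC MPI2007 reporting rules do for the base result.
# (seconds, ratio) triples copied from the 121.pop2 row above.
runs = [(450, 9.18), (450, 9.18), (448, 9.22)]

# Sort by elapsed seconds; the middle element is the median run.
median_run = sorted(runs)[len(runs) // 2]
print(median_run)  # (450, 9.18)
```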
Hardware Summary
Type of System: Homogeneous
Compute Node: Linux Networx LS-1
Interconnect: InfiniBand
File Server Node: Linux Networx LS1 I/O Nodes
Total Compute Nodes: 32
Total Chips: 64
Total Cores: 128
Total Threads: 128
Total Memory: 256 GB
Base Ranks Run: 128
Minimum Peak Ranks: --
Maximum Peak Ranks: --
Software Summary
C Compiler: PathScale C Compiler 3.0
C++ Compiler: PathScale C++ Compiler 3.0
Fortran Compiler: PathScale Fortran Compiler 3.0
Base Pointers: 64-bit
Peak Pointers: Not Applicable
MPI Library: Scali MPI Connect 5.6.1-58818
Other MPI Info: IB Gold VAPI
Pre-processors: None
Other Software: None

Node Description: Linux Networx LS-1

Hardware
Number of nodes: 32
Uses of the node: compute
Vendor: Linux Networx, Inc.
Model: LS-1
CPU Name: Intel Xeon 5160
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 4
Cores per chip: 2
Threads per core: 1
CPU Characteristics: 1333 MHz FSB
CPU MHz: 3000
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 4 MB I+D on chip per chip
L3 Cache: None
Other Cache: None
Memory: 8 GB (8 x 1GB DIMMs 667 MHz)
Disk Subsystem: 250GB SAS hard drive
Other Hardware: None
Adapter: Mellanox MHGA28-XTC
PCI-Express DDR InfiniBand HCA
Number of Adapters: 1
Slot Type: PCIe x8
Data Rate: InfiniBand 4x DDR
Ports Used: 1
Interconnect Type: InfiniBand
Software
Adapter: Mellanox MHGA28-XTC
PCI-Express DDR InfiniBand HCA
Adapter Driver: IBGD 1.8.2
Adapter Firmware: 5.1.4
Operating System: SLES9 SP3
Local File System: Not applicable
Shared File System: GPFS
System State: multi-user
Other Software: None

Node Description: Linux Networx LS1 I/O Nodes

Hardware
Number of nodes: 8
Uses of the node: file server
Vendor: Linux Networx, Inc.
Model: LS1
CPU Name: Intel Xeon 5150
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 4
Cores per chip: 2
Threads per core: 1
CPU Characteristics: 1333 MHz FSB
CPU MHz: 2660
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 4 MB I+D on chip per chip
L3 Cache: None
Other Cache: None
Memory: 4 GB (4 x 1GB DIMMs 667 MHz)
Disk Subsystem: 18 TB SAN interconnected by FC4
Other Hardware: None
Adapter: Mellanox MHGA28-XTC
PCI-Express DDR InfiniBand HCA
Number of Adapters: 1
Slot Type: PCIe x8
Data Rate: InfiniBand 4x DDR
Ports Used: 1
Interconnect Type: InfiniBand
Software
Adapter: Mellanox MHGA28-XTC
PCI-Express DDR InfiniBand HCA
Adapter Driver: IBGD 1.8.2
Adapter Firmware: 5.2.0
Operating System: SLES9 SP3
Local File System: Not applicable
Shared File System: GPFS
System State: multi-user
Other Software: None

Interconnect Description: InfiniBand

Hardware
Vendor: QLogic
Model: QLogic Silverstorm 9120 Fabric Director
Switch Model: 9120
Number of Switches: 1
Number of Ports: 144
Data Rate: InfiniBand 4x SDR and InfiniBand 4x DDR
Firmware: 4.1.1.1.11
Topology: Single switch (star)
Primary Use: MPI and filesystem traffic

General Notes

The following approved source alterations (srcalts) are used:
   tera_tf - fixbuffer
   wrf2    - fixcalling

Base Compiler Invocation

C benchmarks:

 /opt/scali/bin/mpicc -ccl pathcc 

C++ benchmarks:

126.lammps:  /opt/scali/bin/mpicc -ccl pathCC 

Fortran benchmarks:

 /opt/scali/bin/mpif77 -ccl pathf90 

Benchmarks using both Fortran and C:

 /opt/scali/bin/mpicc -ccl pathcc   /opt/scali/bin/mpif77 -ccl pathf90 

Base Portability Flags

104.milc:  -DSPEC_MPI_LP64 
115.fds4:  -DSPEC_MPI_LC_TRAILING_DOUBLE_UNDERSCORE   -DSPEC_MPI_LP64 
121.pop2:  -DSPEC_MPI_DOUBLE_UNDERSCORE   -DSPEC_MPI_LP64 
122.tachyon:  -DSPEC_MPI_LP64 
127.wrf2:  -DF2CSTYLE   -DSPEC_MPI_DOUBLE_UNDERSCORE   -DSPEC_MPI_LINUX   -DSPEC_MPI_LP64 
128.GAPgeofem:  -DSPEC_MPI_LP64 
130.socorro:  -fno-second-underscore   -DSPEC_MPI_LP64 
132.zeusmp2:  -DSPEC_MPI_LP64 

Base Optimization Flags

C benchmarks:

 -march=core   -O3   -OPT:Ofast 

C++ benchmarks:

126.lammps:  -march=core   -O3   -OPT:Ofast   -CG:local_fwd_sched=on 

Fortran benchmarks:

 -march=core   -O3   -OPT:Ofast   -LANG:copyinout=off 

Benchmarks using both Fortran and C:

 -march=core   -O3   -OPT:Ofast   -LANG:copyinout=off 

The flags files that were used to format this result can be browsed at
http://www.spec.org/mpi2007/flags/MPI2007_flags.20080611.html,
http://www.spec.org/mpi2007/flags/MPI2007_flags.0.html.

You can also download the XML flags sources by saving the following links:
http://www.spec.org/mpi2007/flags/MPI2007_flags.20080611.xml,
http://www.spec.org/mpi2007/flags/MPI2007_flags.0.xml.