SPEC® MPIM2007 Result

Copyright 2006-2010 Standard Performance Evaluation Corporation

Linux Networx

LS-1,
Scali MPI Connect 5.6.1,
Intel 10.1 compilers

SPECmpiM_peak2007 = Not Run

MPI2007 license: 021
Test date: Feb-2008
Test sponsor: Scali, Inc.
Hardware Availability: Sep-2007
Tested by: Scali, Inc.
Software Availability: Feb-2008

SPEC has determined that this result was not in compliance with the SPEC
MPI2007 run and reporting rules. Specifically, the result did not meet the
requirement that baseline optimization flags must not include assertion
flags; the flag -fno-alias violates this rule. The result was found to be
performance neutral compared to runs without -fno-alias. Replacement
results could not be produced because of system access limitations.
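
For context, -fno-alias tells the Intel compiler to assume that no pointers
in the program alias one another. That is an assertion about the source code
rather than a pure optimization, which is why the baseline rules disallow it:
if the assertion is false, the compiler may reorder or vectorize memory
accesses and change the program's results. A minimal sketch of the kind of
pattern such an assertion affects (hypothetical code, not drawn from the
benchmarks):

    #include <stdio.h>

    /* Under a no-alias assertion the compiler may assume 'dst' and 'src'
     * never overlap and load src[i] values ahead of the stores to dst[i].
     * If a caller passes overlapping pointers, the transformed loop can
     * produce different results than plain C semantics require. */
    static void scale_copy(double *dst, const double *src, double s, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = s * src[i];
    }

    int main(void)
    {
        double a[5] = {1, 2, 3, 4, 5};

        /* Overlapping call: dst = a+1, src = a.  The no-alias assertion
         * tells the compiler this never happens. */
        scale_copy(a + 1, a, 2.0, 4);

        for (int i = 0; i < 5; i++)
            printf("%g ", a[i]);
        printf("\n");
        return 0;
    }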

Benchmark results graph

Results Table

Benchmark        Base                                                        Peak
                 Ranks  Seconds  Ratio  Seconds  Ratio  Seconds  Ratio       Ranks  Seconds  Ratio  Seconds  Ratio  Seconds  Ratio
Results appear in the order in which they were run. Bold underlined text indicates a median measurement.
104.milc NC NC NC NC NC NC NC
107.leslie3d NC NC NC NC NC NC NC
113.GemsFDTD NC NC NC NC NC NC NC
115.fds4 NC NC NC NC NC NC NC
121.pop2 NC NC NC NC NC NC NC
122.tachyon NC NC NC NC NC NC NC
126.lammps NC NC NC NC NC NC NC
127.wrf2 NC NC NC NC NC NC NC
128.GAPgeofem NC NC NC NC NC NC NC
129.tera_tf NC NC NC NC NC NC NC
130.socorro NC NC NC NC NC NC NC
132.zeusmp2 NC NC NC NC NC NC NC
137.lu NC NC NC NC NC NC NC
Hardware Summary
Type of System: Homogeneous
Compute Node: Linux Networx LS-1
Interconnect: InfiniBand
File Server Node: Linux Networx LS1 I/O Nodes
Total Compute Nodes: 8
Total Chips: 16
Total Cores: 32
Total Threads: 32
Total Memory: 64 GB
Base Ranks Run: 32
Minimum Peak Ranks: --
Maximum Peak Ranks: --
Software Summary
C Compiler: Intel C Compiler 10.1 for Linux (10.1.008)
C++ Compiler: Intel C++ Compiler 10.1 for Linux (10.1.008)
Fortran Compiler: Intel Fortran Compiler 10.1 for Linux (10.1.008)
Base Pointers: 64-bit
Peak Pointers: Not Applicable
MPI Library: Scali MPI Connect 5.6.1-58818
Other MPI Info: IB Gold VAPI
Pre-processors: None
Other Software: None

Node Description: Linux Networx LS-1

Hardware
Number of nodes: 8
Uses of the node: compute
Vendor: Linux Networx, Inc.
Model: LS-1
CPU Name: Intel Xeon 5160
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 4
Cores per chip: 2
Threads per core: 1
CPU Characteristics: 1333 MHz FSB
CPU MHz: 3000
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 4 MB I+D on chip per chip
L3 Cache: None
Other Cache: None
Memory: 8 GB (8 x 1GB DIMMs 667 MHz)
Disk Subsystem: 250GB SAS hard drive
Other Hardware: None
Adapter: Mellanox MHGA28-XTC
PCI-Express DDR InfiniBand HCA
Number of Adapters: 1
Slot Type: PCIe x8
Data Rate: InfiniBand 4x DDR
Ports Used: 1
Interconnect Type: InfiniBand
Software
Adapter: Mellanox MHGA28-XTC
PCI-Express DDR InfiniBand HCA
Adapter Driver: IBGD 1.8.2
Adapter Firmware: 5.1.4
Operating System: SLES9 SP3
Local File System: Not applicable
Shared File System: GPFS
System State: multi-user
Other Software: None

Node Description: Linux Networx LS1 I/O Nodes

Hardware
Number of nodes: 8
Uses of the node: file server
Vendor: Linux Networx, Inc.
Model: LS1
CPU Name: Intel Xeon 5150
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 4
Cores per chip: 2
Threads per core: 1
CPU Characteristics: 1333 MHz FSB
CPU MHz: 2660
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 4 MB I+D on chip per chip
L3 Cache: None
Other Cache: None
Memory: 4 GB (4 x 1GB DIMMs 667 MHz)
Disk Subsystem: 18 TB SAN interconnected by FC4
Other Hardware: None
Adapter: Mellanox MHGA28-XTC
PCI-Express DDR InfiniBand HCA
Number of Adapters: 1
Slot Type: PCIe x8
Data Rate: InfiniBand 4x DDR
Ports Used: 1
Interconnect Type: InfiniBand
Software
Adapter: Mellanox MHGA28-XTC
PCI-Express DDR InfiniBand HCA
Adapter Driver: IBGD 1.8.2
Adapter Firmware: 5.2.0
Operating System: SLES9 SP3
Local File System: Not applicable
Shared File System: GPFS
System State: multi-user
Other Software: None

Interconnect Description: InfiniBand

Hardware
Vendor: QLogic
Model: QLogic Silverstorm 9120 Fabric Director
Switch Model: 9120
Number of Switches: 1
Number of Ports: 144
Data Rate: InfiniBand 4x SDR and InfiniBand 4x DDR
Firmware: 4.1.1.1.11
Topology: Single switch (star)
Primary Use: MPI and filesystem traffic

General Notes

The following approved src.alts are used:
   tera_tf - fixbuffer
   wrf2    - fixcalling

Base Compiler Invocation

C benchmarks:

 /opt/scali/bin/mpicc -ccl icc 

C++ benchmarks:

126.lammps:  /opt/scali/bin/mpicc -ccl icpc 

Fortran benchmarks:

 /opt/scali/bin/mpif77 -ccl ifort 

Benchmarks using both Fortran and C:

 /opt/scali/bin/mpicc -ccl icc
 /opt/scali/bin/mpif77 -ccl ifort

Base Portability Flags

121.pop2:  -DSPEC_MPI_CASE_FLAG 
127.wrf2:  -DSPEC_MPI_LINUX   -DSPEC_MPI_CASE_FLAG 

Base Optimization Flags

C benchmarks:

 -O3   -no-prec-div   -fno-alias   -xT 

C++ benchmarks:

126.lammps:  -O3   -no-prec-div   -fno-alias   -xT 

Fortran benchmarks:

 -O3   -no-prec-div   -fno-alias   -xT 

Benchmarks using both Fortran and C:

 -O3   -no-prec-div   -fno-alias   -xT 
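
To illustrate how the invocation and flag lines above fit together, the
following minimal MPI program could be built with the Scali wrapper and the
base optimization flags reported in this result. The program, file names,
and compile command shown in the comment are hypothetical illustrations,
not part of the MPI2007 suite:

    /*
     * Hypothetical example.  A compile command consistent with the lines
     * above might look like:
     *
     *   /opt/scali/bin/mpicc -ccl icc -O3 -no-prec-div -fno-alias -xT hello.c -o hello
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime       */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank         */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total ranks (32 in this run) */

        printf("rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }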

The flags files that were used to format this result can be browsed at
http://www.spec.org/mpi2007/flags/MPI2007_flags.20080611.html,
http://www.spec.org/mpi2007/flags/MPI2007_flags.0.20080611.html.

You can also download the XML flags sources by saving the following links:
http://www.spec.org/mpi2007/flags/MPI2007_flags.20080611.xml,
http://www.spec.org/mpi2007/flags/MPI2007_flags.0.20080611.xml.