SPEC® MPIM2007 Result

Copyright 2006-2010 Standard Performance Evaluation Corporation

SGI

SGI Altix ICE 8200EX
(Intel Xeon X5570, 2.93 GHz)

SPECmpiM_peak2007 = Not Run

MPI2007 license:  4        Test date:              Feb-2009
Test sponsor:     SGI      Hardware Availability:  Mar-2009
Tested by:        SGI      Software Availability:  Jan-2009

Results Table

Benchmark       Ranks  Seconds  Ratio  Seconds  Ratio
(Base results, two runs per benchmark; peak was not run.  Results appear in
the order in which they were run.  In the formatted report, bold underlined
text indicates the median measurement.)
104.milc           16      433   3.62      433   3.62
107.leslie3d       16     1710   3.05     1706   3.06
113.GemsFDTD       16     1276   4.94     1277   4.94
115.fds4           16      617   3.16      616   3.17
121.pop2           16      946   4.36      944   4.37
122.tachyon        16     1169   2.39     1164   2.40
126.lammps         16     1140   2.56     1139   2.56
127.wrf2           16     1197   6.51     1200   6.50
128.GAPgeofem      16      533   3.87      533   3.87
129.tera_tf        16      987   2.80      987   2.81
130.socorro        16      893   4.27      893   4.27
132.zeusmp2        16      965   3.22      973   3.19
137.lu             16     1253   2.93     1257   2.93
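
(The Ratio column is the SPEC reference time for a benchmark divided by the
measured run time, so higher is better.  For example, 104.milc's 433 seconds
at a ratio of 3.62 implies a reference time of roughly 433 * 3.62 = 1567
seconds.)
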
Hardware Summary
Type of System: Homogeneous
Compute Node: SGI Altix ICE 8200EX Compute Node
Interconnects: InfiniBand (MPI)
InfiniBand (I/O)
File Server Node: SGI InfiniteStorage Nexis 2000 NAS
Total Compute Nodes: 2
Total Chips: 4
Total Cores: 16
Total Threads: 32
Total Memory: 96 GB
Base Ranks Run: 16
Minimum Peak Ranks: --
Maximum Peak Ranks: --
Software Summary
C Compiler: Intel C Compiler for Linux
Version 10.1, Build 20080801
C++ Compiler: Intel C++ Compiler for Linux
Version 10.1, Build 20080801
Fortran Compiler: Intel Fortran Compiler for Linux
Version 10.1, Build 20080801
Base Pointers: 64-bit
Peak Pointers: 64-bit
MPI Library: SGI MPT 1.23
Other MPI Info: OFED 1.3.1
Pre-processors: None
Other Software: None

Node Description: SGI Altix ICE 8200EX Compute Node

Hardware
Number of nodes: 2
Uses of the node: compute
Vendor: SGI
Model: SGI Altix ICE 8200EX (Intel Xeon X5570, 2.93 GHz)
CPU Name: Intel Xeon X5570
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 8
Cores per chip: 4
Threads per core: 2
CPU Characteristics: Intel Turbo Boost Technology up to 3.33 GHz,
6.4 GT/s QPI, Hyper-Threading enabled
CPU MHz: 2934
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 256 KB I+D on chip per core
L3 Cache: 8 MB I+D on chip per chip
Other Cache: None
Memory: 48 GB (12*4GB DDR3-1066 CL7 RDIMMs)
Disk Subsystem: None
Other Hardware: None
Adapter: Mellanox MT26418 ConnectX IB DDR
(PCIe x8 Gen2 5 GT/s)
Number of Adapters: 1
Slot Type: PCIe x8 Gen2
Data Rate: InfiniBand 4x DDR
Ports Used: 2
Interconnect Type: InfiniBand
Software
Adapter: Mellanox MT26418 ConnectX IB DDR
(PCIe x8 Gen2 5 GT/s)
Adapter Driver: OFED-1.3.1
Adapter Firmware: 2.5.0
Operating System: SUSE Linux Enterprise Server 10 (x86_64) SP2
Kernel 2.6.16.60-0.30-smp
Local File System: NFSv3
Shared File System: NFSv3 IPoIB
System State: Multi-user, run level 3
Other Software: SGI ProPack 6 for Linux Service Pack 2

Node Description: SGI InfiniteStorage Nexis 2000 NAS

Hardware
Number of nodes: 1
Uses of the node: fileserver
Vendor: SGI
Model: SGI Altix XE 240 (Intel Xeon 5140, 2.33 GHz)
CPU Name: Intel Xeon 5140
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 4
Cores per chip: 2
Threads per core: 1
CPU Characteristics: 1333 MHz FSB
CPU MHz: 2328
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 4 MB I+D on chip per chip
L3 Cache: None
Other Cache: None
Memory: 24 GB (6*4GB DDR2-400 DIMMs)
Disk Subsystem: 7 TB RAID 5
48 x 147 GB SAS (Seagate Cheetah 15000 rpm)
Other Hardware: None
Adapter: Mellanox MT25208 InfiniHost III Ex
(PCIe x8 Gen1 2.5 GT/s)
Number of Adapters: 2
Slot Type: PCIe x8 Gen1
Data Rate: InfiniBand 4x DDR
Ports Used: 2
Interconnect Type: InfiniBand
Software
Adapter: Mellanox MT25208 InfiniHost III Ex
(PCIe x8 Gen1 2.5 GT/s)
Adapter Driver: OFED-1.3
Adapter Firmware: 5.3.0
Operating System: SUSE Linux Enterprise Server 10 (x86_64) SP1
Kernel 2.6.16.54-0.2.5-smp
Local File System: xfs
Shared File System: --
System State: Multi-user, run level 3
Other Software: SGI ProPack 5 for Linux Service Pack 5

Interconnect Description: InfiniBand (MPI)

Hardware
Vendor: Mellanox Technologies
Model: MT26418 ConnectX
Switch Model: Mellanox MT47396 InfiniScale III
Number of Switches: 8
Number of Ports: 24
Data Rate: InfiniBand 4x DDR
Firmware: 2020001
Topology: Bristle hypercube with express links
Primary Use: MPI traffic

Interconnect Description: InfiniBand (I/O)

Hardware
Vendor: Mellanox Technologies
Model: MT26418 ConnectX
Switch Model: Mellanox MT47396 InfiniScale III
Number of Switches: 8
Number of Ports: 24
Data Rate: InfiniBand 4x DDR
Firmware: 2020001
Topology: Bristle hypercube with express links
Primary Use: I/O traffic

Submit Notes

The config file option 'submit' was used.
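
In an MPI2007 config file, 'submit' wraps each benchmark command in the MPI
launcher before it is run.  The exact submit line used for this result is not
reproduced here; with SGI MPT under PBS it could take a form such as the
following (illustrative only):

   submit = mpiexec_mpt -n $ranks $command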

General Notes

 Software environment:
   setenv MPI_REQUEST_MAX 65536
     Determines the maximum number of nonblocking sends and
     receives that can simultaneously exist for any single MPI
     process.  MPI generates an error message if this limit
     (or the default, if not set) is exceeded.  Default:  16384
   setenv MPI_TYPE_MAX 32768
     Determines the maximum number of data types that can
     simultaneously exist for any single MPI process.
     MPI generates an error message if this limit (or the default,
     if not set) is exceeded.  Default:  1024
   setenv MPI_BUFS_THRESHOLD 1
     Determines whether MPT uses per-host or per-process message
     buffers for communicating with other hosts.  Per-host buffers
     are generally faster but for jobs running across many hosts they
     can consume a prodigious amount of memory.  MPT will use per-
     host buffers for jobs using up to and including this many hosts
     and will use per-process buffers for larger host counts.
     Default:  64
   setenv MPI_DSM_DISTRIBUTE
     Activates NUMA job placement mode.  This mode ensures that each
     MPI process gets a unique CPU and physical memory on the node
     with which that CPU is associated.  Currently, the CPUs are
     chosen by simply starting at relative CPU 0 and incrementing
     until all MPI processes have been forked.

   limit stacksize unlimited
     Removes limits on the maximum size of the automatically-
     extended stack region of the current process and each
     process it creates.
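
 Collected in one place, the environment settings above amount to the
   following csh fragment.  This is a sketch only: the launcher line and
   rank count are illustrative and are not part of the published notes.

   #!/bin/csh
   # MPT tuning taken from the notes above
   setenv MPI_REQUEST_MAX    65536   # more simultaneous nonblocking requests
   setenv MPI_TYPE_MAX       32768   # more simultaneously defined datatypes
   setenv MPI_BUFS_THRESHOLD 1       # per-process buffers beyond one host
   setenv MPI_DSM_DISTRIBUTE         # one CPU + local memory per MPI process
   limit stacksize unlimited         # unrestricted automatically-extended stack
   # Hypothetical 16-rank launch with MPT's PBS-aware launcher:
   mpiexec_mpt -n 16 ./benchmark.exe
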
 PBS Pro batch scheduler (www.altair.com) is used with
   placement sets to ensure that each MPI job is assigned to
   a topologically compact set of nodes; an illustrative
   request is sketched below.
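
   For illustration only, a 2-node, 16-rank job of this shape could be
   requested with directives such as the following (the chunk counts match
   this system; the 'switch' grouping resource is a site-specific
   assumption, not taken from the report):

     #PBS -l select=2:ncpus=8:mpiprocs=8
     #PBS -l place=scatter:group=switch
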
 BIOS settings:
   AMI BIOS version 8.15
   Hyper-Threading Technology enabled (default)
   Intel Turbo Boost Technology enabled (default)
   Intel Turbo Boost Technology activated in the OS via
     /etc/init.d/acpid start
     /etc/init.d/powersaved start
     powersave -f

Base Compiler Invocation

C benchmarks:

 icc 

C++ benchmarks:

126.lammps:  icpc 

Fortran benchmarks:

 ifort 

Benchmarks using both Fortran and C:

 icc   ifort 

Base Portability Flags

121.pop2:  -DSPEC_MPI_CASE_FLAG 
127.wrf2:  -DSPEC_MPI_CASE_FLAG   -DSPEC_MPI_LINUX 

Base Optimization Flags

C benchmarks:

 -O3   -ipo   -xT   -no-prec-div 

C++ benchmarks:

126.lammps:  -O3   -ipo   -xT   -no-prec-div   -ansi-alias 

Fortran benchmarks:

 -O3   -ipo   -xT   -no-prec-div 

Benchmarks using both Fortran and C:

 -O3   -ipo   -xT   -no-prec-div 

Base Other Flags

C benchmarks:

 -lmpi 

C++ benchmarks:

126.lammps:  -lmpi 

Fortran benchmarks:

 -lmpi 

Benchmarks using both Fortran and C:

 -lmpi 
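
For illustration, the invocation, optimization, and other flags above combine
into compile lines of roughly this shape (source and output file names are
placeholders, not taken from the report):

  icc   -O3 -ipo -xT -no-prec-div -o benchmark source.c   -lmpi
  ifort -O3 -ipo -xT -no-prec-div -o benchmark source.f90 -lmpi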

The flags file that was used to format this result can be browsed at
http://www.spec.org/mpi2007/flags/EM64T_Intel101_flags.20080611.html.

You can also download the XML flags source by saving the following link:
http://www.spec.org/mpi2007/flags/EM64T_Intel101_flags.20080611.xml.