SPEC(R) MPIM2007 Summary
Linux Networx
Linux Networx LS-1
Fri Oct 12 11:28:42 2007

MPI2007 License: 021                               Test date: Sep-2007
Test sponsor: Scali, Inc                Hardware availability: Apr-2007
Tested by: Scali, Inc                   Software availability: Aug-2007

                 Base      Base      Base   Peak      Peak      Peak
Benchmarks      Ranks  Run Time     Ratio  Ranks  Run Time     Ratio
-------------- ------ --------- --------- ------ --------- ---------
104.milc          128       144      10.9 S
104.milc          128       142      11.0 S
104.milc          128       142      11.0 *
107.leslie3d      128       451      11.6 S
107.leslie3d      128       451      11.6 *
107.leslie3d      128       452      11.6 S
113.GemsFDTD      128       402      15.7 S
113.GemsFDTD      128       401      15.7 *
113.GemsFDTD      128       401      15.7 S
115.fds4          128       165      11.8 S
115.fds4          128       166      11.7 *
115.fds4          128       168      11.6 S
121.pop2          128       527      7.83 S
121.pop2          128       527      7.84 S
121.pop2          128       527      7.84 *
122.tachyon       128       240      11.7 S
122.tachyon       128       239      11.7 S
122.tachyon       128       239      11.7 *
126.lammps        128       249      11.7 *
126.lammps        128       249      11.7 S
126.lammps        128       249      11.7 S
127.wrf2          128       387      20.2 S
127.wrf2          128       436      17.9 S
127.wrf2          128       388      20.1 *
128.GAPgeofem     128       151      13.6 S
128.GAPgeofem     128       153      13.5 S
128.GAPgeofem     128       153      13.5 *
129.tera_tf       128       314      8.80 S
129.tera_tf       128       314      8.80 *
129.tera_tf       128       314      8.81 S
130.socorro       128       200      19.0 S
130.socorro       128       198      19.3 S
130.socorro       128       199      19.2 *
132.zeusmp2       128       262      11.9 *
132.zeusmp2       128       261      11.9 S
132.zeusmp2       128       262      11.8 S
137.lu            128       178      20.6 S
137.lu            128       178      20.6 *
137.lu            128       178      20.6 S
==============================================================================
104.milc          128       142      11.0 *
107.leslie3d      128       451      11.6 *
113.GemsFDTD      128       401      15.7 *
115.fds4          128       166      11.7 *
121.pop2          128       527      7.84 *
122.tachyon       128       239      11.7 *
126.lammps        128       249      11.7 *
127.wrf2          128       388      20.1 *
128.GAPgeofem     128       153      13.5 *
129.tera_tf       128       314      8.80 *
130.socorro       128       199      19.2 *
132.zeusmp2       128       262      11.9 *
137.lu            128       178      20.6 *
 SPECmpiM_base2007                    12.9
 SPECmpiM_peak2007                 Not Run


BENCHMARK DETAILS
-----------------
Type of System: Heterogeneous
Total Compute Nodes: 32
Total Chips: 64
Total Cores: 128
Total Threads: 128
Total Memory: 304 GB
Base Ranks Run: 128
Minimum Peak Ranks: --
Maximum Peak Ranks: --
C Compiler: QLogic PathScale C Compiler 3.0
C++ Compiler: QLogic PathScale C++ Compiler 3.0
Fortran Compiler: QLogic PathScale Fortran Compiler 3.0
Base Pointers: 64-bit
Peak Pointers: Not Applicable
MPI Library: Scali MPI Connect 5.5
Other MPI Info: IB Gold VAPI
Pre-processors: None
Other Software: None


Node Description: Linux Networx LS-1
====================================

HARDWARE
--------
Number of nodes: 26
Uses of the node: compute
Vendor: Linux Networx, Inc.
Model: LS-1
CPU Name: Intel Xeon 5160
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 4
Cores per chip: 2
Threads per core: 1
CPU Characteristics: 1333 MHz FSB
CPU MHz: 3000
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 4 MB I+D on chip per chip
L3 Cache: None
Other Cache: None
Memory: 8 GB (8 x 1GB DIMMs)
Disk Subsystem: 250GB SAS hard drive
Other Hardware: None
Adapter: Mellanox MHGA28-XTC
Number of Adapters: 1
Slot Type: PCIe x8
Data Rate: InfiniBand 4x DDR
Ports Used: 1
Interconnect Type: InfiniBand

SOFTWARE
--------
Adapter: Mellanox MHGA28-XTC
Adapter Driver: IBGD 1.8.2
Adapter Firmware: 5.1.4
Operating System: SLES9 SP3
Local File System: Not applicable
Shared File System: GPFS
System State: multi-user
Other Software: None


Node Description: Linux Networx LS-1
====================================

HARDWARE
--------
Number of nodes: 6
Uses of the node: compute
Vendor: Linux Networx, Inc.
Model: LS-1
CPU Name: Intel Xeon 5160
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 4
Cores per chip: 2
Threads per core: 1
CPU Characteristics: 1333 MHz FSB
CPU MHz: 3000
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 4 MB I+D on chip per chip
L3 Cache: None
Other Cache: None
Memory: 16 GB (8 x 2GB DIMMs)
Disk Subsystem: 250GB SAS hard drive
Other Hardware: None
Adapter: Mellanox MHGA28-XTC
Number of Adapters: 1
Slot Type: PCIe x8
Data Rate: InfiniBand 4x DDR
Ports Used: 1
Interconnect Type: InfiniBand

SOFTWARE
--------
Adapter: Mellanox MHGA28-XTC
Adapter Driver: IBGD 1.8.2
Adapter Firmware: 5.1.4
Operating System: SLES9 SP3
Local File System: Not applicable
Shared File System: GPFS
System State: multi-user
Other Software: None


Node Description: Linux Networx Evolocity 1
===========================================

HARDWARE
--------
Number of nodes: 8
Uses of the node: file server
Vendor: Linux Networx, Inc.
Model: Evolocity 1
CPU Name: AMD Opteron 248
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 2
Cores per chip: 1
Threads per core: 1
CPU Characteristics: --
CPU MHz: 2200
Primary Cache: 64 KB I + 64 KB D on chip per core
Secondary Cache: 1 MB I+D on chip per core
L3 Cache: None
Other Cache: None
Memory: 8 GB (8 x 1GB DIMMs)
Disk Subsystem: 18 TB SAN interconnected by FC2
Other Hardware: --
Adapter: Mellanox MHXL-CF128-T
Number of Adapters: 1
Slot Type: PCI-X
Data Rate: InfiniBand 4x SDR
Ports Used: 1
Interconnect Type: InfiniBand

SOFTWARE
--------
Adapter: Mellanox MHXL-CF128-T
Adapter Driver: IBGD 1.8.2
Adapter Firmware: 3.5.0
Operating System: SLES9 SP3
Local File System: Not applicable
Shared File System: GPFS
System State: multi-user
Other Software: --


Interconnect Description: InfiniBand
====================================

HARDWARE
--------
Vendor: QLogic
Model: QLogic Silverstorm 9120 Fabric Director
Switch Model: 9120
Number of Switches: 1
Number of Ports: 144
Data Rate: InfiniBand 4x SDR and InfiniBand 4x DDR
Firmware: 4.0.0.5.5
Topology: Single switch (star)
Primary Use: MPI and filesystem traffic


Submit Notes
------------
Scali MPI Connect's mpirun wrapper was used to submit the jobs.

Description of switches:
  -aff manual:0x1:0x2:0x4:0x8: instructs the launcher to bind ranks N..N+3
      to the cores corresponding to the masks 0x1, 0x2, 0x4, and 0x8,
      respectively, on each node.
  -npn 4: launches 4 processes per node.
  -rsh rsh: uses rsh as the method to connect to the nodes.
  -mstdin none: does not connect the processes' STDIN to anything.
  -q: quiet mode; no output from the launcher.
  -machinefile: file selecting the hosts to run on.
  -net smp,ib: prioritized list of networks used for communication
      between processes.
An illustrative launch command assembled from these switches is shown
after the General Notes below.

General Notes
-------------
Scali, Inc. executed the benchmark at Linux Networx's Solution Center.
We are grateful to Linux Networx, and in particular Justin Wood, for their
support in finalizing this submission.
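
As an illustration only, the switches documented in the Submit Notes above
could be combined into a single launch command along the following lines.
The mpirun path is assumed to sit alongside the mpicc/mpif77 wrappers shown
below, and the machinefile name and benchmark binary are hypothetical
placeholders, not taken from the actual run:

  /opt/scali/bin/mpirun -aff manual:0x1:0x2:0x4:0x8 -npn 4 -rsh rsh \
      -mstdin none -q -machinefile ./hosts.txt -net smp,ib ./benchmark.exe

With 4 processes per node and the 32 compute nodes listed in such a
machinefile, an invocation of this form starts the 128 base ranks reported
above.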
Base Compiler Invocation
------------------------
C benchmarks:
  /opt/scali/bin/mpicc -ccl pathcc

C++ benchmarks:
  126.lammps: /opt/scali/bin/mpicc -ccl pathCC

Fortran benchmarks:
  /opt/scali/bin/mpif77 -ccl pathf90

Benchmarks using both Fortran and C:
  /opt/scali/bin/mpicc -ccl pathcc
  /opt/scali/bin/mpif77 -ccl pathf90

Base Portability Flags
----------------------
  104.milc:      -DSPEC_MPI_LP64
  115.fds4:      -DSPEC_MPI_LC_TRAILING_DOUBLE_UNDERSCORE -DSPEC_MPI_LP64
  121.pop2:      -DSPEC_MPI_DOUBLE_UNDERSCORE -DSPEC_MPI_LP64
  122.tachyon:   -DSPEC_MPI_LP64
  127.wrf2:      -DF2CSTYLE -DSPEC_MPI_DOUBLE_UNDERSCORE -DSPEC_MPI_LINUX
                 -DSPEC_MPI_LP64
  128.GAPgeofem: -DSPEC_MPI_LP64
  130.socorro:   -fno-second-underscore -DSPEC_MPI_LP64
  132.zeusmp2:   -DSPEC_MPI_LP64

Base Optimization Flags
-----------------------
C benchmarks:
  -march=core -Ofast -OPT:malloc_alg=1

C++ benchmarks:
  126.lammps: -march=core -O3 -OPT:Ofast -CG:local_fwd_sched=on

Fortran benchmarks:
  -march=core -O3 -OPT:Ofast -OPT:malloc_alg=1 -LANG:copyinout=off

Benchmarks using both Fortran and C:
  -march=core -Ofast -OPT:malloc_alg=1 -O3 -OPT:Ofast -LANG:copyinout=off

Base Other Flags
----------------
C benchmarks:
  -IPA:max_jobs=4

C++ benchmarks:
  126.lammps: -IPA:max_jobs=4

Fortran benchmarks:
  -IPA:max_jobs=4

Benchmarks using both Fortran and C:
  -IPA:max_jobs=4

The flags file that was used to format this result can be browsed at
http://www.spec.org/mpi2007/flags/MPI2007_flags.20071107.00.html

You can also download the XML flags source by saving the following link:
http://www.spec.org/mpi2007/flags/MPI2007_flags.20071107.00.xml

SPEC and SPEC MPI are registered trademarks of the Standard Performance
Evaluation Corporation. All other brand and product names appearing in this
result are trademarks or registered trademarks of their respective holders.

-----------------------------------------------------------------------------
For questions about this result, please contact the tester.
For other inquiries, please contact webmaster@spec.org.
Copyright 2006-2010 Standard Performance Evaluation Corporation
Tested with SPEC MPI2007 v1.0.
Report generated on Tue Jul 22 13:33:14 2014 by MPI2007 ASCII formatter v1463.
Originally published on 7 November 2007.