SPEC® MPIL2007 Result

Copyright 2006-2010 Standard Performance Evaluation Corporation

Hewlett Packard Enterprise

SGI 8600
(Intel Xeon Gold 6148, 2.40 GHz)

SPECmpiL_peak2007 = Not Run

MPI2007 license: 1
Test sponsor: HPE
Tested by: HPE
Test date: Oct-2017
Hardware Availability: Jul-2017
Software Availability: Nov-2017

Results Table

                        Base                                     Peak
Benchmark      Ranks  Seconds  Ratio  Seconds  Ratio  Seconds  Ratio  (Not Run)
121.pop2         320    234    16.6     233    16.7     232    16.8
122.tachyon      320    196    9.90     197    9.88     197    9.86
125.RAxML        320    229    12.8     228    12.8     228    12.8
126.lammps       320    189    13.0     187    13.2     191    12.9
128.GAPgeofem    320    184    32.2     183    32.4     184    32.3
129.tera_tf      320     97.9  11.2      98.1  11.2      97.6  11.3
132.zeusmp2      320    117    18.1     117    18.2     120    17.7
137.lu           320    113    37.1     113    37.0     113    37.3
142.dmilc        320    127    28.9     127    29.0     127    28.9
143.dleslie      320    106    29.1     106    29.2     106    29.2
145.lGemsFDTD    320    203    21.7     203    21.7     204    21.6
147.l2wrf2       320    374    21.9     375    21.9     376    21.8
Results appear in the order in which they were run; the reported result for each benchmark is the median of its three runs. Peak was not run.
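Each ratio above is the SPEC reference time for that benchmark divided by the measured run time, and the overall SPECmpiL_base2007 metric is the geometric mean of the twelve median ratios. A minimal sketch of that aggregation, using the rounded median ratios printed in the table — the official metric is computed by the SPEC tools from full-precision timings, so this only approximates it:

```python
from statistics import geometric_mean, median

# Three measured run times (seconds) for 121.pop2 from the table above;
# the reported result for a benchmark is the median of its three runs.
pop2_runs = [234, 233, 232]
assert median(pop2_runs) == 233

# Median base ratios of all twelve benchmarks, as printed (rounded) above.
median_ratios = [16.7, 9.88, 12.8, 13.0, 32.3, 11.2,
                 18.1, 37.1, 28.9, 29.2, 21.7, 21.9]

# The suite-level metric is the geometric mean of the per-benchmark ratios.
approx_base_metric = geometric_mean(median_ratios)
print(round(approx_base_metric, 1))  # roughly 19.3 from these rounded values
```

Because the geometric mean is used, a proportional speedup on any one benchmark moves the overall metric by the same factor regardless of that benchmark's absolute run time.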
Hardware Summary
Type of System: Homogeneous
Compute Node: HPE XA730i Gen10 Server Node
Interconnect: InfiniBand (MPI and I/O)
File Server Node: Lustre FS
Total Compute Nodes: 8
Total Chips: 16
Total Cores: 320
Total Threads: 640
Total Memory: 1536 GB
Base Ranks Run: 320
Minimum Peak Ranks: --
Maximum Peak Ranks: --
Software Summary
C Compiler: Intel C Composer XE for Linux, Version 18.0.0.128 Build 20170811
C++ Compiler: Intel C++ Composer XE for Linux, Version 18.0.0.128 Build 20170811
Fortran Compiler: Intel Fortran Composer XE for Linux, Version 18.0.0.128 Build 20170811
Base Pointers: 64-bit
Peak Pointers: Not Applicable
MPI Library: HPE Performance Software - Message Passing Interface 2.17
Other MPI Info: OFED 3.2.2
Pre-processors: None
Other Software: None

Node Description: HPE XA730i Gen10 Server Node

Hardware
Number of nodes: 8
Uses of the node: compute
Vendor: Hewlett Packard Enterprise
Model: SGI 8600 (Intel Xeon Gold 6148, 2.40 GHz)
CPU Name: Intel Xeon Gold 6148
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 40
Cores per chip: 20
Threads per core: 2
CPU Characteristics: Intel Turbo Boost Technology up to 3.70 GHz
CPU MHz: 2400
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 1 MB I+D on chip per core
L3 Cache: 27.5 MB I+D on chip per chip
Other Cache: None
Memory: 192 GB (12 x 16 GB 2Rx4 PC4-2666V-R)
Disk Subsystem: None
Other Hardware: None
Adapter: Mellanox MT27700 with ConnectX-4 ASIC
Number of Adapters: 2
Slot Type: PCIe x16 Gen3 8GT/s
Data Rate: InfiniBand 4X EDR
Ports Used: 1
Interconnect Type: InfiniBand
Software
Adapter: Mellanox MT27700 with ConnectX-4 ASIC
Adapter Driver: OFED-3.4-2.1.8.0
Adapter Firmware: 12.18.1000
Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo), Kernel 3.10.0-514.2.2.el7.x86_64
Local File System: LFS
Shared File System: LFS
System State: Multi-user, run level 3
Other Software: SGI Management Center Compute Node 3.5.0, Build 716r171.rhel73-1705051353

Node Description: Lustre FS

Hardware
Number of nodes: 4
Uses of the node: fileserver
Vendor: Hewlett Packard Enterprise
Model: Rackable C1104-GP2 (Intel Xeon E5-2690 v3, 2.60 GHz)
CPU Name: Intel Xeon E5-2690 v3
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 24
Cores per chip: 12
Threads per core: 1
CPU Characteristics: Intel Turbo Boost Technology up to 3.50 GHz, Hyper-Threading Technology disabled
CPU MHz: 2600
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 256 KB I+D on chip per core
L3 Cache: 30 MB I+D on chip per chip
Other Cache: None
Memory: 128 GB (8 x 16 GB 2Rx4 PC4-2133P-R)
Disk Subsystem: 684 TB RAID 6, 48 x 8+2 2TB 7200 RPM
Other Hardware: None
Adapter: Mellanox MT27700 with ConnectX-4 ASIC
Number of Adapters: 2
Slot Type: PCIe x16 Gen3
Data Rate: InfiniBand 4X EDR
Ports Used: 1
Interconnect Type: InfiniBand
Software
Adapter: Mellanox MT27700 with ConnectX-4 ASIC
Adapter Driver: OFED-3.3-1.0.0.0
Adapter Firmware: 12.14.2036
Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo), Kernel 3.10.0-514.2.2.el7.x86_64
Local File System: ext3
Shared File System: LFS
System State: Multi-user, run level 3
Other Software: None

Interconnect Description: InfiniBand (MPI and I/O)

Hardware
Vendor: Mellanox Technologies and SGI
Model: SGI P0002145
Switch Model: SGI P0002145
Number of Switches: 1
Number of Ports: 36
Data Rate: InfiniBand 4X EDR
Firmware: 11.0350.0394
Topology: Enhanced Hypercube
Primary Use: MPI and I/O traffic

Base Tuning Notes

src.alt used: 143.dleslie->integer_overflow

Submit Notes

The config file option 'submit' was used.
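The 'submit' option tells the SPEC runtools how to wrap each benchmark binary in the launcher for the target system. The actual command used is not disclosed in this report; a hypothetical sketch of what such a config-file line can look like, with a generic mpirun launcher ($ranks and $command are SPEC config substitution variables):

```
# Hypothetical SPEC MPI2007 config excerpt -- not the submit
# command actually used for this result.
submit = mpirun -np $ranks $command
```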

General Notes


 Software environment:
   export MPI_REQUEST_MAX=65536
   export MPI_TYPE_MAX=32768
   export MPI_IB_RAILS=2
   export MPI_IB_IMM_UPGRADE=false
   export MPI_IB_DCIS=2
   export MPI_IB_HYPER_LAZY=false
   export MPI_CONNECTIONS_THRESHOLD=0
   ulimit -s unlimited

 BIOS settings:
   AMI BIOS version SAED7177, 07/17/2017

 Job Placement:
   Each MPI job was assigned to a topologically compact set
   of nodes.

 Additional notes regarding interconnect:
   The InfiniBand network consists of two independent planes,
   with half the switches in the system allocated to each plane.
   I/O traffic is restricted to one plane, while MPI traffic can
   use both planes.

Base Compiler Invocation

C benchmarks:

 icc 

C++ benchmarks:

126.lammps:  icpc 

Fortran benchmarks:

 ifort 

Benchmarks using both Fortran and C:

 icc   ifort 

Base Portability Flags

121.pop2:  -DSPEC_MPI_CASE_FLAG 

Base Optimization Flags

C benchmarks:

 -O3   -xCORE-AVX512   -no-prec-div   -ipo 

C++ benchmarks:

126.lammps:  -O3   -xCORE-AVX512   -no-prec-div   -ansi-alias   -ipo 

Fortran benchmarks:

 -O3   -xCORE-AVX512   -no-prec-div   -ipo 

Benchmarks using both Fortran and C:

 -O3   -xCORE-AVX512   -no-prec-div   -ipo 

Base Other Flags

C benchmarks:

 -lmpi 

C++ benchmarks:

126.lammps:  -lmpi 

Fortran benchmarks:

 -lmpi 

Benchmarks using both Fortran and C:

 -lmpi 

The flags file that was used to format this result can be browsed at
http://www.spec.org/mpi2007/flags/HPE_x86_64_Intel18_flags.html.

You can also download the XML flags source by saving the following link:
http://www.spec.org/mpi2007/flags/HPE_x86_64_Intel18_flags.xml.