SPEChpc™ 2021 Small Result

Copyright 2021-2023 Standard Performance Evaluation Corporation

Bull (Test Sponsor: Technische Universitaet Dresden)

Taurus: bullx DLC B720 (Intel Xeon E5-2680 v3)

SPEChpc 2021_sml_base = 0.995

SPEChpc 2021_sml_peak = Not Run

hpc2021 License: 37A
Test Date: Sep-2021
Test Sponsor: Technische Universitaet Dresden
Tested by: Technische Universitaet Dresden
Hardware Availability: Jan-2015
Software Availability: Sep-2020

Benchmark result graphs are available in the PDF report.

Results Table

All results below are base measurements (two runs per benchmark); peak was not run.

Benchmark        Model  Ranks  Thrds/Rnk  Seconds  Ratio  Seconds  Ratio
605.lbm_s        MPI      240          1     1557  0.996     1595  0.972
613.soma_s       MPI      240          1     1584  1.010     1715  0.933
618.tealeaf_s    MPI      240          1     2033  1.010     2032  1.010
619.clvleaf_s    MPI      240          1     1657  0.996     1658  0.995
621.miniswp_s    MPI      240          1     1055  1.040     1063  1.030
628.pot3d_s      MPI      240          1     1649  1.020     1652  1.010
632.sph_exa_s    MPI      240          1     2311  0.995     2307  0.997
634.hpgmgfv_s    MPI      240          1      972  1.000      969  1.010
635.weather_s    MPI      240          1     2596  1.000     2595  1.000

SPEChpc 2021_sml_base = 0.995
SPEChpc 2021_sml_peak = Not Run

Results appear in the order in which they were run. In the formatted report, bold underlined text indicates the median measurement.
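Each per-run ratio is the SPEC reference time for the benchmark divided by the measured run time, and the overall score is the geometric mean of the nine selected (median) per-benchmark ratios, per the standard SPEC scoring rule:

    \mathrm{ratio}_i = \frac{t_{\mathrm{ref},i}}{t_{\mathrm{run},i}},
    \qquad
    \mathrm{SPEChpc\ 2021\_sml\_base} = \Bigl( \prod_{i=1}^{9} \mathrm{ratio}_i \Bigr)^{1/9}

With all nine selected ratios close to 1.0, the geometric mean works out to approximately 0.995, matching the reported base score.
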
Hardware Summary
Type of System: Homogeneous Cluster
Compute Node: Taurus Compute Node (Haswell)
Interconnect: InfiniBand
Compute Nodes Used: 10
Total Chips: 20
Total Cores: 240
Total Threads: 240
Total Memory: 640 GB
Software Summary
Compiler: C/C++/Fortran: GNU Compilers Version 8.2.0
MPI Library: OpenMPI Version 3.1.3
Other MPI Info: None
Other Software: None
Base Parallel Model: MPI
Base Ranks Run: 240
Base Threads Run: 1
Peak Parallel Models: Not Run

Node Description: Taurus Compute Node (Haswell)

Hardware
Number of nodes: 10
Uses of the node: compute
Vendor: Bull
Model: bullx DLC B720
CPU Name: Intel Xeon E5-2680 v3
CPU(s) orderable: 1 or 2 chips
Chips enabled: 2
Cores enabled: 24
Cores per chip: 12
Threads per core: 1
CPU Characteristics: Intel Turbo Boost Technology disabled
CPU MHz: 2500
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 256 KB I+D on chip per core
L3 Cache: 30 MB I+D on chip per chip
Other Cache: None
Memory: 64 GB (8 x 8 GB 2Rx8 PC4-2133R-10)
Disk Subsystem: Micron M510 128GB SSD SATA 6 Gb/s
Other Hardware: None
Adapter: Mellanox Technologies MT27600 (MCB193A-FCAT)
Number of Adapters: 1
Slot Type: PCIe 3.0 x16
Data Rate: 56 Gb/s
Ports Used: 1
Interconnect Type: InfiniBand
Software
Adapter: Mellanox Technologies MT27600 (MCB193A-FCAT)
Adapter Driver: mlx5_core
Adapter Firmware: 10.16.1200
Operating System: Red Hat Enterprise Linux Server 7.9 (Maipo), kernel 3.10.0-1127.19.1.el7.x86_64
Local File System: ext4
Shared File System: 4 PB Lustre over InfiniBand FDR (56 Gb/s)
System State: Multi-user, run level 3
Other Software: None

Interconnect Description: InfiniBand

Hardware
Vendor: Mellanox Technologies
Model: Mellanox InfiniBand FDR
Switch Model: SX6025 (36 ports), SX6512 (216 ports)
Number of Switches: 17
Number of Ports: 36
Data Rate: 56 Gb/s
Firmware: 9.4.2000, 9.4.5070
Topology: FatTree
Primary Use: MPI Traffic and File System
Software

Submit Notes

The config file option 'submit' was used.
    srun -n $ranks -c $threads $command
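
For reference, the 'submit' option is set in the hpc2021 config file; a minimal sketch of how such a line might look (the actual config file used for this run is not reproduced on this page):

    # Hypothetical config-file excerpt: hand the MPI launch off to Slurm's srun
    submit = srun -n $ranks -c $threads $command

Here $ranks, $threads, and $command are placeholders substituted by the benchmark harness at run time.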

General Notes

This benchmark result is intended to provide perspective on
past performance using the historical hardware and/or
software described on this result page.

The system as described on this result page was formerly
generally available.  At the time of this publication, it may
not be shipping, and/or may not be supported, and/or may fail
to meet other tests of General Availability described in the
SPEC HPG Policy document, http://www.spec.org/hpg/policy.html

This measured result may not be representative of the result
that would be measured were this benchmark run with hardware
and software available as of the publication date.

Compiler Version Notes

==============================================================================
 FC  619.clvleaf_s(base) 628.pot3d_s(base) 635.weather_s(base)
------------------------------------------------------------------------------
GNU Fortran (GCC) 8.2.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
------------------------------------------------------------------------------

==============================================================================
 CXXC 632.sph_exa_s(base)
------------------------------------------------------------------------------
g++ (GCC) 8.2.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
------------------------------------------------------------------------------

==============================================================================
 CC  605.lbm_s(base) 613.soma_s(base) 618.tealeaf_s(base) 621.miniswp_s(base)
      634.hpgmgfv_s(base)
------------------------------------------------------------------------------
gcc (GCC) 8.2.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
------------------------------------------------------------------------------

Base Compiler Invocation

C benchmarks:

 mpicc 

C++ benchmarks:

 mpicxx 

Fortran benchmarks:

 mpif90 
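
These are the Open MPI 3.1.3 compiler wrappers, which drive the GCC 8.2.0 compilers shown in the Compiler Version Notes. As an illustration (exact output depends on the local installation), the wrapped command line can be inspected with Open MPI's --showme option:

    mpicc  --showme:command    # underlying C compiler (gcc)
    mpicxx --showme:command    # underlying C++ compiler (g++)
    mpif90 --showme:command    # underlying Fortran compiler (gfortran)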

Base Portability Flags

619.clvleaf_s:  -ffree-line-length-none 
621.miniswp_s:  -DUSE_KBA   -DUSE_ACCELDIR 
628.pot3d_s:  -ffree-line-length-none 
632.sph_exa_s:  -DSPEC_USE_LT_IN_KERNELS 
635.weather_s:  -ffree-line-length-none 

Base Optimization Flags

C benchmarks:

 -Ofast   -march=native 

C++ benchmarks:

 -Ofast   -march=native   -std=c++14 

Fortran benchmarks:

 -Ofast   -march=native   -fno-stack-protector 
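
Combining the invocation, portability, and optimization flags above, the effective base compile command for one of the Fortran benchmarks would look roughly like this (a sketch assembled from the listed flags; source file list and output name are illustrative):

    # e.g. 619.clvleaf_s (Fortran): optimization flags plus its portability flag
    mpif90 -Ofast -march=native -fno-stack-protector \
           -ffree-line-length-none <source files> -o clvleaf_s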

The flags file that was used to format this result can be browsed at
http://www.spec.org/hpc2021/flags/gcc.html.

You can also download the XML flags source by saving the following link:
http://www.spec.org/hpc2021/flags/gcc.xml.