SPEChpc™ 2021 Tiny Result

Copyright 2021-2023 Standard Performance Evaluation Corporation

Bull (Test Sponsor: Technische Universitaet Dresden)

Taurus: bullx DLC B720 (Intel Xeon E5-2680 v3)

SPEChpc 2021_tny_base = 1.00

SPEChpc 2021_tny_peak = Not Run

hpc2021 License: 37A
Test Sponsor: Technische Universitaet Dresden
Tested by: Technische Universitaet Dresden
Test Date: Sep-2021
Hardware Availability: Jan-2015
Software Availability: Sep-2020

Benchmark result graphs are available in the PDF report.

Results Table

Benchmark        Model  Ranks  Thrds/Rnk   Base run 1         Base run 2
                                            Seconds  Ratio     Seconds  Ratio
505.lbm_t        MPI       24          1       2209  1.020        2218  1.010
513.soma_t       MPI       24          1       3638  1.020        3632  1.020
518.tealeaf_t    MPI       24          1       1652  0.999        1659  0.995
519.clvleaf_t    MPI       24          1       1646  1.000        1650  1.000
521.miniswp_t    MPI       24          1       1571  1.020        1590  1.010
528.pot3d_t      MPI       24          1       2126  1.000        2133  0.996
532.sph_exa_t    MPI       24          1       1960  0.995        1958  0.996
534.hpgmgfv_t    MPI       24          1       1178  0.997        1178  0.997
535.weather_t    MPI       24          1       3211  1.000        3204  1.010

SPEChpc 2021_tny_base = 1.00
SPEChpc 2021_tny_peak = Not Run

Results appear in the order in which they were run. Bold underlined text indicates a median measurement.
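
Each ratio in the table is the benchmark's reference time divided by the measured run time, and the overall SPEChpc 2021_tny_base metric is the geometric mean of the selected per-benchmark ratios. As a rough, illustrative cross-check only (it takes the lower ratio of each benchmark's two runs rather than reproducing SPEC's median selection), the geometric mean can be recomputed in a shell:

    # Illustrative check: geometric mean of one ratio per benchmark,
    # using values from the table above; prints approximately 1.00.
    echo "1.010 1.020 0.995 1.000 1.010 0.996 0.995 0.997 1.000" | \
      awk '{ s = 0; for (i = 1; i <= NF; i++) s += log($i); printf "%.2f\n", exp(s / NF) }'
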
Hardware Summary
Type of System: Homogeneous Cluster
Compute Node: Taurus Compute Node (Haswell)
Interconnect: InfiniBand
Compute Nodes Used: 1
Total Chips: 2
Total Cores: 24
Total Threads: 24
Total Memory: 64 GB
Software Summary
Compiler: C/C++/Fortran: Version 8.2.0 of GNU Compilers
MPI Library: OpenMPI Version 3.1.3
Other MPI Info: None
Other Software: None
Base Parallel Model: MPI
Base Ranks Run: 24
Base Threads Run: 1
Peak Parallel Models: Not Run

Node Description: Taurus Compute Node (Haswell)

Hardware
Number of nodes: 1
Uses of the node: compute
Vendor: Bull
Model: bullx DLC B720
CPU Name: Intel Xeon E5-2680 v3
CPU(s) orderable: 1, 2 chips
Chips enabled: 2
Cores enabled: 24
Cores per chip: 12
Threads per core: 1
CPU Characteristics: Intel Turbo Boost Technology disabled
CPU MHz: 2500
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 256 KB I+D on chip per core
L3 Cache: 30 MB I+D on chip per chip
Other Cache: None
Memory: 64 GB (8 x 8 GB 2Rx8 PC4-2133R-10)
Disk Subsystem: Micron M510 128GB SSD SATA 6 Gb/s
Other Hardware: None
Adapter: Mellanox Technologies MT27600 (MCB193A-FCAT)
Number of Adapters: 1
Slot Type: PCIe 3.0 x16
Data Rate: 56 Gb/s
Ports Used: 1
Interconnect Type: InfiniBand
Software
Adapter: Mellanox Technologies MT27600 (MCB193A-FCAT)
Adapter Driver: mlx5_core
Adapter Firmware: 10.16.1200
Operating System: Red Hat Enterprise Linux Server 7.9 (Maipo)
3.10.0-1127.19.1.el7.x86_64
Local File System: ext4
Shared File System: 4 PB Lustre over InfiniBand FDR (56 Gb/s)
System State: Multi-user, run level 3
Other Software: None

Interconnect Description: InfiniBand

Hardware
Vendor: Mellanox Technologies
Model: Mellanox InfiniBand FDR
Switch Model: SX6025 (36 ports), SX6512 (216 ports)
Number of Switches: 17
Number of Ports: 36
Data Rate: 56 Gb/s
Firmware: 9.4.2000, 9.4.5070
Topology: FatTree
Primary Use: MPI Traffic and File System
Software

Submit Notes

The config file option 'submit' was used.
    srun -n $ranks -c $threads $command
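
In the hpc2021 config file, a rule of this shape is typically written as a single 'submit' option. The sketch below is illustrative only: the srun invocation is taken from this result, while the surrounding contents of the actual config file are not shown here; $ranks, $threads, and $command are SPEC's standard substitution variables.

    # Minimal sketch of a submit rule matching the command above
    # (illustrative; not the complete config file used for this run):
    submit = srun -n $ranks -c $threads $command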

General Notes

This benchmark result is intended to provide perspective on
past performance using the historical hardware and/or
software described on this result page.

The system as described on this result page was formerly
generally available.  At the time of this publication, it may
not be shipping, and/or may not be supported, and/or may fail
to meet other tests of General Availability described in the
SPEC HPG Policy document (http://www.spec.org/hpg/policy.html).

This measured result may not be representative of the result
that would be measured were this benchmark run with hardware
and software available as of the publication date.

Compiler Version Notes

==============================================================================
 FC  519.clvleaf_t(base) 528.pot3d_t(base) 535.weather_t(base)
------------------------------------------------------------------------------
GNU Fortran (GCC) 8.2.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
------------------------------------------------------------------------------

==============================================================================
 CXXC 532.sph_exa_t(base)
------------------------------------------------------------------------------
g++ (GCC) 8.2.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
------------------------------------------------------------------------------

==============================================================================
 CC  505.lbm_t(base) 513.soma_t(base) 518.tealeaf_t(base) 521.miniswp_t(base)
      534.hpgmgfv_t(base)
------------------------------------------------------------------------------
gcc (GCC) 8.2.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
------------------------------------------------------------------------------

Base Compiler Invocation

C benchmarks:

 mpicc 

C++ benchmarks:

 mpicxx 

Fortran benchmarks:

 mpif90 

Base Portability Flags

519.clvleaf_t:  -ffree-line-length-none 
521.miniswp_t:  -DUSE_KBA   -DUSE_ACCELDIR 
528.pot3d_t:  -ffree-line-length-none 
532.sph_exa_t:  -DSPEC_USE_LT_IN_KERNELS 
535.weather_t:  -ffree-line-length-none 

Base Optimization Flags

C benchmarks:

 -Ofast   -march=native 

C++ benchmarks:

 -Ofast   -march=native   -std=c++14 

Fortran benchmarks:

 -Ofast   -march=native   -fno-stack-protector 
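
Combining the base invocation, portability, and optimization flags listed above gives compile lines of roughly the following shape. This is an illustrative sketch only; the source and object file names are placeholders and do not come from this result.

    # Hypothetical base compile command for one source file of a Fortran
    # benchmark such as 519.clvleaf_t (file names are illustrative):
    mpif90 -Ofast -march=native -fno-stack-protector \
        -ffree-line-length-none -c some_module.f90 -o some_module.o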

The flags file that was used to format this result can be browsed at
http://www.spec.org/hpc2021/flags/gcc.html.

You can also download the XML flags source by saving the following link:
http://www.spec.org/hpc2021/flags/gcc.xml.
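
For a non-interactive download, the same file can be fetched from the command line, for example (assuming curl is installed; wget works equally well):

    curl -O http://www.spec.org/hpc2021/flags/gcc.xml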