SPEC® CPU2017 Floating Point Rate Result

Copyright 2017-2018 Standard Performance Evaluation Corporation

Hewlett Packard Enterprise (Test Sponsor: HPE)

ProLiant DL385 Gen10
(2.40 GHz, AMD EPYC 7351)

SPECrate2017_fp_base = 186

SPECrate2017_fp_peak = Not Run

CPU2017 License: 3 Test Date: Nov-2017
Test Sponsor: HPE Hardware Availability: Nov-2017
Tested by: HPE Software Availability: Sep-2017

Benchmark result graphs are available in the PDF report.

Hardware
CPU Name: AMD EPYC 7351
  Max MHz: 2900
  Nominal: 2400
Enabled: 32 cores, 2 chips, 2 threads/core
Orderable: 1, 2 chip(s)
Cache L1: 64 KB I + 32 KB D on chip per core
  L2: 512 KB I+D on chip per core
  L3: 64 MB I+D on chip per chip, 8 MB shared / 2 cores
  Other: None
Memory: 1 TB (16 x 64 GB 4Rx4 PC4-2666V-L)
Storage: 1 x 400 GB SAS SSD, RAID 0
Other: None
Software
OS: SUSE Linux Enterprise Server 12 (x86_64) SP3
Kernel 4.4.73-5-default
Compiler: C/C++: Version 1.0.0 of AOCC
Fortran: Version 4.8.5 of GCC
Parallel: No
Firmware: HPE BIOS Version A40 released Nov-2017 (tested with A40 (11/10/2017))
File System: xfs
System State: Run level 3 (multi-user)
Base Pointers: 64-bit
Peak Pointers: Not Applicable
Other: None

Results Table

Benchmark          Base                                                      Peak
          Copies  Seconds  Ratio  Seconds  Ratio  Seconds  Ratio    Copies  Seconds  Ratio  Seconds  Ratio  Seconds  Ratio
SPECrate2017_fp_base 186
SPECrate2017_fp_peak Not Run
Results appear in the order in which they were run. Bold underlined text indicates a median measurement.
503.bwaves_r 64 1045 614 1048 612 1045 614
507.cactuBSSN_r 64 452 179 453 179 454 179
508.namd_r 64 445 137 445 137 445 137
510.parest_r 64 1088 154 1089 154 1088 154
511.povray_r 64 872 171 869 172 868 172
519.lbm_r 64 514 131 514 131 513 131
521.wrf_r 64 746 192 742 193 730 196
526.blender_r 64 511 191 513 190 512 190
527.cam4_r 64 654 171 675 166 656 171
538.imagick_r 64 650 245 655 243 650 245
544.nab_r 64 518 208 518 208 521 207
549.fotonik3d_r 64 1409 177 1406 177 1408 177
554.roms_r 64 955 106 952 107 964 105
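
Each ratio is copies x reference time / measured seconds, and the overall metric is the geometric mean of the per-benchmark median ratios. As an illustrative check (not SPEC's official tooling, which works from unrounded values), the rounded median base ratios in the table give:

```shell
#!/bin/sh
# Median base ratios from the table above, one per benchmark, in run order.
ratios="614 179 137 154 172 131 193 190 171 245 208 177 106"

# Geometric mean: exp(mean(log(ratio))). Rounding the ratios first introduces
# only a tiny error relative to SPEC's computation from unrounded values.
echo "$ratios" | awk '{ s = 0
  for (i = 1; i <= NF; i++) s += log($i)
  printf "%.0f\n", exp(s / NF) }'
# prints 186
```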

Submit Notes

The config file option 'submit' was used.
'numactl' was used to bind copies to the cores.
See the configuration file for details.
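
A 'submit' option of this kind typically wraps each copy's command in numactl. The sponsor's actual binding scheme is only in the config file, which this report does not reproduce; a hypothetical sketch:

```shell
# Hypothetical CPU2017 config-file fragment: bind copy number N to logical
# CPU N and allocate from its local NUMA node. $SPECCOPYNUM is expanded
# per copy by the CPU2017 tools; $command is the benchmark command line.
submit = numactl --physcpubind=$SPECCOPYNUM --localalloc $command
```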

Operating System Notes

'ulimit -s unlimited' was used to set the environment's stack size limit
'ulimit -l 2097152' was used to set the environment's locked-pages-in-memory limit

runcpu command invoked through numactl, i.e.:
numactl --interleave=all runcpu <etc>

Set dirty_ratio=8 to limit dirty cache to 8% of memory
Set swappiness=1 to swap only if necessary
Set zone_reclaim_mode=1 to free local node memory and avoid remote memory
sync then drop_caches=3 to reset caches before invoking runcpu
Linux cpufreq governor set to performance with 'cpupower frequency-set -r -g performance'
dirty_ratio, swappiness, zone_reclaim_mode and drop_caches were
all set using privileged echo (e.g. echo 1 > /proc/sys/vm/swappiness).

Transparent huge pages were enabled for this run (OS default)

Huge pages were not configured for this run.
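
Taken together, the tuning steps above amount to a short privileged script; a sketch, assuming root and the standard /proc and /sys paths:

```shell
#!/bin/sh
# Raise per-process limits for the benchmark shell.
ulimit -s unlimited        # stack size
ulimit -l 2097152          # locked pages in memory (kB)

# Kernel VM tunables, set via privileged echo as in the notes above.
echo 8 > /proc/sys/vm/dirty_ratio        # limit dirty cache to 8% of memory
echo 1 > /proc/sys/vm/swappiness         # swap only if necessary
echo 1 > /proc/sys/vm/zone_reclaim_mode  # reclaim local node memory first

# Flush and drop caches so the run starts from a clean state.
sync
echo 3 > /proc/sys/vm/drop_caches

# Pin the cpufreq governor to performance on all CPUs.
cpupower frequency-set -r -g performance
```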

General Notes

Environment variables set by runcpu before the start of the run:
LD_LIBRARY_PATH = "/home/cpu2017/amd1704-rate-libs-revC/64;/home/cpu2017/amd1704-rate-libs-revC/32:"
MALLOC_CONF = "lg_chunk:28"

The AMD64 AOCC Compiler Suite is available at
http://developer.amd.com/tools-and-sdks/cpu-development/amd-optimizing-cc-compiler/

Binaries were compiled on a system with 2x AMD EPYC 7601 CPUs and 512 GB of memory, running RHEL 7.4

jemalloc, a general purpose malloc implementation, was obtained at
https://github.com/jemalloc/jemalloc/releases/download/4.5.0/jemalloc-4.5.0.tar.bz2
jemalloc was built with GCC v4.8.5 in RHEL v7.2 under default conditions.
jemalloc uses environment variable MALLOC_CONF with values narenas and lg_chunk:
  narenas: sets the maximum number of arenas to use for automatic multiplexing
           of threads and arenas.
  lg_chunk: sets the virtual memory chunk size (log base 2). For example,
            lg_chunk:21 sets the default chunk size to 2^21 bytes = 2 MiB.
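
Applied to this run's setting, lg_chunk:28 makes the default chunk 2^28 bytes = 256 MiB. A sketch of how the variable would be exported before launching the run (the binaries link jemalloc directly via -ljemalloc, so no preload is needed):

```shell
# Environment for the run: 256 MiB jemalloc chunk size (2^28 bytes), as in
# the General Notes above.
export MALLOC_CONF="lg_chunk:28"
# narenas could be combined in the same variable, e.g. (hypothetical value):
# export MALLOC_CONF="narenas:64,lg_chunk:28"
```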

The AOCC Gold Linker plugin was installed and used for the link stage.

The AOCC Fortran Plugin version 1.0 was used to leverage AOCC optimizers
with gfortran. It is available here:
http://developer.amd.com/amd-aocc/

Platform Notes

 BIOS Configuration:
  Thermal Configuration set to Maximum Cooling
  Performance Determinism set to Power Deterministic
  Memory Patrol Scrubbing set to Disabled
  Workload Profile set to General Throughput Compute
  Minimum Processor Idle Power Core C-State set to C6 State
  Processor Power and Utilization Monitoring set to Disabled
 Sysinfo program /home/cpu2017/bin/sysinfo
 Rev: r5797 of 2017-06-14 96c45e4568ad54c135fd618bcc091c0f
 running on dl385g10-1 Wed Nov 29 15:36:08 2017

 SUT (System Under Test) info as seen by some common utilities.
 For more information on this section, see
    https://www.spec.org/cpu2017/Docs/config.html#sysinfo

 From /proc/cpuinfo
    model name : AMD EPYC 7351 16-Core Processor
       2  "physical id"s (chips)
       64 "processors"
    cores, siblings (Caution: counting these is hw and system dependent. The following
    excerpts from /proc/cpuinfo might not be reliable.  Use with caution.)
       cpu cores : 16
       siblings  : 32
       physical 0: cores 0 1
       physical 1: cores 0 1

 From lscpu:
      Architecture:          x86_64
      CPU op-mode(s):        32-bit, 64-bit
      Byte Order:            Little Endian
      CPU(s):                64
      On-line CPU(s) list:   0-63
      Thread(s) per core:    2
      Core(s) per socket:    16
      Socket(s):             2
      NUMA node(s):          8
      Vendor ID:             AuthenticAMD
      CPU family:            23
      Model:                 1
      Model name:            AMD EPYC 7351 16-Core Processor
      Stepping:              2
      CPU MHz:               2400.000
      CPU max MHz:           2400.0000
      CPU min MHz:           1200.0000
      BogoMIPS:              4790.91
      Virtualization:        AMD-V
      L1d cache:             32K
      L1i cache:             64K
      L2 cache:              512K
      L3 cache:              8192K
      NUMA node0 CPU(s):     0-3,32-35
      NUMA node1 CPU(s):     4-7,36-39
      NUMA node2 CPU(s):     8-11,40-43
      NUMA node3 CPU(s):     12-15,44-47
      NUMA node4 CPU(s):     16-19,48-51
      NUMA node5 CPU(s):     20-23,52-55
      NUMA node6 CPU(s):     24-27,56-59
      NUMA node7 CPU(s):     28-31,60-63
      Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
      pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm
      constant_tsc rep_good nopl nonstop_tsc extd_apicid amd_dcm aperfmperf eagerfpu pni
      pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c
      rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch
      osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 mwaitx arat cpb
      hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists
      pausefilter pfthreshold vmmcall avic fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap
      clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf overflow_recov succor smca

 /proc/cpuinfo cache data
    cache size : 512 KB

 From numactl --hardware  WARNING: a numactl 'node' might or might not correspond to a
 physical chip.
   available: 8 nodes (0-7)
   node 0 cpus: 0 1 2 3 32 33 34 35
   node 0 size: 128775 MB
   node 0 free: 128560 MB
   node 1 cpus: 4 5 6 7 36 37 38 39
   node 1 size: 129022 MB
   node 1 free: 128826 MB
   node 2 cpus: 8 9 10 11 40 41 42 43
   node 2 size: 129022 MB
   node 2 free: 128842 MB
   node 3 cpus: 12 13 14 15 44 45 46 47
   node 3 size: 129022 MB
   node 3 free: 128858 MB
   node 4 cpus: 16 17 18 19 48 49 50 51
   node 4 size: 129022 MB
   node 4 free: 128875 MB
   node 5 cpus: 20 21 22 23 52 53 54 55
   node 5 size: 129022 MB
   node 5 free: 128875 MB
   node 6 cpus: 24 25 26 27 56 57 58 59
   node 6 size: 129022 MB
   node 6 free: 128871 MB
   node 7 cpus: 28 29 30 31 60 61 62 63
   node 7 size: 128868 MB
   node 7 free: 128721 MB
   node distances:
   node   0   1   2   3   4   5   6   7
     0:  10  16  16  16  32  32  32  32
     1:  16  10  16  16  32  32  32  32
     2:  16  16  10  16  32  32  32  32
     3:  16  16  16  10  32  32  32  32
     4:  32  32  32  32  10  16  16  16
     5:  32  32  32  32  16  10  16  16
     6:  32  32  32  32  16  16  10  16
     7:  32  32  32  32  16  16  16  10

 From /proc/meminfo
    MemTotal:       1056540324 kB
    HugePages_Total:       0
    Hugepagesize:       2048 kB

 /usr/bin/lsb_release -d
    SUSE Linux Enterprise Server 12 SP3

 From /etc/*release* /etc/*version*
    SuSE-release:
       SUSE Linux Enterprise Server 12 (x86_64)
       VERSION = 12
       PATCHLEVEL = 3
       # This file is deprecated and will be removed in a future service pack or release.
       # Please check /etc/os-release for details about this release.
    os-release:
       NAME="SLES"
       VERSION="12-SP3"
       VERSION_ID="12.3"
       PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
       ID="sles"
       ANSI_COLOR="0;32"
       CPE_NAME="cpe:/o:suse:sles:12:sp3"

 uname -a:
    Linux dl385g10-1 4.4.73-5-default #1 SMP Tue Jul 4 15:33:39 UTC 2017 (b7ce4e4) x86_64
    x86_64 x86_64 GNU/Linux

 run-level 3 Jan 1 06:17

 SPEC is set to: /home/cpu2017
    Filesystem     Type  Size  Used Avail Use% Mounted on
    /dev/sda4      xfs   331G   43G  288G  13% /home

 Additional information from dmidecode follows.  WARNING: Use caution when you interpret
 this section. The 'dmidecode' program reads system data which is "intended to allow
 hardware to be accurately determined", but the intent may not be met, as there are
 frequent changes to hardware, firmware, and the "DMTF SMBIOS" standard.
   BIOS HPE A40 11/10/2017
   Memory:
    16x UNKNOWN NOT AVAILABLE
    16x UNKNOWN NOT AVAILABLE 64 GB 4 rank 2666

 (End of data from sysinfo program)

Base Compiler Invocation

C benchmarks:

 clang 

C++ benchmarks:

 clang++ 

Fortran benchmarks:

 clang   gfortran 

Benchmarks using both Fortran and C:

 clang   gfortran 

Benchmarks using both C and C++:

 clang++   clang 

Benchmarks using Fortran, C, and C++:

 clang++   clang   gfortran 

Base Portability Flags

503.bwaves_r:  -DSPEC_LP64 
507.cactuBSSN_r:  -DSPEC_LP64 
508.namd_r:  -DSPEC_LP64 
510.parest_r:  -DSPEC_LP64 
511.povray_r:  -DSPEC_LP64 
519.lbm_r:  -DSPEC_LP64 
521.wrf_r:  -DSPEC_CASE_FLAG   -fconvert=big-endian   -DSPEC_LP64 
526.blender_r:  -funsigned-char   -D__BOOL_DEFINED   -DSPEC_LP64 
527.cam4_r:  -DSPEC_CASE_FLAG   -DSPEC_LP64 
538.imagick_r:  -DSPEC_LP64 
544.nab_r:  -DSPEC_LP64 
549.fotonik3d_r:  -DSPEC_LP64 
554.roms_r:  -DSPEC_LP64 

Base Optimization Flags

C benchmarks:

 -flto   -Wl,-plugin-opt=   -merge-constant   -Wl,-plugin-opt=-lsr-in-nested-loop   -disable-vect-cmp   -O3   -ffast-math   -march=znver1   -fstruct-layout=2   -mllvm   -unroll-threshold=100   -fremap-arrays   -mno-avx2   -inline-threshold=1000   -z muldefs   -ljemalloc 

C++ benchmarks:

 -flto   -Wl,-plugin-opt=   -merge-constant   -Wl,-plugin-opt=-lsr-in-nested-loop   -disable-vect-cmp   -O3   -march=znver1   -mllvm   -unroll-threshold=100   -finline-aggressive   -fremap-arrays   -inline-threshold=1000   -z muldefs   -ljemalloc 

Fortran benchmarks:

 -flto   -Wl,-plugin-opt=   -merge-constant   -Wl,-plugin-opt=-lsr-in-nested-loop   -disable-vect-cmp   -O3(gfortran)   -O3(clang)   -mavx   -madx   -funroll-loops   -ffast-math   -z muldefs   -fplugin=dragonegg.so   -fplugin-arg-dragonegg-llvm-option="-merge-constant   -disable-vect-cmp"   -ljemalloc   -lgfortran    -lamdlibm  

Benchmarks using both Fortran and C:

 -flto   -Wl,-plugin-opt=   -merge-constant   -Wl,-plugin-opt=-lsr-in-nested-loop   -disable-vect-cmp   -O3(clang)   -ffast-math   -march=znver1   -fstruct-layout=2   -mllvm   -unroll-threshold=100   -fremap-arrays   -mno-avx2   -inline-threshold=1000   -O3(gfortran)   -mavx   -madx   -funroll-loops   -z muldefs   -fplugin=dragonegg.so   -fplugin-arg-dragonegg-llvm-option="-merge-constant   -disable-vect-cmp"   -ljemalloc   -lgfortran    -lamdlibm  

Benchmarks using both C and C++:

 -flto   -Wl,-plugin-opt=   -merge-constant   -Wl,-plugin-opt=-lsr-in-nested-loop   -disable-vect-cmp   -O3   -ffast-math   -march=znver1   -fstruct-layout=2   -mllvm   -unroll-threshold=100   -fremap-arrays   -mno-avx2   -inline-threshold=1000   -finline-aggressive   -z muldefs   -ljemalloc 

Benchmarks using Fortran, C, and C++:

 -flto   -Wl,-plugin-opt=   -merge-constant   -Wl,-plugin-opt=-lsr-in-nested-loop   -disable-vect-cmp   -O3(clang)   -ffast-math   -march=znver1   -fstruct-layout=2   -mllvm   -unroll-threshold=100   -fremap-arrays   -mno-avx2   -inline-threshold=1000   -finline-aggressive   -O3(gfortran)   -mavx   -madx   -funroll-loops   -z muldefs   -fplugin=dragonegg.so   -fplugin-arg-dragonegg-llvm-option="-merge-constant   -disable-vect-cmp"   -ljemalloc 

The flags files that were used to format this result can be browsed at
http://www.spec.org/cpu2017/flags/gcc.2017-11-20.html,
http://www.spec.org/cpu2017/flags/aocc100-flags-revC-I.html,
http://www.spec.org/cpu2017/flags/HPE-Platform-Flags-AMD-V1.2-EPYC-revD.html.

You can also download the XML flags sources by saving the following links:
http://www.spec.org/cpu2017/flags/gcc.2017-11-20.xml,
http://www.spec.org/cpu2017/flags/aocc100-flags-revC-I.xml,
http://www.spec.org/cpu2017/flags/HPE-Platform-Flags-AMD-V1.2-EPYC-revD.xml.