SPEC CPU®2017 Integer Speed Result

Copyright 2017-2019 Standard Performance Evaluation Corporation

Hewlett Packard Enterprise (Test Sponsor: HPE)

ProLiant DL385 Gen10
(2.00 GHz, AMD EPYC 7702)

SPECspeed®2017_int_base = 7.85

SPECspeed®2017_int_energy_base = 40.90

SPECspeed®2017_int_peak = 8.43

SPECspeed®2017_int_energy_peak = 43.80

CPU2017 License: 003               Test Date: May-2019
Test Sponsor: HPE                  Hardware Availability: Oct-2019
Tested by: HPE                     Software Availability: Aug-2019

Benchmark result graphs are available in the PDF report.

Hardware
CPU Name: AMD EPYC 7702
  Max MHz: 3350
  Nominal: 2000
Enabled: 128 cores, 2 chips
Orderable: 1, 2 chip(s)
Cache L1: 32 KB I + 32 KB D on chip per core
  L2: 512 KB I+D on chip per core
  L3: 256 MB I+D on chip per chip,
16 MB shared / 4 cores
  Other: None
Memory: 1 TB (16 x 64 GB 4Rx4 PC4-2933Y-L)
Storage: 1 x HPE 240 GB SATA 6G M.2 SSD
Other: None
Software
OS: SUSE Linux Enterprise Server 15 (x86_64) SP1
Kernel 4.12.14-195-default
Compiler: C/C++/Fortran: Version 2.0.0 of AOCC
Parallel: Yes
Firmware: HPE BIOS Version A40 07/20/2019 released Aug-2019
File System: btrfs
System State: Run level 3 (multi-user)
Base Pointers: 64-bit
Peak Pointers: 32/64-bit
Other: jemalloc: jemalloc memory allocator library v5.1.0
Power Management: Disabled
Power
Max. Power (W): 586.1
Idle Power (W): 197.38
Min. Temperature (C): 23.75
Elevation (m): 132
Line Standard: 208 V / 60 Hz / 1 phase / 2 wires
Provisioning: Line-powered
Power Settings
Management FW: Version 1.43 of iLO5 released May 23 2019
Memory Mode: Normal
Power-Relevant Hardware
Power Supply: 1 x 800 W (non-redundant)
  Details: HPE 800W Flex Slot Titanium Hot Plug Low Halogen
Power Supply Kit (865438-B21)
Backplane: None
Other Storage: Embedded SATA Controller
Storage Model #s: 875488-B21
NICs Installed: 1 x HPE Ethernet 4-port 331i Adapter @ 1 Gb
NICs Enabled (FW/OS): 4 / 4
NICs Connected/Speed: 2 @ 1 Gb
Other HW Model #s: 6 x High Performance Fans (867810-B21)
Power Analyzer
Power Analyzer: 10.216.1.13:8888
Hardware Vendor: Yokogawa
Model: Yokogawa WT210
Serial Number: 91GC21887
Input Connection: GPIB via NI GPIB-USB-HS
Metrology Institute: NIST
Calibration By: TRANSCAT
Calibration Label: 5-E62NT-80-1
Calibration Date: 11-Jun-2019
PTDaemon™ Version: 1.9.1 (a2d19f26; 2019-07-17)
Setup Description: SUT Power Supply 1 via neoXt NXB 20815
Current Ranges Used: 1A, 2A, 5A
Voltage Range Used: 300V
Temperature Meter
Temperature Meter: 10.216.1.13:8889
Hardware Vendor: Digi International Inc.
Model: Digi WATCHPORT_H
Serial Number: V45084325
Input Connection: USB
PTDaemon Version: 1.9.1 (a2d19f26; 2019-07-17)
Setup Description: 5 mm in front of SUT main intake

Base Results Table

Benchmark  Threads  Seconds  Ratio  Energy (kJ)  Energy Ratio  Average Power (W)  Maximum Power (W)
                    (the six result columns repeat three times in each row, once per run, in run order)
SPECspeed®2017_int_base 7.85
SPECspeed®2017_int_energy_base 40.90
Results appear in the order in which they were run. Bold underlined text indicates a median measurement.
600.perlbench_s 128 414 4.28 84.2 22.9 203 212 385 4.61 78.4 24.6 204 213 394 4.51 80.3 24.0 204 213
602.gcc_s 128 466 8.54 97.5 44.4 209 235 465 8.57 97.2 44.5 209 235 470 8.48 98.3 44.0 209 237
605.mcf_s 128 337 14.00 70.9 72.7 211 242 336 14.00 70.9 72.7 211 242 336 14.10 70.9 72.7 211 243
620.omnetpp_s 128 642 2.54 132 13.4 206 216 427 3.82 88.2 20.1 206 214 653 2.50 134 13.2 206 214
623.xalancbmk_s 128 162 8.75 32.9 46.8 203 204 160 8.85 32.6 47.2 203 206 162 8.76 33.0 46.7 204 205
625.x264_s 128 148 11.90 30.1 63.7 204 206 159 11.10 32.3 59.4 203 206 151 11.70 30.8 62.3 204 206
631.deepsjeng_s 128 314 4.56 64.2 24.3 204 224 335 4.28 68.4 22.8 204 235 318 4.50 65.2 23.9 205 233
641.leela_s 128 414 4.13 83.7 22.1 202 204 416 4.11 84.2 22.0 203 206 413 4.13 83.8 22.0 203 205
648.exchange2_s 128 182 16.20 36.9 86.6 203 205 182 16.20 37.0 86.4 203 206 182 16.20 37.0 86.4 204 205
657.xz_s 128 294 21.10 71.7 93.9 244 583 294 21.10 71.8 93.7 245 586 293 21.10 71.9 93.6 245 584
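
The columns in both results tables are related by simple arithmetic, which allows a quick sanity check: Energy (kJ) is approximately Average Power (W) multiplied by Seconds, divided by 1000, and each Ratio is the SPEC reference time for that benchmark divided by the measured Seconds (so 605.mcf_s at 337 s with a ratio of 14.0 implies a reference time of roughly 4,700 s). The overall SPECspeed®2017_int_base of 7.85 is the geometric mean of the ten median base ratios. A minimal sketch of the energy check, using figures from the first 623.xalancbmk_s base run above:

  # Illustrative cross-check of one table row; awk is used only for the arithmetic.
  # Energy (kJ) ~= Average Power (W) x Seconds / 1000
  awk -v seconds=162 -v watts=203 'BEGIN { printf "%.1f kJ\n", seconds * watts / 1000 }'   # prints 32.9 kJ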

Peak Results Table

Benchmark  Threads  Seconds  Ratio  Energy (kJ)  Energy Ratio  Average Power (W)  Maximum Power (W)
                    (the six result columns repeat three times in each row, once per run, in run order)
SPECspeed®2017_int_peak 8.43
SPECspeed®2017_int_energy_peak 43.80
Results appear in the order in which they were run. Bold underlined text indicates a median measurement.
600.perlbench_s 1 365 4.86 74.6 25.8 204 214 365 4.86 74.7 25.8 204 215 366 4.85 74.7 25.8 204 214
602.gcc_s 1 462 8.62 97.1 44.6 210 236 462 8.63 96.9 44.7 210 235 462 8.62 97.0 44.6 210 236
605.mcf_s 1 315 15.00 66.2 77.8 210 243 315 15.00 66.2 77.8 210 240 315 15.00 66.2 77.8 210 241
620.omnetpp_s 1 432 3.78 89.3 19.9 207 214 432 3.78 89.3 19.9 207 213 437 3.74 90.3 19.7 207 214
623.xalancbmk_s 1 149 9.52 30.3 50.8 204 205 148 9.57 30.2 51.0 204 206 146 9.68 29.8 51.6 204 205
625.x264_s 1 145 12.20 29.5 65.0 204 206 145 12.20 29.6 64.9 205 207 145 12.20 29.6 64.9 204 206
631.deepsjeng_s 1 308 4.65 63.1 24.7 205 237 308 4.66 63.1 24.7 205 227 308 4.66 63.1 24.7 205 226
641.leela_s 128 414 4.13 83.7 22.1 202 204 416 4.11 84.2 22.0 203 206 413 4.13 83.8 22.0 203 205
648.exchange2_s 1 182 16.20 37.0 86.4 204 205 182 16.10 37.1 86.3 204 205 182 16.20 37.1 86.3 204 206
657.xz_s 128 294 21.10 71.7 93.9 244 583 294 21.10 71.8 93.7 245 586 293 21.10 71.9 93.6 245 584

Compiler Notes

The AMD64 AOCC Compiler Suite is available at
http://developer.amd.com/amd-aocc/

Submit Notes

The config file option 'submit' was used.
'numactl' was used to bind copies to the cores.
See the configuration file for details.
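
The exact binding is defined in the published configuration file. Purely as a hypothetical illustration of the technique, a config-file 'submit' line that binds the run to cores with numactl might look like the sketch below ($command is the runcpu placeholder for the benchmark invocation; the CPU range shown is an assumption, not necessarily the value used for this result):

  # Hypothetical sketch only -- see the actual config file for the submit line used here.
  submit = numactl --localalloc --physcpubind=0-127 $command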

Operating System Notes

'ulimit -s unlimited' was used to set the environment stack size
'ulimit -l 2097152' was used to set the environment limit on locked pages in memory

The runcpu command was invoked through numactl, i.e.:
numactl --interleave=all runcpu <etc>

Set dirty_ratio=8 to limit dirty cache to 8% of memory
Set swappiness=1 to swap only if necessary
Set zone_reclaim_mode=1 to free local node memory and avoid remote memory
sync then drop_caches=3 to reset caches before invoking runcpu

dirty_ratio, swappiness, zone_reclaim_mode and drop_caches were
all set using privileged echo (e.g. echo 1 > /proc/sys/vm/swappiness).
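
Taken together, the tuning described in this section corresponds to shell commands of roughly the following shape (a reconstruction from the notes above, not a copy of the scripts actually used):

  # Sketch of the OS tuning described above; run as root before invoking runcpu.
  ulimit -s unlimited                       # unlimited stack size
  ulimit -l 2097152                         # locked pages in memory limit
  echo 8 > /proc/sys/vm/dirty_ratio         # limit dirty cache to 8% of memory
  echo 1 > /proc/sys/vm/swappiness          # swap only if necessary
  echo 1 > /proc/sys/vm/zone_reclaim_mode   # free local node memory, avoid remote memory
  sync; echo 3 > /proc/sys/vm/drop_caches   # reset caches
  numactl --interleave=all runcpu ...       # then invoke runcpu through numactl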

Transparent huge pages set to 'always' for this run (OS default)

The date was incorrectly set for this system. The test date should be Aug-2019.

Environment Variables Notes

Environment variables set by runcpu before the start of the run:
GOMP_CPU_AFFINITY = "0-127"
LD_LIBRARY_PATH =
     "/cpu2017/amd_speed_aocc200_rome_C_lib/64;/cpu2017/amd_speed_aocc200_rom
     e_C_lib/32:"
MALLOC_CONF = "retain:true"
OMP_DYNAMIC = "false"
OMP_SCHEDULE = "static"
OMP_STACKSIZE = "128M"
OMP_THREAD_LIMIT = "128"

Environment variables set by runcpu during the 600.perlbench_s peak run:
GOMP_CPU_AFFINITY = "0"

Environment variables set by runcpu during the 602.gcc_s peak run:
GOMP_CPU_AFFINITY = "0"

Environment variables set by runcpu during the 605.mcf_s peak run:
GOMP_CPU_AFFINITY = "0"

Environment variables set by runcpu during the 620.omnetpp_s peak run:
GOMP_CPU_AFFINITY = "0"

Environment variables set by runcpu during the 623.xalancbmk_s peak run:
GOMP_CPU_AFFINITY = "0"
OMP_STACKSIZE = "128M"

Environment variables set by runcpu during the 625.x264_s peak run:
GOMP_CPU_AFFINITY = "0"

Environment variables set by runcpu during the 631.deepsjeng_s peak run:
GOMP_CPU_AFFINITY = "0"

Environment variables set by runcpu during the 648.exchange2_s peak run:
GOMP_CPU_AFFINITY = "0"

General Notes

Binaries were compiled on a system with 2x AMD EPYC 7601 CPUs and 512 GB of memory, running Fedora 26

NA: The test sponsor attests, as of date of publication, that CVE-2017-5754 (Meltdown)
is mitigated in the system as tested and documented.
Yes: The test sponsor attests, as of date of publication, that CVE-2017-5753 (Spectre variant 1)
is mitigated in the system as tested and documented.
Yes: The test sponsor attests, as of date of publication, that CVE-2017-5715 (Spectre variant 2)
is mitigated in the system as tested and documented.

jemalloc: configured and built with GCC v9.1.0 on Ubuntu 19.04 with -O3 -march=znver2 -flto
jemalloc 5.1.0 is available here:
https://github.com/jemalloc/jemalloc/releases/download/5.1.0/jemalloc-5.1.0.tar.bz2
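
A minimal sketch of that jemalloc build, assuming a stock autoconf flow and a gcc-9 compiler name (both assumptions for illustration; the note above is the only description of the actual build environment):

  # Sketch only: build jemalloc 5.1.0 roughly as described in the note above.
  tar xjf jemalloc-5.1.0.tar.bz2
  cd jemalloc-5.1.0
  ./configure CC=gcc-9 CFLAGS="-O3 -march=znver2 -flto"   # compiler name is an assumption
  make -j
  # The resulting libjemalloc is what the benchmarks link against via -ljemalloc (see the flag lists below).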


Submitted_by: "Bucek, James" <james.bucek@hpe.com>
Submitted: Tue Sep 17 00:02:18 EDT 2019
Submission: cpu2017-20190903-17794.sub

Platform Notes

BIOS Configuration:
 AMD SMT Option set to Disabled
 Thermal Configuration set to Optimal Cooling
 Determinism Control set to Manual
 Performance Determinism set to Power Deterministic
 Memory Patrol Scrubbing set to Disabled
 NUMA memory domains per socket set to Four memory domains per socket
 Last-Level Cache (LLC) as NUMA Node set to Enabled
 Workload Profile set to General Throughput Compute
 Minimum Processor Idle Power Core C-State set to C6 State

 Sysinfo program /cpu2017/bin/sysinfo
 Rev: r6365 of 2019-08-21 295195f888a3d7edb1e6e46a485a0011
 running on dl385gen10 Tue May 28 20:09:52 2019

 SUT (System Under Test) info as seen by some common utilities.
 For more information on this section, see
    https://www.spec.org/cpu2017/Docs/config.html#sysinfo

 From /proc/cpuinfo
    model name : AMD EPYC 7702 64-Core Processor
       2  "physical id"s (chips)
       128 "processors"
    cores, siblings (Caution: counting these is hw and system dependent. The following
    excerpts from /proc/cpuinfo might not be reliable.  Use with caution.)
       cpu cores : 64
       siblings  : 64
       physical 0: cores 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
       25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52
       53 54 55 56 57 58 59 60 61 62 63
       physical 1: cores 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
       25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52
       53 54 55 56 57 58 59 60 61 62 63

 From lscpu:
      Architecture:        x86_64
      CPU op-mode(s):      32-bit, 64-bit
      Byte Order:          Little Endian
      Address sizes:       48 bits physical, 48 bits virtual
      CPU(s):              128
      On-line CPU(s) list: 0-127
      Thread(s) per core:  1
      Core(s) per socket:  64
      Socket(s):           2
      NUMA node(s):        8
      Vendor ID:           AuthenticAMD
      CPU family:          23
      Model:               49
      Model name:          AMD EPYC 7702 64-Core Processor
      Stepping:            0
      CPU MHz:             2000.000
      CPU max MHz:         2000.0000
      CPU min MHz:         1500.0000
      BogoMIPS:            3992.56
      Virtualization:      AMD-V
      L1d cache:           32K
      L1i cache:           32K
      L2 cache:            512K
      L3 cache:            16384K
      NUMA node0 CPU(s):   0-15
      NUMA node1 CPU(s):   16-31
      NUMA node2 CPU(s):   32-47
      NUMA node3 CPU(s):   48-63
      NUMA node4 CPU(s):   64-79
      NUMA node5 CPU(s):   80-95
      NUMA node6 CPU(s):   96-111
      NUMA node7 CPU(s):   112-127
      Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
      pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm
      constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf pni
      pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c
      rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch
      osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 mwaitx cpb
      cat_l3 cdp_l3 hw_pstate ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2
      cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves
      cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt
      lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter
      pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca

 /proc/cpuinfo cache data
    cache size : 512 KB

 From numactl --hardware  WARNING: a numactl 'node' might or might not correspond to a
 physical chip.
   available: 8 nodes (0-7)
   node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
   node 0 size: 128802 MB
   node 0 free: 128649 MB
   node 1 cpus: 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
   node 1 size: 129019 MB
   node 1 free: 128879 MB
   node 2 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
   node 2 size: 129019 MB
   node 2 free: 128770 MB
   node 3 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
   node 3 size: 128978 MB
   node 3 free: 128832 MB
   node 4 cpus: 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79
   node 4 size: 129019 MB
   node 4 free: 128932 MB
   node 5 cpus: 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
   node 5 size: 129019 MB
   node 5 free: 128935 MB
   node 6 cpus: 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111
   node 6 size: 129019 MB
   node 6 free: 128934 MB
   node 7 cpus: 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
   node 7 size: 129018 MB
   node 7 free: 128933 MB
   node distances:
   node   0   1   2   3   4   5   6   7
     0:  10  12  12  12  32  32  32  32
     1:  12  10  12  12  32  32  32  32
     2:  12  12  10  12  32  32  32  32
     3:  12  12  12  10  32  32  32  32
     4:  32  32  32  32  10  12  12  12
     5:  32  32  32  32  12  10  12  12
     6:  32  32  32  32  12  12  10  12
     7:  32  32  32  32  12  12  12  10

 From /proc/meminfo
    MemTotal:       1056663620 kB
    HugePages_Total:       0
    Hugepagesize:       2048 kB

 From /etc/*release* /etc/*version*
    os-release:
       NAME="SLES"
       VERSION="15-SP1"
       VERSION_ID="15.1"
       PRETTY_NAME="SUSE Linux Enterprise Server 15 SP1"
       ID="sles"
       ID_LIKE="suse"
       ANSI_COLOR="0;32"
       CPE_NAME="cpe:/o:suse:sles:15:sp1"

 uname -a:
    Linux dl385gen10 4.12.14-195-default #1 SMP Tue May 7 10:55:11 UTC 2019 (8fba516)
    x86_64 x86_64 x86_64 GNU/Linux

 Kernel self-reported vulnerability status:

 CVE-2018-3620 (L1 Terminal Fault):        Not affected
 Microarchitectural Data Sampling:         Not affected
 CVE-2017-5754 (Meltdown):                 Not affected
 CVE-2018-3639 (Speculative Store Bypass): Mitigation: Speculative Store Bypass disabled
                                           via prctl and seccomp
 CVE-2017-5753 (Spectre variant 1):        Mitigation: __user pointer sanitization
 CVE-2017-5715 (Spectre variant 2):        Mitigation: Full AMD retpoline, IBPB:
                                           conditional, IBRS_FW, STIBP: disabled, RSB
                                           filling

 run-level 3 May 28 19:22

 SPEC is set to: /cpu2017
    Filesystem     Type   Size  Used Avail Use% Mounted on
    /dev/sda2      btrfs  222G   43G  178G  20% /

 From /sys/devices/virtual/dmi/id
     BIOS:    HPE A40 07/20/2019
     Vendor:  HPE
     Product: ProLiant DL385 Gen10
     Product Family: ProLiant
     Serial:  7CE724P4SJ

 Additional information from dmidecode follows.  WARNING: Use caution when you interpret
 this section. The 'dmidecode' program reads system data which is "intended to allow
 hardware to be accurately determined", but the intent may not be met, as there are
 frequent changes to hardware, firmware, and the "DMTF SMBIOS" standard.
   Memory:
     16x UNKNOWN NOT AVAILABLE
     16x UNKNOWN NOT AVAILABLE 64 GB 4 rank 2933

 (End of data from sysinfo program)

Power Settings Notes

PTDaemon, used to measure power and temperature, was run on a ProLiant DL360 Gen9 controller
with 2x Intel Xeon E5-2660 v3 CPUs and 128 GB of memory, running Windows Server 2012 R2.
Power management in the OS was disabled by setting the Linux CPU governor to performance for all cores:
 cpupower frequency-set -r -g performance
Power management in the BIOS was default except for any settings mentioned in BIOS Configuration.
No power management settings were set in the management firmware.
The Embedded SATA controller was the HPE Smart Array S100i SR Gen10 SW RAID.
The system was configured with 3 drive cage blanks, 6 High Performance Fans,
16 DIMM blanks, 2 high performance heatsinks (882098-B21) and baffles that fit over
the high performance heatsinks in order to produce correct airflow and cooling.
The run was started and observed through the management firmware.
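
The governor setting applied with the cpupower command above can be verified independently; a small sketch (not part of the reported run) that confirms every logical CPU reports the performance governor:

  # Sketch only: confirm all 128 logical CPUs are using the 'performance' governor.
  cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c
  # Expected output on this configuration: "    128 performance"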

Compiler Version Notes

==============================================================================
C       | 600.perlbench_s(base, peak) 602.gcc_s(base, peak) 605.mcf_s(base,
        | peak) 625.x264_s(base, peak) 657.xz_s(base, peak)
------------------------------------------------------------------------------
AOCC.LLVM.2.0.0.B191.2019_07_19 clang version 8.0.0 (CLANG: Jenkins
  AOCC_2_0_0-Build#191) (based on LLVM AOCC.LLVM.2.0.0.B191.2019_07_19)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /sppo/dev/compilers/aocc-compiler-2.0.0/bin
------------------------------------------------------------------------------

==============================================================================
C++     | 623.xalancbmk_s(peak)
------------------------------------------------------------------------------
AOCC.LLVM.2.0.0.B191.2019_07_19 clang version 8.0.0 (CLANG: Jenkins
  AOCC_2_0_0-Build#191) (based on LLVM AOCC.LLVM.2.0.0.B191.2019_07_19)
Target: i386-unknown-linux-gnu
Thread model: posix
InstalledDir: /sppo/dev/compilers/aocc-compiler-2.0.0/bin
------------------------------------------------------------------------------

==============================================================================
C++     | 620.omnetpp_s(base, peak) 623.xalancbmk_s(base)
        | 631.deepsjeng_s(base, peak) 641.leela_s(base, peak)
------------------------------------------------------------------------------
AOCC.LLVM.2.0.0.B191.2019_07_19 clang version 8.0.0 (CLANG: Jenkins
  AOCC_2_0_0-Build#191) (based on LLVM AOCC.LLVM.2.0.0.B191.2019_07_19)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /sppo/dev/compilers/aocc-compiler-2.0.0/bin
------------------------------------------------------------------------------

==============================================================================
Fortran | 648.exchange2_s(base, peak)
------------------------------------------------------------------------------
AOCC.LLVM.2.0.0.B191.2019_07_19 clang version 8.0.0 (CLANG: Jenkins
  AOCC_2_0_0-Build#191) (based on LLVM AOCC.LLVM.2.0.0.B191.2019_07_19)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /sppo/dev/compilers/aocc-compiler-2.0.0/bin
------------------------------------------------------------------------------

Base Compiler Invocation

C benchmarks:

 clang 

C++ benchmarks:

 clang++ 

Fortran benchmarks:

 flang 

Base Portability Flags

600.perlbench_s:  -DSPEC_LINUX_X64   -DSPEC_LP64 
602.gcc_s:  -DSPEC_LP64 
605.mcf_s:  -DSPEC_LP64 
620.omnetpp_s:  -DSPEC_LP64 
623.xalancbmk_s:  -DSPEC_LINUX   -DSPEC_LP64 
625.x264_s:  -DSPEC_LP64 
631.deepsjeng_s:  -DSPEC_LP64 
641.leela_s:  -DSPEC_LP64 
648.exchange2_s:  -DSPEC_LP64 
657.xz_s:  -DSPEC_LP64 

Base Optimization Flags

C benchmarks:

 -flto   -Wl,-mllvm -Wl,-function-specialize   -Wl,-mllvm -Wl,-region-vectorize   -Wl,-mllvm -Wl,-vector-library=LIBMVEC   -Wl,-mllvm -Wl,-reduce-array-computations=3   -O3   -ffast-math   -march=znver2   -fstruct-layout=3   -mllvm -unroll-threshold=50   -fremap-arrays   -mllvm -function-specialize   -mllvm -enable-gvn-hoist   -mllvm -reduce-array-computations=3   -mllvm -global-vectorize-slp   -mllvm -vector-library=LIBMVEC   -mllvm -inline-threshold=1000   -flv-function-specialization   -z muldefs   -DSPEC_OPENMP   -fopenmp   -DUSE_OPENMP   -fopenmp=libomp   -lomp   -lpthread   -ldl   -lmvec   -lamdlibm   -ljemalloc   -lflang 
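
To show how this flag dump maps onto a command line, the sketch below is a hypothetical, abridged single-file compile-and-link invocation using a subset of the base C flags listed above (mybench.c and the availability of the AMD math, OpenMP, flang and jemalloc runtime libraries on the library search path are assumptions; the real builds are driven by runcpu):

  # Hypothetical, abridged illustration of the base C optimization flags on one source file.
  clang -O3 -ffast-math -march=znver2 -flto -fstruct-layout=3 \
        -mllvm -vector-library=LIBMVEC -mllvm -inline-threshold=1000 \
        -z muldefs -DSPEC_OPENMP -DUSE_OPENMP -fopenmp=libomp \
        -o mybench mybench.c \
        -lomp -lpthread -ldl -lmvec -lamdlibm -ljemalloc -lflang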

C++ benchmarks:

 -flto   -Wl,-mllvm -Wl,-function-specialize   -Wl,-mllvm -Wl,-region-vectorize   -Wl,-mllvm -Wl,-vector-library=LIBMVEC   -Wl,-mllvm -Wl,-reduce-array-computations=3   -Wl,-mllvm -Wl,-suppress-fmas   -O3   -ffast-math   -march=znver2   -mllvm -loop-unswitch-threshold=200000   -mllvm -vector-library=LIBMVEC   -mllvm -unroll-threshold=100   -flv-function-specialization   -mllvm -enable-partial-unswitch   -z muldefs   -DSPEC_OPENMP   -fopenmp   -DUSE_OPENMP   -fopenmp=libomp   -lomp   -lpthread   -ldl   -lmvec   -lamdlibm   -ljemalloc   -lflang 

Fortran benchmarks:

 -flto   -Wl,-mllvm -Wl,-function-specialize   -Wl,-mllvm -Wl,-region-vectorize   -Wl,-mllvm -Wl,-vector-library=LIBMVEC   -Wl,-mllvm -Wl,-reduce-array-computations=3   -ffast-math   -Wl,-mllvm -Wl,-inline-recursion=4   -Wl,-mllvm -Wl,-lsr-in-nested-loop   -Wl,-mllvm -Wl,-enable-iv-split   -O3   -march=znver2   -funroll-loops   -Mrecursive   -mllvm -vector-library=LIBMVEC   -z muldefs   -mllvm -disable-indvar-simplify   -mllvm -unroll-aggressive   -mllvm -unroll-threshold=150   -DSPEC_OPENMP   -fopenmp   -DUSE_OPENMP   -fopenmp=libomp   -lomp   -lpthread   -ldl   -lmvec   -lamdlibm   -ljemalloc   -lflang 

Base Other Flags

C benchmarks:

 -Wno-return-type 

C++ benchmarks:

 -Wno-return-type 

Fortran benchmarks:

 -Wno-return-type 

Peak Compiler Invocation

C benchmarks:

 clang 

C++ benchmarks:

 clang++ 

Fortran benchmarks:

 flang 

Peak Portability Flags

600.perlbench_s:  -DSPEC_LINUX_X64   -DSPEC_LP64 
602.gcc_s:  -DSPEC_LP64 
605.mcf_s:  -DSPEC_LP64 
620.omnetpp_s:  -DSPEC_LP64 
623.xalancbmk_s:  -DSPEC_LINUX   -D_FILE_OFFSET_BITS=64 
625.x264_s:  -DSPEC_LP64 
631.deepsjeng_s:  -DSPEC_LP64 
641.leela_s:  -DSPEC_LP64 
648.exchange2_s:  -DSPEC_LP64 
657.xz_s:  -DSPEC_LP64 

Peak Optimization Flags

C benchmarks:

600.perlbench_s:  -flto   -Wl,-mllvm -Wl,-function-specialize   -Wl,-mllvm -Wl,-region-vectorize   -Wl,-mllvm -Wl,-vector-library=LIBMVEC   -Wl,-mllvm -Wl,-reduce-array-computations=3   -fprofile-instr-generate(pass 1)   -fprofile-instr-use(pass 2)   -Ofast   -march=znver2   -mno-sse4a   -fstruct-layout=5   -mllvm -vectorize-memory-aggressively   -mllvm -function-specialize   -mllvm -enable-gvn-hoist   -mllvm -unroll-threshold=50   -fremap-arrays   -mllvm -vector-library=LIBMVEC   -mllvm -reduce-array-computations=3   -mllvm -global-vectorize-slp   -mllvm -inline-threshold=1000   -flv-function-specialization   -DSPEC_OPENMP   -fopenmp   -DUSE_OPENMP   -lmvec   -lamdlibm   -fopenmp=libomp   -lomp   -lpthread   -ldl   -ljemalloc   -lflang 
602.gcc_s:  -flto   -Wl,-mllvm -Wl,-function-specialize   -Wl,-mllvm -Wl,-region-vectorize   -Wl,-mllvm -Wl,-vector-library=LIBMVEC   -Wl,-mllvm -Wl,-reduce-array-computations=3   -Ofast   -march=znver2   -mno-sse4a   -fstruct-layout=5   -mllvm -vectorize-memory-aggressively   -mllvm -function-specialize   -mllvm -enable-gvn-hoist   -mllvm -unroll-threshold=50   -fremap-arrays   -mllvm -vector-library=LIBMVEC   -mllvm -reduce-array-computations=3   -mllvm -global-vectorize-slp   -mllvm -inline-threshold=1000   -flv-function-specialization   -z muldefs   -DSPEC_OPENMP   -fopenmp   -DUSE_OPENMP   -fgnu89-inline   -fopenmp=libomp   -lomp   -lpthread   -ldl   -ljemalloc 
605.mcf_s:  -flto   -Wl,-mllvm -Wl,-function-specialize   -Wl,-mllvm -Wl,-region-vectorize   -Wl,-mllvm -Wl,-vector-library=LIBMVEC   -Wl,-mllvm -Wl,-reduce-array-computations=3   -Ofast   -march=znver2   -mno-sse4a   -fstruct-layout=5   -mllvm -vectorize-memory-aggressively   -mllvm -function-specialize   -mllvm -enable-gvn-hoist   -mllvm -unroll-threshold=50   -fremap-arrays   -mllvm -vector-library=LIBMVEC   -mllvm -reduce-array-computations=3   -mllvm -global-vectorize-slp   -mllvm -inline-threshold=1000   -flv-function-specialization   -DSPEC_OPENMP   -fopenmp   -DUSE_OPENMP   -lmvec   -lamdlibm   -fopenmp=libomp   -lomp   -lpthread   -ldl   -ljemalloc   -lflang 
625.x264_s:  Same as 600.perlbench_s 
657.xz_s:  basepeak = yes 

C++ benchmarks:

620.omnetpp_s:  -flto   -Wl,-mllvm -Wl,-function-specialize   -Wl,-mllvm -Wl,-region-vectorize   -Wl,-mllvm -Wl,-vector-library=LIBMVEC   -Wl,-mllvm -Wl,-reduce-array-computations=3   -Ofast   -march=znver2   -flv-function-specialization   -mllvm -unroll-threshold=100   -mllvm -enable-partial-unswitch   -mllvm -loop-unswitch-threshold=200000   -mllvm -vector-library=LIBMVEC   -mllvm -inline-threshold=1000   -DSPEC_OPENMP   -fopenmp   -DUSE_OPENMP   -fopenmp=libomp   -lomp   -lpthread   -ldl   -lmvec   -lamdlibm   -ljemalloc   -lflang 
623.xalancbmk_s:  -m32   -flto   -Wl,-mllvm -Wl,-function-specialize   -Wl,-mllvm -Wl,-region-vectorize   -Wl,-mllvm -Wl,-vector-library=LIBMVEC   -Wl,-mllvm -Wl,-reduce-array-computations=3   -Ofast   -march=znver2   -flv-function-specialization   -mllvm -unroll-threshold=100   -mllvm -enable-partial-unswitch   -mllvm -loop-unswitch-threshold=200000   -mllvm -vector-library=LIBMVEC   -mllvm -inline-threshold=1000   -DSPEC_OPENMP   -fopenmp   -DUSE_OPENMP   -fopenmp=libomp   -lomp   -lpthread   -ldl   -ljemalloc 
631.deepsjeng_s:  Same as 620.omnetpp_s 
641.leela_s:  basepeak = yes 

Fortran benchmarks:

 -flto   -Wl,-mllvm -Wl,-function-specialize   -Wl,-mllvm -Wl,-region-vectorize   -Wl,-mllvm -Wl,-vector-library=LIBMVEC   -Wl,-mllvm -Wl,-reduce-array-computations=3   -ffast-math   -Wl,-mllvm -Wl,-inline-recursion=4   -Wl,-mllvm -Wl,-lsr-in-nested-loop   -Wl,-mllvm -Wl,-enable-iv-split   -O3   -march=znver2   -funroll-loops   -Mrecursive   -mllvm -vector-library=LIBMVEC   -mllvm -disable-indvar-simplify   -mllvm -unroll-aggressive   -mllvm -unroll-threshold=150   -DSPEC_OPENMP   -fopenmp   -DUSE_OPENMP   -fopenmp=libomp   -lomp   -lpthread   -ldl   -lmvec   -lamdlibm   -ljemalloc   -lflang 

Peak Other Flags

C benchmarks:

 -Wno-return-type 

C++ benchmarks (except as noted below):

 -Wno-return-type 
623.xalancbmk_s:  -Wno-return-type   -L/sppo/dev/cpu2017/v110/amd_speed_aocc200_rome_C_lib/32 

Fortran benchmarks:

 -Wno-return-type 

The flags files that were used to format this result can be browsed at
http://www.spec.org/cpu2017/flags/aocc200-flags-C1-HPE.html,
http://www.spec.org/cpu2017/flags/HPE-Platform-Flags-AMD-V1.2-EPYC-revF.html.

You can also download the XML flags sources by saving the following links:
http://www.spec.org/cpu2017/flags/aocc200-flags-C1-HPE.xml,
http://www.spec.org/cpu2017/flags/HPE-Platform-Flags-AMD-V1.2-EPYC-revF.xml.