CPU2017 Flag Description
Dell Inc. PowerEdge R7525 (AMD EPYC 7272 12-Core Processor)

Compilers: AMD Optimizing C/C++ Compiler Suite


Base Compiler Invocation

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C

Benchmarks using both C and C++

Benchmarks using Fortran, C, and C++


Peak Compiler Invocation

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C

Benchmarks using both C and C++

Benchmarks using Fortran, C, and C++


Base Portability Flags

503.bwaves_r

507.cactuBSSN_r

508.namd_r

510.parest_r

511.povray_r

519.lbm_r

521.wrf_r

526.blender_r

527.cam4_r

538.imagick_r

544.nab_r

549.fotonik3d_r

554.roms_r


Peak Portability Flags

503.bwaves_r

507.cactuBSSN_r

508.namd_r

510.parest_r

511.povray_r

519.lbm_r

521.wrf_r

526.blender_r

527.cam4_r

538.imagick_r

544.nab_r

549.fotonik3d_r

554.roms_r


Base Optimization Flags

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C

Benchmarks using both C and C++

Benchmarks using Fortran, C, and C++


Peak Optimization Flags

C benchmarks

519.lbm_r

538.imagick_r

544.nab_r

C++ benchmarks

508.namd_r

510.parest_r

Fortran benchmarks

503.bwaves_r

549.fotonik3d_r

554.roms_r

Benchmarks using both Fortran and C

521.wrf_r

527.cam4_r

Benchmarks using both C and C++

511.povray_r

526.blender_r

Benchmarks using Fortran, C, and C++

507.cactuBSSN_r


Base Other Flags

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C

Benchmarks using both C and C++

Benchmarks using Fortran, C, and C++


Peak Other Flags

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C

Benchmarks using both C and C++

Benchmarks using Fortran, C, and C++


Implicitly Included Flags

This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.


Commands and Options Used to Submit Benchmark Runs

Using numactl to bind processes and memory to cores

For multi-copy runs or single copy runs on systems with multiple sockets, it is advantageous to bind a process to a particular core. Otherwise, the OS may arbitrarily move your process from one core to another. This can affect performance. To help, SPEC allows the use of a "submit" command where users can specify a utility to use to bind processes. We have found the utility 'numactl' to be the best choice.

numactl runs processes with a specific NUMA scheduling or memory placement policy. The policy is set for a command and inherited by all of its children. The numactl flag "--physcpubind" specifies which core(s) to bind the process to; "-l" instructs numactl to keep a process's memory on the local node, while "-m" specifies which node(s) to place a process's memory on. For full details on using numactl, please refer to your Linux documentation ('man numactl').

Note that some older versions of numactl incorrectly interpret application arguments as its own. For example, with the command "numactl --physcpubind=0 -l a.out -m a", numactl will interpret a.out's "-m" option as its own "-m" option. To work around this problem, we put the command to be run in a shell script and then run the shell script using numactl. For example: "echo 'a.out -m a' > run.sh ; numactl --physcpubind=0 bash run.sh"
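As an illustrative sketch only (the core numbers are hypothetical and the binary name "a.out" is reused from the example above, not taken from this result), the wrapper-script approach extends naturally to multi-copy runs, binding each copy to its own core with local memory:

  # bind one copy per core, keeping each copy's memory on its local NUMA node
  for core in 0 1 2 3; do
      echo './a.out -m a' > run_$core.sh
      numactl --physcpubind=$core -l bash run_$core.sh &
  done
  wait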


Shell, Environment, and Other Software Settings

Transparent Huge Pages (THP)

THP is an abstraction layer that automates most aspects of creating, managing, and using huge pages. It is designed to hide much of the complexity of using huge pages from system administrators and developers. Huge pages increase the memory page size from 4 kilobytes to 2 megabytes, which provides significant performance advantages on systems with highly contended resources and large-memory workloads. If memory utilization is too high, or if memory is too badly fragmented for huge pages to be allocated, the kernel falls back to ordinary 4 KB pages. Most recent Linux OS releases have THP enabled by default.

THP usage is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/enabled. Possible values:

- always: use THP wherever possible
- madvise: use THP only in memory regions explicitly marked with madvise(MADV_HUGEPAGE)
- never: do not use THP

The SPEC CPU benchmark codes themselves never explicitly request huge pages, as the mechanism to do that is OS-specific and can change over time. Libraries such as jemalloc which are used by the benchmarks may explicitly request huge pages, and use of such libraries can make the "madvise" setting relevant and useful.

When no huge pages are immediately available and one is requested, how the system handles the request for THP creation is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/defrag. Possible values:

- always: stall the requesting application while memory is reclaimed and compacted so the request can be satisfied immediately
- defer: fall back to small pages and wake the background kernel threads to assemble huge pages for later use
- defer+madvise: behave like "always" for madvise(MADV_HUGEPAGE) regions and like "defer" for all others
- madvise: stall only for madvise(MADV_HUGEPAGE) regions
- never: never stall; always fall back to small pages

An application that "always" requests THP often can benefit from waiting for an allocation until those huge pages can be assembled.
For more information see the Linux transparent hugepage documentation.
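For example, the current policies can be inspected and changed (as root) through sysfs; the "madvise" values below are purely illustrative, not the settings used for this result:

  cat /sys/kernel/mm/transparent_hugepage/enabled    # e.g. [always] madvise never
  echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
  echo madvise > /sys/kernel/mm/transparent_hugepage/defrag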

ulimit -s <n>

Sets the stack size to n kbytes, or to "unlimited" to allow the stack size to grow without limit.

ulimit -l <n>

Sets the maximum amount of memory, in kbytes, that may be locked into physical memory.
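A typical run script might raise both limits before launching the benchmarks; the values below are illustrative, not taken from this result:

  ulimit -s unlimited    # let the stack grow without limit
  ulimit -l 2097152      # allow up to 2 GB (in kbytes) to be locked in physical memory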

powersave -f (on SuSE)

Makes the powersave daemon set the CPUs to the highest supported frequency.

/etc/init.d/cpuspeed stop (on Red Hat)

Disables the cpu frequency scaling program in order to set the CPUs to the highest supported frequency.

LD_LIBRARY_PATH

An environment variable that indicates the location in the filesystem of bundled libraries to use when running the benchmark binaries.
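For example (the directory shown is hypothetical), a run script might prepend the bundled-library directory:

  export LD_LIBRARY_PATH=/path/to/bundled/libs:$LD_LIBRARY_PATH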

kernel/numa_balancing

This OS setting controls automatic NUMA balancing on memory mapping and process placement. NUMA balancing incurs overhead for no benefit on workloads that are already bound to NUMA nodes.

Possible settings:

- 0: disables automatic NUMA balancing
- 1: enables automatic NUMA balancing

For more information see the numa_balancing entry in the Linux sysctl documentation.
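For example, automatic NUMA balancing can be turned off at run-time with:

  sysctl -w kernel.numa_balancing=0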

kernel/randomize_va_space (ASLR)

This setting can be used to select the type of process address space randomization. Defaults differ based on whether the architecture supports ASLR, whether the kernel was built with the CONFIG_COMPAT_BRK option or not, or the kernel boot options used.

Possible settings:

- 0: no randomization; everything is static
- 1: conservative randomization; shared libraries, stack, mmap(), VDSO, and heap are randomized
- 2: full randomization; in addition to the elements listed above, memory managed through brk() is also randomized

Disabling ASLR can make process execution more deterministic and runtimes more consistent. For more information see the randomize_va_space entry in the Linux sysctl documentation.
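For example, ASLR can be disabled at run-time with either of:

  sysctl -w kernel.randomize_va_space=0
  echo 0 > /proc/sys/kernel/randomize_va_space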

MALLOC_CONF

The jemalloc library has tunable parameters, many of which may be changed at run-time via several mechanisms, one of which is the MALLOC_CONF environment variable. Other methods, as well as the order in which they're referenced, are detailed in the jemalloc documentation's TUNING section.

The options that can be tuned at run-time are everything in the jemalloc documentation's MALLCTL NAMESPACE section that begins with "opt.".
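As an illustration of the syntax only (these particular options are not taken from this result), MALLOC_CONF takes a comma-separated list of option:value pairs corresponding to the "opt." names:

  export MALLOC_CONF="background_thread:true,metadata_thp:auto"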

The options that may be encountered in SPEC CPU 2017 results are detailed here:

PGHPF_ZMEM

An environment variable used to initialize the allocated memory. Setting PGHPF_ZMEM to "Yes" has the effect of initializing all allocated memory to zero.

GOMP_CPU_AFFINITY

This environment variable is used to set the thread affinity for threads spawned by OpenMP.

OMP_DYNAMIC

This environment variable is defined as part of the OpenMP standard. Setting it to "false" prevents the OpenMP runtime from dynamically adjusting the number of threads to use for parallel execution.

For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.

OMP_SCHEDULE

This environment variable is defined as part of the OpenMP standard. Setting it to "static" causes loop iterations to be assigned to threads in round-robin fashion in the order of the thread number.

For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.

OMP_STACKSIZE

This environment variable is defined as part of the OpenMP standard and controls the size of the stack for threads created by OpenMP.

For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.

OMP_THREAD_LIMIT

This environment variable is defined as part of the OpenMP standard and limits the maximum number of OpenMP threads that can be created.

For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.
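Taken together, these variables might be set as follows in a run script; all values below are illustrative, not taken from this result:

  export GOMP_CPU_AFFINITY="0-11"   # pin OpenMP threads to cores 0 through 11
  export OMP_DYNAMIC=false          # do not let the runtime change the thread count
  export OMP_SCHEDULE=static        # round-robin assignment of loop iterations
  export OMP_STACKSIZE=128M         # per-thread stack size
  export OMP_THREAD_LIMIT=12        # at most 12 OpenMP threads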


Operating System Tuning Parameters

kernel.randomize_va_space (ASLR)
This is the same setting described under kernel/randomize_va_space (ASLR) in the previous section; see that entry for the description and possible settings.

Transparent Hugepages (THP)
This is the same setting described under Transparent Huge Pages (THP) in the previous section; see that entry for the description and the possible values of the "enabled" and "defrag" controls.

drop_caches
sysctl is used to change kernel parameters at run-time.
-w vm.drop_caches=3 - clears the filesystem caches (the pagecache plus the dentry and inode caches)
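Because drop_caches only discards clean cache entries, it is common to flush dirty pages to disk first so that as much cache as possible can actually be dropped:

  sync; sysctl -w vm.drop_caches=3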

tuned-adm
This command-line utility allows you to switch between user-definable tuning profiles. Several predefined profiles are included, and you can also create your own profile, either by copying an existing one or by creating a completely new one. The distribution-provided profiles are stored in subdirectories below /usr/lib/tuned, and user-defined profiles in subdirectories below /etc/tuned. If profiles with the same name exist in both places, the user-defined profile takes precedence.

Profile used:
- throughput-performance - Broadly applicable tuning that provides excellent performance across a variety of common server workloads
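For example, the profile can be activated and then verified with:

  tuned-adm profile throughput-performance
  tuned-adm active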

Firmware / BIOS / Microcode Settings

Logical Processor
Default: Enabled

Each processor core supports up to two logical processors. When set to Enabled, the BIOS reports all logical processors. When set to Disabled, the BIOS reports only one logical processor per core. Generally, a higher logical processor count increases performance for most multi-threaded workloads, so the recommendation is to keep this option Enabled. However, there are some floating-point/scientific workloads, including HPC workloads, where disabling it can result in higher performance.

Virtualization Technology
Default: Enabled

When set to Enabled, the BIOS enables the processor virtualization features and provides virtualization support to the operating system (OS) through the DMAR table. In general, only virtualized environments such as VMware(R) ESX(TM), Microsoft Hyper-V(R), Red Hat(R) KVM, and other virtualized operating systems take advantage of these features. Disabling this feature is not known to significantly alter the performance or power characteristics of the system, so leaving it Enabled is advised for most cases.

Memory Interleaving
Default: Auto

Memory interleaving is supported if a symmetric memory configuration is installed. When this field is set to Disabled, the system supports Non-Uniform Memory Access (NUMA) (asymmetric) memory configurations. Channel interleaving is available with all configurations and is the intra-die memory interleave option: the memory behind each unified memory controller (UMC) is interleaved and presented as one NUMA domain per die.

NUMA Nodes per Socket
Default: 1

This field allows configuration of the number of memory NUMA domains per socket. The configuration can consist of one whole domain per socket (NPS1), two domains (NPS2), or four domains (NPS4). On two-socket platforms, an additional NPS profile (NPS0) is available that maps the whole system memory as a single NUMA domain.
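Once the system has booted, the NUMA topology that results from the chosen NPS setting can be verified from the OS, for example:

  numactl --hardware    # lists the NUMA nodes, their CPUs, and their memory sizes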

L3 Cache as NUMA Domain
Default: Disabled

When enabled, this field causes each CCX (core complex) within the processor to be declared as a separate NUMA domain.

DRAM Refresh Delay
Default: Minimum

Allowing the CPU memory controller to delay running the REFRESH command can improve performance for some workloads. Setting the delay to Minimum ensures that the memory controller runs the REFRESH command at regular intervals.

System Profile
Default: Performance Per Watt (DAPC)

When set to Custom, other settings can be changed for Memory Patrol Scrub, CPU Power Management, C1E, C-States, and the Energy Efficiency Policy.

CPU Power Management
Default: System DBPM (DAPC)

Allows selection of CPU power management methodology. Maximum Performance is typically selected for performance-centric workloads where it is acceptable to consume additional power to achieve the highest possible performance for the computing environment. This mode drives processor frequency to the maximum across all cores (although idled cores can still be frequency reduced by C-state enforcement through BIOS or OS mechanisms if enabled). This mode also offers the lowest latency of the CPU Power Management Mode options, so is always preferred for latency-sensitive environments. OS DBPM is another performance-per-watt option that relies on the operating system to dynamically control individual cores in order to save power.

Memory Patrol Scrub
Default: Standard

Patrol Scrubbing searches the memory for errors and repairs correctable errors to prevent the accumulation of memory errors. When set to Disabled, no patrol scrubbing will occur. When set to Standard Mode, the entire memory array will be scrubbed once in a 24 hour period. When set to Extended Mode, the entire memory array will be scrubbed more frequently to further increase system reliability.

PCI ASPM L1 Link Power Management
Default: Enabled

When enabled, PCIe Active State Power Management (ASPM) can reduce overall system power a bit while slightly reducing system performance. NOTE: Some devices may not perform properly (they may hang or cause the system to hang) when ASPM is enabled; for this reason, L1 will only be enabled for validated, qualified cards.

CPU Interconnect Bus Link Power Management
Default: Enabled

When Enabled, CPU interconnect bus link power management can reduce overall system power a bit while slightly reducing system performance.

Algorithm Performance Boost Disable (ApbDis)
Default: Disabled

Fan Speed Offset
Default: Off

Configuring this option provides additional cooling for the server. If hardware is added (for example, new PCIe cards), additional cooling may be required. A fan speed offset causes fan speeds to increase (by the offset percentage) over the baseline fan speeds calculated by the Thermal Control algorithm.

Flag description origin markings:

[user] Indicates that the flag description came from the user flags file.
[suite] Indicates that the flag description came from the suite-wide flags file.
[benchmark] Indicates that the flag description came from a per-benchmark flags file.

The flags files that were used to format this result can be browsed at
http://www.spec.org/cpu2017/flags/aocc300-flags-B2.html,
http://www.spec.org/cpu2017/flags/Dell-Platform-Flags-PowerEdge-AMD-Milan-rev2.4.html.

You can also download the XML flags sources by saving the following links:
http://www.spec.org/cpu2017/flags/aocc300-flags-B2.xml,
http://www.spec.org/cpu2017/flags/Dell-Platform-Flags-PowerEdge-AMD-Milan-rev2.4.xml.


For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact info@spec.org
Copyright 2017-2022 Standard Performance Evaluation Corporation
Tested with SPEC CPU2017 v1.1.8.
Report generated on 2022-02-15 16:27:43 by SPEC CPU2017 flags formatter v5178.