CPU2017 Flag Description
Fujitsu PRIMERGY RX2450 M1, AMD EPYC 7513 2.60 GHz

Compilers: AMD Optimizing C/C++ Compiler Suite


Base Compiler Invocation

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C

Benchmarks using both C and C++

Benchmarks using Fortran, C, and C++


Base Portability Flags

503.bwaves_r

507.cactuBSSN_r

508.namd_r

510.parest_r

511.povray_r

519.lbm_r

521.wrf_r

526.blender_r

527.cam4_r

538.imagick_r

544.nab_r

549.fotonik3d_r

554.roms_r


Base Optimization Flags

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C

Benchmarks using both C and C++

Benchmarks using Fortran, C, and C++


Base Other Flags

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C

Benchmarks using both C and C++

Benchmarks using Fortran, C, and C++


Implicitly Included Flags

This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.


Commands and Options Used to Submit Benchmark Runs

Using numactl to bind processes and memory to cores

For multi-copy runs, or single-copy runs on systems with multiple sockets, it is advantageous to bind a process to a particular core. Otherwise, the OS may arbitrarily move the process from one core to another, which can hurt performance. To help, SPEC allows the use of a "submit" command in which users can specify a utility for binding processes. We have found the utility 'numactl' to be the best choice.

numactl runs processes with a specific NUMA scheduling or memory placement policy. The policy is set for a command and inherited by all of its children. The numactl flag "--physcpubind" specifies which core(s) to bind the process to. "-l" instructs numactl to keep a process's memory on the local node, while "-m" specifies which node(s) to place a process's memory on. For full details on using numactl, please refer to your Linux documentation ('man numactl').

Note that some older versions of numactl incorrectly interpret application arguments as its own. For example, with the command "numactl --physcpubind=0 -l a.out -m a", numactl will interpret a.out's "-m" option as its own "-m" option. To work around this problem, we put the command to be run in a shell script and then run the shell script using numactl. For example: "echo 'a.out -m a' > run.sh ; numactl --physcpubind=0 bash run.sh"
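The wrapper-script workaround described above can be sketched as a short shell session. This is illustrative only: the bound core, the memory policy, and the stand-in command are examples, and `numactl` may not be installed on every system.

```shell
# Put the real command, with its own options, into a wrapper script so that
# numactl cannot misread the application's "-m a" as its own -m option.
cat > run.sh <<'EOF'
sleep 0.1   # stand-in for "a.out -m a"
EOF

if command -v numactl >/dev/null 2>&1; then
    # Bind to core 0; "-l" keeps memory allocations on the local NUMA node.
    numactl --physcpubind=0 -l bash run.sh
else
    bash run.sh   # numactl unavailable: run unbound
fi
echo "run complete"
```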


Shell, Environment, and Other Software Settings

Transparent Huge Pages (THP)

THP is an abstraction layer that automates most aspects of creating, managing, and using huge pages, hiding much of their complexity from system administrators and developers. Huge pages increase the memory page size from 4 kilobytes to 2 megabytes, which provides significant performance advantages on systems with highly contended resources and large-memory workloads. If memory utilization is too high, or memory is too fragmented for huge pages to be allocated, the kernel falls back to standard 4 KB pages. Most recent Linux OS releases have THP enabled by default.

THP usage is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/enabled. Possible values are "always", "madvise", and "never".

The SPEC CPU benchmark codes themselves never explicitly request huge pages, as the mechanism to do that is OS-specific and can change over time. Libraries such as jemalloc which are used by the benchmarks may explicitly request huge pages, and use of such libraries can make the "madvise" setting relevant and useful.

When no huge pages are immediately available and one is requested, the handling of the request for THP creation is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/defrag. Possible values are "always", "defer", "defer+madvise", "madvise", and "never".

An application that requests THP under the "always" setting can often benefit from waiting for an allocation until those huge pages can be assembled.
For more information see the Linux transparent hugepage documentation.
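As a concrete illustration, the current policies can be inspected through the sysfs files named above (the active value is shown in brackets); changing them requires root privileges. The loop below merely reads the settings and is safe to run anywhere.

```shell
# Show the current THP allocation and defrag policies, if the kernel exposes them.
for f in /sys/kernel/mm/transparent_hugepage/enabled \
         /sys/kernel/mm/transparent_hugepage/defrag; do
    if [ -r "$f" ]; then
        printf '%s: ' "$f"
        cat "$f"                       # e.g. "[always] madvise never"
    else
        echo "$f: not available on this kernel"
    fi
done
# Changing a policy requires root, e.g.:
#   echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
```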

ulimit -s <n>

Sets the stack size to n kbytes, or unlimited to allow the stack size to grow without limit.

ulimit -l <n>

Sets the maximum size of memory that may be locked into physical memory.

powersave -f (on SuSE)

Makes the powersave daemon set the CPUs to the highest supported frequency.

/etc/init.d/cpuspeed stop (on Red Hat)

Disables the cpu frequency scaling program in order to set the CPUs to the highest supported frequency.

LD_LIBRARY_PATH

An environment variable that indicates the location in the filesystem of bundled libraries to use when running the benchmark binaries.

kernel/numa_balancing

This OS setting controls automatic NUMA balancing on memory mapping and process placement. NUMA balancing incurs overhead for no benefit on workloads that are already bound to NUMA nodes.

Possible settings: 0 disables automatic NUMA balancing; 1 enables it.

For more information see the numa_balancing entry in the Linux sysctl documentation.
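A minimal sketch of inspecting (and, as root, disabling) this setting via procfs; the read is safe on any system.

```shell
# Inspect automatic NUMA balancing: 0 = disabled, 1 = enabled.
f=/proc/sys/kernel/numa_balancing
if [ -r "$f" ]; then
    echo "kernel.numa_balancing = $(cat "$f")"
else
    echo "numa_balancing not exposed by this kernel"
fi
# To disable it (requires root):
#   sysctl -w kernel.numa_balancing=0
```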

kernel/randomize_va_space (ASLR)

This setting can be used to select the type of process address space randomization. Defaults differ based on whether the architecture supports ASLR, whether the kernel was built with the CONFIG_COMPAT_BRK option or not, or the kernel boot options used.

Possible settings: 0 disables randomization; 1 enables conservative randomization (stack, VDSO, and mmap addresses); 2 additionally randomizes the heap (brk) area.

Disabling ASLR can make process execution more deterministic and runtimes more consistent. For more information see the randomize_va_space entry in the Linux sysctl documentation.
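The setting can be read and changed through procfs; this sketch only reads it, while the commented command shows the root-only write.

```shell
# Read the current ASLR mode: 0 = disabled, 1 = conservative, 2 = full.
f=/proc/sys/kernel/randomize_va_space
if [ -r "$f" ]; then
    echo "kernel.randomize_va_space = $(cat "$f")"
else
    echo "randomize_va_space not exposed by this kernel"
fi
# To disable ASLR for more repeatable run times (requires root):
#   sysctl -w kernel.randomize_va_space=0
```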

MALLOC_CONF

The jemalloc library has tunable parameters, many of which may be changed at run-time via several mechanisms, one of which is the MALLOC_CONF environment variable. Other methods, as well as the order in which they're referenced, are detailed in the jemalloc documentation's TUNING section.

The options that can be tuned at run-time are everything in the jemalloc documentation's MALLCTL NAMESPACE section that begins with "opt.".

The options that may be encountered in SPEC CPU 2017 results are drawn from that "opt." namespace; each result's report lists the specific values used.
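As an example of the mechanism, the variable can be set in the run environment before launching a binary. The option names below ("thp", "dirty_decay_ms") are real tunables from jemalloc's "opt." namespace, but the values are illustrative and not taken from this result.

```shell
# Hypothetical jemalloc tuning: back allocations with transparent huge pages
# and keep dirty pages cached for 30 seconds before returning them to the OS.
export MALLOC_CONF="thp:always,dirty_decay_ms:30000"
env | grep '^MALLOC_CONF='
# Any binary linked against jemalloc that is started from this shell
# inherits these settings.
```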

PGHPF_ZMEM

An environment variable used to initialize the allocated memory. Setting PGHPF_ZMEM to "Yes" has the effect of initializing all allocated memory to zero.

GOMP_CPU_AFFINITY

This environment variable is used to set the thread affinity for threads spawned by OpenMP.

OMP_DYNAMIC

This environment variable is defined as part of the OpenMP standard. Setting it to "false" prevents the OpenMP runtime from dynamically adjusting the number of threads to use for parallel execution.

For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.

OMP_SCHEDULE

This environment variable is defined as part of the OpenMP standard. Setting it to "static" causes loop iterations to be assigned to threads in round-robin fashion in the order of the thread number.

For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.

OMP_STACKSIZE

This environment variable is defined as part of the OpenMP standard and controls the size of the stack for threads created by OpenMP.

For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.

OMP_THREAD_LIMIT

This environment variable is defined as part of the OpenMP standard and limits the maximum number of OpenMP threads that can be created.

For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.
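The OpenMP-related variables described above are typically set together in the run environment. The values below are illustrative choices for a hypothetical 64-core run, not values taken from this result.

```shell
# Illustrative OpenMP run environment for a 64-thread job.
export GOMP_CPU_AFFINITY="0-63"   # pin OpenMP threads to cores 0..63
export OMP_DYNAMIC=false          # keep the thread count fixed
export OMP_SCHEDULE=static        # round-robin loop scheduling
export OMP_STACKSIZE=192M         # per-thread stack size
export OMP_THREAD_LIMIT=64        # cap on the number of OpenMP threads
env | grep -E '^(GOMP_CPU_AFFINITY|OMP_)' | sort
```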


Operating System Tuning Parameters

ulimit
This shell built-in sets user limits on system resources; for example, it can set the stack size to n kbytes, or to "unlimited" to allow the stack to grow without limit. Commonly used ulimit commands include "ulimit -s <n>" (stack size) and "ulimit -l <n>" (locked memory), described above.
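A short sketch of inspecting and raising these limits before a run; raising a limit beyond the hard limit requires root, so the attempt below is guarded.

```shell
# Show the current stack and locked-memory limits (in kbytes, or "unlimited").
ulimit -s
ulimit -l
# Try to remove the stack limit; this may fail without sufficient privileges.
ulimit -s unlimited 2>/dev/null || echo "could not raise stack limit"
ulimit -s
```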
Kernel parameters
The following Linux kernel parameters were set to better optimize performance.

Firmware / BIOS / Microcode Settings

ACPI SRAT L3 Cache As NUMA Domain
This BIOS switch controls whether distance information for each L3 cache is generated as a NUMA node in the ACPI Static Resource Affinity Table (SRAT). When this feature is enabled, the BIOS exposes each L3 cache as a NUMA node in the SRAT, allowing the operating system to use that information to optimize software thread placement and memory usage. This feature allows 3 options: "Disabled", "Enabled", and "Auto". Default is "Auto".
APBDIS
This BIOS switch enables or disables Algorithm Performance Boost (APB). The processor's Application Power Management (APM) feature allows P-states to be defined with higher frequencies, but switching between the high-power and low-power states may cause performance jitter. To reduce this jitter, this feature forces the Infinity Fabric, the interconnect inside and between CPU chips, into a fixed high-power state. This feature allows 3 options: "0", "1", and "Auto". Default is "Auto".
Fix SOC P-state
This BIOS switch limits CPU SOC (uncore) P-states when "APBDIS" is enabled, to minimize performance variance. This feature allows 5 options: "P0", "P1", "P2", "P3", and "Auto". When "Auto" is selected, CPU SOC P-states are adjusted dynamically; selecting a specific P-state forces the SOC to that P-state's frequency. Default is "Auto".
cTDP Control
This BIOS switch allows the user to manually configure the "cTDP" switch. This feature allows 2 options: "Manual" and "Auto". "Manual" enables a customized configurable TDP; "Auto" uses the platform default TDP. Default is "Auto".
cTDP
This BIOS switch configures the maximum power that the CPU will consume, up to the platform power limit. Valid values vary by CPU model. If a value outside the valid range, or the default value of "0", is set, the CPU automatically adjusts the value so that it falls within the valid range. When increasing cTDP, additional power will only be consumed up to the Package Power Limit, which may be less than the cTDP setting.
Model        Minimum cTDP (W)   Maximum cTDP (W)
EPYC 7763          225                280
EPYC 7643          225                240
EPYC 75F3          225                280
EPYC 7513          165                200
EPYC 7453          225                240
EPYC 74F3          225                240
EPYC 7443          165                200
EPYC 7343          165                200
EPYC 72F3          165                200
Determinism Slider
This BIOS switch controls determinism of performance and allows 3 options: "Auto", "Power", and "Performance". "Auto" uses default values for deterministic performance control. "Performance" provides predictable performance across all processors of the same type. "Power" maximizes performance within the power limits defined by cTDP. Default is "Auto".
DRAM Scrub Time
This BIOS switch controls the interval between DRAM scrubbing runs, which cyclically access the system's main memory in the background, independently of the operating system, in order to detect and correct memory errors preventively. This feature allows 8 options: "Disabled", "1 hour", "4 hours", "8 hours", "16 hours", "24 hours", "48 hours", and "Auto". The "Disabled" option turns DRAM scrubbing off, which may improve performance under certain circumstances, but it increases the probability that memory errors are discovered only upon active accesses by the operating system. As long as these errors are correctable, the ECC technology of the memory modules ensures that the system continues to run stably. However, a growing number of correctable memory errors increases the risk of uncorrectable errors, which then result in a system standstill.
EDC Control
This BIOS switch allows the user to manually configure "EDC" and "EDC Platform Limit". This feature allows 2 options: "Manual" and "Auto". "Manual" enables customizing "EDC" and "EDC Platform Limit"; "Auto" uses the platform default EDC. Default is "Auto".
EDC
Electrical Design Current (EDC) indicates the total maximum current, in amperes, which can be supplied to the socket for a short time. The default value of EDC is 0, which selects the platform default setting; it can be set up to 300 A. Increasing this value may increase the frequency at the cost of additional power consumption.
EDC Platform Limit
This BIOS switch limits the maximum EDC, in watts, which the platform can support. The default value is 0, which selects the platform default setting; it can be set up to 300 W. Increasing this value may increase the frequency at the cost of additional power consumption.
Global C-state Control
This BIOS switch controls IO-based C-state generation and DF C-states. This feature allows 3 options: "Disabled", "Enabled", and "Auto". Default is "Auto".
IOMMU
This BIOS switch allows enabling or disabling of Input-Output Memory Management Unit(IOMMU) which supports the address translation and system memory access protection on DMA transfer from I/O devices in the system. This feature allows 3 options: "Auto", "Enabled", and "Disabled". Default is "Auto".
L1 Stream HW Prefetcher
This BIOS switch allows enabling or disabling of the L1 Stream HW Prefetcher. This feature allows 3 options: "Disabled", "Enabled", and "Auto". Default is "Auto".
L2 Stream HW Prefetcher
This BIOS switch allows enabling or disabling of the L2 Stream HW Prefetcher. This feature allows 3 options: "Disabled", "Enabled", and "Auto". Default is "Auto".
NUMA nodes per socket
This BIOS switch specifies the number of desired NUMA nodes per populated socket in the system. This feature allows 5 options: "NPS0", "NPS1", "NPS2", "NPS4", and "Auto". Default is "Auto".
Package Power Limit Control
This BIOS switch configures a per-CPU Package Power Limit value applicable to all populated CPUs in the system. This feature allows 2 options: "Manual" and "Auto". "Manual" sets a customized configurable Package Power Limit; "Auto" uses the platform default Package Power Limit.
Package Power Limit
This BIOS switch specifies the maximum power that each CPU package may consume in the system. The actual power is limited by the maximum setting of both the "Package Power Limit" and "cTDP".
SMT Control
This BIOS switch allows enabling or disabling of simultaneous multithreading (SMT) on processors. This feature allows 3 options: "Disabled", "Enabled", and "Auto". When "Enabled" is set, each physical processor core operates as two logical processor cores. When "Disabled" is set, each physical core operates as only one logical processor core. "Auto" enables this feature, which can improve overall performance for applications that benefit from a higher processor core count. Default is "Auto".
SVM Mode
This BIOS switch controls the CPU virtualization function. With SVM enabled, virtual machines can be installed on the system. This feature allows 2 options: "Enabled" and "Disabled". Default is "Enabled".
xGMI Link Max Speed
This BIOS switch controls the maximum link speed of GMI (Global Memory Interface), the socket-to-socket interconnection. Limiting the maximum link speed can reduce xGMI power consumption and increase the power available to the cores, which may improve performance in NUMA-aware workloads. The default setting is "Auto", which selects the maximum link speed per CPU model.

Flag description origin markings:

[user] Indicates that the flag description came from the user flags file.
[suite] Indicates that the flag description came from the suite-wide flags file.
[benchmark] Indicates that the flag description came from a per-benchmark flags file.

The flags files that were used to format this result can be browsed at
http://www.spec.org/cpu2017/flags/aocc300-flags-B2.html,
http://www.spec.org/cpu2017/flags/Fujitsu-Platform-Settings-V1.0-MILAN-RevB.html.

You can also download the XML flags sources by saving the following links:
http://www.spec.org/cpu2017/flags/aocc300-flags-B2.xml,
http://www.spec.org/cpu2017/flags/Fujitsu-Platform-Settings-V1.0-MILAN-RevB.xml.


For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact info@spec.org
Copyright 2017-2021 Standard Performance Evaluation Corporation
Tested with SPEC CPU2017 v1.1.8.
Report generated on 2021-11-24 11:18:36 by SPEC CPU2017 flags formatter v5178.