CPU2006 Flag Description
Hewlett-Packard Company ProLiant DL385p Gen8 (2.80 GHz AMD Opteron 6386 SE)

Compilers: x86 Open64 Compiler Suite



Base Compiler Invocation

C benchmarks

C++ benchmarks


Peak Compiler Invocation

C benchmarks

C++ benchmarks


Base Portability Flags

400.perlbench

401.bzip2

403.gcc

429.mcf

445.gobmk

456.hmmer

458.sjeng

462.libquantum

464.h264ref

483.xalancbmk


Peak Portability Flags

400.perlbench

401.bzip2

445.gobmk

456.hmmer

458.sjeng

462.libquantum

464.h264ref

473.astar

483.xalancbmk


Base Optimization Flags

C benchmarks

C++ benchmarks


Peak Optimization Flags

C benchmarks

400.perlbench

401.bzip2

403.gcc

429.mcf

445.gobmk

456.hmmer

458.sjeng

462.libquantum

464.h264ref

C++ benchmarks

471.omnetpp

473.astar

483.xalancbmk


Implicitly Included Flags

This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.


Commands and Options Used to Submit Benchmark Runs

Using numactl to bind processes and memory to cores

For multi-copy runs, or single-copy runs on systems with multiple sockets, it is advantageous to bind a process to a particular core. Otherwise, the OS may arbitrarily move the process from one core to another, which can affect performance. To help, SPEC allows the use of a "submit" command, where users can specify a utility with which to bind processes. We have found the utility 'numactl' to be the best choice.

numactl runs processes with a specific NUMA scheduling or memory placement policy. The policy is set for a command and inherited by all of its children. The numactl flag "--physcpubind" specifies the core(s) to which the process is bound; "-l" instructs numactl to keep the process's memory on the local node, while "-m" specifies the node(s) on which to place the process's memory. For full details on using numactl, please refer to your Linux documentation ('man numactl').
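For illustration, a typical binding invocation might look like the following (the binary name is hypothetical):

    # Bind the process to core 0 and keep its memory on the local NUMA node:
    numactl --physcpubind=0 -l ./benchmark.exe
    # Alternatively, place the process's memory explicitly on node 0:
    numactl --physcpubind=0 -m 0 ./benchmark.exe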

Note that with some versions of numactl, particularly the version found on SLES 10, the utility incorrectly interprets application arguments as its own. For example, given the command "numactl --physcpubind=0 -l a.out -m a", numactl will interpret a.out's "-m" option as its own "-m" option. To work around this problem, put the command to be run in a shell script and then run the shell script under numactl. For example: "echo 'a.out -m a' > run.sh ; numactl --physcpubind=0 bash run.sh"
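Written out as separate steps, that workaround looks like this:

    # Wrap the application and its arguments in a script so numactl
    # never sees them as its own options:
    echo 'a.out -m a' > run.sh
    numactl --physcpubind=0 bash run.sh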


Shell, Environment, and Other Software Settings

Linux Huge Page settings

In order to take full advantage of the x86 Open64 huge page runtime library, your system must be configured to use huge pages. It is safe to run binaries compiled with "-HP" on systems not configured to use huge pages; however, you will not benefit from the performance improvements that huge pages offer. To configure your system for huge pages, follow the steps appropriate to your distribution; a typical sequence is sketched after the note below.

Note that further information about huge pages may be found in your Linux documentation file: /usr/src/linux/Documentation/vm/hugetlbpage.txt
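As a minimal sketch (assuming a hugetlbpage-capable kernel; the page count and mount point are illustrative, and whether the runtime library needs a hugetlbfs mount depends on your configuration):

    # Reserve a pool of persistent huge pages:
    echo 128 > /proc/sys/vm/nr_hugepages
    # Mount hugetlbfs if a mount point is required:
    mkdir -p /mnt/hugetlbfs
    mount -t hugetlbfs nodev /mnt/hugetlbfs
    # Verify the huge page pool:
    grep Huge /proc/meminfo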

HUGETLB_LIMIT

For the x86 Open64 compiler, the maximum number of huge pages an application is allowed to use can be set at run time via the environment variable HUGETLB_LIMIT. If it is not set, a process compiled with "-HP" (or "-HUGEPAGE") may use all available huge pages, or at most n pages if n was set at compile time via the flag "-HP:limit=n".
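For example, to cap a run at an illustrative 64 huge pages (the binary name is hypothetical):

    export HUGETLB_LIMIT=64   # upper bound on huge pages for this process
    ./benchmark.exe           # binary compiled with -HP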

Transparent Huge Pages (THP)

THP is an abstraction layer that automates most aspects of creating, managing, and using huge pages. THP is designed to hide much of the complexity of using huge pages from system administrators and developers, since normal huge pages must be assigned at boot time, can be difficult to manage manually, and often require significant code changes to be used effectively.

Set transparent_hugepage boot parameter

In the file /boot/grub/menu.lst, add the boot parameter "transparent_hugepage=never" to the OS you plan to use to instruct it to disable Transparent Huge Pages (THP).
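A kernel line in /boot/grub/menu.lst might then look like the following (the kernel image and root device are placeholders):

    kernel /boot/vmlinuz root=/dev/sda1 transparent_hugepage=never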

ulimit -s <n>

Sets the stack size to n kbytes, or unlimited to allow the stack size to grow without limit.

ulimit -l <n>

Sets the maximum size of memory that may be locked into physical memory.
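For example, one might remove the stack limit and allow an illustrative 8 GB of locked memory before a run:

    ulimit -s unlimited   # stack size
    ulimit -l 8388608     # max locked memory, in kbytes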

OMP_NUM_THREADS

Sets the maximum number of OpenMP parallel threads auto-parallelized (-apo) applications may use.

O64_OMP_AFFINITY_MAP

Specifies the thread-CPU relationship when the operating system's affinity mechanism is used to assign OpenMP threads to CPUs.

O64_OMP_SPIN_USER_LOCK

Specifies whether or not to use the user-level spin mechanism for OpenMP locks. If the variable is set to TRUE, user-level spin mechanisms are used; if it is set to FALSE, pthread mutexes are used. If the variable is not set, the default is the same as FALSE.

powersave -f (on SuSE)

Makes the powersave daemon set the CPUs to the highest supported frequency.

/etc/init.d/cpuspeed stop (on Red Hat)

Disables the CPU frequency scaling program in order to set the CPUs to the highest supported frequency.

LD_LIBRARY_PATH

An environment variable set to include the x86 Open64 and SmartHeap libraries used during compilation of the binaries. This environment variable setting is not needed when building the binaries on the system under test.
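For example (the library paths are placeholders for the actual x86 Open64 and SmartHeap installation directories):

    export LD_LIBRARY_PATH=/opt/x86_open64-4.5.2/lib:/opt/SmartHeap/lib:$LD_LIBRARY_PATH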

kernel/randomize_va_space

This option selects the type of process address space randomization used in the system, for architectures that support the feature. The possible values are:

0 - Turn process address space randomization off. This is the default for architectures that do not support the feature and for kernels booted with the "norandmaps" parameter.

1 - Randomize the addresses of the mmap base, stack, and VDSO page. Among other things, this means that shared libraries are loaded at random addresses and that, for PIE-linked binaries, the location of the code start is randomized. This is the default if the CONFIG_COMPAT_BRK option is enabled.

2 - Additionally enable heap randomization. This is the default if CONFIG_COMPAT_BRK is disabled.
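For example, randomization can be disabled for a benchmarking session via the proc interface or sysctl:

    echo 0 > /proc/sys/kernel/randomize_va_space
    # equivalently:
    sysctl -w kernel.randomize_va_space=0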

O64_OMP_SPIN_COUNT

Specifies the number of times the spin loops will spin at user level before falling back to operating system schedule/reschedule mechanisms. The default value is 20000.
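Taken together, the OpenMP-related environment variables described above might be set as follows (the values are illustrative, not the tester's settings):

    export OMP_NUM_THREADS=16            # cap on OpenMP threads
    export O64_OMP_SPIN_USER_LOCK=TRUE   # use user-level spin locks
    export O64_OMP_SPIN_COUNT=50000      # spins before falling back to the OS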


Operating System Tuning Parameters

OS Tuning

submit= MYMASK=`printf '0x%x' \$((1<<\$SPECCOPYNUM))`; /usr/bin/taskset \$MYMASK $command

When running multiple copies of benchmarks, the SPEC config file feature "submit" is sometimes used to bind individual jobs to specific processors. This particular submit command is used on Linux. Its elements are: MYMASK, a single-bit CPU affinity mask computed by shifting 1 left by the copy number ($SPECCOPYNUM, which the SPEC tools supply for each copy); and /usr/bin/taskset, which launches the benchmark copy ($command) bound to the CPU selected by that mask.
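For a concrete copy, say copy number 3, the command expands as follows (the binary name is a placeholder):

    MYMASK=$(printf '0x%x' $((1<<3)))          # -> 0x8, i.e. binary 1000
    /usr/bin/taskset $MYMASK ./benchmark.exe   # copy 3 pinned to CPU 3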

numactl --interleave=all "runspec command"

Launching a process with "numactl --interleave=all" sets the memory interleave policy so that memory is allocated round-robin across the nodes. When memory cannot be allocated on the current interleave target, allocation falls back to other nodes.
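For example (the runspec arguments are placeholders):

    numactl --interleave=all runspec --config=myconfig --rate=32 int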

Transparent Huge Pages

On Red Hat EL 6 and later, Transparent Hugepages increase the memory page size from 4 kilobytes to 2 megabytes. Transparent Hugepages provide significant performance advantages on systems with highly contended resources and large memory workloads. If memory utilization is too high or memory is badly fragmented, preventing hugepages from being allocated, the kernel falls back to smaller 4 KB pages instead. Hugepages are used by default if /sys/kernel/mm/redhat_transparent_hugepage/enabled is set to always.
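The current setting can be inspected and changed through the sysfs path named above, for example:

    cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
    echo always > /sys/kernel/mm/redhat_transparent_hugepage/enabled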

ulimit -s [n | unlimited] (Linux)

Sets the stack size to n kbytes, or unlimited to allow the stack size to grow without limit.

KMP_STACKSIZE=integer[B|K|M|G|T] (Linux)

Sets the number of bytes to allocate for each parallel thread to use as its private stack. Use the optional suffix B, K, M, G, or T, to specify bytes, kilobytes, megabytes, gigabytes, or terabytes. The default setting is 2M on IA32 and 4M on IA64.
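For example, to give each thread an illustrative 8-megabyte private stack:

    export KMP_STACKSIZE=8M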

KMP_AFFINITY=physical,n (Linux)

Assigns threads to consecutive physical processors (for example, cores), beginning at processor n; that is, it specifies a static mapping of user threads to physical cores. For example, on a system configured with 8 cores, with OMP_NUM_THREADS=8 and KMP_AFFINITY=physical,2 set, thread 0 will be mapped to core 2, thread 1 will be mapped to core 3, and so on in a round-robin fashion.
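The example from the description, expressed as commands (the binary name is a placeholder):

    export OMP_NUM_THREADS=8
    export KMP_AFFINITY=physical,2   # thread 0 -> core 2, thread 1 -> core 3, ...
    ./benchmark.exe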

OMP_NUM_THREADS=n

This environment variable sets the maximum number of threads to use for OpenMP parallel regions to n, if no other value is specified in the application. It applies to both -openmp and -parallel (Linux) or /Qopenmp and /Qparallel (Windows). Example syntax on a Linux system with 8 cores:
export OMP_NUM_THREADS=8
The default is the number of cores visible to the OS.

vm.max_map_count=n (Linux)

The maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling malloc, directly by mmap and mprotect, and also when loading shared libraries.
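For example, to raise the limit to an illustrative 262144 map areas:

    sysctl -w vm.max_map_count=262144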


Firmware / BIOS / Microcode Settings

Firmware Settings

One or more of the following settings may have been set. If so, the "Platform Notes" section of the report will say so, and you can read below to find out more about what these settings mean.

Power Regulator for ProLiant support (Default=HP Dynamic Power Savings Mode)

Values for this BIOS setting can be:

HP Power Profile (Default = Balanced Power and Performance):

Values for this BIOS setting can be:

Power Efficiency Mode (Default=Efficiency)

Values for this BIOS setting can be:

Dynamic Power Capping Functionality (Default = Enabled):

This BIOS option allows the user to disable the System ROM Power Calibration feature that is executed during the boot process. When disabled, the user can expect faster boot times but will not be able to enable a Dynamic Power Cap until this feature is re-enabled.

Minimum Processor Idle Power C1e State (Default = Disabled):

This BIOS option allows the enabling/disabling of a processor mechanism that lets the processor enter a reduced-power C1e state when all cores of a processor have entered a low-power C-state. Enabling this feature will result in substantial power savings in most configurations.

Adjacent Sector Prefetch (Default = Enabled):

This BIOS option allows the enabling/disabling of a processor mechanism that fetches the adjacent cache line within a 128-byte sector containing the data needed due to a cache line miss.

In some limited cases, setting this option to Disabled may improve performance. In the majority of cases, the default value of Enabled provides better performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.

Hardware Prefetch (Default = Enabled):

This BIOS option allows the enabling/disabling of a processor mechanism to prefetch data into the cache according to a pattern recognition algorithm.

In some limited cases, setting this option to Disabled may improve performance. In the majority of cases, the default value of Enabled provides better performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.

Data Reuse (Default = Enabled):

This BIOS option allows the enabling/disabling of the Data Reuse optimization.

Enabling this option reduces the frequency of L3 cache updates from the L1 cache. This may improve performance by reducing the internal bandwidth consumed by constantly updating L1 cache lines in the L3 cache.

Since this optimization results in more fetches to main memory, in some limited cases, setting this option to Disabled may improve performance. In the majority of cases, the default value of Enabled provides better performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.

Thermal Configuration (Default = Optimal Cooling):

This feature allows the user to select the fan cooling solution for the system. Values for this BIOS option can be:

Collaborative Power Control (Default = Enabled):

This BIOS option allows the enabling/disabling of the Processor Clocking Control (PCC) Interface for operating systems that support this feature. Enabling this option allows the operating system to request processor frequency changes even when the server has the Power Regulator option configured for Dynamic Power Savings Mode.

For operating systems that do not support the PCC Interface, or when the Power Regulator mode is not configured for Dynamic Power Savings Mode, this option has no impact on system operation.

SATA #1 Controller (Default=Auto)

Sets the mode for the embedded controller. The values for this BIOS setting can be:

Processor Power and Utilization Monitoring (Default = Enabled):

This BIOS option allows the enabling/disabling of iLO 4 Processor State Mode Switching and Insight Power Management Processor Utilization Monitoring.

When set to Disabled, the system will also set the HP Power Regulator mode to HP Static High Performance mode and the HP Power Profile mode to Custom. This option may be useful in some environments that require absolute minimum latency.

Memory Refresh Rate (Default = 2x Refresh):

This BIOS option controls the refresh rate of the memory controller and may affect the performance and resiliency of the server's memory.

When set to 1x Refresh, the memory refresh rate will be decreased, the HP Power Regulator mode will be set to HP Static High Performance mode, and the HP Power Profile mode to Custom. This option may be useful in some environments that require absolute minimum latency.

When set to 3x Refresh, the memory refresh rate will be increased, the HP Power Regulator mode will be set to HP Static High Performance mode, and the HP Power Profile mode to Custom.

Last updated March 31st, 2014.


Flag description origin markings:

[user] Indicates that the flag description came from the user flags file.
[suite] Indicates that the flag description came from the suite-wide flags file.
[benchmark] Indicates that the flag description came from a per-benchmark flags file.

The flags files that were used to format this result can be browsed at
http://www.spec.org/cpu2006/flags/HP-Platform-Flags-AMD-V1.2-revC.html,
http://www.spec.org/cpu2006/flags/x86-open64-452-flags-rate-revA-III.html.

You can also download the XML flags sources by saving the following links:
http://www.spec.org/cpu2006/flags/HP-Platform-Flags-AMD-V1.2-revC.xml,
http://www.spec.org/cpu2006/flags/x86-open64-452-flags-rate-revA-III.xml.


For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact webmaster@spec.org
Copyright 2006-2014 Standard Performance Evaluation Corporation
Tested with SPEC CPU2006 v1.2.
Report generated on Wed Jul 30 10:53:59 2014 by SPEC CPU2006 flags formatter v6906.