CPU2006 Flag Description
Fujitsu PRIMEQUEST 1800E (Intel Xeon X7560)

Copyright © 2006 Intel Corporation. All Rights Reserved.


Base Compiler Invocation

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C


Peak Compiler Invocation

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C


Base Portability Flags

410.bwaves

416.gamess

433.milc

434.zeusmp

435.gromacs

436.cactusADM

437.leslie3d

444.namd

447.dealII

450.soplex

453.povray

454.calculix

459.GemsFDTD

465.tonto

470.lbm

481.wrf

482.sphinx3


Peak Portability Flags

410.bwaves

416.gamess

433.milc

434.zeusmp

435.gromacs

436.cactusADM

437.leslie3d

444.namd

447.dealII

450.soplex

453.povray

454.calculix

459.GemsFDTD

465.tonto

470.lbm

481.wrf

482.sphinx3


Base Optimization Flags

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C


Peak Optimization Flags

C benchmarks

433.milc

470.lbm

482.sphinx3

C++ benchmarks

444.namd

447.dealII

450.soplex

453.povray

Fortran benchmarks

410.bwaves

416.gamess

434.zeusmp

437.leslie3d

459.GemsFDTD

465.tonto

Benchmarks using both Fortran and C

435.gromacs

436.cactusADM

454.calculix

481.wrf


Implicitly Included Flags

This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.


System and Other Tuning Information

Platform settings

One or more of the following settings may have been set. If so, the "General Notes" section of the report will say so, and you can read below to find out more about what these settings mean.

KMP_STACKSIZE

Specifies the stack size to be allocated for each OpenMP thread.
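
For example, on a Linux system the per-thread stack size could be set as follows (the 256m value is purely illustrative; the reported run may have used a different value):

export KMP_STACKSIZE=256m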

KMP_AFFINITY

KMP_AFFINITY = < physical | logical >, starting-core-id
specifies the static mapping of user threads to physical cores. For example, if you have a system configured with 8 cores, OMP_NUM_THREADS=8, and KMP_AFFINITY=physical,0, then thread 0 will be mapped to core 0, thread 1 will be mapped to core 1, and so on in a round-robin fashion.
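
As an illustration (the 8-core count and starting core 0 are assumed values, not taken from the reported configuration), the static mapping described above would be requested with:

export OMP_NUM_THREADS=8
export KMP_AFFINITY=physical,0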

KMP_AFFINITY = granularity=fine,scatter
The value of the environment variable KMP_AFFINITY affects how the threads from an auto-parallelized program are scheduled across processors.
Specifying granularity=fine selects the finest granularity level and causes each OpenMP thread to be bound to a single thread context.
This ensures that there is only one thread per core on cores supporting Hyper-Threading Technology.
Specifying scatter distributes the threads as evenly as possible across the entire system.
Hence a combination of these two options spreads the threads evenly across sockets, with one thread per physical core.
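
As a sketch (the thread count of 8 is an assumed value), one thread per physical core across the whole system could be requested with:

export OMP_NUM_THREADS=8
export KMP_AFFINITY=granularity=fine,scatter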

OMP_NUM_THREADS

Sets the maximum number of threads to use for OpenMP* parallel regions if no other value is specified in the application. This environment variable applies to both -openmp and -parallel (Linux and Mac OS X) or /Qopenmp and /Qparallel (Windows). Example syntax on a Linux system with 8 cores: export OMP_NUM_THREADS=8

Hardware Prefetch:

This BIOS option allows the enabling/disabling of a processor mechanism to prefetch data into the cache according to a pattern-recognition algorithm.

In some cases, setting this option to Disabled may improve performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.

Adjacent Sector Prefetch:

This BIOS option allows the enabling/disabling of a processor mechanism to fetch the adjacent cache line within a 128-byte sector that contains the data needed due to a cache line miss.

In some cases, setting this option to Disabled may improve performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.

ulimit -s <n>

Sets the stack size to n kbytes, or to 'unlimited' to allow the stack size to grow without limit.
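
For example (the 65536 value is illustrative only):

ulimit -s 65536      # limit the stack to 65536 kbytes (64 MB)
ulimit -s unlimited  # allow the stack to grow without limit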

mkdir /dev/cpuset; mount -t cpuset none /dev/cpuset; echo 1 > /dev/cpuset/memory_spread_page

There are two boolean flag files per cpuset that control where the kernel allocates pages for the file system buffers and related in-kernel data structures: 'memory_spread_page' and 'memory_spread_slab'.

If the per-cpuset boolean flag file 'memory_spread_page' is set, then the kernel will spread the file system buffers (page cache) evenly over all the nodes that the faulting task is allowed to use, instead of preferring to put those pages on the node where the task is running.

If the per-cpuset boolean flag file 'memory_spread_slab' is set, then the kernel will spread some file-system-related slab caches, such as those for inodes and dentries, evenly over all the nodes that the faulting task is allowed to use, instead of preferring to put those pages on the node where the task is running.

The setting of these flags does not affect the anonymous data segment or stack segment pages of a task.

By default, both kinds of memory spreading are off and memory pages are allocated on the node local to where the task is running, except perhaps as modified by the task's NUMA mempolicy or cpuset configuration, so long as sufficient free memory pages are available. When new cpusets are created, they inherit the memory spread settings of their parent. Setting memory spreading causes allocations for the affected page or slab caches to ignore the task's NUMA mempolicy and be spread instead. Tasks using mbind() or set_mempolicy() calls to set NUMA mempolicies will not notice any change in those calls as a result of their containing task's memory spread settings. If memory spreading is turned off, then the currently specified NUMA mempolicy once again applies to memory page allocations. Both 'memory_spread_page' and 'memory_spread_slab' are boolean flag files. By default they contain "0", meaning that the feature is off for that cpuset; writing "1" to one of these files turns the named feature on.
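
As a sketch (assuming the cpuset file system is mounted at /dev/cpuset as shown above), the flags can be inspected and enabled like so:

cat /dev/cpuset/memory_spread_page         # prints 0 (off) or 1 (on)
echo 1 > /dev/cpuset/memory_spread_slab    # also spread slab caches (inodes, dentries)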

Using numactl to bind processes and memory to cores

For multi-copy runs, or for single-copy runs on systems with multiple sockets, it is advantageous to bind a process to a particular core. Otherwise, the OS may arbitrarily move the process from one core to another, which can affect performance. To help, SPEC allows the use of a "submit" command, through which users can specify a utility for binding processes. We have found the utility 'numactl' to be the best choice.

numactl runs processes with a specific NUMA scheduling or memory placement policy. The policy is set for a command and inherited by all of its children. The numactl flag "--physcpubind" specifies which core(s) to bind the process to. "-l" instructs numactl to keep a process's memory on the local node, while "-m" specifies which node(s) to place a process's memory on. For full details on using numactl, please refer to your Linux documentation ('man numactl').
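
A minimal usage sketch (the core/node numbers and the './benchmark_binary' name are hypothetical, not taken from the reported run):

numactl --physcpubind=0 -l ./benchmark_binary    # bind to core 0, keep memory on the local node
numactl --physcpubind=4 -m 1 ./benchmark_binary  # bind to core 4, place memory on node 1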


Flag description origin markings:

[user] Indicates that the flag description came from the user flags file.
[suite] Indicates that the flag description came from the suite-wide flags file.
[benchmark] Indicates that the flag description came from a per-benchmark flags file.

The flags file that was used to format this result can be browsed at
http://www.spec.org/cpu2006/flags/Fujitsu.PQ1800.ic11.1-linux64.html.

You can also download the XML flags source by saving the following link:
http://www.spec.org/cpu2006/flags/Fujitsu.PQ1800.ic11.1-linux64.xml.


For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact webmaster@spec.org
Copyright 2006-2014 Standard Performance Evaluation Corporation
Tested with SPEC CPU2006 v1.1.
Report generated on Wed Jul 23 09:50:13 2014 by SPEC CPU2006 flags formatter v6906.