CPU2006 Flag Description
Sugon Sugon I610-G20

Copyright © 2006 Intel Corporation. All Rights Reserved.


Base Compiler Invocation

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C


Peak Compiler Invocation

C benchmarks

C++ benchmarks (except as noted below)

450.soplex

Fortran benchmarks

Benchmarks using both Fortran and C


Base Portability Flags

410.bwaves

416.gamess

433.milc

434.zeusmp

435.gromacs

436.cactusADM

437.leslie3d

444.namd

447.dealII

450.soplex

453.povray

454.calculix

459.GemsFDTD

465.tonto

470.lbm

481.wrf

482.sphinx3


Peak Portability Flags

410.bwaves

416.gamess

433.milc

434.zeusmp

435.gromacs

436.cactusADM

437.leslie3d

444.namd

447.dealII

450.soplex

453.povray

454.calculix

459.GemsFDTD

465.tonto

470.lbm

481.wrf

482.sphinx3


Base Optimization Flags

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C


Peak Optimization Flags

C benchmarks

433.milc

470.lbm

482.sphinx3

C++ benchmarks

444.namd

447.dealII

450.soplex

453.povray

Fortran benchmarks

410.bwaves

416.gamess

434.zeusmp

437.leslie3d

459.GemsFDTD

465.tonto

Benchmarks using both Fortran and C

435.gromacs

436.cactusADM

454.calculix

481.wrf


Implicitly Included Flags

This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.


Commands and Options Used to Submit Benchmark Runs

submit= MYMASK=`printf '0x%x' $((1<<$SPECCOPYNUM))`; /usr/bin/taskset $MYMASK $command
When running multiple copies of benchmarks, the SPEC config file feature submit is used to cause individual jobs to be bound to specific processors. This specific submit command, using taskset, is used for Linux64 systems without numactl.
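The bit-mask arithmetic in the taskset submit line above can be sketched as follows. SPECCOPYNUM is set by the SPEC harness for each benchmark copy; it is hard-coded here so the snippet is self-contained.

```shell
# Illustration of the affinity-mask computation from the taskset submit
# line. SPEC sets SPECCOPYNUM per benchmark copy; a fixed value is used
# here for demonstration only.
SPECCOPYNUM=3
MYMASK=`printf '0x%x' $((1<<$SPECCOPYNUM))`
echo "$MYMASK"   # copy 3 -> bit 3 set -> 0x8, i.e. bound to CPU 3
# The real submit line would then run: /usr/bin/taskset $MYMASK $command
```

Each copy thus gets a one-hot mask selecting exactly one logical CPU.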
submit= numactl --localalloc --physcpubind=$SPECCOPYNUM $command
When running multiple copies of benchmarks, the SPEC config file feature submit is used to cause individual jobs to be bound to specific processors. This specific submit command is used for Linux64 systems with support for numactl.
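For illustration, the per-copy commands this submit line expands to for a 4-copy run can be sketched as below. The benchmark command is a placeholder, and numactl is only echoed, not executed.

```shell
# Sketch: what the numactl submit line issues for copies 0..3.
# $command stands in for the SPEC-generated benchmark command.
command="./benchmark --options"
for SPECCOPYNUM in 0 1 2 3; do
  echo numactl --localalloc --physcpubind=$SPECCOPYNUM $command
done
```

Each copy is pinned to logical CPU number SPECCOPYNUM, with --localalloc forcing its memory to be allocated on that CPU's local NUMA node.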

Shell, Environment, and Other Software Settings

numactl --interleave=all "runspec command"
Launching a process with numactl --interleave=all sets the memory interleave policy so that memory is allocated round-robin across NUMA nodes. When memory cannot be allocated on the current interleave target, the allocation falls back to other nodes.
KMP_STACKSIZE
Specify stack size to be allocated for each thread.
KMP_AFFINITY
Syntax: KMP_AFFINITY=[<modifier>,...]<type>[,<permute>][,<offset>]
The value for the environment variable KMP_AFFINITY affects how the threads from an auto-parallelized program are scheduled across processors.
It applies to binaries built with -openmp and -parallel (Linux and Mac OS X) or /Qopenmp and /Qparallel (Windows).
modifier:
    granularity=fine Causes each OpenMP thread to be bound to a single thread context.
type:
    compact Specifying compact assigns OpenMP thread <n>+1 to a free thread context as close as possible to the one where OpenMP thread <n> was placed.
    scatter Specifying scatter distributes the threads as evenly as possible across the entire system.
permute: The permute specifier is an integer value that controls which levels are most significant when sorting the machine topology map. A value for permute forces the mappings to make the specified number of most significant levels of the sort the least significant, inverting their order of significance.
offset: The offset specifier indicates the starting position for thread assignment.

Please see the Thread Affinity Interface article in the Intel Composer XE Documentation for more details.

Example: KMP_AFFINITY=granularity=fine,scatter
Specifying granularity=fine selects the finest granularity level and causes each OpenMP or auto-parallelized thread to be bound to a single thread context.
This ensures that there is only one thread per core on cores supporting Hyper-Threading Technology.
Specifying scatter distributes the threads as evenly as possible across the entire system.
Hence the combination of these two options spreads the threads evenly across sockets, with one thread per physical core.

Example: KMP_AFFINITY=compact,1,0
Specifying compact will assign the n+1 thread to a free thread context as close as possible to thread n.
A default granularity=core is implied if no granularity is explicitly specified.
Specifying 1,0 sets the permute and offset values for the thread assignment.
With a permute value of 1, thread n+1 is assigned to a consecutive core. With an offset of 0, the process's first thread (thread 0) is assigned to thread context 0.
The same behavior is exhibited in a multisocket system.
OMP_NUM_THREADS
Sets the maximum number of threads to use for OpenMP* parallel regions if no other value is specified in the application. This environment variable applies to both -openmp and -parallel (Linux and Mac OS X) or /Qopenmp and /Qparallel (Windows). Example syntax on a Linux system with 8 cores: export OMP_NUM_THREADS=8
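Taken together, the environment settings above might be exported as follows before invoking runspec. The values shown are illustrative assumptions for an 8-core system, not the settings of any particular reported result.

```shell
# Illustrative environment setup for an OpenMP/auto-parallel run on an
# assumed 8-core system; the specific values are examples only.
export KMP_STACKSIZE=256M                     # per-thread stack size
export KMP_AFFINITY=granularity=fine,scatter  # one thread per physical core, spread across the system
export OMP_NUM_THREADS=8                      # cap OpenMP parallel regions at 8 threads
env | grep -E '^(KMP_|OMP_)' | sort           # show what was set
```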
Set stack size to unlimited
The command "ulimit -s unlimited" is used to set the stack size limit to unlimited.
Free the file system page cache
The command "echo 1 > /proc/sys/vm/drop_caches" is used to free the filesystem page cache.

Red Hat Specific features

Transparent Huge Pages
On Red Hat EL 6 and later, Transparent Hugepages increase the memory page size from 4 kilobytes to 2 megabytes. Transparent Hugepages provide significant performance advantages on systems with highly contended resources and large memory workloads. If memory utilization is too high, or memory is so badly fragmented that hugepages cannot be allocated, the kernel falls back to smaller 4k pages.
Hugepages are used by default unless the /sys/kernel/mm/redhat_transparent_hugepage/enabled field is changed from its RedHat EL6 default of 'always'.
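The active policy can be inspected from a shell. The sketch below checks the RHEL 6 sysfs path named above, then falls back to the upstream kernel path used by most other distributions (an assumption about the running system, not part of the tested configuration).

```shell
# Sketch: report the active Transparent Hugepage policy, trying the
# RHEL 6 path first and the upstream kernel path second.
for f in /sys/kernel/mm/redhat_transparent_hugepage/enabled \
         /sys/kernel/mm/transparent_hugepage/enabled; do
  if [ -r "$f" ]; then
    printf '%s: ' "$f"
    cat "$f"   # the bracketed value, e.g. [always], is the active policy
    break
  fi
done
```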

Firmware / BIOS / Microcode Settings

Energy Performance:
This BIOS switch allows 4 options: "Balanced Performance", "Performance", "Balanced Energy" and "Energy Efficient". The default is "Balanced Performance", which is optimized for maximum power savings with minimal impact on performance. "Performance" disables all power management options that have any impact on performance. "Balanced Energy" is optimized for power efficiency and "Energy Efficient" for power savings. This switch is only selectable if the BIOS switch "Power Technology" is set to "Custom".
The two options "Balanced Performance" and "Balanced Energy" should always be the first choice, as both optimize the efficiency of the system. In cases where the performance is not sufficient or the power consumption is too high, the two options "Performance" or "Energy Efficient" could be an alternative.
QPI (QuickPath Interconnect) Snoop Mode:
There are two switches under this option menu, "COD" and "Early Snoop", and each can be configured as "auto", "disable", or "enable". Both default to "auto". These two BIOS switches should be configured as one of the following four combinations:
- "COD"=enable, "Early Snoop"=disable:
When configured with these options, the system operates in "Cluster on Die" mode, which logically splits a socket into 2 NUMA domains that are exposed to the OS, with half the cores and LLC assigned to each NUMA domain in a socket. This mode uses an on-die directory cache and in-memory directory bits to determine whether a snoop needs to be sent. Use this mode for highly NUMA-optimized workloads to get the lowest local memory latency and highest local memory bandwidth.
- "COD"=disable, "Early Snoop"=enable:
In this case, the system will use "Early Snoop" mode for workloads that are memory latency sensitive or for workloads that benefit from fast cache-to-cache transfer latencies from the remote socket. Snoops are sent out earlier, which is why memory latency is lower in this mode.
- "COD"=disable, "Early Snoop"=disable:
In this case, the system will use "Home Snoop" mode for NUMA workloads that are memory bandwidth sensitive and need both local and remote memory bandwidth. In "Home Snoop" and "Early Snoop" modes, snoops are always sent, but they originate from different places: the caching agent (earlier) in "Early Snoop" mode and the home agent (later) in "Home Snoop" mode.
- "COD"=auto, "Early Snoop"=auto:
This case is equivalent to "COD"=disable, "Early Snoop"=enable. See the configuration above.
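From Linux, the effect of Cluster on Die mode is visible as extra NUMA nodes (for example, a two-socket system reports 4 nodes instead of 2). A sketch of how one might check, assuming numactl may not be installed and falling back to counting sysfs node directories:

```shell
# Sketch: count NUMA nodes as seen by the OS. With COD enabled, each
# socket appears as two nodes.
if command -v numactl >/dev/null 2>&1; then
  numactl --hardware | grep '^available:'   # e.g. "available: 4 nodes (0-3)"
else
  ls -d /sys/devices/system/node/node* 2>/dev/null | wc -l
fi
```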
CPU C1E Support
Enabling this option (the default) allows the processor to transition to its minimum frequency when entering the C1 power state. If the switch is disabled, the CPU stays at its maximum frequency in C1. Because disabling it increases power consumption, users should only do so after performing application benchmarking to verify improved performance in their environment.
QPI Link Frequency Select
This switch allows the configuration of the QPI link speed. Default is auto, which configures the optimal link speed automatically.
Power C-States:
Enabling CPU C-states causes the CPU to enter a low-power mode when it is idle.
Turbo Mode:
Enabling turbo mode can boost overall CPU performance when not all CPU cores are fully utilized.
Turbo Boost:
This BIOS option can be set to Power Optimized or Traditional. When Power Optimized is selected, Intel Turbo Boost Technology engages after Performance state P0 is sustained for longer than two seconds. When Traditional is selected, Intel Turbo Boost Technology is engaged even for P0 requests less than two seconds.
Hyper-threading:
This BIOS setting enables/disables Intel's Hyper-Threading (HT) Technology. With HT Technology, the operating system can execute two threads in parallel within each processor core.
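Whether Hyper-Threading is active can be verified from the OS. A minimal sketch using lscpu (assumed to be installed; it is part of util-linux on most distributions):

```shell
# Sketch: lscpu reports "Thread(s) per core: 2" when HT is enabled
# and 1 when it is disabled.
if command -v lscpu >/dev/null 2>&1; then
  lscpu | grep -i 'thread(s) per core'
else
  echo "lscpu not available"
fi
```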
NUMA:
This BIOS setting enables/disables Non-Uniform Memory Access (NUMA) support. When enabled, the system's NUMA topology is exposed to the operating system, allowing it to allocate memory on the node local to the processor that uses it.
Enforce POR:
This BIOS switch provides two options, Enabled or Disabled; the default is Enabled.
When this switch is set to Enabled, the DIMMs should be populated as Intel recommends. With RDIMM memory populated at 2 DPC (DIMMs Per Channel) in a 2 SPC (Slots Per Channel) or 3 SPC product, the system supports at most 1866 MHz; populated at 3 DPC in a 3 SPC product, the highest supported memory frequency is 1600 MHz. With only 1 DPC populated, the highest supported frequency is 2133 MHz. For LRDIMM memory, populations of less than 2 DPC support up to 2133 MHz, and 3 DPC in a 3 SPC product supports up to 1600 MHz.
When this switch is set to Disabled, the DIMM population may exceed Intel's recommendations. Populated at 2 DPC in a 2 SPC product, or at 2 or 3 DPC in a 3 SPC product, with RDIMM or LRDIMM memory, the system can support 2133 MHz or higher. Note, however, that the actual memory rate may not be as high as expected when populated at 3 DPC in a 3 SPC product, even though a tool such as dmidecode on an operating system like RHEL 6.4 reports all memory running at the expected frequency (e.g. 2133 MHz or a higher frequency set in the BIOS DDR Speed option). We therefore do not recommend populating memory this way.
Memory Frequency:
This BIOS switch allows the options Auto, 3200, 3000, 2993, 2800, 2667, 2600, 2400, 2200, 2133, 2000, 1867, 1800, 1600, 1400, 1333 and four "reserved" options; the default is Auto.
- Auto: the motherboard clocks the memory at the highest supported frequency.
- 3200, 3000, 2993, 2800, 2667, 2600, 2400, 2200, 2000, 1800, 1400 and the four "reserved" options are intended for memory and processors that may support them in the future.
- 2133, 1867, 1600 and 1333 are the options in common use in Sugon products. They clock the memory at the specified frequency, and take effect only if both the memory and the processor support that frequency.

Flag description origin markings:

[user] Indicates that the flag description came from the user flags file.
[suite] Indicates that the flag description came from the suite-wide flags file.
[benchmark] Indicates that the flag description came from a per-benchmark flags file.

The flags files that were used to format this result can be browsed at
http://www.spec.org/cpu2006/flags/Intel-ic16.0-official-linux64.html,
http://www.spec.org/cpu2006/flags/Sugon-Platform-Settings-V1.2-HSW-revA.20141203.html.

You can also download the XML flags sources by saving the following links:
http://www.spec.org/cpu2006/flags/Intel-ic16.0-official-linux64.xml,
http://www.spec.org/cpu2006/flags/Sugon-Platform-Settings-V1.2-HSW-revA.20141203.xml.


For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact webmaster@spec.org
Copyright 2006-2016 Standard Performance Evaluation Corporation
Tested with SPEC CPU2006 v1.2.
Report generated on Wed Mar 23 18:08:52 2016 by SPEC CPU2006 flags formatter v6906.