SPEC CPU2006 Flag Description for the Intel(R) C++ Compiler 10.1 for IA32 and Intel 64 applications and Intel(R) Fortran Compiler 10.1 for IA32 and Intel 64 applications

Copyright © 2006 Intel Corporation. All Rights Reserved.

Sections

Selecting one of the following will take you directly to that section:


Optimization Flags


Portability Flags


Compiler Flags


Other Flags


System and Other Tuning Information

Platform settings

One or more of the following settings may have been set. If so, the "General Notes" section of the report will say so; and you can read below to find out more about what these settings mean.

Hardware Prefetch:

This BIOS option allows the enabling/disabling of a processor mechanism to prefetch data into the cache according to a pattern-recognition algorithm.

In some cases, setting this option to Disabled may improve performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.

Adjacent Sector Prefetch:

This BIOS option allows the enabling/disabling of a processor mechanism to fetch the adjacent cache line within a 128-byte sector that contains the data needed due to a cache line miss.

In some cases, setting this option to Disabled may improve performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.

ulimit -s

Sets the stack size to n kbytes, or unlimited to allow the stack size to grow without limit.
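As an illustration, the limit can be raised or removed in the shell that launches the run (the value 131072 below is only an example):

    ulimit -s unlimited     # allow the stack to grow without limit
    ulimit -s 131072        # or set the stack size limit to 131072 kbytes (128 MB)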

submit= MYMASK=`printf '0x%x' \$((1<<\$SPECCOPYNUM))`; /usr/bin/taskset \$MYMASK $command

When running multiple copies of benchmarks, the SPEC config file feature submit is sometimes used to cause individual jobs to be bound to specific processors. This specific submit command is used for Linux. The elements of the command are:

$SPECCOPYNUM: a variable set by the SPEC tools to the copy number (starting at 0) of the benchmark instance being launched.

MYMASK: a shell variable holding a hexadecimal CPU-affinity bitmask in which only the bit corresponding to the copy number is set (copy 0 gets 0x1, copy 1 gets 0x2, copy 2 gets 0x4, and so on).

/usr/bin/taskset <mask> <command>: launches <command> with its CPU affinity restricted to the processors selected by <mask>, so that each benchmark copy is bound to its own processor.

$command: replaced by the SPEC tools with the actual command used to run the benchmark.
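As a sketch of what the substituted command looks like (the benchmark binary name below is only a placeholder, and the mapping of mask bits to processors depends on how the operating system numbers logical CPUs), copy number 3 would be launched roughly as:

    MYMASK=`printf '0x%x' $((1<<3))`          # yields 0x8: only CPU 3 is selected
    /usr/bin/taskset 0x8 ./benchmark_binary   # run this copy bound to CPU 3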

Intel(R) MPI Library 3.1 for Linux* options and environment variables

Job startup command flags

-perhost <# of processes>

Use this option to place the indicated number of consecutive MPI processes on every host, in group round-robin fashion. The number of processes to start is controlled by the -n option as usual.

-n <# of processes> or -np <# of processes>

Use this option to set the number of MPI processes to run with the current argument set.

-genv <ENVVAR> <value>

Use this option to set the <ENVVAR> environment variable to the specified <value> for all MPI processes.
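As an illustrative launch line combining these flags (the executable name, process counts, and device value are placeholders; mpiexec is the Intel MPI job startup command), 16 processes could be started 4 per host with an environment variable set for all of them:

    mpiexec -perhost 4 -genv I_MPI_DEVICE rdssm -n 16 ./mpi_application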

Environment variables

I_MPI_DEVICE=<device>[:<provider>]

Select the particular network fabric to be used.

sock - Sockets

shm - Shared-memory only (no sockets)

ssm - Combined sockets + shared memory (for clusters with SMP nodes)

rdma - RDMA-capable network fabrics including InfiniBand*, Myrinet* (via DAPL*)

rdssm - Combined sockets + shared memory + DAPL* (for clusters with SMP nodes and RDMA-capable network fabrics)
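These values can be set in the environment before job startup or passed to all processes with -genv; for example (the choice of rdssm here is only illustrative):

    export I_MPI_DEVICE=rdssm    # sockets + shared memory + DAPL* fabrics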

I_MPI_FALLBACK_DEVICE=(enable|disable)

Set this environment variable to enable fallback to the available fabric. It is valid only for rdssm and rdma modes.

enable - Fall back to the shared memory and/or socket fabrics if initialization of the DAPL* fabric fails. This is the default value.

disable - Terminate the job if the fabric selected by the I_MPI_DEVICE environment variable cannot be initialized.
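As a sketch of how this might be used (values shown only for illustration), a run that must use the DAPL* fabric, and should fail rather than silently fall back to a slower fabric, could set:

    export I_MPI_DEVICE=rdma
    export I_MPI_FALLBACK_DEVICE=disable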