MPI2007 Flag Description
Intel Corporation Intel Server System S9248WK1HLC (Intel Xeon Platinum 9242, 2.30 GHz, DDR4-2933 MHz, Turbo on)

Copyright © 2013 Intel Corporation. All Rights Reserved.


Base Compiler Invocation

C benchmarks

C++ benchmarks

126.lammps

Fortran benchmarks

Benchmarks using both Fortran and C


Base Portability Flags

121.pop2

126.lammps


Base Optimization Flags

C benchmarks

C++ benchmarks

126.lammps

Fortran benchmarks

Benchmarks using both Fortran and C


Implicitly Included Flags

This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.


System and Other Tuning Information

MPI options and environment variables

Job startup command flags

-n <# of processes> or -np <# of processes>

Use this option to set the number of MPI processes to run for the current argument set.
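
As a minimal sketch (the executable name ./my_app and the rank count are placeholders, not taken from this result):

    # Start 96 MPI ranks of a hypothetical executable
    mpirun -np 96 ./my_app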

-perhost <# of processes> or -ppn <# of processes>

Use this option to place the indicated number of consecutive MPI processes on every host in the group in round-robin fashion. The number of processes to start is controlled by the -n option as usual.
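
For illustration, assuming two hosts named node1 and node2 (the host names and counts are illustrative, not the launch line used for this result), the following places ranks 0-47 on the first host and ranks 48-95 on the second:

    # 96 ranks total, 48 consecutive ranks per host
    mpirun -n 96 -ppn 48 -hosts node1,node2 ./my_app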

-genv <ENVVAR> <value>

Use this option to set the <ENVVAR> environment variable to the specified <value> for all MPI processes.
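
A brief sketch combining -genv with the launch options above (values are illustrative):

    # Export I_MPI_PIN_DOMAIN=core to every MPI rank for this run only
    mpirun -genv I_MPI_PIN_DOMAIN core -n 96 -ppn 48 ./my_app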

Environment variables

I_MPI_FABRICS=<fabric>|<intra-node fabric>:<inter-node fabric>

Select the particular network fabric to be used; an example setting appears after the list of available values below.

tmi - Tag Matching Interface (TMI)-capable network fabrics, such as Intel True Scale Fabric and Myrinet* (through TMI).

shm - Shared-memory only

dapl - Direct Access Programming Library* (DAPL)-capable network fabrics, such as InfiniBand* and iWarp* (through DAPL).

ofi - OpenFabrics Interfaces* (OFI)-capable network fabrics, such as Intel True Scale Fabric and Ethernet (through OFI API).
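
A minimal sketch of selecting a fabric pair (the specific choice is illustrative, not necessarily the setting used for this result):

    # Shared memory within a node, OFI between nodes
    export I_MPI_FABRICS=shm:ofi
    # Or a single fabric for both intra- and inter-node traffic
    export I_MPI_FABRICS=tmi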

I_MPI_COMPATIBILITY=<value>

Available values:

3 - The Intel MPI Library 3.x compatible mode

4 - The Intel MPI Library 4.0.x compatible mode

Set this environment variable to choose the Intel MPI Library runtime compatible mode. By default, the library complies with the MPI-3.1 standard.
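
For example (illustrative; leave the variable unset to get the default MPI-3.1 behavior):

    # Request the Intel MPI Library 4.0.x compatible runtime mode
    export I_MPI_COMPATIBILITY=4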

I_MPI_HYDRA_PMI_CONNECT=<value>

Available values:

nocache - Do not cache PMI messages

cache - Cache PMI messages on the local pmi_proxy management processes to minimize the number of PMI requests. Cached information is automatically propagated to child management processes.

lazy-cache - Cache mode with on-demand propagation. This is the default value.

alltoall - Information is automatically exchanged between all pmi_proxy processes before any get request can be done.

Define the processing method for PMI messages.
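
An illustrative setting (lazy-cache is already the default, so the variable only needs to be set when a different mode is wanted):

    # Exchange PMI information between all pmi_proxy processes up front
    export I_MPI_HYDRA_PMI_CONNECT=alltoall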

FI_PSM2_INJECT_SIZE=<value>

Maximum message size allowed for fi_inject and fi_tinject calls (default: 64).
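
A sketch, with an arbitrary illustrative value:

    # Allow larger messages through fi_inject/fi_tinject (default limit is 64)
    export FI_PSM2_INJECT_SIZE=256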

FI_PSM2_LAZY_CONN=0|1

Control when connections are established between the PSM2 endpoints on which OFI endpoints are built. When set to 0, connections are established when addresses are inserted into the address vector; this is the eager connection mode. When set to 1, connections are established when an address is first used in communication; this is the lazy connection mode.
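
For example (illustrative):

    # 0 = eager: connect when addresses are inserted into the address vector
    # 1 = lazy:  connect when an address is first used in communication
    export FI_PSM2_LAZY_CONN=1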

I_MPI_PIN_DOMAIN=<mc-shape>

Control process pinning for MPI applications. This environment variable defines a number of non-overlapping subsets (domains) of logical processors on a node and a set of rules for how MPI processes are bound to these domains: one MPI process per domain. The core option means that each domain consists of the logical processors that share a particular core, so the number of domains on a node equals the number of cores on the node.
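
A worked sketch, assuming a node with 48 physical cores: the setting below creates 48 domains on that node, and each of the 48 MPI ranks placed there is pinned to the logical processors of one core.

    # One pinning domain per physical core, one MPI rank per domain
    export I_MPI_PIN_DOMAIN=core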

I_MPI_PIN_ORDER=<value>

This environment variable defines the mapping order of MPI processes to the domains specified by the I_MPI_PIN_DOMAIN environment variable. The bunch option means that processes are mapped proportionally to sockets and the domains are placed as close together as possible on the sockets.
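
Combining the two pinning variables (illustrative):

    # Per-core domains, mapped compactly and proportionally across sockets
    export I_MPI_PIN_DOMAIN=core
    export I_MPI_PIN_ORDER=bunch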

FI_PSM2_DELAY=<value>

Time (in seconds) to sleep before closing PSM2 endpoints. This is a workaround for a bug in some versions of the PSM library. The default setting is 1.
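
As with the other variables above, this can also be passed per-job through -genv (the value shown is illustrative):

    # Wait 2 seconds before closing PSM2 endpoints at shutdown
    mpirun -genv FI_PSM2_DELAY 2 -n 96 -ppn 48 ./my_app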


Flag description origin markings:

[user] Indicates that the flag description came from the user flags file.
[suite] Indicates that the flag description came from the suite-wide flags file.
[benchmark] Indicates that the flag description came from a per-benchmark flags file.

The flags file that was used to format this result can be browsed at
http://www.spec.org/mpi2007/flags/EM64T_Intel140_flags.20190110.html.

You can also download the XML flags source by saving the following link:
http://www.spec.org/mpi2007/flags/EM64T_Intel140_flags.20190110.xml.


For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact webmaster@spec.org
Copyright 2006-2010 Standard Performance Evaluation Corporation
Tested with SPEC MPI2007 v2.0.1.
Report generated on Wed Jul 31 16:22:16 2019 by SPEC MPI2007 flags formatter v1445.