SPEC MPI2007 Flag Description for SGI MPT and Intel(R) Compiler 18.0

Sections

This flag description is organized into the following sections:


Optimization Flags


Portability Flags


Compiler Flags


Other Flags


System and Other Tuning Information

SGI MPT 2.0x options and environment variables

Job startup command and options

mpiexec_mpt [ global_opts ] local_opts cmd [ : local_opts cmd ] ...

The mpiexec_mpt command launches a Message Passing Toolkit (MPT) MPI program in a batch scheduler-managed cluster environment. mpiexec_mpt uses the list of cluster nodes it receives from the batch scheduler to generate and issue an appropriate mpirun command to launch the multi-node job.

-n <# of processes> or -np <# of processes>

Use this option to set the number of MPI processes to run for the current arg-set.
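
For example, assuming a hypothetical application binary named ./my_mpi_app, a 64-rank job could be launched under the batch scheduler as:

mpiexec_mpt -np 64 ./my_mpi_app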

mpiexec [ global_opts ] local_opts cmd [ : local_opts cmd ] ...

PBS Pro's mpiexec command provides the standard mpiexec interface on Altix systems running ProPack 4 or greater. It provides functionality equivalent to mpiexec_mpt.
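
A minimal PBS Pro batch script sketch follows; the resource selection and the binary name ./my_mpi_app are illustrative assumptions, not values taken from this description:

#!/bin/sh
#PBS -l select=4:ncpus=16:mpiprocs=16
cd $PBS_O_WORKDIR
mpiexec -n 64 ./my_mpi_app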

Environment variables

MPI_REQUEST_MAX

Determines the maximum number of nonblocking sends and receives that can simultaneously exist for any single MPI process. MPI generates an error message if this limit (or the default, if not set) is exceeded. Default: 16384
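
For example, a code that keeps many nonblocking requests outstanding per rank might raise the limit before launch; the value shown is illustrative:

export MPI_REQUEST_MAX=65536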

MPI_TYPE_MAX

Determines the maximum number of data types that can simultaneously exist for any single MPI process. MPI generates an error message if this limit (or the default, if not set) is exceeded. Default: 1024
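
For example, to allow up to 4096 simultaneous derived data types per process (an illustrative value):

export MPI_TYPE_MAX=4096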

MPI_IB_RAILS

If the MPI library uses the IB driver as the inter-host interconnect, it will by default use a single IB fabric. If this variable is set to 2, the library will try to make use of multiple available separate IB fabrics and split MPI traffic across them. Default: 1
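
For example, on a system cabled with two separate IB fabrics, dual-rail operation would be requested with:

export MPI_IB_RAILS=2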

MPI_CONNECTIONS_THRESHOLD

For very large MPI jobs, the time and resource cost of creating an InfiniBand connection between every pair of ranks at job start can be prodigious. If this variable is set to a number no greater than the number of ranks, MPT creates InfiniBand RC Queue Pairs (QPs) lazily, on demand. If it is set to a number greater than the number of ranks, MPT attempts to allocate all the InfiniBand RC QPs it needs at job start. If this variable is not modified and InfiniBand RC QPs are in use, MPT compares the default value against the number of ranks using the criteria above. If it is not modified and InfiniBand XRC QPs are in use, MPT attempts to allocate the QPs at job launch but may need to switch to lazy allocation if the space requirements are too large. Default: 1025
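
For example, to force eager QP allocation in a 2048-rank job, set the threshold above the rank count (the value is illustrative):

export MPI_CONNECTIONS_THRESHOLD=4096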

MPI_IB_DEVS

Directs MPT to open specific IB ports in each rank. If MPI_IB_DEVS is empty or not defined, MPT will assign ranks to IB ports by the formula "local rank modulo number of ports." The first rank on each host will use the first port on that host, etc. By default MPT will only use the first working port on the first HCA with a working port.
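
A sketch follows, assuming a host with two HCAs named mlx4_0 and mlx4_1; both the device names and the comma-separated list syntax are assumptions, so consult the MPT mpi(1) man page for the exact format on a given system:

export MPI_IB_DEVS=mlx4_0,mlx4_1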

Other Tuning Information

ulimit -s unlimited

Removes limits on the maximum size of the automatically-extended stack region of the current process and each process it creates.
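
In a batch script this is typically done before launching the job, so each MPI process inherits the setting; a minimal sketch with a hypothetical binary name:

ulimit -s unlimited
mpiexec_mpt -np 64 ./my_mpi_app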