Specifies the size of the off_t data type.
Means "no optimization". This level compiles the fastest and generates the most debuggable code.
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective. Level 2 is assumed if no value is specified (i.e., "-O"). The default is "-O2".
Somewhere between -O0 and -O2.
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective. Level 2 is assumed if no value is specified (i.e., "-O"). The default is "-O2".
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective. Level 2 is assumed if no value is specified (i.e., "-O"). The default is "-O2".
Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards. Refer to the AOCC user guide for detailed documentation of the optimizations enabled under -Ofast.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
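For example, a minimal sketch of targeting AMD Zen with this option (foo.c is a placeholder file name):
*** $ clang -O3 -march=znver1 -c foo.c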
(For C++ only) Enable generation of unwind information. This allows exceptions to be thrown through Clang compiled stack frames. This is on by default in x86-64. -fno-exceptions disables C++ exception handling.
Generate output files in LLVM formats, suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
Note:
-flto requires llvm to be built with the gold linker. The default binary releases of llvm from llvm.org do not have LLVMGold.so and so will not support -flto. To use -flto, you will have to:
Download, configure and build binutils for gold with plugin support.
*** $ git clone --depth 1 git://sourceware.org/git/binutils-gdb.git binutils
*** $ mkdir build
*** $ cd build
*** $ ../binutils/configure --enable-gold --enable-plugins --disable-werror
*** $ make all-gold
That should leave you with build/gold/ld-new which supports the -plugin option. Running make will additionally build build/binutils/ar and nm-new binaries supporting plugins.
Build the LLVMgold plugin. Run CMake with -DLLVM_BINUTILS_INCDIR=/path/to/binutils/include. The correct include path will contain the file plugin-api.h.
Replace the existing binutils tools in /usr/bin with the newly built gold-enabled binutils tools such as ld, nm, and ar. It is recommended that you use soft links to back up and replace the existing ld, nm, and ar with the gold-enabled versions (see the example below).
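One possible way to do this, shown as a hedged sketch (the /path/to/build paths are placeholders for your actual binutils build directory):
*** $ mv /usr/bin/ld /usr/bin/ld.orig                                       # back up the original linker
*** $ ln -s /path/to/build/gold/ld-new /usr/bin/ld                          # gold-enabled linker
*** $ mv /usr/bin/ar /usr/bin/ar.orig && ln -s /path/to/build/binutils/ar /usr/bin/ar
*** $ mv /usr/bin/nm /usr/bin/nm.orig && ln -s /path/to/build/binutils/nm-new /usr/bin/nm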
Generate code for a 32-bit environment. The 32-bit environment sets int, long and pointer to 32 bits and generates code that runs on any i386 system. The compiler generates x86 or IA32 32-bit ABI. The default on a 32-bit host is 32-bit ABI. The default on a 64-bit host is 64-bit ABI if the target platform specified is 64-bit, otherwise the default is 32-bit.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits and generates code for AMD's x86-64 architecture. The compiler generates AMD64, INTEL64, x86-64 64-bit ABI. The default on a 32-bit host is 32-bit ABI. The default on a 64-bit host is 64-bit ABI if the target platform specified is 64-bit, otherwise the default is 32-bit.
Instructs the compiler not to allocate arrays from the stack and instead use heap memory.
Given the expression "a = b / c", instructs the compiler to calculate "a = b * (1/c)".
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if it is set by the system headers.
-ffp-contract=fast enables floating-point expression contraction such as forming of fused multiply-add operations if the target has native support for them.
Load the plugin code in file dragonegg.so, assumed to be a shared object to be dlopen'd by the compiler.
Instructs the compiler to link with system math libraries
Instructs the compiler to link with AMD-supported math library
Instructs the linker to use the first definition encountered.
Use the jemalloc library, which is a general purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Use the SmartHeap library, a fast, portable, reliable, ANSI-compliant malloc/operator new library.
Instructs the compiler to link with the OpenMP libraries.
Instructs the compiler to link with the gfortran libraries.
Passes the argument list following the flag to the DragonEgg gfortran plugin. Each argument must be enclosed in quotes.
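For example, a hedged invocation that passes an LLVM option through the plugin (the source file name is illustrative; the option string reuses one shown later in this document):
*** $ gfortran -O2 -m64 -fplugin=path/dragonegg.so -fplugin-arg-dragonegg-llvm-option="-inline-threshold:1000" -c xyz.f90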
The option combines compare/test instruction with branches wherever possible.
This option enables advanced branch combine optimizations across basic blocks.
Enables splitting of long live ranges of loop induction variables which span loop boundaries. This helps reduce register pressure and can help avoid needless spills to memory and reloads from memory.
Enables loop strength reduction for nested loop structures. By default, the compiler will do loop strength reduction only for the innermost loop
Set the default integer and logical types to an 8 byte type. It does not promote variables with explicit KIND declaration.
Sets the compiler's inlining heuristics to an aggressive level by increasing the inline thresholds.
Sets the compiler's inlining threshold level to the value passed as argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
Sets the compiler's inlining threshold level to the value passed as argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined. Use the option -fplugin-arg-dragonegg-llvm-option="-inline-threshold:1000" to pass this option to the LLVM backend through DragonEgg.
Certain loops with breaks may be vectorized by default at O2 and above. In some extreme situations this may result in unsafe behavior. Use this option to disable vectorization of such loops. Use the option -fplugin-arg-dragonegg-llvm-option="-disable-vect-cmp" to pass this option to the LLVM backend through DragonEgg.
Enables AVX2 (Advanced Vector Extensions, 2nd generation) support.
Restricts the optimization and code generation to first-generation AVX instructions.
The optimization transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
Certain loops with breaks may be vectorized by default at O2 and above. In some extreme situations this may result in unsafe behavior. Use this option to disable vectorization of such loops.
This option transforms the layout of arrays of structure types and their fields to improve cache locality. Possible values are 1, 2, and 3; more aggressive analysis and transformations are performed at higher levels, with -fstruct-layout=3 being the most aggressive. Use -fstruct-layout=3 when you know the allocated size of the array of structures fits within 64KB. Use the value 2 when a similar size exceeds 64KB but does not exceed 4GB. The option is effective only under -flto, as whole-program analysis is required to perform this optimization.
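An illustrative sketch, combining the option with -flto as required (the source file names are placeholders):
*** $ clang -O3 -flto -fstruct-layout=3 part1.c part2.c -o app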
CPU2017 suite GCC benchmark:
LLVM Clang uses the C99 standard by default. This portability flag is needed because the GCC benchmark in the CPU2017 suite uses the C89 standard.
Enables the adcx instruction generation support.
The optimization merges duplicate constant uses into a register to reduce instruction width.
Passes the option-name through the compiler frontend to the optimizer.
Instructs the compiler to unroll the loops wherever possible.
The unroll count can be specified explicitly with -unroll_count=<value>, where <value> is a positive integer. If this value is greater than the trip count, the loop will be fully unrolled.
Sets the limit at which loops will be unrolled. For example, if unroll-threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
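A hedged sketch of setting the threshold, assuming the option is forwarded to the optimizer with a Clang-style -mllvm flag (the source file name is a placeholder):
*** $ clang -O3 -mllvm -unroll-threshold=100 -c foo.c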
Turns on LLVM's (and Clang's) instrumentation-based profiling.
Uses the profiling files generated from a program compiled with -fprofile-instr-generate.
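A hedged sketch of the full generate/run/merge/use cycle (program and file names are placeholders; llvm-profdata is the LLVM tool used to merge raw profiles):
*** $ clang -O3 -fprofile-instr-generate -o app app.c
*** $ ./app                                                   # writes default.profraw
*** $ llvm-profdata merge -output=app.profdata default.profraw
*** $ clang -O3 -fprofile-instr-use=app.profdata -o app app.c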
This option avoids runtime memory dependency checks to enable aggressive vectorization.
This option enables aggressive loop unswitching heuristic based on usage of branch conditions.
-Wl passes the argument that follows it on to the linker. In the example, it tells the linker to allow multiple definitions.
Tells the linker to link in the specified library.
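For instance, a hedged sketch combining the two (the object file and library choices are illustrative; --allow-multiple-definition is the GNU ld spelling of the multiple-definition example above):
*** $ clang -O3 foo.o -Wl,--allow-multiple-definition -ljemalloc -lm -o foo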
CPU2017 suite WRF, POP2 benchmarks:
The x86 architecture is little-endian. Since these benchmarks were written with a big-endian architecture in mind, this portability flag is required for the compiler to generate correct code.
CPU2017 suite GCC benchmark:
LLVM Clang uses the C99 standard by default. This portability flag is needed because the GCC benchmark in the CPU2017 suite uses the C89 standard.
clang is a C, C++, and Objective-C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
The clang executable is actually a small driver which controls the overall execution of other tools such as the compiler, assembler and linker. Typically you do not need to interact with the driver, but you transparently use it to run the other tools.
Preprocessing: This stage handles tokenization of the input source file, macro expansion, #include expansion and handling of other preprocessor directives. The output of this stage is typically called a .i (for C), .ii (for C++), .mi (for Objective-C), or .mii (for Objective-C++) file.
Parsing and Semantic Analysis: This stage parses the input file, translating preprocessor tokens into a parse tree. Once in the form of a parse tree, it applies semantic analysis to compute types for expressions as well as to determine whether the code is well formed. This stage is responsible for generating most of the compiler warnings as well as parse errors. The output of this stage is an Abstract Syntax Tree (AST).
Code Generation and Optimization: This stage translates an AST into low-level intermediate code (known as LLVM IR) and ultimately to machine code. This phase is responsible for optimizing the generated code and handling target-specific code generation. The output of this stage is typically called a .s file or assembly file.
Clang also supports the use of an integrated assembler, in which the code generator produces object files directly. This avoids the overhead of generating the .s file and of calling the target assembler.
Assembler: This stage runs the target assembler to translate the output of the compiler into a target object file. The output of this stage is typically called a .o file or object file.
Linker: This stage runs the target linker to merge multiple object files into an executable or dynamic library. The output of this stage is typically called an a.out, .dylib or .so file.
Invoke the LLVM Fortran compiler
DragonEgg is a GCC plugin that replaces GCC's optimizers and code generators with those from the LLVM project.
To build and run Fortran programs:
*** $ gfortran [optimization flags] -fplugin=path/dragonegg.so [plugin optimization flags] -c xyz.f90
*** $ clang -O3 -flto -lgfortran -o xyz xyz.o
*** $ ./xyz
optimization flags:
Flags that the GFortran frontend will use to generate the IR for the DragonEgg plugin. It is recommended to use basic out-of-the-box flags (e.g., -m64 -O2), preferably with the least GFortran optimization (-O0).
plugin optimization flags:
Optimization flags the DragonEgg plugin will use to generate the optimized LLVM IR and for code generation. Here you can use higher optimization flags such as -O3, -mavx, etc., if required.
Note:
* LLVM releases on llvm.org provide only source releases.
* Latest release of DragonEgg sources is at http://llvm.org/releases/download.html#3.5.2
* DragonEgg is a self-contained plugin with LLVM embedded within, so it is recommended to use LLVM 3.5.2 sources when building DragonEgg.
Specifies a directory to search for libraries. Use -L to add directories to the search path for library files. Multiple -L options are valid. However, the position of multiple -L options is important relative to -l options supplied.
Specifies a directory to search for include files. Use -I to add directories to the search path for include files. Multiple -I options are valid.
Switch to enable OpenMP.
Using numactl to bind processes and memory to cores
For multi-copy runs or single copy runs on systems with multiple sockets, it is advantageous to bind a process to a particular core. Otherwise, the OS may arbitrarily move your process from one core to another. This can affect performance. To help, SPEC allows the use of a "submit" command where users can specify a utility to use to bind processes. We have found the utility 'numactl' to be the best choice.
numactl runs processes with a specific NUMA scheduling or memory placement policy. The policy is set for a command and inherited by all of its children. The numactl flag "--physcpubind" specifies which core(s) to bind the process. "-l" instructs numactl to keep a process memory on the local node while "-m" specifies which node(s) to place a process memory. For full details on using numactl, please refer to your Linux documentation, 'man numactl'
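For example, hedged invocations that pin a run to specific cores and keep its memory local (the core and node numbers, and the ./benchmark name, are illustrative):
*** $ numactl --physcpubind=0 -l ./benchmark                  # bind to core 0, keep memory on the local node
*** $ numactl --physcpubind=0-7 -m 0 ./benchmark              # bind to cores 0-7, allocate memory on node 0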
Note that with some versions of numactl, particularly the version found on SLES 10, we have found that the utility incorrectly interprets application arguments as its own. For example, with the command "numactl --physcpubind=0 -l a.out -m a", numactl will interpret a.out's "-m" option as its own "-m" option. To work around this problem, a user can put the command to be run in a shell script and then run the shell script using numactl. For example: "echo 'a.out -m a' > run.sh ; numactl --physcpubind=0 bash run.sh"
Transparent Huge Pages (THP)
THP is an abstraction layer that automates most aspects of creating, managing, and using huge pages. THP is designed to hide much of the complexity in using huge pages from system administrators and developers, as normal huge pages must be assigned at boot time, can be difficult to manage manually, and often require significant changes to code in order to be used effectively. Most recent Linux OS releases have THP enabled by default
Linux Huge Page settings
If you need finer control and want to set the huge pages manually, you can follow the steps below:
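The exact steps depend on your distribution; a typical, hedged sequence (the page count and mount point are example values) is:
*** $ echo 200 > /proc/sys/vm/nr_hugepages                    # reserve 200 huge pages (example value)
*** $ grep Huge /proc/meminfo                                 # verify HugePages_Total / HugePages_Free
*** $ mkdir -p /mnt/hugetlbfs && mount -t hugetlbfs nodev /mnt/hugetlbfs   # optional hugetlbfs mount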
Note that further information about huge pages may be found in your Linux documentation file: /usr/src/linux/Documentation/vm/hugetlbpage.txt
ulimit -s <n>
Sets the stack size to n kbytes, or unlimited to allow the stack size to grow without limit.
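For example, to allow the stack to grow without limit:
*** $ ulimit -s unlimited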
ulimit -l <n>
Sets the maximum size of memory that may be locked into physical memory.
OMP_NUM_THREADS
Sets the maximum number of parallel threads that applications based on OpenMP may use.
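For example (the thread count is illustrative; choose a value matching the cores available):
*** $ export OMP_NUM_THREADS=64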
powersave -f (on SuSE)
Makes the powersave daemon set the CPUs to the highest supported frequency.
/etc/init.d/cpuspeed stop (on Red Hat)
Disables the cpu frequency scaling program in order to set the CPUs to the highest supported frequency.
LD_LIBRARY_PATH
An environment variable set to include the LLVM, JEMalloc and SmartHeap libraries used during compilation of the binaries. This environment variable setting is not needed when building the binaries on the system under test.
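A hedged example (the paths are placeholders for wherever the LLVM, jemalloc and SmartHeap libraries are installed):
*** $ export LD_LIBRARY_PATH=/path/to/llvm/lib:/path/to/jemalloc/lib:/path/to/smartheap/lib:$LD_LIBRARY_PATH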
kernel/randomize_va_space
This option can be used to select the type of process address space randomization used in the system, for architectures that support this feature (an example of setting it follows the value list below).
*** 0 - Turn the process address space randomization off. This is the default for architectures that do not support this feature anyways, and kernels that are booted with the "norandmaps" parameter.
*** 1 - Make the addresses of mmap base, stack and VDSO page randomized. This, among other things, implies that shared libraries will be loaded to random addresses. Also for PIE-linked binaries, the location of code start is randomized. This is the default if the CONFIG_COMPAT_BRK option is enabled.
*** 2 - Additionally enable heap randomization. This is the default if CONFIG_COMPAT_BRK is disabled.
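For example, to turn address space randomization off (either form works on most kernels):
*** $ echo 0 > /proc/sys/kernel/randomize_va_space
*** $ sysctl -w kernel.randomize_va_space=0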
MALLOC_CONF
An environment variable set to tune the jemalloc allocation strategy during the execution of the binaries. This environment variable setting is not needed when building the binaries on the system under test.
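A hedged example (the tunable and value are illustrative; see the jemalloc documentation for the options supported by your version):
*** $ export MALLOC_CONF="narenas:4"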