Using SPEC CPU2017: The 'runcpu' Command

Latest: www.spec.org/cpu2017/Docs/

1. Basics

1.1 Defaults

1.2 Syntax

1.3 Benchmarks and suites

1.4 Run order

1.5 Disk Usage

1.5.1 Directory tree

1.5.2 Hey! Where did all my disk space go?

1.6 Multi-user support and limitations
expid (partial solution) output_root (recommended)

1.7 Actions: build buildsetup report run runsetup setup validate
cleaning: clean clobber onlyrun realclean scrub trash
(alternative: Clean by hand)

2. Commonly used options

--action --check_version --config --copies --flagsurl --help --ignore_errors --iterations --loose --output_format --rawformat --rebuild --reportable --threads (new) --tune

3. Less common options

--baseonly --basepeak --nobuild --comment --define --delay --deletework --expid --fake --fakereport --fakereportable --[no]feedback --[no]graph_auto --graph_max --graph_min --http_proxy --http_timeout --info_wrap_column --keeptmp --label (new) --log_timestamp --make_no_clobber --notes_wrap_column --output_root --parallel_test --parallel_test_workloads (new) --[no]power (new) --preenv --reportonly --review --[no]setprocgroup --size --[no]table --test --train_with --undef --update --use_submit_for_compare --use_submit_for_speed --username --verbose --version

4. Removed/unsupported options

4.1 No longer needed: --rate --speed --parallel_setup

4.2 Features removed: --machine --maxcompares

4.3 Unsupported: --make_bundle --unpack_bundle --use_bundle

5. Quick reference

1. Basics

What is runcpu?   runcpu is the primary tool for SPEC CPU2017. You use it from a Unix shell or the Microsoft Windows command line to build and run benchmarks, with commands such as these:

runcpu --config=eniac.cfg    --action=build 519.lbm_r
runcpu --config=colossus.cfg --threads=16   628.pop2_s
runcpu --config=z3.cfg       --copies=64    fprate 

The first command compiles the benchmark named 519.lbm_r. The second runs the OpenMP benchmark 628.pop2_s using 16 threads. The third runs 64 copies of all the SPECrate Floating Point benchmarks.

New with CPU2017: The former runspec utility is renamed runcpu in SPEC CPU2017.   [Why?]

Before reading this document: If you have not already done so, please install and test your SPEC CPU2017 distribution (ISO image). This document assumes that you have already installed the suite and verified that it is working.

If you have not done the above, please see the brief instructions in the Quick Start guide, or the more detailed section "Testing Your Installation" (Unix, Windows).

1.1 Defaults

The SPEC CPU default settings described in this document may be adjusted by config files.

The order of precedence for settings is:

Highest precedence: runcpu command
Middle: config file
Lowest: the tools as shipped by SPEC

Therefore, when this document tells you that something is the default, bear in mind that your config file may have changed that setting. With luck, the author of the config file will tell you so.
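For example, suppose your config file (here called mine.cfg, a hypothetical name) sets iterations = 3. A value given on the runcpu command line takes precedence over it:

runcpu --config=mine.cfg --iterations=2 503.bwaves_r    # runs 2 iterations, overriding the config file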

1.2 Syntax

The syntax for the runcpu command is:

runcpu [options] [list of benchmarks to run]

Options are described in the following sections. There, you will notice that many options have both long and short names. The long names are invoked using two dashes, and the short names use only a single dash. For long names that take a parameter, you can optionally use an equals sign. Long names can also be abbreviated, provided that you still enter enough letters for uniqueness. For example, the following commands all do the same thing:

runcpu --config=dianne_july25a --debug=99 fprate 
runcpu --config dianne_july25a --debug 99 fprate 
runcpu --conf dianne_july25a   --deb 99   fprate 
runcpu -c dianne_july25a       -v 99      fprate 

1.3 Benchmarks and Suites

In the list of benchmarks to run, you can use one or more individual benchmarks, such as 500.perlbench_r, or you can run entire suites, using one of the Short Tags below.

Short Tag   Suite                           Contents                       Metrics
---------   -----------------------------   ----------------------------   ------------------------
intspeed    SPECspeed 2017 Integer          10 integer benchmarks          SPECspeed2017_int_base
                                                                           SPECspeed2017_int_peak
fpspeed     SPECspeed 2017 Floating Point   10 floating point benchmarks   SPECspeed2017_fp_base
                                                                           SPECspeed2017_fp_peak
intrate     SPECrate 2017 Integer           10 integer benchmarks          SPECrate2017_int_base
                                                                           SPECrate2017_int_peak
fprate      SPECrate 2017 Floating Point    13 floating point benchmarks   SPECrate2017_fp_base
                                                                           SPECrate2017_fp_peak

How many copies?  SPECspeed suites always run one copy of each benchmark.
                  SPECrate suites run multiple concurrent copies of each benchmark; the tester selects how many.

What do higher scores mean?  For SPECspeed, higher scores indicate that less time is needed.
                             For SPECrate, higher scores indicate more throughput (work per unit of time).
The "Short Tag" is the canonical abbreviation for use with runcpu, where context is defined by the tools. In a published document, context may not be clear.
To avoid ambiguity in published documents, the Suite Name or the Metrics should be spelled as shown above.

Supersets: There are several supersets which run more than one of the above (for example, all runs all four suites).

Synonyms - Suite selection is done with the short tags:    intrate fprate intspeed fpspeed
You can also use full metric names. You can say:   runcpu SPECspeed2017_int_base
Some alternates (such as int_rate or CPU2017) may provoke runcpu to say that it is trying to DWIM ("Do What I Mean"), but these are not recommended.
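For example (the config file name below is hypothetical), the SPECrate 2017 Integer suite can be selected either with the short tag or with a full metric name:

runcpu --config=anne_test1 --noreportable intrate
runcpu --config=anne_test1 --noreportable SPECrate2017_int_base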

Benchmark names: Individual benchmarks can be named, numbered, or both.
Separate them with a space.
Names can be abbreviated, as long as you enter enough characters for uniqueness.
Each of the following commands does the same thing:

runcpu -c jason_july09d --noreportable 503.bwaves_r 510.parest 603.bwaves_s
runcpu -c jason_july09d --noreportable 503 510 603
runcpu -c jason_july09d --noreportable parest bwaves_r bwaves_s
runcpu -c jason_july09d --noreportable pare bwaves_r bwaves_s

To exclude a benchmark: Use a hat (^, also known as caret, typically found as shift-6). Note that if hat has significance to your shell, you may need to protect it from interpretation by the shell, for example by putting it in single quotes. On Windows, you will need to use both a hat and double quotes for each benchmark you want to exclude.

bash-n.n.n$ runcpu --noreportable -c kathy_sep14c fprate ^503 ^pare 
pickyShell% runcpu --noreportable -c kathy_sep14c fprate '^503' '^pare' 
E:\cpu2017> runcpu --noreportable -c kathy_sep14c fprate "^503" "^pare"

Turning off reportable: If your config file sets reportable=yes then you cannot run a subset unless you turn that option off.

[/usr/cathy/cpu2017]$ runcpu --config cathy_apr21b --noreportable fprate ^parest 

1.4 Run order

A reportable run does these steps:

  1. Test: Set up all of the benchmarks using the test workload. Run them. Verify that they get correct answers. The test workloads are run merely as an additional verification of correct operation of the generated executables; their times are not reported and do not contribute to overall metrics. Therefore multiple benchmarks can be run simultaneously, as in the example below where the tester has set --parallel_test to allow up to 20 simultaneous tests.

  2. Train: Do the same steps for the train workload, for the same reasons, with the same verification, non-reporting, and parallelism.

  3. Ref: Run the refrate (5xx benchmarks) or the refspeed (6xx benchmarks) workload.

    If running refspeed, multiple --threads are optionally allowed.
    If running refrate multiple --copies are optionally allowed, as in the example below which uses 256 copies in base.
    (*) For reportable runs, --iterations must be 2 or 3.

  4. Report: Generate the reports.

Summarizing reportable run order: The order can be summarized as:

          setup for test
          test (*)
          setup for train
          train (*)
          setup for ref
          ref1, ref2 [, ref3] (**)

  (*) Multiple benchmarks may overlap if --parallel_test > 1
 (**) One benchmark at a time.  Third run only if --iterations=3.

Reportable order when more than one tuning is present: If you run both base and peak tuning, base is always run first.

          setup for test
          test base and peak (*)
          setup for train
          train base and peak (*)
          setup for ref
          base ref1, base ref2 [, base ref3] (**)
          peak ref1, peak ref2 [, peak ref3] (**)

 (*)  Multiple benchmarks may overlap if --parallel_test > 1
      Peak and base may also overlap.
 (**) One benchmark at a time.  Third run only if --iterations=3.

Reportable order when more than one suite is present: If you start a reportable using more than one suite, all the work is done for one suite before proceeding to the next.

For example runcpu --iterations=3 --reportable intspeed fprate would cause:

          intspeed setup test
          intspeed test     
          intspeed setup train    
          intspeed train    
          intspeed setup refspeed 
          intspeed refspeed #1
          intspeed refspeed #2
          intspeed refspeed #3
          fprate   setup test     
          fprate   test     
          fprate   setup train    
          fprate   train    
          fprate   setup refrate  
          fprate   refrate #1
          fprate   refrate #2
          fprate   refrate #3

(This is a change with CPU2017; the prior suite would run int test, fp test, int train, fp train, int ref, fp ref.)

If you request more than one suite (for example, by using all) then a table is printed to show you the run order:

 Action    Run Mode   Workload      Report Type      Benchmarks
--------   --------   --------   -----------------   --------------------------
validate   rate       refrate    SPECrate2017_fp     fprate                    
validate   speed      refspeed   SPECspeed2017_fp    fpspeed                   
validate   rate       refrate    SPECrate2017_int    intrate                   
validate   speed      refspeed   SPECspeed2017_int   intspeed   
   

Reportable example: A log from a published reportable run is excerpted below. The Unix grep command picks out lines that match one of the quoted strings; Microsoft Windows users could try findstr instead.

$ grep -e 'Running B' -e 'Starting' -e '#' CPU2017.052.log     
Running Benchmarks (up to 20 concurrent processes)
  Starting runcpu for 500.perlbench_r test base oct12a-rate
  Starting runcpu for 502.gcc_r       test base oct12a-rate
  Starting runcpu for 505.mcf_r       test base oct12a-rate
  Starting runcpu for 520.omnetpp_r   test base oct12a-rate
  Starting runcpu for 523.xalancbmk_r test base oct12a-rate
  Starting runcpu for 525.x264_r      test base oct12a-rate
  Starting runcpu for 531.deepsjeng_r test base oct12a-rate
  Starting runcpu for 541.leela_r     test base oct12a-rate
  Starting runcpu for 548.exchange2_r test base oct12a-rate
  Starting runcpu for 557.xz_r        test base oct12a-rate
  Starting runcpu for 999.specrand_ir test base oct12a-rate
  Starting runcpu for 500.perlbench_r test peak oct12a-rate
  Starting runcpu for 502.gcc_r       test peak oct12a-rate
  Starting runcpu for 505.mcf_r       test peak oct12a-rate
  Starting runcpu for 520.omnetpp_r   test peak oct12a-rate
  Starting runcpu for 523.xalancbmk_r test peak oct12a-rate
  Starting runcpu for 525.x264_r      test peak oct12a-rate
  Starting runcpu for 531.deepsjeng_r test peak oct12a-rate
  Starting runcpu for 541.leela_r     test peak oct12a-rate
  Starting runcpu for 548.exchange2_r test peak oct12a-rate
  Starting runcpu for 557.xz_r        test peak oct12a-rate
  Starting runcpu for 999.specrand_ir test peak oct12a-rate
Running Benchmarks (up to 20 concurrent processes)
  Starting runcpu for 500.perlbench_r train base oct12a-rate
  Starting runcpu for 502.gcc_r       train base oct12a-rate
  Starting runcpu for 505.mcf_r       train base oct12a-rate
  Starting runcpu for 520.omnetpp_r   train base oct12a-rate
  Starting runcpu for 523.xalancbmk_r train base oct12a-rate
  Starting runcpu for 525.x264_r      train base oct12a-rate
  Starting runcpu for 531.deepsjeng_r train base oct12a-rate
  Starting runcpu for 541.leela_r     train base oct12a-rate
  Starting runcpu for 548.exchange2_r train base oct12a-rate
  Starting runcpu for 557.xz_r        train base oct12a-rate
  Starting runcpu for 999.specrand_ir train base oct12a-rate
  Starting runcpu for 500.perlbench_r train peak oct12a-rate
  Starting runcpu for 502.gcc_r       train peak oct12a-rate
  Starting runcpu for 505.mcf_r       train peak oct12a-rate
  Starting runcpu for 520.omnetpp_r   train peak oct12a-rate
  Starting runcpu for 523.xalancbmk_r train peak oct12a-rate
  Starting runcpu for 525.x264_r      train peak oct12a-rate
  Starting runcpu for 531.deepsjeng_r train peak oct12a-rate
  Starting runcpu for 541.leela_r     train peak oct12a-rate
  Starting runcpu for 548.exchange2_r train peak oct12a-rate
  Starting runcpu for 557.xz_r        train peak oct12a-rate
  Starting runcpu for 999.specrand_ir train peak oct12a-rate
Running Benchmarks
  Running (#1) 500.perlbench_r refrate (ref) base oct12a-rate (256 copies) [2016-10-12 22:18:24]
  Running (#1) 502.gcc_r       refrate (ref) base oct12a-rate (256 copies) [2016-10-12 23:10:43]
  Running (#1) 505.mcf_r       refrate (ref) base oct12a-rate (256 copies) [2016-10-13 00:11:01]
  Running (#1) 520.omnetpp_r   refrate (ref) base oct12a-rate (256 copies) [2016-10-13 01:53:35]
  Running (#1) 523.xalancbmk_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 02:40:23]
  Running (#1) 525.x264_r      refrate (ref) base oct12a-rate (256 copies) [2016-10-13 03:21:31]
  Running (#1) 531.deepsjeng_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 04:36:07]
  Running (#1) 541.leela_r     refrate (ref) base oct12a-rate (256 copies) [2016-10-13 05:12:11]
  Running (#1) 548.exchange2_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 05:59:16]
  Running (#1) 557.xz_r        refrate (ref) base oct12a-rate (256 copies) [2016-10-13 07:28:53]
  Running (#1) 999.specrand_ir refrate (ref) base oct12a-rate (256 copies) [2016-10-13 08:08:23]
  Running (#2) 500.perlbench_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 08:11:14]
  Running (#2) 502.gcc_r       refrate (ref) base oct12a-rate (256 copies) [2016-10-13 09:03:38]
  Running (#2) 505.mcf_r       refrate (ref) base oct12a-rate (256 copies) [2016-10-13 10:03:57]
  Running (#2) 520.omnetpp_r   refrate (ref) base oct12a-rate (256 copies) [2016-10-13 11:46:36]
  Running (#2) 523.xalancbmk_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 12:33:11]
  Running (#2) 525.x264_r      refrate (ref) base oct12a-rate (256 copies) [2016-10-13 13:14:07]
  Running (#2) 531.deepsjeng_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 14:28:47]
  Running (#2) 541.leela_r     refrate (ref) base oct12a-rate (256 copies) [2016-10-13 15:04:49]
  Running (#2) 548.exchange2_r refrate (ref) base oct12a-rate (256 copies) [2016-10-13 15:51:53]
  Running (#2) 557.xz_r        refrate (ref) base oct12a-rate (256 copies) [2016-10-13 17:21:33]
  Running (#2) 999.specrand_ir refrate (ref) base oct12a-rate (256 copies) [2016-10-13 18:01:04]
  Running (#1) 500.perlbench_r refrate (ref) peak oct12a-rate (224 copies) [2016-10-13 18:03:29]
  Running (#1) 502.gcc_r       refrate (ref) peak oct12a-rate (256 copies) [2016-10-13 18:49:17]
  Running (#1) 505.mcf_r       refrate (ref) peak oct12a-rate  (64 copies) [2016-10-13 19:44:21]
  Running (#1) 520.omnetpp_r   refrate (ref) peak oct12a-rate (256 copies) [2016-10-13 20:06:29]
  Running (#1) 523.xalancbmk_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-13 20:54:49]
  Running (#1) 525.x264_r      refrate (ref) peak oct12a-rate (256 copies) [2016-10-13 21:28:24]
  Running (#1) 531.deepsjeng_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-13 22:41:43]
  Running (#1) 541.leela_r     refrate (ref) peak oct12a-rate (256 copies) [2016-10-13 23:16:40]
  Running (#1) 548.exchange2_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 00:01:53]
  Running (#1) 557.xz_r        refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 01:11:23]
  Running (#1) 999.specrand_ir refrate (ref) peak oct12a-rate   (1 copy)   [2016-10-14 01:50:51]
  Running (#2) 500.perlbench_r refrate (ref) peak oct12a-rate (224 copies) [2016-10-14 01:53:13]
  Running (#2) 502.gcc_r       refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 02:39:03]
  Running (#2) 505.mcf_r       refrate (ref) peak oct12a-rate  (64 copies) [2016-10-14 03:33:57]
  Running (#2) 520.omnetpp_r   refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 03:56:04]
  Running (#2) 523.xalancbmk_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 04:44:33]
  Running (#2) 525.x264_r      refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 05:18:13]
  Running (#2) 531.deepsjeng_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 06:31:34]
  Running (#2) 541.leela_r     refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 07:06:33]
  Running (#2) 548.exchange2_r refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 07:51:48]
  Running (#2) 557.xz_r        refrate (ref) peak oct12a-rate (256 copies) [2016-10-14 09:01:43]
  Running (#2) 999.specrand_ir refrate (ref) peak oct12a-rate   (1 copy)   [2016-10-14 09:41:13]
$ (white space adjusted for readability) 

1.5 Disk Usage

1.5.1 Directory tree

The structure of the CPU2017 directory tree is:

$SPEC or %SPEC% - the root directory
   benchspec    - Some suite-wide files
      CPU         - The benchmarks
   bin          - Tools to run and report on the suite
   config       - Config files
   Docs         - HTML documentation
   Docs.txt     - plaintext documentation
   result       - Log files and reports
   tmp          - Temporary files
   tools        - Sources for the CPU2017 tools

Within each of the individual benchmarks, the structure is:

nnn.benchmark - root for this benchmark
   build      - Benchmark binaries are built here
   data        
      all     - Data used by all runs (if needed by the benchmark)
      ref     - The timed data set
      test    - Data for a simple test that an executable is functional
      train   - Data for feedback-directed optimization
   Docs       - Documentation for this benchmark
   exe        - Compiled versions of the benchmark
   run        - Benchmarks are run here
   Spec       - SPEC metadata about the benchmark
   src        - The sources for the benchmark

Most SPECspeed benchmarks (6nn.benchmark_s) share content that is located under a corresponding SPECrate benchmark (5nn.benchmark_r). Shared source files may be compiled differently for SPECspeed vs. SPECrate. For example, the sources for 619.lbm_s can be found at 519.lbm_r/src/ and only 619.lbm_s can be compiled with OpenMP.

Look for the output of your runcpu command in the directory $SPEC/result (Unix) or %SPEC%\result (Windows). There, you will find log files and result files. More information about log files can be found in the Config Files document.

The format of the result files depends on what was selected in your config file, but will typically include at least .txt for ASCII text, and will always include .rsf, for raw (unformatted) run data. More information about result formats can be found below, under --output_format. Note that you can always re-generate the output, using the --rawformat option, also documented below.
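For example, to regenerate text and PDF reports from an existing raw file (the raw file name here is illustrative; use one from your own result directory), a command along these lines should work:

runcpu --rawformat --output_format=text,pdf $SPEC/result/CPU2017.042.fpspeed.rsf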

1.5.2 Hey! Where did all my disk space go?

When you find yourself wondering "Where did all my disk space go?", the answer is usually "The run directories." Most activity takes place in automatically created subdirectories of $SPEC/benchspec/CPU/*/run/ (Unix) or %SPEC%\benchspec\CPU\*\run\ (Windows). Other consumers of disk space underneath individual nnn.benchmark directories include the build/ and exe/ directories.

At the top of the directory tree, space is used by your config/ and result/ directories, and for temporary directories:

$SPEC/tmp
output_root/tmp

Usually, the largest amount of space is in the run directories. For example, the tester who generated the result excerpted above is lazy about cleaning, and at the moment this paragraph is written, there are many SPECrate run directories on the system:

---------------------------------------
One lazy user's space. Yours will vary.
---------------------------------------
Directories                       GB
----------------------------     -----
Top-level (config,result,tmp)      0.1 
Benchmarks 
  $SPEC/benchspec/CPU/*/exe        2 
  $SPEC/benchspec/CPU/*/build      9 
  $SPEC/benchspec/CPU/*/run      198
---------------------------------------
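To find out where your own space has gone, ordinary operating system tools suffice. A minimal Unix sketch (sort -h requires a sort that understands human-readable sizes):

cd $SPEC/benchspec/CPU
du -sh */run */build */exe 2>/dev/null | sort -h | tail    # biggest consumers listed last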

If you use the config file label feature, then directories are named to try to make it easy for you to hunt them down. For example, suppose Bob has a config file that he is using to test some new memory optimizations using SPECrate (multi-copy) mode. He has set

label=BobMemoryOpt

in his config file. In that case, the tools would create directories such as these:

$ pwd
/Users/bob/cpu2017/rc4/benchspec/CPU/505.mcf_r
$ ls -d */*Bob*
build/build_base_BobMemoryOpt.0000
exe/mcf_r_base.BobMemoryOpt
run/run_base_refrate_BobMemoryOpt.0000
run/run_base_refrate_BobMemoryOpt.0001
run/run_base_refrate_BobMemoryOpt.0002
run/run_base_refrate_BobMemoryOpt.0003
run/run_base_refrate_BobMemoryOpt.0004
run/run_base_refrate_BobMemoryOpt.0005
run/run_base_refrate_BobMemoryOpt.0006
run/run_base_refrate_BobMemoryOpt.0007
run/run_base_refrate_BobMemoryOpt.0008
run/run_base_refrate_BobMemoryOpt.0009
run/run_base_refrate_BobMemoryOpt.0010
run/run_base_refrate_BobMemoryOpt.0011
run/run_base_refrate_BobMemoryOpt.0012
run/run_base_test_BobMemoryOpt.0000
run/run_base_train_BobMemoryOpt.0000
$  

To get your disk space back, see the documentation of the various cleaning options, below.

1.6 Multi-user support

SPEC CPU2017 supports multiple users sharing an installation; however you must choose carefully regarding file protections. This section describes the multi-user features and protection options.

Features that are always enabled:

Limitations: The default methods impose two key limitations, which will not be safe in some environments:

  1. The directory tree must be writable by each of the users, which means that they have to trust each other not to modify or delete each others' files.
  2. Directories such as result/ and nnn.benchmark/exe/ and nnn.benchmark/run/ are not segregated by user. Therefore you can have only one version of (for example) 500.perlbench_r/exe/perlbench_r_base.Ofast, and different users will have their result logs intermixed with each other's in the result/ directory.

Partial solution(?) expid+conventions:
You can deal with limitation #2 if users adopt certain habits. For example, Darryl could name all his config files darryl-something.cfg. He could use runcpu --expid=darryl or the corresponding config file expid=darryl to cause his results to be placed under $SPEC/result/darryl (or %SPEC%\result\darryl\) and binaries under nnn.benchmark/exe/darryl/. Unfortunately, this alleged solution still requires that the tree be writeable by all users, and will not help Darryl at all when John comes along and blithely does one of the alternate cleaning methods.

Solution(?) Give up:
You could just choose to spend the disk space to give each person their own tree. For SPEC CPU2017 V1.0, this may increase the disk space requirement by about 3 GB per user.

Recommended Solution: output_root. The recommended method uses 4 steps:

Step                                              Example (Unix)
-----------------------------------------------   ------------------------------------------
(1) Protect most of the SPEC tree read-only       chmod -R ugo-w $SPEC
(2) Allow shared access to the config directory   chmod 1777 $SPEC/config
                                                  chmod u+w $SPEC/config/*cfg
(3) Keep your own config files                    cp config/assignment1.cfg config/alan1.cfg
(4) Use the --output_root switch, or add an       runcpu --output_root=~/cpu2017
    output_root to your config file               output_root = /home/${username}/cpu2017

More detail:

  1. Most of the CPU2017 tree is shared, and can be protected read-only. For example, on a Unix system, you might set protections with:

    chmod -R ugo-w $SPEC
    
  2. The one exception is the config directory, $SPEC/config/ (Unix) or %SPEC%\config\ (Windows), which needs to be a read/write directory shared by all the users, and config files must be writeable. On most Unix systems, chmod 1777 is very useful: it lets anyone create files, which they own, control, and protect. (1777 is commonly used for /tmp for this very reason.)

    chmod 1777 $SPEC/config
    chmod u+w $SPEC/config/*cfg
    
  3. Config files usually would not be shared between users. For example, students might create their own copies of a config file:

    Alan enters:

    cd /cs403/cpu2017
    . ./shrc
    cd config
    cp assignment1.cfg alan1.cfg
    chmod u+w alan1.cfg
    runcpu --config=alan1 --action=build 557.xz_r 

    Venkatesh enters:

    cd /cs403/cpu2017
    . ./shrc
    cd config
    cp assignment1.cfg venkatesh1.cfg
    chmod u+w venkatesh1.cfg
    runcpu --config=venkatesh1 --action=build 557.xz_r 
  4. Set output_root in the config files to change the destinations of the outputs. For example, if config files include (near the top):

    output_root=/home/${username}/spec
    label=feb27a
    

    then these directories will be used for the above runcpu command:

    Alan's directories
    build: /home/alan/spec/benchspec/CPU/557.xz_r/build/build_base_feb27a.0001
    Logs:  /home/alan/spec/result
    Venkatesh's
    build: /home/venkatesh/spec/benchspec/CPU/557.xz_r/build/build_base_feb27a.0000
    Logs:  /home/venkatesh/spec/result

Navigation: Unix users can easily navigate an output_root tree using the ogo utility.

1.7 Actions

Most runcpu commands perform an action on a set of benchmarks.

(Exceptions: runcpu --rawformat or update.)

The default action is validate.
The actions are described in two tables below: first, actions that relate to building and running; and then actions regarding cleanup.

--action build Compile the benchmarks, using the config file specmake options.
--action buildsetup

Set up build directories for the benchmarks.
Copy the source files to the directory, and create the needed Makefiles.
Do not attempt to actually do the build.

This option may be useful when debugging a build: you can set up a directory and play with it as a private sandbox.

--action onlyrun

Run the benchmarks but do not verify that they got the correct answers.
You cannot use this option to report performance.

This option may be useful while applying CPU2017 for some other purpose, such as tracing instructions for a hardware simulator, or generating a system load while debugging an operating system feature.

--action report Synonym for --fakereport; see also --fakereportable.
--action run Synonym for --action validate.
--action runsetup

Set up the run directory (or directories).
If executables do not exist, build them.
Copy executables and data to the directory(ies).
Create the control file speccmds.cmd but do not actually run any benchmarks.

This option may be useful when debugging a run.
See the runsetup sandbox example in the Utilities documentation.

--action setup Synonym for --action runsetup
--action validate Build (if needed), set up directories, run, check for correct answers, generate reports.
This is the default action.
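For instance (using a hypothetical config file name), you might set up a build sandbox for one benchmark, or run a benchmark's test workload without answer checking:

runcpu --config=mine.cfg --action=buildsetup 519.lbm_r            # create the build directory only
runcpu --config=mine.cfg --action=onlyrun --size=test 519.lbm_r   # run the test workload; answers are not verified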

Cleaning actions are listed in order from least thorough to most:

--action clean

Empty run and build directories for the specified benchmark set for the current user. For example, if the current OS username is set to jeff and this command is entered:

D:\cpu2017\> runcpu --action clean --config may12a fprate

then the tools will remove build and run directories with username jeff for fprate benchmarks generated by config file may12a.cfg.

--action clobber Clean + remove the corresponding executables.
--action trash Remove run and build directories for all users and all labels for the specified benchmarks.
--action realclean A synonym for --action trash
--action scrub Trash + remove the corresponding executables.
Caution: Fake mode is not implemented for the cleaning actions.
For example, if you say runcpu --fake --action=clean, the cleaning really happens.

Clean by hand:
If you prefer, you can clean disk space by entering commands such as the following (on Unix systems):

rm -Rf $SPEC/benchspec/C*/*/run
rm -Rf $SPEC/benchspec/C*/*/build
rm -Rf $SPEC/benchspec/C*/*/exe 

The above commands not only empty the contents of the run, build, and exe directories; they also delete the directories themselves. That's fine; the tools will re-create them if they are needed again later on.

result directories can be cleaned or renamed. Don't worry about creating a new directory; runcpu will do so automatically. You should be careful to ensure no surprises for any currently-running users. If you move result directories, it is a good idea to also clean temporary directories at the same time.
Example:

cd $SPEC
mv result old-result
rm -Rf tmp/
cd output_root     # (If you use an output_root)
rm -Rf tmp/

Windows users: Windows users can achieve similar effects using the rename command to move directories, and the rd command to remove directories.
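For example, assuming %SPEC% is set and you are in that directory, a Windows cleanup along these lines should work:

E:\cpu2017> rename result old-result
E:\cpu2017> rd /s /q tmp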

I have so much disk space, I'll never use all of it:

Run directories are automatically re-used for subsequent runs. If you prefer, you can ask the tools to never touch a used run directory. Do this by setting the environment variable:

     SPEC_CPU2017_NO_RUNDIR_DEL

In this case, you should be prepared to do frequent cleaning, perhaps after reviewing the results of each run.
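A minimal sketch, assuming that any non-empty value is sufficient (this document does not specify a required value):

     Unix (sh/bash):     export SPEC_CPU2017_NO_RUNDIR_DEL=1
     Windows (cmd.exe):  set SPEC_CPU2017_NO_RUNDIR_DEL=1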

2. Commonly used options

Most users of runcpu will want to become familiar with the following options.

This section is organized alphabetically without regard to upper/lower case and without regard to presence or absence of no at the start of the switch.

--action action

--check_version

--config name

--copies number

--flagsurl URL[,URL...]

--help

--ignore_errors

--iterations number

--loose

--output_format format

Name|synonyms... Meaning
all implies all of the following except screen, check, and mail
config
   cfg|cfgfile
   configfile
   conffile

config file used for this run, written as a numbered file in the result directory, for example, $SPEC/result/CPU2017.030.fprate.refrate.cfg

  1. The config file is saved on every run, as a compressed portion of the rawfile. Therefore, you can regenerate it later, if desired, using rawformat.
  2. Results published by SPEC include your config file. Anyone can download it and try to reproduce your result.
  3. The config file printed by --output_format=config is not identical to the original:

    • The file name matches the other files for this result, not the name you had in your config/ directory.
    • It does not include protected comments
    • It includes a copy of the runcpu line that invoked it.
    • It tells you whether output_root was defined.
    • It includes any result edits you make after the run (see utility.html).
    • It does not include the HASH section.
check
   subcheck
   reportcheck
   reportable
   reportablecheck
   chk|sub|subtest|test
Reportable syntax check (automatically enabled for reportable runs).
  • Causes the format of many fields to be checked, e.g. "Nov-2018", not "11/18" for hw_avail.
  • Consistent formats help readers, especially when searching.
  • check is included by default for reportable runs and when using --rawformat.
  • It can be disabled by adding nocheck to your list of formats.
csv
   spreadsheet

Comma-separated values (CSV). If you populate spreadsheets from your runs, you probably should not cut/paste data from text files; you'll get more accurate data by using --output_format csv. The csv report includes all runs, more decimal places, system information, and even the compiler flags.

default
implies HTML and text
flag|flags
Flag report. Will also be produced when formats that use it are requested (PDF, HTML).
html
   xhtml|www|web
web page
mail
   mailto|email
All generated reports will be sent to an address specified in the config file.
pdf
   adobe
Portable Document Format. This format is the design center for SPEC CPU2017 reporting. Other formats contain less information: text lacks graphs, postscript lacks hyperlinks, and HTML is less structured. (PDF does not appear as part of "default" only because some systems may lack the ability to read it.)
postscript
   ps|printer|print
PostScript
raw
   rsf
The unformatted raw results, written to a numbered file in the result directory that ends with .rsf (e.g. /spec/cpu2017/rc4/result/CPU2017.042.fpspeed.rsf). Your raw result files are your most important files, because the other formats are generated from them.
screen|scr|disp
   display|terminal|term
ASCII text output to stdout.
text
   txt|ASCII|asc
Plain ASCII text file

--rawformat rawfiles
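As noted under the config output format above, the config file stored in a raw file can be regenerated later; a sketch (the raw file name is illustrative):

runcpu --rawformat --output_format=config $SPEC/result/CPU2017.030.fprate.refrate.rsf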

--rebuild

--reportable

--threads N

--tune tuning
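Combining several of the options above (the config file name is hypothetical), a typical reportable SPECrate command might look like this:

runcpu --config=mine.cfg --reportable --tune=base --copies=64 --output_format=all fprate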

3. Less common options

This section is organized alphabetically without regard to upper/lower case and without regard to presence or absence of no at the start of the switch.

--baseonly

--basepeak [bench,bench,...]

--nobuild

--comment "comment text"

--define SYMBOL[=VALUE]
--define SYMBOL:VALUE

--delay secs

--deletework

--expid subdir

--fake

--fakereport

--fakereportable

--[no]feedback

--[no]graph_auto

--graph_max N

--graph_min N

--http_proxy proxy[:port]

--http_timeout N

--info_wrap_columns N

--[no]keeptmp

--label name

--[no]log_timestamp

--make_no_clobber

--notes_wrap_columns N

--output_root directory

--parallel_test processes

--parallel_test_workloads workload,...

--[no]power

--[no]preenv

--[no]review

--[no]setprocgroup

--size size[,size...]

--[no]table

--test

--train_with WORKLOAD

--undef SYMBOL

--update

--use_submit_for_compare

--use_submit_for_speed

--username name

--verbose n

--version

4. Removed/unsupported options

4.1 Options that are no longer needed

Rate and Speed

The CPU2006 features --rate and --speed (documented in the CPU2006 manuals)
are not needed in SPEC CPU2017 because of a change in how benchmarks are defined.

In SPEC CPU2006, a given benchmark had a single source code version and a single workload version.
The workload could be run in two ways: either single-copy (SPECspeed) or multi-copy (SPECrate).

For SPEC CPU2017, the SPECrate and SPECspeed versions of a benchmark are separate benchmarks. They use different reference workloads (refrate vs. refspeed), and shared sources may be compiled differently; for example, 619.lbm_s shares sources with 519.lbm_r, but only 619.lbm_s can be compiled with OpenMP.

For more information, see the System Requirements discussion of Using Multiple CPUs.

Parallel setup

The SPEC CPU2006 features --parallel_setup, --parallel_setup_prefork, and --parallel_setup_type (documented in the CPU2006 manuals)
are not needed in SPEC CPU2017 because of a change in how benchmarks are set up.

For SPEC CPU2006, every SPECrate copy was set up with its own unique copy of the input data. For large SPECrate runs, large amounts of space were needed, and a lot of time (in some cases, hours).

For SPEC CPU2017, file system hard links are used to avoid copying such large amounts of data, and the features for parallel setup are no longer needed.

The exact space and time savings vary from system to system.

4.2 Features removed

The SPEC CPU2006 feature --machine (documented in the CPU2006 manuals)
was removed because it was rarely used, and the additional complexity and confusion that it caused were deemed not worthwhile.

The CPU2006 feature --maxcompares (documented in the CPU2006 manuals)
was removed due to complexity considerations when implementing the new parallel setup methods.

4.3 Unsupported

The SPEC CPU2006 features --make_bundle, --unpack_bundle, and --use_bundle (documented in the CPU2006 manuals)
have not been tested in the CPU2017 environment.
It is not known whether anyone uses these features, and they were deemed not a priority for V1.
It is possible that you might be able to get them to work by following the CPU2006 instructions, but no promises are made.


5. Quick reference

(This table is organized alphabetically, without regard to upper/lower case, and without regard to the presence of a leading "no").

-a Same as --action
--action action Do: build|buildsetup|clean|clobber|onlyrun|realclean|report|run|runsetup|scrub|setup|trash|validate
--basepeak Copy base results to peak (use with --rawformat)
--nobuild Do not attempt to build binaries
-c Same as --config
-C Same as --copies
--check_version Check whether an updated version of CPU2017 is available
--comment "text" Add a comment to the log and the stored config file.
--config file Set config file for runcpu to use
--copies Set the number of copies for a SPECrate run
-D Same as --rebuild
-d Same as --deletework
--debug Same as --verbose
--define SYMBOL[=VALUE] Define a config preprocessor macro
--delay secs Add delay before and after benchmark invocation
--deletework Force work directories to be rebuilt
--dryrun Same as --fake
--dry-run Same as --fake
--expid=dir Experiment id, a subdirectory to use for results/runs/exe
-F Same as --flagsurl
--fake Show what commands would be executed.
--fakereport Generate a report without compiling codes or doing a run.
--fakereportable Generate a fake report as if "--reportable" were set.
--[no]feedback Control whether builds use feedback directed optimization
--flagsurl url Location (url or filespec) where to find your flags file
--graph_auto Let the tools pick minimum and maximum for the graph
--graph_min N Set the minimum for the graph
--graph_max N Set the maximum for the graph
-h Same as --help
--help Print usage message
--http_proxy Specify the proxy for internet access
--http_timeout Timeout when attempting http access
-I Same as --ignore_errors
-i Same as --size
--ignore_errors Continue with benchmark runs even if some fail
--ignoreerror Same as --ignore_errors
--info_wrap_column N Set wrap width for non-notes informational items
--infowrap Same as --info_wrap_column
--input Same as --size
--iterations N Run each benchmark N times
--keeptmp Keep temporary files
-L Same as --label
-l Same as --loose
--label label Set the label for executables, build directories, and run directories
--loose Do not produce a reportable result
--noloose Same as --reportable
-M Same as --make_no_clobber
--make_no_clobber Do not delete existing object files before building.
--mockup Same as --fakereportable
-n Same as --iterations
-N Same as --nobuild
--notes_wrap_column N Set wrap width for notes lines
--noteswrap Same as --notes_wrap_column
-o Same as --output_format
--output_format format[,format...] Generate: all|cfg|check|csv|flags|html|mail|pdf|ps|raw|screen|text
--output_root=dir Write all files here instead of under $SPEC
--parallel_test Number of test/train workloads to run in parallel
--[no]power Control power measurement during run
--preenv Allow environment settings in config file to be applied
-R Same as --rawformat
--rawformat Format raw file
--rebuild Force a rebuild of binaries
--reportable Produce a reportable result
--noreportable Same as --loose
--reportonly Same as --fakereport
--[no]review Format results for review
-s Same as --reportable
-S SYMBOL[=VALUE] Same as --define
-S SYMBOL:VALUE Same as --define
--[no]setprocgroup [Don't] try to create all processes in one group.
--size size[,size...] Select data set(s): test|train|ref
--strict Same as --reportable
--nostrict Same as --loose
-T Same as --tune
--[no]table Do [not] include a detailed table of results
--threads=N Set number of OpenMP threads for a SPECspeed run
--test Run various perl validation tests on specperl
--train_with Change the training workload
--tune Set the tuning levels to one of: base|peak|all
--tuning Same as --tune
--undef SYMBOL Remove any definition of this config preprocessor macro
-U Same as --username
--update Check www.spec.org for updates to benchmark and example flag files, and config files
--username Name of user to tag as owner for run directories
--use_submit_for_compare If submit was used for the run, use it for comparisons too.
--use_submit_for_speed Use submit commands for SPECspeed (default is only for SPECrate).
-v Same as --verbose
--verbose N Set verbosity level for messages to N
-V Same as --version
--version Output lots of version information
-? Same as --help

Using SPEC CPU®2017: The 'runcpu' Command: Copyright © 2017 Standard Performance Evaluation Corporation (SPEC)