SPEC Describes SPEC95 Products And Benchmarks
Jeff Reilly
Intel Corporation
Santa Clara, Calif.
Published September, 1995; see disclaimer.
With the SPEC95 suites announced by SPEC this summer, it is important to
introduce the product and its contents. Below is an introduction to the
SPEC95 product. Upcoming SPEC Newsletter articles will discuss
the benchmarks, analysis and results in more detail.
SPEC95 refers to the complete product provided by SPEC. SPEC95 is
composed of two suites of benchmarks:

SPEC CINT95:

a set of eight compute-intensive integer (non-floating-point) benchmarks

SPEC CFP95:

a set of 10 compute-intensive floating point benchmarks
These are intended to provide a measure of compute-intensive performance of
the processor, memory hierarchy and compiler components (the 'C' in
CINT95 and CFP95) of a computer system for comparison purposes. They are
not intended to stress graphics, networking or I/O. When you receive the
SPEC95 media, it will contain the following:

The SPEC95 benchmarks:

these are applications (referred to as SPEC benchmarks hereafter)
provided as source code. These are divided into two separate suites.

CINT95:

eight compute-intensive integer benchmarks.

CFP95:

10 compute-intensive floating point benchmarks.

The SPEC95 tools:

these are executable scripts used to compile, run and validate the
results of each of the individual SPEC benchmarks. They are provided in
binary format for a variety of operating systems. The source code for
the tools is also provided, along with directions and scripts for
compiling them in case you are using an environment not covered by the
provided binaries.

Run Rules:

Rules for running the benchmarks (needed to ensure fairness and
comparability).

Reporting Rules:

Guidelines for reporting results.
The procedure for running the benchmarks includes:

Read/understand the SPEC documentation, including the SPEC Run and
Reporting Rules.

Compile and create appropriate tool environment.

Edit configuration files to include compiler parameters for the compiler
to be used (in accordance with the SPEC Run and Reporting Rules).

Compile the benchmarks with the SPEC95 tools.

Run and validate the benchmarks with the SPEC95 tools.

Generate the appropriate individual and combined metrics with the SPEC95
tools.
The table below contains a brief description of the benchmarks and their
SPEC reference times (used for calculating SPEC metrics). More detailed
analysis and descriptions will be provided in future issues of the SPEC
Newsletter.
CINT95 Benchmarks

Benchmark     Ref. Time (sec)  Application Area                        Specific Task
099.go        4600             Game playing; artificial intelligence   Plays the game Go against itself.
124.m88ksim   1900             Simulation                              Simulates the Motorola 88100 processor running Dhrystone and a memory test program.
126.gcc       1700             Programming & compilation               Compiles preprocessed source into optimized SPARC assembly code.
129.compress  1800             Compression                             Compresses large text files (about 16 MB) using adaptive Lempel-Ziv coding.
130.li        1900             Language interpreter                    Lisp interpreter.
132.ijpeg     2400             Imaging                                 Performs JPEG image compression with various parameters.
134.perl      1900             Shell interpreter                       Performs text and numeric manipulations (anagrams/prime-number factoring).
147.vortex    2700             Database                                Builds and manipulates three interrelated databases.

CFP95 Benchmarks

Benchmark     Ref. Time (sec)  Application Area                         Specific Task
101.tomcatv   3700             Fluid dynamics / geometric translation   Generates a two-dimensional boundary-fitted coordinate system around general geometric domains.
102.swim      8600             Weather prediction                       Solves shallow water equations using finite difference approximations. (The only single-precision benchmark in CFP95.)
103.su2cor    1400             Quantum physics                          Computes masses of elementary particles in the quark-gluon theory.
104.hydro2d   2400             Astrophysics                             Uses the hydrodynamical Navier-Stokes equations to compute galactic jets.
107.mgrid     2500             Electromagnetism                         Calculates a 3-D potential field.
110.applu     2200             Fluid dynamics / math                    Solves a matrix system with pivoting.
125.turb3d    4100             Simulation                               Simulates turbulence in a cubic volume.
141.apsi      2100             Weather prediction                       Calculates statistics on temperature and pollutants in a grid.
145.fpppp     9600             Chemistry                                Performs multi-electron derivatives.
146.wave5     3000             Electromagnetics                         Solves Maxwell's equations on a Cartesian mesh.

So how are these benchmarks used to measure and compare performance? Within
the realm of compute-intensive performance, several quantities can be
considered. The following categories are part of SPEC95:

integer versus floating point.

conservative versus aggressive compilation.

speed versus throughput.
Based on these choices, the SPEC95 tools allow you to generate the
following composite metrics:
               SPEED            THROUGHPUT
Aggressive     SPECint95        SPECint_rate95
               SPECfp95         SPECfp_rate95
Conservative   SPECint_base95   SPECint_rate_base95
               SPECfp_base95    SPECfp_rate_base95
For SPEC purposes, the rules for aggressive and conservative/base
optimizations are detailed in the SPEC Run and Reporting Rules supplied
with SPEC95.
Note that these are composite metrics; an individual score is calculated for
each benchmark in CINT95 or CFP95 and then used to calculate the composite.
For the speed measures, each benchmark has its own SPECratio: the SPEC
reference time for the benchmark divided by its run time on the measured
system.
Mathematically: SPECratio for xxx.benchmark = xxx.benchmark reference time
/ xxx.benchmark run time.
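As a quick sanity check, the SPECratio arithmetic can be sketched in a few lines of Python. The 4600-second figure is 099.go's reference time from the table above; the 920-second run time is invented purely for illustration:

```python
def spec_ratio(reference_time_sec, run_time_sec):
    """SPECratio: the SPEC reference time divided by the measured run time."""
    return reference_time_sec / run_time_sec

# 099.go has a reference time of 4600 seconds. A hypothetical system
# that runs it in 920 seconds would earn a SPECratio of 5.0.
print(spec_ratio(4600, 920))  # 5.0
```

A ratio above 1.0 means the measured system is faster than the SPEC reference machine on that benchmark.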
The composite metrics are then calculated as:

SPECint95: The geometric mean of eight SPECratios (one for each integer
benchmark) when compiled with aggressive optimizations for each
benchmark.

SPECint_base95: The geometric mean of eight SPECratios (one for each
integer benchmark) when compiled with the conservative optimizations for
each benchmark.

SPECfp95: The geometric mean of 10 SPECratios (one for each
floating point benchmark) when compiled with aggressive optimizations for
each benchmark.

SPECfp_base95: The geometric mean of 10 SPECratios (one for each
floating point benchmark) when compiled with the conservative
optimizations for each benchmark.
For example, the SPECint95 of a system would be: SPECint95 = (099.go
SPECratio * 124.m88ksim SPECratio * ... * 147.vortex SPECratio) ^ (1/8)
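That geometric mean can be computed directly; the eight ratios below are invented solely to show the calculation, not measured results:

```python
import math

def geometric_mean(values):
    """Geometric mean: the nth root of the product of n values."""
    return math.prod(values) ** (1.0 / len(values))

# Eight hypothetical SPECratios, one per CINT95 benchmark
# (illustrative numbers only).
ratios = [5.0, 4.2, 6.1, 3.8, 4.9, 5.5, 4.0, 5.2]
specint95 = geometric_mean(ratios)
```

Unlike an arithmetic mean, the geometric mean keeps a single outlier benchmark from dominating the composite score.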
For the throughput measures, each benchmark has its own SPECrate. A
SPECrate is a function of the number of copies run, the time it took to
complete all of those copies, and a reference factor.
Mathematically: SPECrate for xxx.benchmark = # of copies run * (reference
time of xxx.benchmark * number of seconds in a day / longest SPEC95
reference time) / run time of all xxx.benchmark executions.
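Translating that formula literally into Python: the 4600-second value is 099.go's reference time and 9600 seconds (from 145.fpppp) is the longest reference time in SPEC95, both from the tables above; the copy count and run time are hypothetical:

```python
SECONDS_PER_DAY = 86400

def spec_rate(copies, reference_time_sec, longest_reference_sec, run_time_sec):
    """SPECrate: number of copies run, times a reference factor (the
    benchmark's reference time scaled over a day against the suite's
    longest reference time), divided by the elapsed time for all copies."""
    reference_factor = reference_time_sec * SECONDS_PER_DAY / longest_reference_sec
    return copies * reference_factor / run_time_sec

# Hypothetical: 4 copies of 099.go all finishing in 4000 seconds.
print(spec_rate(4, 4600, 9600, 4000))  # 41.4
```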
The composite throughput metrics are then calculated as:

SPECint_rate95: The geometric mean of eight SPECrates (one for
each integer benchmark) when compiled with aggressive optimizations for
each benchmark.

SPECint_rate_base95: The geometric mean of eight SPECrates (one for each
integer benchmark) when compiled with conservative optimizations for each
benchmark.

SPECfp_rate95: The geometric mean of 10 SPECrates (one for each floating
point benchmark) when compiled with aggressive optimization for each
benchmark.

SPECfp_rate_base95: The geometric mean of 10 SPECrates (one for each
floating point benchmark) when compiled with conservative optimizations
for each benchmark.
For example, the SPECint_rate95 of a system would be: SPECint_rate95 =
(099.go SPECrate * 124.m88ksim SPECrate * ... * 147.vortex SPECrate) ^ (1/8)
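As with the speed metric, the composite is a geometric mean of the per-benchmark scores; a short sketch with invented SPECrates:

```python
import math

# Eight hypothetical SPECrates, one per CINT95 benchmark
# (illustrative values, not measured results).
spec_rates = [41.4, 38.0, 45.2, 36.7, 40.1, 43.5, 39.8, 42.6]

# SPECint_rate95 is the geometric mean of the eight per-benchmark rates.
specint_rate95 = math.prod(spec_rates) ** (1.0 / len(spec_rates))
```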
Jeff Reilly is the Release Manager for SPEC95 and is a Project Lead at
Intel Corporation in Santa Clara, Calif.