The SPECjbb2015 Benchmark Result File Fields

Last updated: October 4, 2019

To check for possible updates to this document, please see http://www.spec.org/jbb2015/docs/SPECjbb2015-Result_File_Fields.html

ABSTRACT
This document describes the various fields in the result file that make up the complete SPECjbb2015 benchmark result disclosure.


Table of Contents

1. SPECjbb2015 Benchmark

2. Top Bar

2.1 Headline

2.2 Test sponsor

2.3 SPEC license #

2.4 Hardware Availability

2.5 Tested by

2.6 Test Location

2.7 Software Availability

2.8 Test Date

2.9 Publication Date

2.10 INVALID or WARNING or COMMENTS

3. Benchmark Results Summary

3.1 Category

3.2 Result Chart

4. Overall SUT (System Under Test) Description

4.1 Vendor

4.2 System Vendor URL

4.3 System Source

4.4 System Designation

4.5 Total System Count

4.6 All SUT Systems Identical

4.7 Total Node Count

4.8 All Nodes Identical

4.9 Nodes Per System

4.10 Total Chips

4.11 Total Cores

4.12 Total Threads

4.13 Total Memory (GB)

4.14 Total OS Images

4.15 SW Environment

5. SUT Description

5.1 Hardware

5.1.1 HW Name

5.1.2 HW Vendor

5.1.3 HW Vendor URL

5.1.4 HW Available

5.1.5 Model

5.1.6 Number of Systems

5.1.7 Form Factor

5.1.8 Nodes Per System

5.1.9 CPU Name

5.1.10 CPU Characteristics

5.1.11 Chips Per System

5.1.12 Cores Per System

5.1.13 Cores Per Chip

5.1.14 Threads Per System

5.1.15 Threads Per Core

5.1.16 HW Version

5.1.17 CPU Frequency (MHz)

5.1.18 Primary Cache

5.1.19 Secondary Cache

5.1.20 Tertiary Cache

5.1.21 Other Cache

5.1.22 Disk Drive

5.1.23 File System

5.1.24 Memory Amount (GB)

5.1.25 # and size of DIMM(s)

5.1.26 Memory Details

5.1.27 # and type of Network Interface Cards (NICs) Installed

5.1.28 Power Supply Quantity and Rating (W)

5.1.29 Other Hardware

5.1.30 Cabinet/Housing/Enclosure

5.1.31 Shared Description

5.1.32 Shared Comment

5.1.33 Notes

5.2 Other Hardware/Software

5.2.1 Name

5.2.2 Vendor

5.2.3 Vendor URL

5.2.4 Version

5.2.5 Available

5.2.6 Bitness

5.2.7 Notes

5.3 Operating System

5.3.1 OS Name

5.3.2 OS Vendor

5.3.3 OS Vendor URL

5.3.4 OS Version

5.3.5 OS Available

5.3.6 OS Bitness

5.3.7 OS Notes

5.4 Java Virtual Machine (JVM)

5.4.1 JVM Name

5.4.2 JVM Vendor

5.4.3 JVM Vendor URL

5.4.4 JVM Version

5.4.5 JVM Available

5.4.6 JVM Bitness

5.4.7 JVM Notes

6. Topology

7. SUT or Driver configuration

7.1 Hardware

7.1.1 OS Images

7.1.2 Hardware Description

7.1.3 Number of Systems

7.1.4 SW Environment

7.1.5 Tuning

7.1.6 Notes

7.2 OS image

7.2.1 JVM Instances

7.2.2 OS Image Description

7.2.3 Tuning

7.2.4 Notes

7.3 JVM Instance

7.3.1 Parts of Benchmark

7.3.2 JVM Instance Description

7.3.3 Command Line

7.3.4 Tuning

7.3.5 Notes

8. Results Details

8.1 max-jOPS

8.2 critical-jOPS

8.3 Last Success jOPS/First Failure jOPS for SLA points Table

9. Number of probes

10. Request Mix Accuracy

11. Rate of non-critical failures

12. Delay between performance status pings

13. IR/PR accuracy

14. Controller time offset from Time Server

15. Run Properties

16. Validation Details

16.1 Validation Reports

16.1.1 Compliance

16.1.2 Correctness

16.2 Other Checks


1. SPECjbb2015 Benchmark

The SPECjbb2015 benchmark (Java Server Benchmark) is SPEC's benchmark for evaluating the performance of server-side Java. Like its predecessors, SPECjbb2000 and SPECjbb2005, the SPECjbb2015 benchmark evaluates the performance of server-side Java by emulating a three-tier client/server system (with emphasis on the middle tier). The benchmark exercises the implementations of the JVM (Java Virtual Machine), JIT (Just-In-Time) compiler, garbage collection, threads, and some aspects of the operating system. It also measures the performance of CPUs, caches, the memory hierarchy, and the scalability of shared memory processors (SMPs). In addition, the benchmark reports response time while gradually increasing the load, providing not only the full system capacity throughput but also the throughput under a response time constraint.

The benchmark suite consists of three separate software modules: the Controller (Ctr), which directs the execution of the run; one or more Transaction Injectors (TxI), which issue requests and measure response times; and one or more Backends (BE), which contain the business logic and process the requests.

These modules work together in real-time to collect server performance data by exercising the system under test (SUT) with a predefined workload.
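
For illustration only, a minimal sketch of how the three modules might be started for a MultiJVM run, assuming the -m, -G and -J launcher options described in the SPECjbb2015 User Guide (the group and JVM identifiers below are placeholders, not values from any report):

    # Controller for a MultiJVM run
    java -jar specjbb2015.jar -m MULTICONTROLLER
    # Transaction Injector for group GRP1 (identifiers are hypothetical)
    java -jar specjbb2015.jar -m TXINJECTOR -G GRP1 -J JVM1
    # Backend for group GRP1 (identifiers are hypothetical)
    java -jar specjbb2015.jar -m BACKEND -G GRP1 -J JVM2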


2. Top Bar

The top bar shows the measured SPECjbb2015 benchmark result and gives some general information regarding this test run.

2.1 Headline

The headline of the performance report includes one field displaying the hardware vendor and the name of the system under test. If this report is for a historical system, the designation "(Historical)" must be added to the model name. A second field shows the max-jOPS and critical-jOPS results, prefixed by an "Invalid" indicator if the result does not pass the validity checks implemented in the benchmark.

2.2 Test sponsor

The name of the organization or individual that sponsored the test. Generally, this is the name of the license holder.

2.3 SPEC license #

The SPEC license number of the organization or individual that ran the benchmark.

2.4 Hardware Availability

The date when all the hardware necessary to run the result is generally available. For example, if the CPU is available in Aug-2007, but the memory is not available until Oct-2007, then the hardware availability date is Oct-2007 (unless some other component pushes it out farther).

2.5 Tested by

The name of the organization or individual that ran the test and submitted the result.

2.6 Test Location

The name of the city, state, and country where the test took place. If there are installations in multiple geographic locations, those must also be listed in this field.

2.7 Software Availability

The date when all the software necessary to run the result is generally available. For example, if the operating system is available in Aug-2007, but the JVM is not available until Oct-2007, then the software availability date is Oct-2007 (unless some other component pushes it out farther).

2.8 Test Date

The date when the test is run. This value is automatically supplied by the benchmark software; the time reported by the system under test is recorded in the raw result file.

2.9 Publication Date

The date when this report will be published after finishing the review. This date is automatically filled in with the correct value by the submission tool provided by SPEC. By default this field is set to "Unpublished" by the software generating the report.

2.10 INVALID or WARNING or COMMENTS

Any inconsistency with the run and reporting rules that causes a failure of one of the validity checks implemented in the report generation software is reported here; in that case, all pages of the report file are stamped with an "Invalid" watermark. The printed text shows which of the run rules was not met and why. More detailed explanations may also be found at the end of the report in the sections "Run Properties" or "Validation Details". If there are any special waivers or other comments from the SPEC editor, those will also be listed here.


3. Benchmark Results Summary

This section presents the result details as a graph (jOPS and response time), the SPECjbb2015 benchmark category, the number of groups, and links to other sections of the report.

3.1 Category

The header of this section shows which SPECjbb2015 benchmark category was run (Composite, MultiJVM or Distributed) and how many groups were configured via the property "specjbb.group.count".
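
As an illustration, the group count is typically set in the benchmark configuration properties; the file path and the value shown here are assumptions for the sketch, not taken from this report:

    # e.g. in config/specjbb2015.props (path assumed); 4 is an example value
    specjbb.group.count=4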

3.2 Result Chart

The raw data for this graph can be found by clicking on the graph. The graph only shows the Response-Throughput (RT) phase of the benchmark. The initial phase of finding the High Bound Injection Rate (HBIR, an approximate upper bound on throughput) and the validation at the end of the run are not part of this graph. The x-axis shows jOPS (Injection Rate, IR) as the system is tested at gradually increasing RT step levels, in increments of 1% of HBIR. The y-axis shows response time (min, various percentiles, max); the 99th percentile determines the critical-jOPS metric, which is shown as a yellow vertical line. The last successful RT step level before the "First Failure" of an RT step level is marked with a red vertical line and reflects the max-jOPS metric of the benchmark. The benchmark continues to test a few RT step levels beyond the "First Failure" RT step level. Normally very few RT step levels should pass beyond the "First Failure" level; otherwise it indicates that, with more tuning, the system should be able to reach a higher max-jOPS. To view details about levels beyond the "First Failure" RT step level, a user needs to look at either controller.out or the level-1 report output.
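
A purely hypothetical worked example of the step levels (the numbers are illustrative, not from any report): if the HBIR found in the initial phase were 10,000 jOPS, each RT step level would raise the injection rate by 100 jOPS (1% of HBIR), so step level 73 would inject roughly 7,300 jOPS; the last step level that completes successfully before the first failing level would then be reported as max-jOPS.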


4. Overall SUT (System Under Test) Description

The following section of the report file gives the system under test (SUT) overview.

4.1 Vendor

Company which sells the system.

4.2 System Vendor URL

URL of system vendor.

4.3 System Source

Single Supplier or Parts Built

4.4 System Designation

Possible values for this property are:

4.5 Total System Count

The total number of configured systems.

4.6 All SUT Systems Identical

[YES / NO].

4.7 Total Node Count

The total number of configured nodes. Please refer to the Run and Reporting Rules document for the definition of a system. For example, a rack-based blade system can be one system with many blade nodes, all running under a single OS image or each running its own OS image.
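
As a purely hypothetical example of how these counters relate: a SUT consisting of two identical blade enclosures with eight blades each, where every blade runs its own OS image, would report Total System Count = 2, Nodes Per System = 8, Total Node Count = 16, and Total OS Images = 16.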

4.8 All Nodes Identical

[YES / NO].

4.9 Nodes Per System

The number of nodes configured on each system.

4.10 Total Chips

The total number of chips installed on all system(s) in the overall SUT.

4.11 Total Cores

The total number of cores installed on all system(s) in the overall SUT.

4.12 Total Threads

The total number of hardware threads on all system(s) in the overall SUT.

4.13 Total Memory (GB)

The total amount of memory (in GB) installed on all system(s) in the overall SUT.

4.14 Total OS Images

The total number of OS images installed on all system(s) in the overall SUT.

4.15 SW Environment

Environment mode: [Virtual / Non-virtual].


5. SUT Description

The following section of the report file describes the hardware and the software of the system under test (SUT) used to run the reported benchmark with the level of detail required to reproduce this result.

5.1 Hardware

The following section of the report file describes the hardware and software of the system under test (SUT) used to run the reported benchmark, with the level of detail required to reproduce this result. The same fields are also used for the hardware and software description of the Driver system(s). For a driver system, some fields, such as memory, may not need to be described in as much detail as for the SUT.

5.1.1 HW Name

HW Name.

5.1.2 HW Vendor

The name of the company that sells the system.

5.1.3 HW Vendor URL

The URL of the system vendor.

5.1.4 HW Available

The HW availability (month-year) of the system.

5.1.5 Model

The model name identifying the system under test.

5.1.6 Number of Systems

The number of systems under test.

5.1.7 Form Factor

The form factor for this system.
In multi-node configurations, this is the form factor for a single node. For rack-mounted systems, specify the number of rack units. For blades, specify "Blade". For other types of systems, specify "Tower" or "Other".

5.1.8 Nodes Per System

The number of nodes per system.

5.1.9 CPU Name

The formal processor name, as determined by the manufacturer.

5.1.10 CPU Characteristics

Technical characteristics to help identify the processor, such as number of cores, frequency, cache size etc.
If the CPU is capable of automatically running the processor core(s) faster than the nominal frequency and this feature is enabled, this field should also list the feature and the maximum frequency it enables on that CPU (e.g.: "Intel Turbo Boost Technology up to 3.46GHz").
If this CPU clock feature is present but is disabled, no additional information is required here.

5.1.11 Chips Per System

The number of chips per system.

5.1.12 Cores Per System

The number of Cores Per System.

5.1.13 Cores Per Chip

The number of Cores Per Chip.

5.1.14 Threads Per System

The number of Threads Per System.

5.1.15 Threads Per Core

The number of Threads Per Core.

5.1.16 HW Version

The HW version (if there is one), and the BIOS version.

5.1.17 CPU Frequency (MHz)

The nominal (marked) clock frequency of the CPU, expressed in megahertz.
If the CPU is capable of automatically running the processor core(s) faster than the nominal frequency and this feature is enabled, then the CPU Characteristics field must list additional information, at least the maximum frequency and the use of this feature.
Furthermore, if the enabled/disabled status of this feature is changed from the default setting, this must be documented in the System Under Test Notes field.

5.1.18 Primary Cache

Description (size and organization) of the CPU's primary cache. This cache is also referred to as "L1 cache".

5.1.19 Secondary Cache

Description (size and organization) of the CPU's secondary cache. This cache is also referred to as "L2 cache".

5.1.20 Tertiary Cache

Description (size and organization) of the CPU's tertiary, or "L3 cache".

5.1.21 Other Cache

Description (size and organization) of any other levels of cache memory.

5.1.22 Disk Drive

A description of the disk drive(s) (count, model, size, type, rotational speed and RAID level if any) used to boot the operating system and to hold the benchmark software and data during the run.

5.1.23 File System

The file system used.

5.1.24 Memory Amount (GB)

Total size of memory in the SUT in GB.

5.1.25 # and size of DIMM(s)

Number and size of memory modules used for testing.

5.1.26 Memory Details

Detailed description of the system main memory technology, sufficient for identifying the memory used in this test.
Potentially there can be multiple instances of this field if different types of DIMMs have been used for this test, one separate field for each DIMM type.
Since the introduction of DDR4 memory there are two slightly different formats.
The recommended formats are described here.

DDR4 Format:
N x gg ss pheRxff PC4v-wwwwaa-m


References:

Example:
8 x 16 GB 2Rx4 PC4-2133P-R
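
A hedged reading of this example string, based on common JEDEC DIMM labeling rather than on the symbol definitions referenced by this document: eight DIMMs of 16 GB each, dual-rank with x4 DRAM devices (2Rx4), DDR4 modules (PC4) in the 2133 MT/s speed grade "P", in registered (RDIMM) form ("-R").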

Where:

Notes:

DDR3 Format:
N x gg ss eRxff PChv-wwwwwm-aa, ECC CLa; slots k, ... l populated


Reference:

Example:
8 x 8 GB 2Rx4 PC3L-12800R-11, ECC CL10; slots 1 - 8 populated

Where: