SPEC Cloud® IaaS 2018 benchmark

SPEC.org Submission Requirements
Results must be reviewed and accepted by SPEC prior to public disclosure.
SPEC Metrics
  • Replicated Application Instances: <#Valid AIs> copies
    • Performance Score: <#Sum WkldPerfScores>
    • Relative Scalability: <percentage> %
    • Mean Instance Provisioning Time (s)
  • AI Provisioning Success: <percentage> %
  • AI Run Success: <percentage> %
  • Total Instances: <#instances>
  • Scale-out Start Time (yyyy-mm-dd_hh:mm:ss_UTC)
  • Scale-out End Time (yyyy-mm-dd_hh:mm:ss_UTC)
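The Scale-out Start and End Times above use the format yyyy-mm-dd_hh:mm:ss_UTC. As a minimal sketch (assuming the format uses a 24-hour clock and that this strftime pattern is an acceptable rendering — neither is stated in the rules above), such a timestamp can be produced in Python like so:

```python
from datetime import datetime, timezone

def scale_out_timestamp(t: datetime) -> str:
    """Render a datetime in the yyyy-mm-dd_hh:mm:ss_UTC form shown above.

    Assumes a 24-hour clock; the input is converted to UTC first so the
    trailing "_UTC" label is accurate.
    """
    return t.astimezone(timezone.utc).strftime("%Y-%m-%d_%H:%M:%S_UTC")

print(scale_out_timestamp(datetime(2019, 3, 6, 14, 30, 0, tzinfo=timezone.utc)))
# → 2019-03-06_14:30:00_UTC
```

The same format applies to the Elasticity Start and End Times of the retired 2016 benchmark below.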
Required Metrics
  • SPEC Cloud® IaaS 2018 Replicated Application Instances <#Valid AIs> copies
    • SPEC Cloud® IaaS 2018 Performance Score: <#Sum WkldPerfScores>
    • SPEC Cloud® IaaS 2018 Relative Scalability: <percentage> %
    • SPEC Cloud® IaaS 2018 Mean Instance Provisioning Time (s)

The required metrics must be listed in close proximity to any other measured data from the disclosure or any value derived from it.

Conditionally Required Metrics
The Scale-out Start and End Times must be reported when any of the cloud's resources are not under complete control of the tester. The Test Region(s) must be listed in close proximity to the Scale-out Start and End times for the test.
Use of Estimates
Not allowed.
Disallowed Comparisons

In addition to the requirement that results not be compared to other benchmarks:

1. The SPEC Cloud IaaS 2018 benchmark uses specific versions of the Yahoo! Cloud Serving Benchmark (YCSB) and the K-Means clustering workload from the HiBench Suite as its component workloads, because these are established industry-standard workloads. These workloads are run under parameterized constraints specific to this SPEC benchmark, chosen to stress the aspects of the SUT's resources typical of Cloud IaaS environments. The differences are therefore significant enough that comparisons between results generated by the SPEC Cloud IaaS 2018 benchmark and the original component workloads are not allowed.

2. SPEC Cloud IaaS 2018 metrics are not comparable to SPEC Cloud IaaS 2016 metrics due to changes in workload parameters and metric methodology.

(Retired) SPEC Cloud® IaaS 2016 benchmark

The SPEC Cloud IaaS 2016 benchmark was retired on March 6, 2019 in favor of its successor, the SPEC Cloud® IaaS 2018 benchmark.

  • All public use of results for this benchmark must plainly disclose that the benchmark has been retired, as described above.
  • No further submissions will be accepted for publication at www.spec.org.
  • SPEC is no longer reviewing results for this benchmark.
  • Independent publication of new results is not allowed, because the benchmark results required review by SPEC before publication.
SPEC.org Submission Requirements
Results must be reviewed and accepted by SPEC prior to public disclosure.
SPEC Metrics
  • Scalability: <#measured> @ <#compliant> Application Instance
  • Elasticity: <percentage> %
  • Mean Instance Provisioning Time (s)
  • AI Provisioning Success: <percentage> %
  • AI Run Success: <percentage> %
  • Total Instances: <#instances>
  • Elasticity Start Time (yyyy-mm-dd_hh:mm:ss_UTC)
  • Elasticity End Time (yyyy-mm-dd_hh:mm:ss_UTC)
Required Metrics
  • SPEC Cloud® IaaS 2016 Scalability: <#measured> @ <#compliant> Application Instance
  • SPEC Cloud® IaaS 2016 Elasticity: <percentage> %
  • SPEC Cloud® IaaS 2016 Mean Instance Provisioning Time (s)

The required metrics must be listed in close proximity to any other measured data from the disclosure or any value derived from it.

Conditionally Required Metrics
The Elasticity Start and End Times must be reported when any of the cloud's resources are not under complete control of the tester. The Test Region(s) must be listed in close proximity to the Elasticity Start and End times for the test.
Use of Estimates
Not allowed.
Disallowed Comparisons

In addition to the requirement that results not be compared to other benchmarks:

1. The SPEC Cloud IaaS 2016 benchmark uses specific versions of the Yahoo! Cloud Serving Benchmark (YCSB) and the K-Means clustering workload from the HiBench Suite as its component workloads, because these are established industry-standard workloads. These workloads are run under parameterized constraints specific to this SPEC benchmark, chosen to stress the aspects of the SUT's resources typical of Cloud IaaS environments. The differences are therefore significant enough that comparisons between results generated by the SPEC Cloud IaaS 2016 benchmark and the original component workloads are not allowed.
