
Standard Performance Evaluation Corporation


SPECvirt® Datacenter 2021

The SPECvirt® Datacenter 2021 benchmark is the next generation of virtualization benchmarking for measuring performance of a scaled-out datacenter. The SPECvirt Datacenter 2021 benchmark is a multi-host benchmark using simulated and real-life workloads to measure the overall efficiency of virtualization solutions and their management environments.

The SPECvirt Datacenter 2021 benchmark price is $2,500 for new customers and $625 for qualified nonprofit organizations and accredited academic institutions. To find out whether your organization has an existing license for a SPEC product, please contact SPEC.

The SPECvirt Datacenter 2021 benchmark differs from the SPEC VIRT_SC® 2013 benchmark in that the SPEC VIRT_SC benchmark measures single-host performance and provides useful host-level information. However, most of today's datacenters use clusters for reliability, availability, serviceability, and security. Adding virtualization to a clustered solution enhances server optimization, flexibility, and application availability while reducing costs through server and datacenter consolidation.

Patch updates:

  • A patch for the SPECvirt Datacenter 2021 benchmark was released on April 28, 2023. Patch P1 addresses an issue when the harness fails to verify the guest's active profile against the harness manifest. Use this patch if you encounter MD5 mismatches in your result output. This patch is optional and is not required for compliance.
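    
    The kind of manifest check the patch note refers to can be sketched as follows. This is a minimal, hypothetical illustration of MD5 manifest verification (the function names and manifest shape are assumptions, not the harness's actual API):
    
    ```python
    import hashlib
    
    def md5sum(path, chunk_size=1 << 20):
        """Compute the MD5 digest of a file, reading in chunks."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()
    
    def verify_manifest(manifest):
        """Compare each file's MD5 against its expected digest.
    
        `manifest` maps file path -> expected hex digest; returns the
        list of mismatching paths (an empty list means verification passed).
        """
        return [path for path, expected in manifest.items()
                if md5sum(path) != expected]
    ```
    
    A non-empty return value corresponds to the "MD5 mismatches in your result output" the patch note describes.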

The benchmark provides a methodical way to measure scalability and is designed to be utilized across multiple vendor platforms. The primary goal of the benchmark is to provide a standard method for measuring a virtualization platform's ability to model a dynamic datacenter virtual environment. It models typical, modern-day usage of virtualized infrastructure, such as virtual machine (VM) resource provisioning and cross-node load balancing, including management operations such as VM migration and VM power on/off. Its multi-host environment exercises datacenter operations under load. It dynamically provisions new workload tiles either by deploying from a VM template or by powering on existing VMs. As load reaches the maximum capacity of the cluster, hosts are added to the cluster to measure scheduler efficiency.
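
The two provisioning paths described above (deploy from template vs. power on pre-created VMs) can be sketched with a toy in-memory model. All names here are illustrative assumptions, not the benchmark harness's real interface:

```python
class Cluster:
    """Toy model of a managed cluster: tracks VM power state only."""

    def __init__(self):
        self.vms = {}  # VM name -> "on" / "off"

    def deploy_from_template(self, name):
        """Clone a template into a new VM and boot it (one step here)."""
        self.vms[name] = "on"

    def power_on(self, name):
        """Power on a VM that was created ahead of time."""
        if name not in self.vms:
            raise KeyError(f"{name} was never deployed")
        self.vms[name] = "on"

def provision_tile(cluster, tile_id, workloads, use_template):
    """Bring one workload tile online, one VM per workload.

    VMs are named 'tile<N>-<workload>' (an assumed convention).
    Returns the list of VM names brought online.
    """
    names = [f"tile{tile_id}-{w}" for w in workloads]
    for name in names:
        if use_template:
            cluster.deploy_from_template(name)
        else:
            cluster.power_on(name)
    return names
```

Either path ends with the tile's VMs running; the difference is only whether the VMs exist before the provisioning step.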

Another of the benchmark's goals is ease of benchmarking. Manually creating the VM, installing the operating system into it, adjusting specific OS tuning settings, installing workload applications, and generating the workload data can be complicated and prone to error. To address this, SPEC provides a pre-built appliance containing the controller, workload driver clients, and workload VMs for the base metric. The software is pre-loaded and pre-configured to minimize the benchmarker's intervention and reduce implementation time and effort.

Rather than offering a single workload that attempts to approximate the breadth of consolidated virtualized server characteristics, the benchmark uses a five-workload benchmark design:

  • OLTP database workload (HammerDB)
  • Hadoop / Big Data environment (BigBench)
  • Departmental mail server emulating the resource profile of SPEC VIRT_SC 2013's SPECimap
  • Departmental web server modeled after the workload profile of SPEC VIRT_SC 2013's modified SPECweb® 2005 benchmark
  • Departmental collaboration server emulating a resource profile based on customer data for collaboration services

Scaling is achieved by running additional sets of virtual machines, called "tiles", until overall throughput reaches a peak while all workloads continue to meet required quality of service (QoS) criteria.
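
The tile-scaling procedure above amounts to a simple search loop: add tiles while throughput improves and QoS holds. A minimal sketch, where `run_tile_set` is a hypothetical stand-in for running one benchmark phase at a given tile count:

```python
def find_peak_tiles(run_tile_set, max_tiles=50):
    """Add tiles until QoS fails or the tile limit is reached.

    `run_tile_set(n)` runs n tiles and returns (throughput, qos_ok).
    Returns (tile_count, throughput) of the best compliant run.
    """
    best_n, best_tput = 0, 0.0
    for n in range(1, max_tiles + 1):
        tput, qos_ok = run_tile_set(n)
        if not qos_ok:
            break  # QoS criteria violated; stop scaling
        if tput > best_tput:
            best_n, best_tput = n, tput
    return best_n, best_tput
```

The reported configuration is the one where overall throughput peaks while every workload still meets its QoS criteria.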


Submitted Results: Includes all results submitted to SPEC by SPEC member companies and other licensees of the benchmark.

Search the SPECvirt Datacenter 2021 benchmark results in SPEC's online result database


Press Releases

Press release material, documents, and announcements:

Benchmark Documentation