
Standard Performance Evaluation Corporation


SPEC CPU2017 and Simulation

Research by University of Texas at Austin

Simulating large programs such as CPU2017 running with reference inputs can be extremely time consuming, to the point of being impractical. To address this problem, researchers at the University of Texas at Austin have generated "pinballs" for representative regions of the CPU2017 workloads. The accuracy of these representative regions was validated both on silicon systems and with the Sniper simulator. The checkpoints are available here. These pinballs run with the Pin-based record/replay toolkit and on various simulators.

This builds on prior work. Researchers at the University of California San Diego developed the SimPoint tool, which finds representative regions through phase analysis of large program runs. Researchers at Intel developed a PinPlay-based dynamic analysis tool-chain that profiles large application runs, feeds the profiles to SimPoint, and uses the resulting region descriptions to create checkpoints for input to Pin-based simulation. Researchers at Ghent University ran this tool-chain on the CPU2006 reference runs, created checkpoints ("pinballs") for representative regions ("PinPoints"), and validated the representativeness of those regions using Sniper, a Pin-based simulator. The pinballs are intended as input to Pin-tools written with the Pin-based record/replay toolkit, enabling reproducible analysis. Note that this prior work targeted earlier SPEC CPU benchmark suites; the University of Texas at Austin work covers the SPEC CPU 2017 suite.

Please refer to the SPEC Fair Use Rules before using these checkpoints. If the checkpoints are used as the basis for predicting a SPEC run time or a SPEC metric, any published results must be very clearly tagged as "Estimated" or "Estimated by simulation of pinballs for representative simulation regions (PinPoints)".
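To make the "Estimated" tagging requirement concrete, the sketch below illustrates how a whole-program metric is typically derived from simulated representative regions: each SimPoint region carries a weight equal to the fraction of the full run's dynamic execution that its phase cluster represents, and a weighted average extrapolates the region-level results to the full reference run. All weights, CPIs, instruction counts, and the frequency below are hypothetical, for illustration only.

```python
# Sketch of extrapolating a whole-program estimate from representative
# regions (PinPoints). All numbers are made up for illustration.

regions = [
    # (SimPoint weight, simulated CPI for the region)
    (0.45, 1.20),
    (0.35, 0.90),
    (0.20, 2.10),
]

# SimPoint weights cover the whole run, so they must sum to 1.
assert abs(sum(w for w, _ in regions) - 1.0) < 1e-9

# Weighted average of per-region CPIs estimates whole-program CPI.
estimated_cpi = sum(w * cpi for w, cpi in regions)

total_insns = 2.5e12   # hypothetical dynamic instruction count of the full run
freq_hz = 3.0e9        # hypothetical core frequency

# Estimated run time = CPI * instructions / frequency.
estimated_seconds = estimated_cpi * total_insns / freq_hz

print(f"Estimated CPI: {estimated_cpi:.3f}")        # 1.275
print(f"Estimated run time: {estimated_seconds:.1f} s")  # 1062.5 s
```

Any run time or metric produced this way is an estimate, not a measured SPEC result, and must be tagged accordingly under the Fair Use Rules.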