Standard Performance Evaluation Corporation


The story of a benchmark contributor

When you are developing a benchmark suite such as SPEC CPU2017, it's almost impossible to do it alone, even with the vast international membership represented under the SPEC umbrella: SPEC CPU2017 includes 43 individual benchmarks organized into four suites: SPECspeed 2017 Integer, SPECspeed 2017 Floating Point, SPECrate 2017 Integer, and SPECrate 2017 Floating Point. That's a lot of benchmarking.

Fortunately, the SPEC CPU subcommittee established a search program to encourage benchmark submissions from outside the SPEC membership. The SPEC High-Performance Group recently wrapped up a similar effort.

Below is the story of one of the contributors to the SPEC CPU2017 search program.

Thanks to the author, Deana Totzke, a Communications Specialist II at the Texas A&M Engineering Experiment Station, for allowing us to reprint this article. And special thanks, of course, to Dr. Jian Tao and all the other great contributors to SPEC CPU2017.

January 2018 -- Dr. Jian Tao, a Texas A&M Engineering Experiment Station (TEES) research scientist affiliated with the Texas A&M High Performance Research Computing (HPRC) group, received a cash award and a free benchmark license for application code and datasets accepted under a benchmark search program sponsored by the Standard Performance Evaluation Corporation (SPEC). The new SPEC CPU2017 benchmark suite replaces SPEC CPU2006, which launched 11 years ago.

SPEC is a nonprofit corporation formed to establish, maintain and endorse standardized benchmarks and tools to evaluate performance and energy efficiency for the newest generation of computing systems. Its membership comprises more than 120 leading computer hardware and software vendors, educational institutions, research organizations and government agencies worldwide.

Answering the call

In 2008, SPEC started looking for contributions for its new CPU benchmark suite via the SPEC CPU Benchmark Search Program. While a postdoctoral research associate at Louisiana State University working on XiRel, a National Science Foundation-funded project to build the foundation of next-generation numerical relativity code, Tao, together with his colleagues, submitted an entry. Their code went through rigorous tests and six evaluation steps over the following nine years, and in June 2017 it was accepted as an official part of the latest version of the SPEC CPU benchmark suite. The other authors of the benchmark are Dr. Gabrielle Allen (professor at the University of Illinois Urbana-Champaign), Dr. Erik Schnetter (research technologies group lead at the Perimeter Institute for Theoretical Physics, Canada) and Dr. Peter Diener (research assistant professor of physics at Louisiana State University).

Since its launch in 2006, more than 43,000 SPEC CPU2006 performance results have been published on SPEC's website. The SPEC CPU benchmark suite is the worldwide standard for evaluating performance for purchasing decisions and new hardware development. Thousands of articles appear on news sites each year citing SPEC CPU testing results.

Solving Einstein equations

The benchmark developed by Tao and his colleagues is based on the Cactus Computational Framework. It uses an older version of the Einstein Toolkit to solve the Einstein equations in a vacuum. The numerical kernel of the benchmark, McLachlan, is automatically generated from a high-level set of partial differential equations with the Kranc code generation package. In the benchmark, a vacuum flat space-time is simulated with finite differencing in space and an explicit time integration method. The Einstein Toolkit is now widely used by researchers to model gravitational waves from binary black hole mergers and colliding neutron stars.
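To give a feel for the general pattern described above — finite differencing in space combined with an explicit time-integration scheme — here is a minimal, self-contained sketch. It is emphatically not the McLachlan kernel (which Kranc generates from the full Einstein equations); it merely evolves the much simpler 1D wave equation u_tt = c²·u_xx using second-order centered differences in space and an explicit leapfrog update in time. All names and parameters are illustrative.

```python
import math

def evolve_wave(nx=101, nt=200, c=1.0, courant=0.5):
    """Toy explicit finite-difference evolution of the 1D wave equation
    on [0, 1] with fixed (Dirichlet) boundaries. Illustrative only."""
    dx = 1.0 / (nx - 1)
    dt = courant * dx / c                  # CFL condition: courant < 1 keeps
                                           # the explicit scheme stable
    x = [i * dx for i in range(nx)]
    # Gaussian pulse as initial data, with zero initial velocity
    u_prev = [math.exp(-100.0 * (xi - 0.5) ** 2) for xi in x]
    u = list(u_prev)
    r2 = (c * dt / dx) ** 2
    for _ in range(nt):
        u_next = [0.0] * nx                # boundaries pinned to zero
        for i in range(1, nx - 1):
            # leapfrog in time, centered second difference in space
            u_next[i] = (2.0 * u[i] - u_prev[i]
                         + r2 * (u[i + 1] - 2.0 * u[i] + u[i - 1]))
        u_prev, u = u, u_next
    return u

u = evolve_wave()
print(len(u))  # 101
```

Production codes like McLachlan apply the same structure — a stencil sweep over grid points per time step — but to a far larger coupled system, with higher-order stencils and Runge-Kutta time integrators generated automatically rather than written by hand.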

Leveraging high-performance computing

Tao joined Dr. Narasimha Reddy, associate agency director for TEES Strategic Initiatives and centers, in November 2016 to build the College of Engineering High Performance Computing (COE-HPC) team to help faculty members, research staff and students at Texas A&M leverage high-performance computing facilities both on and off campus for their research.

The COE-HPC team's goal is to enhance advanced computational research activities within Texas A&M Engineering and to serve as a liaison to HPRC in support of the broader research computing community at Texas A&M. In addition to helping faculty members, research staff and graduate students with their research work, the team actively participates in education and outreach activities with staff members at HPRC, helping to organize summer camps, workshops, tutorials, supercomputing booths and more. The COE-HPC team also works with HPRC staff members to host short courses and workshops on various HPC-related subjects on the Texas A&M campus.