Standard Performance Evaluation Corporation
SPEC/HPG Releases SPEC OMPL2001 for Measuring Performance Based on Large OpenMP Applications
WARRENTON, Va., June 3, 2002 - The Standard Performance Evaluation Corp.'s High-Performance Group (SPEC/HPG) today announced SPEC OMPL2001, a new benchmark suite that measures the performance of computing systems running large scalable applications based on the OpenMP standard for shared-memory parallel processing.
"Parallelism now spans from desktop to HPC systems, and SPEC OMPL2001 will provide useful information to the OpenMP community," says David Kuck, an Intel Fellow, general manager of KAI SW Lab, and a leading force in developing OpenMP. "I applaud the people who have pioneered in this area to produce a large SPEC benchmark."
SPEC OMPL2001 is targeted at system vendors, software vendors, and customers of high-performance computing systems. It uses a set of shared-memory, parallel-processing applications to measure the performance of the computing system's processors, memory architecture, operating system and compiler. Nine different application tests - covering everything from quantum chromodynamics to finite-element crash simulation to shallow water modeling - are included in the benchmark suite.
"SPEC OMPL2001 is a great addition to the SPEC family of benchmarks," says Kaivalya Dixit, SPEC president. "It answers the need for standardized benchmarks for memory- and compute-intensive OpenMP-based applications that are used in industry and research."
SPEC OMPL2001 contains larger working sets and longer run times than SPEC/HPG's SPEC OMPM2001, which was released last June. Application benchmarks running under SPEC OMPL2001 use up to 6.4GB of memory and take approximately four hours each to run on a 300MHz, 16-processor reference machine. SPEC OMPM2001 uses medium-sized workloads that require 1.6GB of memory and take an hour and a half each to run on a 350MHz, four-processor reference machine.
"The two SPEC OMP benchmark suites and OpenMP provide a portable, standardized, directive-based approach to parallelism that allows accurate performance measurement," says Wesley Jones, SPEC/HPG chair. "SPEC's run and reporting rules, along with established review processes, will help maintain accurate, repeatable and consistent benchmark results."
Run rules for the SPEC OMP benchmark suites are based on those used for SPEC CPU2000, the worldwide standard for evaluating computer system performance. Two performance metrics can be generated: SPECompLpeak2001, which reflects more aggressive optimizations and permits some code modifications, and SPECompLbase2001, which uses conservative options such as those suggested by the compiler vendor.
SPEC OMPL2001 and SPEC OMPM2001 are available immediately from SPEC for $1,800 each or $3,000 for both. Discounts are available for qualifying universities and non-profit organizations. More information on the SPEC OMP benchmarks and upgrade pricing is available on the SPEC web site at http://www.spec.org/hpg/omp/ or through e-mail at firstname.lastname@example.org.
The OpenMP Application Program Interface (API) supports multi-platform shared-memory parallel programming in C/C++ and Fortran on all architectures, including Unix and Windows NT platforms. Jointly defined by a group of major computer hardware and software vendors, OpenMP is a portable, scalable model that gives shared-memory parallel programmers a simple and flexible interface for developing parallel applications for platforms ranging from the desktop to the supercomputer. For more information, visit www.openmp.org.
SPEC is a non-profit organization that establishes, maintains and endorses standardized benchmarks to measure the performance of the newest generation of high-performance computers. Its membership comprises leading computer hardware and software vendors, universities, and research organizations worldwide. For more information, contact Dianne Rice, SPEC, 6585 Merchant Place, Ste. 100, Warrenton, VA 20187, USA; phone: 540-349-7878; fax: 540-349-5992; e-mail: email@example.com; web: http://www.spec.org.