SPEC HPC2002 Result
Copyright © 1999-2002 Standard Performance Evaluation Corporation
Hewlett-Packard Company
hp server rx2600 cluster (1500MHz Itanium2)
 SPECenvM2002 = 399    
SPEC license #: HPG2116
Tested by: Hewlett-Packard Company
Test site: Richardson, Texas
Test date: Feb-2004
Hardware Avail: May-2004
Software Avail: May-2004
Benchmark     Reference   Runtime   Ratio
361.wrf_m         86400       217     399

SPECenvM2002 = 399
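As a quick sanity check (illustrative only, not part of the disclosure): a SPEC ratio is the benchmark's reference time divided by its measured runtime. The published metric is computed from the unrounded runtime, so the rounded 217 s shown above reproduces the 399 only approximately:

```shell
# Recompute the 361.wrf_m ratio from the table values above.
ref=86400   # reference time, seconds
run=217     # measured runtime, seconds (rounded for display)
awk -v r="$ref" -v t="$run" 'BEGIN { printf "ratio = %.1f\n", r / t }'
# prints "ratio = 398.2"; SPEC's 399 comes from the unrounded runtime
```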

Hardware Vendor: Hewlett-Packard Company
Model Name: hp server rx2600 cluster (1500MHz Itanium2)
CPU: Intel Itanium 2
CPU MHz: 1500
FPU: Integrated
CPU(s) enabled: 48
CPU(s) orderable: 1 to 2 per node, up to 64 nodes
Primary Cache: L1 Inst/Data: 16 KB, associativity = 4
Secondary Cache: L2 Unified: 256 KB, associativity = 8
L3 Cache: L3 Unified: 6144 KB, associativity = 24
Other Cache: None
Memory: 12 GB per node (12 x 1 GB DDR 266 DIMMs)
Disk Subsystem: 1x36GB 10k RPM SCSI system disk per node
Other Hardware: See Notes section below.
Parallel: MPI
Processes-Threads: 48
MPI Processes: 48
OpenMP Threads: N/A
Operating System: HPUX11i-TCOE B.11.23
Compiler: HP C/ANSI C Compiler B.11.23
HP aC++ Compiler B.11.23
HP Fortran 90 Compiler B.11.23
HP LIBF90 PHSS_29620
HP F90 Compiler PHSS_29663
HP aC++ Compiler PHSS_29655
HP C Compiler PHSS_29656
u2comp/be/plugin library PHSS_29657
File System: vxfs (system), vxfs through NFS (benchmark files)
System State: Multi-user
Other Software: NetCDF 3.5.0, HP MPI v2.00.01
Notes / Tuning Information
CPU(s) enabled: 48 (two per node, 24 nodes)

Other Hardware:
  Computation Network:
    AB286A PCI-X 2-port Infiniband HCA for HPC
    AB346A 5m copper cable PCI-X Infiniband
    AB353A 7m copper cable PCI-X Infiniband
    AB291A PCI-X 12-port InfiniBand Copper Switch
    Topspin 96-port IB copper switch 99-00020-01 TS170
      98-00045-01 12-port leaf boards (8)
      98-00047-01 power supply
      98-00044-01 controller module
    Gigabit on-board adapter for administration and NFS
    PCI Gigabit card for NFS traffic (GigE-TX adapter A6825A)

  NFS file server:
    rp5470 (PA-RISC) NFS File Server
      4 PA8700 CPUs 750 MHz.  16 GB of memory 
      4 internal disks 73 GB Ultra2 SCSI
      20 external disks 18 GB U160 SCSI striped with 
         LVM across 4 SCSI controllers 
    15 external disks 73 GB FibreChannel mirrored
       with LVM across 2 FC controllers, which
       contain the NFS filesystems accessed by the benchmark.
       These NFS filesystems are optimized for
       performance rather than security.

  File Server Network:
    HP ProCurve 9308 64-port copper Gigabit Ethernet Switch
    Built-in Gigabit Ethernet Adapters (one per node)

 Peak Flags:  MPI
   mpif90  +DD64 +noppu +Ofast   +Oinfo   +U77
   mpicc  -Ae +DD64 +Ofast -DNOUNDERSCORE -DSPEC_HPG_MPI2  
   CPPFLAGS = -I. -C -P                               
   EXTRA_LIBS=   -minshared   -L${NETCDF}/lib/hpux64 -lnetcdf
   NETCDF   = /home/clpack/netcdf-3.5.0

 Alternate Source used for Peak:
     361.wrf_m: module_big_step_utilities_em.F90 
       Improve data locality via loop interchange.
       Available as SPEC HPC2002 Source: env2002-src_hp-20040303.tar.gz

Kernel Parameters (/stand/system):
   maxdsiz         0x7b03a000
   maxdsiz_64bit   0x4000000000
   maxssiz         0x10000000
   maxssiz_64bit   0x40000000
   maxtsiz         1073741824
   maxtsiz_64bit   4294967296
   vps_pagesize    4
   vps_ceiling     64
   dbc_min_pct     3
   dbc_max_pct     3
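For reference, the hexadecimal limits above are byte counts and can be decoded with ordinary shell arithmetic (an illustration, not part of the tuning):

```shell
# Decode two of the hex kernel limits above into bytes and MiB.
for p in 0x7b03a000 0x4000000000; do
  bytes=$(( $p ))
  echo "$p = $bytes bytes ($(( bytes / 1024 / 1024 )) MiB)"
done
# maxdsiz        0x7b03a000   -> 2063835136 bytes   (~1968 MiB)
# maxdsiz_64bit  0x4000000000 -> 274877906944 bytes (262144 MiB = 256 GiB)
```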
 Peak User Environment:
   submit = /home/f90pack/clust_mpirun  $command

   mpirun -ITAPI  -f appfile
    -h  rx17  -np 2 -e MPI_FLAGS=y -e MPI_WORKDIR=$cwd  $command 
    -h  rx40  -np 2 -e MPI_FLAGS=y -e MPI_WORKDIR=$cwd  $command

 LSF was used to initiate batch job submissions.
 The appfile is generated from within the LSF run.
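A hypothetical sketch of that generation step, assuming LSF's standard $LSB_HOSTS host list and the two-ranks-per-node appfile lines shown above (the tester's actual script is not published):

```shell
# Sketch: build an HP MPI appfile from an LSF host list.
# LSB_HOSTS is normally set by LSF for the running job; it is
# stubbed here so the example is self-contained.
LSB_HOSTS="rx17 rx40"
rm -f appfile
for host in $LSB_HOSTS; do
  # one line per node, two MPI ranks each, matching the excerpt above
  echo "-h $host -np 2 -e MPI_FLAGS=y -e MPI_WORKDIR=$PWD \$command" >> appfile
done
cat appfile
```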

 NetCDF source obtained from
 NetCDF built for HP-UX 64-bit mode with:
    setenv CC '/opt/ansic/bin/cc +DD64'
    setenv FC '/opt/fortran90/bin/f90 +DD64'
    setenv FFLAGS -w
    setenv FLIBS -lU77
    setenv CXX '/opt/aCC/bin/aCC +DD64'
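With that environment in place, a typical netCDF 3.5.0 build would follow the library's stock configure/make sequence. This is a sketch under the assumption of the standard Unidata build system; the prefix matches the NETCDF path used in the peak flags, but the tester's exact steps are not listed:

```shell
# Sketch: stock netCDF 3.5.0 build, with the csh environment above already set.
cd netcdf-3.5.0/src          # directory layout is an assumption; adjust to the tarball
./configure --prefix=/home/clpack/netcdf-3.5.0
make                         # build the C and Fortran libraries
make test                    # run the library self-checks
make install
```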

For questions about this result, please contact the tester.
For other inquiries, please contact webmaster@spec.org

First published at SPEC.org on 03-Mar-2004

Generated on Wed Mar 3 21:48:23 2004 by SPEC HPC2002 HTML formatter v1.01