SPECvirt Client Harness User's Guide

Version 1.01



1.0 Requirements
2.0 Setting Up the Test Environment
2.1 Running the SPECvirt Installer
2.2 Setting Up SPECimap, SPECjAppServer2004, and SPECweb2005
2.3 Setting Up SPECpoll
2.3.1 SPECpoll on the VMs
2.3.2 SPECpoll on the Clients
2.4 Setting Up the SPECvirt Prime Controller
2.4.1 Editing the Control.config File
2.4.2 Editing the Testbed.config File
3.0 Setting Benchmark Load Levels
3.1 Compliant Benchmark Load Level Modifications
3.2 Non-compliant Benchmark Load Level Modifications
4.0 Manipulating Tile Ordering
5.0 Starting a Benchmark Run
5.1 Synchronizing System Clocks
5.2 Starting Up the Workload Prime Client and Client Managers
5.2.1 Starting the Prime Client Managers
5.2.2 Starting the Client Managers
5.2.3 One-Tile Illustration: Stage 1
5.3 Starting the Power and Temperature Daemons
5.4 Starting the SPECvirt Prime Controller
6.0 Report Generation
6.1 HTML Report Generation
6.2 Submission (.sub) File Generation
Appendix A - Control.config File Properties
Appendix B - Driving Two SPECvirt Tiles from One Physical Client
B.1 Overview
B.2 On the Prime Controller
B.3 On the Client That Drives Two Tiles
B.4 Practical Hints



1.0 Requirements

These instructions assume you have already set up the six VMs that support the modified SPECweb2005, SPECjAppServer2004, SPECimap, and SPECpoll workloads that make up a SPECvirt "tile". Refer to the SPECvirt BaseVM User's Guide and the workload-specific documentation included with the version of these workloads provided in this benchmark kit. The following instructions are intended to assist the user in setting up the SPECvirt prime controller that executes these workloads as subcomponents of its virtualization-based server consolidation benchmark.

These instructions also assume you have read the SPECvirt Design Document and are familiar with concepts and terminology introduced there that are used in these instructions.

2.0 Setting Up the Test Environment

Although these instructions are specific to client-side setup, they also reference server-side components. Therefore, the following figure representing all components of the testbed environment may be a useful visual reference as you work through these instructions. Note that this figure represents a single-tile test configuration. For each additional tile, an additional client box and VM box would be required (and possibly additional external storage, if applicable).

Figure 0
Single-tile, full testbed representation.

2.1 Running the SPECvirt Installer

The SPECvirt installer copies the required benchmark files into the directory of your choosing. For the purposes of this guide, we assume you have chosen to install the SPECvirt benchmark in /opt. Based on this assumption, the following is the expected harness-specific directory structure after running the installer:
opt
- SPECimap
- SPECjAppServer2004
- SPECpoll
- SPECptd
- SPECvirt
- SPECweb2005

2.2 Setting Up SPECimap, SPECjAppServer2004, and SPECweb2005

The setup documentation for the SPECimap, SPECjAppServer2004, and SPECweb2005 workloads is located in these workloads' respective directories. Those instructions are not duplicated here. Note that the versions of these workloads provided in the SPECvirt benchmark do not run as independent benchmarks; they must be run within the SPECvirt harness. Therefore, while using these workloads' documentation to set them up, avoid trying to run these workloads on their own. The way to run an individual workload is within the SPECvirt benchmark harness, as briefly described in the first paragraph of Section 3.2.

2.3 Setting Up SPECpoll

SPECpoll is a "workload" specific to the SPECvirt harness. It serves as a simple polling driver/responder, merely validating that the VM target is running.

A SPECvirt benchmark run has two measurement phases: a "loaded" phase and an "unloaded" phase. During the loaded phase, SPECpoll is used to poll the idle server VM while the other three workloads are creating request-generated load against their corresponding VMs. During the no-load, or "active idle" phase, SPECpoll is used to poll all VMs. (Please refer to the SPECpower Methodology for more information about "active idle" power measurement.)

There are two different sets of setup instructions: those for the VMs, and those for the clients.

2.3.1 SPECpoll on the VMs

The key difference between this workload and the others is the need for a polling "receiver" on the host VMs to respond to these poll requests. If you used a benchmark VM setup script or a set of example binary images created for this benchmark to create your VMs, these processes may already have been installed on your VMs and be up and running using the RMI port specified in the harness's Test.config file. However, if you set up your VMs manually, or want to reconfigure, here are the instructions for setting up the "listener" on each VM:
  1. Copy the SPECpoll directory to each of the VMs in the desired location (you can actually just copy pollme.jar if you prefer).
  2. Determine the port you wish to listen on. This must be consistent with the RMI_PORT value specified in Test.config on the SPECpoll client that communicates with the VM. (We assume port 8001 in our example.)
  3. Determine the network interface on which this process will listen. For all except the dbserver and infraserver VMs, this is the network interface used for communication between the client and server VM. For the database server VM, it is the interface used for communication between the application server and the database server. For the infraserver VM, it is the interface used for communication between the web server and the BeSim server.
  4. From the SPECpoll directory, execute "java -jar pollme.jar [-n <hostname/interface>] -p 8001"
Note: by default, the pollme class binds to the network interface that corresponds to the name by which the VM knows itself. If, for example, the infraserver VM's name resolves to the external network interface when the process must be bound to the internal network interface, then you need to specify a hostname that resolves to the internal network interface when you invoke pollme, to prevent it from binding to the external one. So if, for example, the host name "infraserver-int" in the hosts file resolves to the internal network interface, you can direct pollme to the correct interface by invoking it on the infraserver VM with the following parameters: "java -jar pollme.jar -n infraserver-int -p 8001"
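
For illustration, a minimal sketch of a hosts file fragment on the infraserver VM that supports this; both IP addresses are placeholders for your own addressing scheme:

# /etc/hosts on the infraserver VM (addresses are examples only)
192.168.1.20   infraserver        # resolves to the external interface (default binding)
10.0.0.20      infraserver-int    # resolves to the internal interface; use with "-n infraserver-int"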

2.3.2 SPECpoll on the Clients

SPECpoll on the clients uses only specpoll.jar and specpollclient.jar -- pollme.jar is not invoked or used. The prime client and client classes contained in specpoll.jar and specpollclient.jar, respectively, are invoked in the same way as other workload prime and client classes, and require no specific configuration other than possibly editing their Test.config file. Modify the following Test.config properties as needed (see the example after this list):
  1. IDLE_SERVER: This is the hostname of the idle server VM (value used only for idle server polling; ignored for active-idle polling)
  2. RMI_PORT: This is the port on which the pollme class on the VMs is listening. This must be the same port for all VMs (in our example, 8001)
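
For example, a minimal Test.config sketch for the SPECpoll client; the idleserver hostname is an assumption, so use the name by which your client resolves the idle server VM:

IDLE_SERVER = idleserver
RMI_PORT = 8001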

2.4 Setting Up the SPECvirt Prime Controller

Install the SPECvirt harness code on all client systems that run any of the benchmark workloads (including SPECpoll, used for polling the idle server). This is necessary because the SPECvirt clientmgr class starts these benchmark workloads at the beginning of the run, and it assumes the path to each workload is local. For our purposes, we assume that the prime clients and all benchmark workloads of one tile will run on the same client. This is a straightforward way to distribute the client processes to your hardware, but is not required. Appropriate modification of the Control.config file allows several different client systems to drive the benchmark workloads for the same tile. While we strongly recommend running the prime controller on a separate physical system from those hosting the workloads, it can also run on any of the systems hosting clientmgr processes.

2.4.1 Editing the Control.config File

Because editing the Control.config file requires adding information about your benchmark workload configurations, make sure you have installed the benchmark and set up each of the workloads before proceeding (see the SPECvirt Base VM User's Guide or workload-specific setup instructions for more information). Please refer to Appendix A for a list of keys in this file and their descriptions. Note that all subsequent references in this document to words in all capital letters refer to properties in the Control.config file.

2.4.2 Editing the Testbed.config File

The Testbed.config file in the SPECvirt home directory is where the testbed-specific configuration information must be entered describing the hardware and software used, tunings applied, and any other details required to reproduce the test environment. Note that any Testbed.config files that are a part of any of the individual workloads are not used by this benchmark (although the type of information entered is very similar).

3.0 Setting Benchmark Load Levels

With workload VMs set up according to the instructions in the SPECvirt BaseVM User's Guide, the following are the key properties used in setting benchmark load levels. Note that this section deals with benchmark load level modifications. The load levels of the individual workloads are fixed and controlled by the WORKLOAD_LOAD_LEVEL[] values. While you can modify these values to run non-compliant tests, running at modified workload load levels may require changes to the database, web server fileset, and mailstore sizes. These requirements for workload-specific load level modifications are beyond the scope of this guide and should be researched in the workload-specific benchmark documentation.

3.1 Compliant Benchmark Load Level Modifications

Benchmark load is always increased in units of tiles. This is controlled by the property NUM_TILES in Control.config. Of course, adding three workloads' worth of load when you want to increase the load on a server may not provide the level of load granularity desired. This is where the indexed version of the LOAD_SCALE_FACTORS[] property can help. For a single tile (specified by the index value) you can set the load for that tile to between 10% and 90% of full load, in 10% increments. (Doing this for more than one tile is possible, but non-compliant.)
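
For example, a minimal Control.config sketch of a compliant two-tile configuration in which Tile 1 generates half of full load (the tile index here is illustrative):

NUM_TILES = 2
# Only one tile may run at reduced load in a compliant test
LOAD_SCALE_FACTORS[1] = 0.5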

3.2 Non-compliant Benchmark Load Level Modifications

Of course there are times when running a compliant benchmark configuration is not an objective. For example, you may want to run a subset of the benchmark workloads to focus on issues specific to that one workload. Modifying NUM_WORKLOADS allows you to do that. The key point to keep in mind is that you may have to change the workload number indexes in the Control.config file. For example, if you wanted to run only the web workload, you would need to change the web workload index (and all related web-specific indexes) in Control.config from "1" to "0", and then set "NUM_WORKLOADS = 1", as sketched below.
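
A sketch of the re-indexing for such a non-compliant, web-only run on Tile 0; the hostname, port, and paths are carried over from examples elsewhere in this guide and must match your own configuration:

NUM_WORKLOADS = 1
# Web workload properties move from index 1 to index 0
PRIME_HOST[0][0] = "client01:1096"
PRIME_PATH[0] = "/opt/SPECweb2005"
CLIENT_PATH[0] = "/opt/SPECweb2005"

Adjust the remaining web-specific indexed properties (PRIME_APP, CLIENT_APP, configuration directories, and so on) in the same way.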

If you want to run all workloads on all tiles at a higher or lower load level than the default, changing the value of the property LOAD_SCALE_FACTORS allows you to do this. (Of course, for higher load levels, you need to assure that the corresponding workload VM datasets are built to support the higher load levels.) Note that the comma-delimited string of numbers in the non-indexed LOAD_SCALE_FACTORS property determines the number of measurement intervals to run in a single test. Also note that the ",0" at the end of the LOAD_SCALE_FACTORS string only applies to power measurement runs. In the case of power-included runs, wherever a "0" value is included in the LOAD_SCALE_FACTORS string, during that interval the prime controller runs an active idle measurement. However, all "0" values are ignored for non-power tests. The only reason to remove the active idle-specific load reference in this string is if you wanted to skip an active idle measurement in a benchmark run that includes power measurement.
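
As an example, the following sketch of the non-indexed property defines a power-included test with one full-load measurement interval, one half-load interval, and one active idle interval; whether a given interval sequence is compliant is governed by the run rules, not this guide:

LOAD_SCALE_FACTORS = "1,0.5,0"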

4.0 Manipulating Tile Ordering

Note: This is an advanced benchmark configuration technique that can safely be ignored when setting up your first, single-tile test. However, as these instructions hopefully make clear, this capability may come in handy as you start adding tiles.

By default, tile index numbers must start at 0 and increase by one for each added tile. And because the datasets for each tile are tile number-specific, using this default methodology requires that you first set up and run Tile 0, and then set up and add Tile 1, etc. Further, by default each tile can only be run along with all of the lower numbered tiles. That is, you cannot run Tile 1 by itself because the default ordering scheme expects the first tile to be Tile 0.

This is where the TILE_ORDINAL property may come in handy. Using the TILE_ORDINAL property supersedes the default ordering scheme. However, if tile ordinals are used, then they must be specified for all tiles used in a benchmark run. For example, if you use TILE_ORDINAL for a four-tile run, the harness expects TILE_ORDINAL[0] through TILE_ORDINAL[3] to be defined in Control.config. (It will ignore any values for indexes greater than 3.)

The simplest and perhaps most common case for using TILE_ORDINAL is when you have just set up your second tile and want to test only that tile in a benchmark run. In that case, you set "TILE_ORDINAL[0] = 1" and then make sure all other tile index references in Control.config for Tile 1 are consistent with that tile (e.g. assure the PRIME_HOST[1][w] values point to the hostnames and ports for Tile 1, etc). When the prime controller begins benchmark execution, it will then see that you want Tile 1 to be your first tile, and will execute accordingly.

With TILE_ORDINALs, the only expectation is that the TILE_ORDINAL indexes start at 0 and increase by one for each additional tile. The values used for the tile numbers and their ordering are not bound by such constraints. For example, assuming you had four tiles set up and wanted to run two of them at a time, in addition to running:

TILE_ORDINAL[0] = 2
TILE_ORDINAL[1] = 3

you could run:

TILE_ORDINAL[0] = 1
TILE_ORDINAL[1] = 3

or even:

TILE_ORDINAL[0] = 3
TILE_ORDINAL[1] = 0

Thus the TILE_ORDINAL property allows running any tile in any order in a benchmark run, provided the corresponding tile indexes for the other properties in Control.config are consistent. For example, in a four-tile run using the TILE_ORDINAL property, LOAD_SCALE_FACTORS[3] no longer refers simply to the fourth tile in a run; it now refers specifically to Tile 3. So if Tile 3 were not included as one of the values in the TILE_ORDINAL list, the harness would skip this tile-specific load scaling and would instead run all tiles at the default LOAD_SCALE_FACTORS rate.

5.0 Starting a Benchmark Run

5.1 Synchronizing System Clocks

At the start of every benchmark run, the specvirt prime controller will perform a two-phase time synchronization check between the clients, prime clients, and VMs in the first phase, and between the prime clients and the prime controller in the second phase. Therefore, we recommend synchronizing system clocks between all of these components before the start of each benchmark run.

If this synchronization is performed via NTP, then you must assure that time synchronization does not occur in the middle of a benchmark run, as time shifts during a run can compromise response time measurements on the clients as well as compromise the jappserver workload's ability to accurately perform post-run database checks.
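
One way to satisfy this on Linux-based components is a one-shot synchronization before the run, leaving the NTP daemon stopped for the run's duration. A minimal sketch, assuming the ntpd service and the ntpdate utility are available and "timehost" is a placeholder for your local NTP server:

service ntpd stop           # prevent mid-run time adjustments
ntpdate timehost            # one-shot clock synchronization before the run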

5.2 Starting Up the Workload Prime Client and Client Managers

For each workload instance in every tile, you must start a dedicated client manager process that will start that workload's prime client at the beginning of a run. Similarly, you need to start one client manager process for each physical client used by these prime clients. If multiple prime clients use the same physical client, however, only a single client manager process is required to drive the clients used by all of those prime clients.

5.2.1 Starting the Prime Client Managers

For each PRIME_HOST in Control.config, start a client manager process on the specified host and port.

For example, for PRIME_HOST[0][0] = "myhostname:1098", from the SPECvirt directory on myhostname, start a client manager process as follows:

java -jar clientmgr.jar -p 1098 -log

Repeat this for each PRIME_HOST entry in Control.config. For a compliant benchmark, you will start four of these processes for each tile (one each for jappserver, specweb, specimap, and specpoll).

5.2.2 Starting the Client Managers

On each physical client used by any of the prime clients, start a client manager process on the specified port. To do so, for each unique host in WORKLOAD_CLIENTS, start a client manager process on the port specified by CLIENT_LISTENER_PORT.

For example, for WORKLOAD_CLIENTS[0] = "myhostname:1091" and CLIENT_LISTENER_PORT = "1088", from the SPECvirt directory on myhostname, start a client manager process as follows:

java -jar clientmgr.jar -p 1088 -log

Repeat this for each unique client host. Note that you do not use the port specified in WORKLOAD_CLIENTS -- that port is for the workload client to use to listen for RMI commands from its prime client. The CLIENT_LISTENER_PORT (1088) is used for communication between the SPECvirt prime controller and the client manager process.

5.2.3 One-Tile Illustration: Stage 1

Using the above instructions to set up the harness to run the load-generating processes for a single tile on one client requires starting four clientmgr processes (one for each of the four prime clients: SPECjApp, SPECweb, SPECimap, and SPECpoll) and a single clientmgr process that starts all of the client processes for these four workloads. The following picture is a representation of these five processes on a single client, listening for commands on their respective RMI ports. This is the first stage of the startup process.

Figure 1
Stage 1: The clientmgr processes are started.
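
Expressed as commands, this first stage might look like the following sketch; the prime client ports are taken from the examples in this guide, CLIENT_LISTENER_PORT is assumed to be 1088, and all port values must match your Control.config. Backgrounding each process with "&" is one option; separate consoles work equally well:

cd /opt/SPECvirt
java -jar clientmgr.jar -p 1098 -log &    # prime client manager: SPECjAppServer2004
java -jar clientmgr.jar -p 1096 -log &    # prime client manager: SPECweb2005
java -jar clientmgr.jar -p 1094 -log &    # prime client manager: SPECimap
java -jar clientmgr.jar -p 1092 -log &    # prime client manager: SPECpoll
java -jar clientmgr.jar -p 1088 -log &    # client manager for all workload clients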

5.3 Starting the Power and Temperature Daemons

The following instructions apply only to users who intend to collect power and temperature measurements during a benchmark run, and they assume you already have power and temperature meters properly connected to the SUT and/or external storage. If you do not have existing power meters but wish to configure and test the use of power and/or temperature daemons during a benchmark run, you can configure the daemons to run in "dummy mode" (though obviously this results in a non-compliant benchmark test).

The following example also assumes the daemons are being started on a "unix-like" prime controller and that communication between the prime controller and the daemons occurs via the controller's serial ports. However, these daemons need not be local to the controller, and there are Windows executable files available for daemons connected to Windows systems.

Within the installation directory containing the SPECvirt and workload directories (/opt by default for a Unix/Linux environment) is a "SPECptd" directory that contains the ptd executable and script/batch files for starting the power and temperature daemons. The format for starting the (Linux) ptd is:

./ptd-linux-x86 [options] <device-type-#> <device-port>

From the /opt/SPECptd directory, running "./ptd-linux-x86" displays the invocation options for this executable. For communicating with a supported power meter, you can find the number that corresponds to your meter in this output. ("0" starts the ptd in dummy mode.) Of the parameter options listed in the output, the "-t" option (which runs the ptd in temperature mode) and the "-p port" option are the most commonly used. Since the ptd by default tries to use port 8888, you must use the "-p port" option to override this value if that port is already in use by another ptd or other process.

As an example:

./ptd-linux-x86 -p 8890 8 /dev/ttyS0

starts a ptd daemon in power mode using port 8890 and communicates with a Yokogawa WT210 power meter connected to /dev/ttyS0 (COM1) of a (Linux-based) prime controller. Alternatively:

./ptd-linux-x86 -t -p 8890 1000 /dev/ttyS0

runs the ptd in "temperature mode" with the ptd returning "dummy" temperature data.

Once the PTD executables are able to communicate with the power meters correctly when started, the next step is to tell the prime controller about these PTD settings in Control.config. First set USE_PTDS to "1". Once set, the controller uses all PTDs defined via PTD_HOST[x] entries. PTD_HOST is the hostname of the system running the PTD. For this example, since the three PTDs are running on the prime controller, we can simply set:

PTD_HOST[0] = localhost
PTD_HOST[1] = localhost
PTD_HOST[2] = localhost

Next, tell the prime controller what port each of the PTDs is listening on. These values must match the ports specified when invoking the PTDs:

PTD_PORT[0] = 8888
PTD_PORT[1] = 8889
PTD_PORT[2] = 8890

Lastly, tell the prime controller what the specified PTD is measuring: server power (SUT), external storage power (EXT_STOR), or, in the case of ambient temperature measurement, which component the temperature sensor is near:

PTD_TARGET[0] = "SUT"
PTD_TARGET[1] = "EXT_STOR"
PTD_TARGET[2] = "SUT"

For the temperature daemon the PTD_TARGET is either SUT or EXT_STOR, depending on where ambient temperature is being measured. SAMPLE_RATE_OVERRIDE and OVERRIDE_RATE_MS should generally not be modified. LOCAL_HOSTNAME and LOCAL_PORT specify the local network interface and port to use to connect with the PTD_HOST. (In most cases, specifying these two properties is unnecessary, and they can be left commented out.) The following picture shows the addition of three power/temperature daemons, listening for commands on their respective RMI ports.

Figure 2
Stage 2: The power and temperature daemons are started.

The last step in configuring the PTDs is to link a specific PTD with a specific power meter description. This is done through the PWR.PTD_INDEX[] and TMP.PTD_INDEX[] properties in Testbed.config. For each power or temperature meter listed in Testbed.config there must be a PTD_INDEX value that corresponds to one of the PTD_HOST indexes in Control.config.
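
Continuing the three-daemon example above, a sketch of the corresponding Testbed.config entries (the surrounding meter-description properties are omitted here):

PWR.PTD_INDEX[0] = 0
PWR.PTD_INDEX[1] = 1
TMP.PTD_INDEX[0] = 2

Here the first power meter description maps to the SUT power daemon on port 8888, the second to the external storage power daemon on port 8889, and the temperature meter description to the temperature daemon on port 8890.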

5.4 Starting the SPECvirt Prime Controller

Now that all of the client manager listeners are up and any power and temperature daemons are ready to poll their respective meters, you are ready to start the prime controller to begin a benchmark test. To do so, open a console window on your prime controller system and from the SPECvirt directory, run:

java -jar specvirt.jar -l

Figure 3
Stage 3: The specvirt prime controller is started.

The SPECvirt prime controller (specvirt) next tells the client managers that host the workload clients to start the workload clients:

Figure 4
Stage 4: The specvirt prime controller starts workload clients.

The prime controller then waits PRIME_START_DELAY seconds and then tells the client managers hosting the workload prime clients to start their prime clients.

Figure 5
Stage 5: The specvirt prime controller starts workload prime clients.

With all of the PTD, workload client, and prime client processes running, the benchmark has started and the remainder of the benchmark run involves communication between these processes and the SPECvirt prime controller (specvirt) as illustrated in the following figure.

Figure 6
Stage 6: Client-side benchmark runtime communication

If you have multiple values in LOAD_SCALE_FACTORS, the prime controller iterates through that number of load points as independent workload runs within a single SPECvirt benchmark run. That is, results from all iterations are reported in a single SPECvirt benchmark raw file. At the end of each iteration, after all prime clients have reported that their runs have ended, the prime controller cleans everything up and terminates the run. If the client and prime client processes have exited correctly, you should see three extra line-feeds after the "Done killing procs ..." message on each client manager console. These extra lines are added as demarcation points between run intervals, but also provide a quick way of determining whether the client manager process cleaned up everything correctly. If you don't see these line feeds, stop and restart the client manager process before attempting another benchmark run.

Between load point intervals, the prime controller waits QUIESCE_SECONDS and then starts the next load interval. Polling data as well as the performance/QOS-related data from each run interval is included in the specvirt raw file (specvirt-*.raw) in the SPECvirt results subdirectory.

Once the benchmark has ended, to start another run on the SPECvirt prime controller, invoke:

java -jar specvirt.jar -l

The client manager processes on the benchmark clients remain running, as do any ptd processes, so you only need to restart the SPECvirt prime controller. Note that while the number of clientmgr processes increases with increasing numbers of tiles (i.e. increasing load), there is always just one prime controller and one set of power/temperature daemons, regardless of the number of tiles.

6.0 Report Generation

6.1 HTML Report Generation

Once all run intervals have successfully completed, the benchmark reporter will generate the formatted reports, based on the RESULT_TYPE value provided in Control.config. RESULT_TYPE controls not just which formatted results will be generated, but in the case of a benchmark submission to SPEC, this parameter also determines the result category (or categories) into which the submitter intends to submit this result. The following table lists the valid RESULT_TYPE values and corresponding types of results generated and/or submitted:

RESULT_TYPE   perf   ppw   ppws
     1          x
     2                 x
     3          x      x
     4                        x
     5          x             x
     6                 x      x
     7          x      x      x
perf:  generate a non-power report (with SPECvirt_sc2010 metric)
ppw:  generate a SUT power-performance report (with SPECvirt_sc2010_PPW metric)
ppws:  generate a server-only (primary metric includes server power only) power-performance report (with SPECvirt_sc2010_ServerPPW metric)

If the report requires editing, modify the properties in the raw file rather than in Control.config or Testbed.config. Within the raw file, however, RESULT_TYPE is the only editable property from Control.config. Other than this, only the properties contained in Testbed.config may be edited in a raw file.

Once edited, to regenerate the formatted results using the edited raw file, invoke the reporter by passing it the name of the raw file as a parameter. For example:

java -jar reporter.jar -r <raw_file_name>

For a complete list of reporter invocation options, pass the reporter the "-h" argument.

6.2 Submission (.sub) File Generation

If you want to submit a benchmark result to SPEC for review and publication and have not already edited and regenerated the raw file manually, you will need to run the raw file created by the specvirt harness through the reporter with the same syntax as used for formatted HTML file regeneration:

java -jar reporter.jar -r <raw_file_name> [-t [1-7]]

If you wish to change the type of formatted result files generated without changing the RESULT_TYPE property in the raw file, override the value in the raw file by passing the "-t" parameter with the corresponding result type to the reporter. Otherwise, you can omit this parameter from the invocation string.
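
For example, to regenerate both the perf and ppw reports (RESULT_TYPE 3 in the table above) from a hypothetical raw file name:

java -jar reporter.jar -r specvirt-20100101-0001.raw -t 3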

If you have a submission file and want to recreate the raw file from which it was generated, you can invoke:

java -jar reporter.jar -s <sub_file_name>

and it will strip out the extra characters from the submission file so that you can view or work with the original raw file. This is the recommended method for editing a file post-submission because it assures you are not working with an outdated version of the corresponding raw file and potentially introducing previously corrected errors into the "corrected" submission file.

Appendix A - Control.config File Properties

For any of the keys below that use indexes, "w" always represents the workload index and "t" always represents the tile ID. The following describes the configuration properties in this file.

CONFIGURABLE BENCHMARK PROPERTIES
KEY DESCRIPTION
NUM_TILES NUM_TILES is the primary property used to increase or decrease the load on the SUT.
SPECVIRT_HOST
SPECVIRT_RMI_PORT
These are the hostname and port on which the SPECvirt prime controller listens for RMI commands. Because the prime clients use this information to contact the prime controller, the hostname used must resolve to the same IP address on both the SPECvirt prime controller and each of the prime clients.
RMI_TIMEOUT This is the number of seconds SPECvirt waits for the prime clients to start their RMI servers before aborting the benchmark run. If your benchmark run is failing because the prime clients need more time for their initial setup, you can increase this value. However, it is unlikely that this value will be too small, so if you get a timeout, first look at the log files or console output on the prime clients and see if something else caused the clients to fail to start correctly.
TILE_ORDINAL[x] Use TILE_ORDINAL to control which sets of PRIME_HOST clients to use for the run. The value specified corresponds to the "tile" number index specified in the PRIME_HOST key (i.e. PRIME_HOST[tile][workload]). If commented out, then the benchmark starts with PRIME_HOST[0][workload] and increments the PRIME_HOST tile index until it reaches NUM_TILES. If used, you must specify the TILE_ORDINAL index and value for *all* tiles (starting with 0).
PRIME_HOST[t][w] This specifies the hostname and port number for each prime client (or workload controller). The indexes used specify the tile and workload index, respectively, and therefore must be unique. If there are multiple prime clients on a single host, then each must listen on a different port number. There is one PRIME_HOST per workload and "NUM_WORKLOADS" PRIME_HOSTs per TILE. The format is PRIME_HOST[tile][workload] = "<host>:<port>". Values for keys with indexes greater than NUM_TILES - 1 and NUM_WORKLOADS - 1, respectively, are ignored.
SPECVIRT_INIT_SCRIPT
SPECVIRT_EXIT_SCRIPT
The values for SPECVIRT_INIT_SCRIPT and SPECVIRT_EXIT_SCRIPT are the full name and path of any single script you wish to run on the prime controller before or after a benchmark run, respectively. Specifying only the script without the full path is acceptable if the script exists in the current path of the SPECvirt controller.
PRIME_HOST_INIT_SCRIPT[w]
(or PRIME_HOST_INIT_SCRIPT[t][w])
PRIME_HOST_EXIT_SCRIPT[w]
(or PRIME_HOST_EXIT_SCRIPT[t][w])
PRIME_HOST_INIT_SCRIPT and PRIME_HOST_EXIT_SCRIPT are used to run scripts on the prime client systems before or after a benchmark run, respectively. If you include a path with the script name, it must be the full path. Specifying a file name only assumes the file exists in the current working directory of the prime client (typically the location of clientmgr.jar). If you need to run tile-specific initialization or exit scripts, use the double-indexed form of this property.
PRIME_HOST_RMI_PORT[w]
(or PRIME_HOST_RMI_PORT[t][w])
The PRIME_HOST_RMI_PORT is the port on which each prime client is listening for commands from the SPECvirt prime controller. Note that if you have more than one prime client on the same system, you MUST use different port numbers for each. Also, if you run more than one of the same type of workload on the same client, then you must use the double-index ([t][w]) form of this key so that you can set unique port numbers for the identical workloads on different tiles.
PRIME_PATH[w]
(or PRIME_PATH[t][w])
PRIME_PATH is the full path to the prime client. SPECvirt uses this path in order to start the workload's prime client. If you are running multiple prime clients of the same workload type (for different tiles), then you will likely want to use the double-index ([t][w]) form of this key so that you can specify different workload paths for each of the workloads. If running only one tile per client or less, the single-index form is sufficient.
POLL_PRIME_PATH POLL_PRIME_PATH is the path to specpoll.jar that the harness uses during the active idle polling interval. Note that this is used only for the active idle polling interval; because the harness does not use the IDLE_SERVER value during this interval, you do not need a unique Test.config file for each instance, negating the need for a unique path for each instance.
CLIENT_PATH[w]
(or CLIENT_PATH[t][w])
CLIENT_PATH is the full path to the client for a given workload. SPECvirt uses this path in order to start the workload's client. If you are running multiple clients of the same workload type (for different tiles), then you will likely want to use the double-index ([t][w]) form of this key so that you can specify different paths for each of the workloads. If running only one tile per client (or less), the single-index form is sufficient.
POLL_CLIENT_PATH POLL_CLIENT_PATH is the path to specpollclient.jar that the harness uses during the active idle polling interval. Note that this is used only for the active idle polling interval and not for the idle server polling.
FILE_SEPARATOR Use FILE_SEPARATOR if you want to override the use of the prime client OS's file separator. (This may be required when using a product like Cygwin on Windows.)
PRIME_APP[w]
(or PRIME_APP[t][w])
PRIME_APP is the workload prime client process that the client manager process starts for each benchmark workload, with indexes corresponding to the different workloads being run. The double-index form of this key should only be required if there are tile-specific differences between the values used.
POLL_PRIME_APP POLL_PRIME_APP is the invocation string for the idle polling application that the harness uses during the active idle polling interval. Note that this key is not used for idle server polling during a loaded run.
CLIENT_APP[w]
(or CLIENT_APP[t][w])
CLIENT_APP is the name of the client (workload driver) that the clientmgr process starts and that the workload prime client controls. Any arguments that you pass to the client application must follow the name. The double-index form of this key is only required if there are tile-specific differences between the values used.
POLL_CLIENT_APP POLL_CLIENT_APP is the invocation string for the idle polling client application that is used during the idle polling interval. Note that this key is not used for idle server polling during a loaded run.
PRIME_START_DELAY PRIME_START_DELAY is the number of seconds to wait after starting the clients before starting the prime clients. Increase this value if you find that prime clients fail to start because the clients have not finished preparing to listen for prime client commands before these commands are sent.
WORKLOAD_START_DELAY[w]
(or WORKLOAD_START_DELAY[t][w])
WORKLOAD_START_DELAY staggers the time at which clients begin to ramp up their client load by delaying client thread ramp-up by the specified number of seconds. Seconds specified is total time from the beginning of the client ramp-up phase. Therefore, if you have delays of 1, 5, and 3, respectively, for three different clients, the order of the start of workload client ramp-up is first, third, and then second.
RAMP_SECONDS[w]
(or RAMP_SECONDS[t][w])
WARMUP_SECONDS[w]
(or WARMUP_SECONDS[t][w])
RAMP_SECONDS and WARMUP_SECONDS supersede any values used in the workload-specific configuration files for ramp-up and warm-up time. (For example, RAMP_SECONDS overrides "triggerTime" in SPECjAppServer2004.) These values need not be identical between workloads or even between tiles, as the SPECvirt harness extends the runtime of any workloads, as needed, to assure the required common polling interval. However, the minimum compliant RAMP_SECONDS value is 180 and the minimum WARMUP_SECONDS value is 300 for all tiles and all workloads.
POLL_INTERVAL_SEC POLL_INTERVAL_SEC is the number of seconds that data is collected once polling starts. This represents the "common" benchmark runtime interval when all workloads are in their runtime measurement phase. The minimum compliant value is 7200.
ECHO_POLL ECHO_POLL controls whether client polling values are mirrored on the prime clients. If set to 0, this polling data is only displayed on the SPECvirt prime controller terminal.
DEBUG_LEVEL DEBUG_LEVEL controls the amount of debug information displayed during a benchmark run by the prime controller.
WORKLOAD_CLIENTS[w]
(or WORKLOAD_CLIENTS[t][w])
The WORKLOAD_CLIENTS values are the client hostnames (or IP addresses) and ports used by the workload clients. The hostname or IP address is specified relative to the workload prime client, and not the SPECvirt controller. For example, specifying 127.0.0.1 (or "localhost") tells the workload prime client to run this client on its host OS's loopback interface, rather than locally on the SPECvirt controller. If, for example, you use the hostname "client1" for all of your clients, and the corresponding prime client resolves this name to a unique IP address on each prime client used, then these keys can be of the form WORKLOAD_CLIENTS[w]. Otherwise, like the PRIME_HOST keys, these need to be of the form WORKLOAD_CLIENTS[t][w].
CLIENT_LISTENER_PORT CLIENT_LISTENER_PORT is the port used by the clientmgr listener on each physical client system (driver) to start the client processes for each workload on that physical client.
POLLING_RMI_PORT POLLING_RMI_PORT is the port used to communicate with the pollme processes running on the benchmark VMs. Pass this value to the pollme listeners when starting them on all VMs.
PRIME_CONFIG_FILE[w]
(or PRIME_CONFIG_FILE[t][w])
PRIME_CONFIG_FILE is the list of any files to copy from the corresponding LOCAL_CONFIG_DIR directory on the SPECvirt prime controller to the PRIME_CONFIG_DIR directory on the corresponding PRIME_HOST. Leave these as empty strings if you do not want to overwrite the workload configuration files on each prime client.
LOCAL_CONFIG_DIR[w]
(or LOCAL_CONFIG_DIR[t][w])
PRIME_CONFIG_DIR[w]
(or PRIME_CONFIG_DIR[t][w])
LOCAL_CONFIG_DIR is the source location on the SPECvirt prime controller for the configuration files to copy to the workload prime clients. PRIME_CONFIG_DIR is the target location on the workload prime client for the config files copied from the source location.
POLL_CONFIG_FILE
POLL_LOCAL_CFG_DIR
POLL_PRIME_CFG_DIR
These are the keys corresponding to PRIME_CONFIG_FILE, LOCAL_CONFIG_DIR, and PRIME_CONFIG_DIR, respectively, for the active idle polling interval.
USE_RESULT_SUBDIRS Setting USE_RESULT_SUBDIRS to 1 puts each set of result files in a different results subdirectory with a unique timestamp-based name. Setting to 0 avoids creating a unique subdirectory, and any earlier results in the parent "results" directory are overwritten by newer test results. Setting USE_RESULT_SUBDIRS to 0 is only recommended for use with Faban. (Conversely, setting USE_RESULT_SUBDIRS to 1 is not recommended when using Faban.)
USE_PTDS USE_PTDS controls whether the power/temp daemons (PTDs) are used during the benchmark. Set to 0 to run without taking power or temperature measurements.
PTD_HOST[x] PTD_HOST is the hostname of the system running the PTD. For more than one PTD, copy, paste, and increment the index (x) for each PTD.
PTD_PORT[x] PTD_PORT is the corresponding port the PTD is listening on.
PTD_TARGET[x] PTD_TARGET is the type of component the power/temp meter is monitoring. ("SUT" identifies meter as monitoring a main system/server; "EXT_STOR" identifies meter as monitoring any external storage used.)
SAMPLE_RATE_OVERRIDE[x]
OVERRIDE_RATE_MS[x]
Setting SAMPLE_RATE_OVERRIDE for any PTD allows you to override the default sample rate for the power or temperature meter. This is not recommended in most cases. However, if overridden, OVERRIDE_RATE_MS is the sample rate (in milliseconds) used instead of the meter's default.
LOCAL_HOSTNAME[x]
LOCAL_PORT[x]
LOCAL_HOSTNAME and LOCAL_PORT are used to specify the local network interface and port to use to connect with the PTD_HOST. In most cases, you do not need to specify these values. Leave them commented out unless needed.
LOAD_SCALE_FACTORS[t] This is the tile-specific format of the fixed property LOAD_SCALE_FACTORS. Tile "t" runs at the specified load scaling factor. Compliant values are between 0.1 and 0.9 in increments of 0.1. This property allows for one tile to run at reduced load. Defining more than one tile to run at a reduced load, or any tile to run at greater-than-full load (i.e. a LOAD_SCALE_FACTORS value > 1.0), results in a non-compliant run.
RESULT_TYPE Use RESULT_TYPE to control the type of result submissions and/or formatted reports you would like to create. The following table lists the possible values and which combinations of reports are generated for each value:
value   perf   ppw   ppws
  1       x
  2              x
  3       x      x
  4                     x
  5       x             x
  6              x      x
  7       x      x      x
perf:  generate a non-power report (with SPECvirt_sc2010 metric)
ppw:  generate a SUT power-performance report (with SPECvirt_sc2010_PPW metric)
ppws:  generate a server-only (primary metric includes server power only) power-performance report (with SPECvirt_sc2010_ServerPPW metric)
IGNORE_CLOCK_SKEW
CLOCK_SKEW_ALLOWED
Setting IGNORE_CLOCK_SKEW to "1" causes the prime controller to skip the system clock synchronization check at the beginning of a benchmark run. Setting to "0" (default) means the prime controller and the prime clients perform this check to assure all prime clients, clients, and VMs are in time sync with the prime controller. If set to "0", CLOCK_SKEW_ALLOWED is the number of seconds of clock skew the prime controller and prime clients will allow at the beginning of a benchmark run without aborting.
FIXED BENCHMARK PROPERTIES
(changing these values results in a non-compliant test)
KEY DESCRIPTION
NUM_WORKLOADS
VMS_PER_TILE
NUM_WORKLOADS defines the number of workloads per tile used to drive the SUT. VMS_PER_TILE is the number of VMs that are used in each tile. For a compliant run, NUM_WORKLOADS must be 4 and VMS_PER_TILE must be 6.
WORKLOAD_LABEL[w] These values serve as descriptive labels of each of the workloads used in the benchmark. Assuming NUM_WORKLOADS = 4, there should be four corresponding values for each of the workloads.
IDLE_RAMP_SEC
IDLE_WARMUP_SEC
IDLE_POLL_SEC
IDLE_RAMP_SEC, IDLE_WARMUP_SEC, and IDLE_POLL_SEC are the ramp, warmup, and polling/runtime values used for the active-idle measurement phase only.
POLL_MASTERS POLL_MASTERS controls whether or not to request polling data from the prime clients. If set to 0, the harness does not conduct prime client polling during the polling interval.
INTERVAL_POLL_VALUES Set this to 0 for cumulative polling data over the entire measurement interval. Set it to 1 if you want only the polling data that is added between polling intervals. Note: some workloads do not support polling-interval-based results reporting and ignore a non-zero value. Therefore, the only value that assures consistency across workloads is 0.
POLL_DELAY_SEC POLL_DELAY_SEC is the number of seconds after all prime clients have started running that the prime controller waits before starting to request polling data.
BEAT_INTERVAL BEAT_INTERVAL is the number of seconds between prime client pollings. This controls the frequency that the harness polls the prime clients for runtime data (if POLL_MASTERS is set to 1).
RESULT_FILE_NAMES[w]
POLL_RES_FILE_NAMES
RESULT_FILE_NAMES are the names of the results files created by the workload that the prime controller collects from the prime clients after a run has completed. The indexes correspond with the workload indexes. POLL_RES_FILE_NAMES is the corresponding equivalent result file collected during an active-idle run.
USE_WEIGHTED_QOS USE_WEIGHTED_QOS controls the manner of calculating QOS for the workloads. A value of 0 means to apply the same weight to all QOS-related fields used to calculate the aggregate QOS value. A value of 1 (or higher) results in a weighted QOS based on frequency being used to calculate aggregate QOS.
PTD_POLL Set PTD_POLL to 1 in order to poll the PTDs during the POLL_INTERVAL; set to 0 to avoid PTD polling.
POWER_POLL_VAL POWER_POLL_VAL selects which value to poll from any power meter used during the test (possible values: "Watts", "Volts", "Amps", "PF").
TEMP_POLL_VAL TEMP_POLL_VAL controls which value to poll from any temperature meter used during the test (options: "Temperature", "Humidity").
LOAD_SCALE_FACTORS
QUIESCE_SECONDS
LOAD_SCALE_FACTORS is the list of multipliers applied to the load levels of the individual workloads. For each value, in the order listed, the benchmark harness runs a full run at the calculated load rate, with a QUIESCE_SECONDS wait interval between points. The number of values in this list controls the number of iterations the benchmark executes.
WORKLOAD_SCORE_TMAX_VALUE[w] WORKLOAD_SCORE_TMAX_VALUE is the theoretical maximum throughput rate for each workload. Comment these values out if you do not want to normalize scores to the theoretical max. Setting the value to 0 has the effect of not using this workload's score in calculating the result.
WORKLOAD_LOAD_LEVEL[w] WORKLOAD_LOAD_LEVEL supersedes any values used in the workload-specific configuration files to control client load. For the jApp workload, txRate is overwritten with this value. For web, SIMULTANEOUS_SESSIONS is overwritten. For imap, the number of users is set to this value.

Appendix B - Driving Two SPECvirt Tiles from One Physical Client

B.1 Overview

With SPECvirt_sc2010 it is possible to run the client-side processes for two tiles within the same client-OS instance, that is, on the same physical box. The method for maintaining the workload configuration files is the same whether you run one or two tiles from one client: you store and edit the files in a central location on the prime client, and the SPECvirt harness distributes them at runtime. If you choose to drive two tiles on one client, you need to perform three additional configuration steps so that the client processes can communicate with the prime controller and the assigned VMs:
  1. Create separate, tile-specific master configuration file directories on the prime controller. You accomplish this by creating master directories for workload configuration files on the prime controller (for example, /opt/master_confs/tile2 and /opt/master_confs/tile3 as described in detail below) and putting the customized master configuration files for each workload in these directories.

  2. The client processes of the four workloads for the first tile need separate configuration files from those for the second tile on the client and, in addition, separate workload result directories for each tile. You accomplish this by separating the workload home directories on the client to create a clear mapping of the workload result directories for each tile and by editing the Control.config file to match the resulting mappings. 
    Specifically, create separate workload home directories on the client as complete copies (for example, /opt/SPECimap and /opt/SPECimap_B). For SPECimap and SPECjAppServer also define different port numbers for RMI communication in the configuration files for both instances of each workload. For SPECjAppServer also define tile-specific result directories in order to get distinct directories for each tile. Finally, edit Control.config to reflect these new directory mappings as well as to assign tile indexes and workload indexes for each specific workload and client. 

  3. The SPECjAppServer2004 aliases specdelivery and specemulator for the appserver VM are no longer used and instead require the explicit VM host names. You accomplish this by editing the master configuration files for this workload and editing the hosts file on the client.

When you run two tiles per client, the hosts file on each client must contain tile-specific VM names (for example, infraserver2-ext rather than infraserver) with their IP addresses. For SPECjAppServer2004 you also need to assign the specific VM host names (appserver2-ext and dbserver2-ext) in the hosts file instead of using the aliases specdelivery, specemulator, and specdb.
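
As an illustration, a hosts file fragment on a client driving tile 2 might include entries like the following, using the VM names from the configuration examples below; all IP addresses are placeholders for your own addressing scheme:

10.0.2.11   webserver2-ext
10.0.2.12   infraserver2-ext
10.0.2.13   mailserver2-ext
10.0.2.14   appserver2-ext
10.0.2.15   dbserver2-int
10.0.2.16   idleserver2-ext

Add the corresponding tile 3 entries (webserver3-ext, and so on) in the same way.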

The following sections describe how to configure one physical client to drive two tiles, using a four-tile configuration in which tile 2 (tile index [1]) and tile 3 (tile index [2]) run on one client. Note that the following description applies to a GLASSFISH application server and Linux clients. Treat other situations analogously.

B.2 On the Prime Controller

1. Create tile-specific directories to store the master configuration files for both tiles. Alternately, you can use the create-master-conf-directories.sh helper script.

mkdir -p /opt/master_confs/tile2
cd /opt/master_confs/tile2
mkdir -p SPECjAppServer2004/config SPECimap SPECweb2005 SPECpoll
cp /opt/SPECjAppServer2004/config/*.env SPECjAppServer2004/config
cp /opt/SPECjAppServer2004/config/run.properties SPECjAppServer2004/config
cp /opt/SPECimap/IMAP*.rc SPECimap
cp /opt/SPECweb2005/*.config SPECweb2005  
cp /opt/SPECpoll/Test.config SPECpoll

mkdir -p /opt/master_confs/tile3
cd /opt/master_confs/tile3
mkdir -p SPECjAppServer2004/config SPECimap SPECweb2005 SPECpoll
cp /opt/SPECjAppServer2004/config/*.env SPECjAppServer2004/config
cp /opt/SPECjAppServer2004/config/run.properties SPECjAppServer2004/config
cp /opt/SPECimap/IMAP*.rc SPECimap
cp /opt/SPECweb2005/*.config SPECweb2005  
cp /opt/SPECpoll/Test.config SPECpoll

2. Edit Control.config, or adapt the 2tileControl.config helper file.

The harness first checks for a double-indexed tile and workload pair for each parameter (for example, [1][0] where 1 is the tile index and 0 is the workload index). If the harness fails to find it, it looks for a value using only the workload index (for example, [1] where the workload index is 1). Therefore you can consider the workload-only indexed values as your "default" values, and you do not need to comment out these parameters in Control.config. Instead, add the parameter and specify a double-indexed tile and workload pair if driving two tiles on one client.

Adjust the client names and unique port numbers for PRIME_HOST. To do this, use the proper client hostname (client02) and unique port numbers for tile 3. In the listings that follow, the original entries are shown commented out, followed by the additions and changes:

PRIME_HOST[0][0] = "client01:1098"
PRIME_HOST[0][1] = "client01:1096"
PRIME_HOST[0][2] = "client01:1094"
PRIME_HOST[0][3] = "client01:1092"
PRIME_HOST[1][0] = "client02:1098"
PRIME_HOST[1][1] = "client02:1096"
PRIME_HOST[1][2] = "client02:1094"
PRIME_HOST[1][3] = "client02:1092"
#PRIME_HOST[2][0] = "client03:1098"
#PRIME_HOST[2][1] = "client03:1096"
#PRIME_HOST[2][2] = "client03:1094"
#PRIME_HOST[2][3] = "client03:1092"
PRIME_HOST[2][0] = "client02:1198"
PRIME_HOST[2][1] = "client02:1196"
PRIME_HOST[2][2] = "client02:1194"
PRIME_HOST[2][3] = "client02:1192"

Adjust the client names for WORKLOAD_CLIENTS. To do this, use the proper client hostname (client02) for tile 3:

WORKLOAD_CLIENTS[0][0] = "client01:1091"
WORKLOAD_CLIENTS[0][1] = "client01:1010"
WORKLOAD_CLIENTS[0][2] = "client01:1200"
WORKLOAD_CLIENTS[0][3] = "client01:1900"
WORKLOAD_CLIENTS[1][0] = "client02:1191"
WORKLOAD_CLIENTS[1][1] = "client02:1110"
WORKLOAD_CLIENTS[1][2] = "client02:1210"
WORKLOAD_CLIENTS[1][3] = "client02:1910"
#WORKLOAD_CLIENTS[2][0] = "client03:1291"
#WORKLOAD_CLIENTS[2][1] = "client03:1120"
#WORKLOAD_CLIENTS[2][2] = "client03:1220"
#WORKLOAD_CLIENTS[2][3] = "client03:1920"
WORKLOAD_CLIENTS[2][0] = "client02:1291"
WORKLOAD_CLIENTS[2][1] = "client02:1120"
WORKLOAD_CLIENTS[2][2] = "client02:1220"
WORKLOAD_CLIENTS[2][3] = "client02:1920"


The PRIME_HOST_RMI_PORT values must use port numbers that are specific for each workload and tile number. To do this, add entries with the double-indexed syntax and unique port numbers:

PRIME_HOST_RMI_PORT[0] = 9900
PRIME_HOST_RMI_PORT[1] = 9901
PRIME_HOST_RMI_PORT[2] = 9902
PRIME_HOST_RMI_PORT[3] = 9903
# Add tile index and unique port for client driving two tiles
PRIME_HOST_RMI_PORT[2][0] = 9920
PRIME_HOST_RMI_PORT[2][1] = 9921
PRIME_HOST_RMI_PORT[2][2] = 9922
PRIME_HOST_RMI_PORT[2][3] = 9923

PRIME_PATH, CLIENT_PATH, LOCAL_CONFIG_DIR, and PRIME_CONFIG_DIR require different directories for different tiles, so again use the double-indexed syntax to express workload- and tile-specific values:

PRIME_PATH[0] = "/home/spec/SPECjAppServer2004/classes"
PRIME_PATH[1] = "/opt/SPECweb2005"
PRIME_PATH[2] = "/opt/SPECimap"
PRIME_PATH[3] = "/opt/SPECpoll"
# Add tile index and unique directory for client driving two tiles
PRIME_PATH[2][0] = "/home/spec/SPECjAppServer2004_B/classes"
PRIME_PATH[2][1] = "/opt/SPECweb2005_B"
PRIME_PATH[2][2] = "/opt/SPECimap_B"
PRIME_PATH[2][3] = "/opt/SPECpoll_B"
CLIENT_PATH[0] = "/home/spec/SPECjAppServer2004/classes"
CLIENT_PATH[1] = "/opt/SPECweb2005"
CLIENT_PATH[2] = "/opt/SPECimap"
CLIENT_PATH[3] = "/opt/SPECpoll"
# Add tile index and unique directory for client driving two tiles
CLIENT_PATH[2][0] = "/home/spec/SPECjAppServer2004_B/classes"
CLIENT_PATH[2][1] = "/opt/SPECweb2005_B"
CLIENT_PATH[2][2] = "/opt/SPECimap_B"
CLIENT_PATH[2][3] = "/opt/SPECpoll_B"
LOCAL_CONFIG_DIR[0] = "/opt/SPECjAppServer2004/config"
LOCAL_CONFIG_DIR[1] = "/opt/SPECweb2005"
LOCAL_CONFIG_DIR[2] = "/opt/SPECimap"
LOCAL_CONFIG_DIR[3] = "/opt/SPECpoll"
# Add tile index and unique directory for client driving two tiles
LOCAL_CONFIG_DIR[1][0] = "/opt/master_confs/tile2/SPECjAppServer2004/config"
LOCAL_CONFIG_DIR[1][1] = "/opt/master_confs/tile2/SPECweb2005"
LOCAL_CONFIG_DIR[1][2] = "/opt/master_confs/tile2/SPECimap"
LOCAL_CONFIG_DIR[1][3] = "/opt/master_confs/tile2/SPECpoll"
LOCAL_CONFIG_DIR[2][0] = "/opt/master_confs/tile3/SPECjAppServer2004/config"
LOCAL_CONFIG_DIR[2][1] = "/opt/master_confs/tile3/SPECweb2005"
LOCAL_CONFIG_DIR[2][2] = "/opt/master_confs/tile3/SPECimap"
LOCAL_CONFIG_DIR[2][3] = "/opt/master_confs/tile3/SPECpoll"
PRIME_CONFIG_DIR[0] = "/home/spec/SPECjAppServer2004/config"
PRIME_CONFIG_DIR[1] = "/opt/SPECweb2005"
PRIME_CONFIG_DIR[2] = "/opt/SPECimap"
PRIME_CONFIG_DIR[3] = "/opt/SPECpoll"
# Add tile index and unique directory for client driving two tiles
PRIME_CONFIG_DIR[2][0] = "/home/spec/SPECjAppServer2004_B/config"
PRIME_CONFIG_DIR[2][1] = "/opt/SPECweb2005_B"
PRIME_CONFIG_DIR[2][2] = "/opt/SPECimap_B"
PRIME_CONFIG_DIR[2][3] = "/opt/SPECpoll_B" 

If you have workload-specific initialization or exit scripts (PRIME_HOST_INIT_SCRIPT and PRIME_HOST_EXIT_SCRIPT), again use the double-indexed syntax to express workload- and tile-specific values. Also, rather than using the generic *Init.sh helper scripts, use the *Init_2tile.sh helper scripts. The *Init_2tile.sh scripts take a tile number argument, which you pass as part of the PRIME_HOST_INIT_SCRIPT value:

PRIME_HOST_INIT_SCRIPT[0] = "jappInitRstr.sh"
PRIME_HOST_INIT_SCRIPT[1] = "webInit.sh"
PRIME_HOST_INIT_SCRIPT[2] = "mailInit.sh"
PRIME_HOST_INIT_SCRIPT[3] = "idleInit.sh"
# Add tile index and unique filename for client driving two tiles
PRIME_HOST_INIT_SCRIPT[1][0] = "jappInitRstr_2tile.sh 2"
PRIME_HOST_INIT_SCRIPT[1][1] = "webInit_2tile.sh 2"
PRIME_HOST_INIT_SCRIPT[1][2] = "mailInit_2tile.sh 2"
PRIME_HOST_INIT_SCRIPT[1][3] = "idleInit_2tile.sh 2"
PRIME_HOST_INIT_SCRIPT[2][0] = "jappInitRstr_2tile.sh 3"
PRIME_HOST_INIT_SCRIPT[2][1] = "webInit_2tile.sh 3"
PRIME_HOST_INIT_SCRIPT[2][2] = "mailInit_2tile.sh 3"
PRIME_HOST_INIT_SCRIPT[2][3] = "idleInit_2tile.sh 3"

For SPECjAppServer2004, add glassfish.env to the list of configuration files (PRIME_CONFIG_FILE). This file contains tile-specific values necessary for the tile-specific client processes:

##PRIME_CONFIG_FILE[0] = "run.properties,default.env"
# Add configuration file for client driving two tiles
PRIME_CONFIG_FILE[0] = "run.properties,default.env,glassfish.env"

3. Edit SPECjAppServer2004 configuration files.

Specify the port and host names (such as appserver2-ext) for specdelivery and specemulator rather than use these aliases. Edit the appserver and dbserver master configuration files:

/opt/master_confs/tile2/SPECjAppServer2004/config/glassfish.env

##JAS_HOST=specdelivery
##EMULATOR_HOST=specemulator
##DB_HOST=dbserver
# Specify hostname for client driving two tiles
JAS_HOST=appserver2-ext
EMULATOR_HOST=appserver2-ext
DB_HOST=dbserver2-int

/opt/master_confs/tile3/SPECjAppServer2004/config/glassfish.env

##JAS_HOME=/home/spec/SPECjAppServer2004
##JAS_HOST=specdelivery
##EMULATOR_HOST=specemulator
##DB_HOST=dbserver
# Specify directory and hostname for client driving two tiles
JAS_HOME=/home/spec/SPECjAppServer2004_B
JAS_HOST=appserver3-ext
EMULATOR_HOST=appserver3-ext
DB_HOST=dbserver3-int

/opt/master_confs/tile2/SPECjAppServer2004/config/default.env

##JAS_HOST=specdelivery
##EMULATOR_HOST=specemulator
# Specify hostname for client driving two tiles
JAS_HOST=appserver2-ext
EMULATOR_HOST=appserver2-ext

/opt/master_confs/tile3/SPECjAppServer2004/config/default.env

##JAS_HOST=specdelivery
##EMULATOR_HOST=specemulator
# Specify hostname for client driving two tiles
JAS_HOST=appserver3-ext
EMULATOR_HOST=appserver3-ext

/opt/master_confs/tile2/SPECjAppServer2004/config/run.properties

##Url=http://specdelivery:8000/SPECjAppServer/app?
# Specify hostname and directory for client driving two tiles
Url=http://appserver2-ext:8000/SPECjAppServer/app?

/opt/master_confs/tile3/SPECjAppServer2004/config/run.properties

##Url=http://specdelivery:8000/SPECjAppServer/app?
##outDir = /home/spec/output
## rmiPort = 1099
# Specify hostname, directory, and unique port for client driving two tiles
Url=http://appserver3-ext:8000/SPECjAppServer/app?
outDir = /home/spec/output_B
rmiPort = 1199

4. Edit SPECweb2005 configuration files.

Specify the webserver and infraserver host names in the webserver master config file:

/opt/master_confs/tile2/SPECweb2005/Test.config

##WEB_SERVER = webserver
##BESIM_SERVER = infraserver
# Specify hostname for client driving two tiles
WEB_SERVER = webserver2-ext
BESIM_SERVER = "infraserver2-ext"

/opt/master_confs/tile3/SPECweb2005/Test.config

##WEB_SERVER = webserver
##BESIM_SERVER = infraserver
# Specify hostname for client driving two tiles
WEB_SERVER = webserver3-ext
BESIM_SERVER = "infraserver3-ext"

5. Edit SPECimap configuration files.

Specify the mailserver host name and port in the mailserver master configuration file:

/opt/master_confs/tile2/SPECimap/IMAP_config.rc

##IMAP_SERVER = mailserver
##POP3_SERVER = mailserver
##LOCAL_DOMAIN = mailserver
# Specify hostname for client driving two tiles
IMAP_SERVER = mailserver2-ext
POP3_SERVER = mailserver2-ext
LOCAL_DOMAIN = mailserver2-ext

/opt/master_confs/tile3/SPECimap/IMAP_config.rc

##IMAP_SERVER = mailserver
##POP3_SERVER = mailserver
##LOCAL_DOMAIN = mailserver
##PRIME_RMI_PORT = 1090
# Specify hostname for client driving two tiles
IMAP_SERVER = mailserver3-ext
POP3_SERVER = mailserver3-ext
LOCAL_DOMAIN = mailserver3-ext  
PRIME_RMI_PORT = 1095

6. Edit SPECpoll configuration files.

Specify the idleserver host name in the idleserver master configuration file:

/opt/master_confs/tile2/SPECpoll/Test.config

##IDLE_SERVER = idleserver
# Specify hostname for client driving two tiles
IDLE_SERVER = idleserver2-ext

/opt/master_confs/tile3/SPECpoll/Test.config

##IDLE_SERVER = idleserver
# Specify hostname for client driving two tiles
IDLE_SERVER = idleserver3-ext

7. Create a custom runspecvirt.sh script, or adapt the 2tilerunspecvirt.sh helper script.

Specify the client names and the name of the custom Clientmgr.sh script (see step 3 in Section B.3 below) used on the client driving two tiles:

/opt/SPECvirt/2tilerunspecvirt.sh

...
#ssh $CLIENT$i "cd /opt/SPECvirt; ./Clientmgr.sh $i "
ssh client01 "cd /opt/SPECvirt; ./Clientmgr.sh 1"
ssh client02 "cd /opt/SPECvirt; ./2tileClientmgr.sh"    # drives tile 2 and tile 3
...

B.3 On the Client That Drives Two Tiles

1. Create a separate copy of each workload directory that corresponds to the second tile.

cd /opt
cp -r SPECjAppServer2004 SPECjAppServer2004_B
cp -r SPECimap SPECimap_B
cp -r SPECweb2005 SPECweb2005_B
cp -r SPECpoll SPECpoll_B
su - spec
mkdir output_B

2. Edit the hosts file on the client.

/etc/hosts

##192.168.1.11      infraserver
##192.168.1.12      webserver
##192.168.1.13      mailserver
##192.168.1.14      appserver specdelivery specemulator
##192.168.1.15      dbserver specdb
##192.168.1.16      idleserver
# Specify hostnames for client driving two tiles
192.168.1.11      infraserver1-ext infraserver1
192.168.1.12      webserver1-ext webserver1
192.168.1.13      mailserver1-ext mailserver1
192.168.1.14      appserver1-ext appserver1
192.168.1.15      dbserver1-ext dbserver1
192.168.1.16      idleserver1-ext idleserver1
192.168.1.21      infraserver2-ext infraserver2
192.168.1.22      webserver2-ext webserver2
192.168.1.23      mailserver2-ext mailserver2
192.168.1.24      appserver2-ext appserver2
192.168.1.25      dbserver2-ext dbserver2
192.168.1.26      idleserver2-ext idleserver2
192.168.1.31      infraserver3-ext infraserver3
192.168.1.32      webserver3-ext webserver3
192.168.1.33      mailserver3-ext mailserver3
192.168.1.34      appserver3-ext appserver3
192.168.1.35      dbserver3-ext dbserver3
192.168.1.36      idleserver3-ext idleserver3
...
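
As an optional sanity check before starting a run, you can verify that the external aliases for both tiles resolve and respond. The following is a minimal sketch, assuming the host names from the /etc/hosts excerpt above:

# Verify that the tile 2 and tile 3 external aliases resolve and respond
for h in infraserver webserver mailserver appserver dbserver idleserver; do
    for t in 2 3; do
        ping -c 1 -W 1 ${h}${t}-ext > /dev/null \
            && echo "${h}${t}-ext OK" \
            || echo "${h}${t}-ext FAILED"
    done
done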

3. Create a custom Clientmgr.sh helper script, or adapt the 2tileClientmgr.sh helper script.

For each of the two tiles, you start four clientmgr.jar processes with the correct port numbers for the prime clients, plus one clientmgr.jar process for all of the workload clients. Overall, nine invocations are necessary on the physical client.

In the shell output of the workload clients' clientmgr.jar, the frequent SPECjAppServer messages from the two SPECjAppServer client workloads appear interleaved, so the clientmgr's shell output becomes fragmented and confusing. Also, when running one or more of the workloads at a higher debug level, it can be difficult to determine which client is having problems, since the error messages are mixed in with the success messages, and the higher debug level expands the interleaved output even further. The workload-specific output files are far more readable than the clientmgr's shell output, so invoke the workload clients' clientmgr.jar process ("-p 1088") with the "-log" option, as in the following example; the -log switch creates readable, workload-specific output files and is indispensable here.

In this case, name the script 2tileClientmgr.sh to match the entry in 2tilerunspecvirt.sh on the prime controller (see step 7 above). Explicitly assign output file names that include their associated ports:

/opt/SPECvirt/2tileClientmgr.sh

# Script called from runspecvirt.sh
#
# Prime client managers for the first tile driven by this client (tile 2)
java -jar clientmgr.jar -p 1098 > Clientmgr2_1098.out 2>&1 &
java -jar clientmgr.jar -p 1096 > Clientmgr2_1096.out 2>&1 &
java -jar clientmgr.jar -p 1094 > Clientmgr2_1094.out 2>&1 &
java -jar clientmgr.jar -p 1092 > Clientmgr2_1092.out 2>&1 &
# Client manager for the workload clients of both tiles; -log writes
# readable, workload-specific output files
java -jar clientmgr.jar -p 1088 -log > Clientmgr2_1088.out 2>&1 &
# Prime client managers for the second tile driven by this client (tile 3)
java -jar clientmgr.jar -p 1198 > Clientmgr2_1198.out 2>&1 &
java -jar clientmgr.jar -p 1196 > Clientmgr2_1196.out 2>&1 &
java -jar clientmgr.jar -p 1194 > Clientmgr2_1194.out 2>&1 &
java -jar clientmgr.jar -p 1192 > Clientmgr2_1192.out 2>&1 &
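
After running the script, it is worth confirming that all nine client manager processes came up. A minimal check, assuming a Linux client with pgrep available:

# Count running clientmgr.jar processes; expect 9
pgrep -f clientmgr.jar | wc -l
# List them with their command lines (including ports) for a quick visual check
pgrep -af clientmgr.jar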

B.4 Practical Hints

Take care when updating the SPECvirt harness to a newer version: the copied workload home directories (the _B copies) must be updated as well.
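
One possible way to refresh the copies after a harness update is a sketch along the following lines, assuming the /opt layout used in this appendix; note that it overwrites the _B directories, so re-apply any local modifications made there afterward:

# Re-copy each updated workload directory over its _B counterpart
# (this discards any changes previously made in the _B copies)
cd /opt
for d in SPECjAppServer2004 SPECweb2005 SPECimap SPECpoll; do
    rsync -a --delete $d/ ${d}_B/
done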

Since there are nine invocations of clientmgr.jar on the physical client, ensure that the client has sufficient CPU and memory to support these processes, especially memory, since each client process may require over 2 GB. For SPECjAppServer2004, to balance memory usage against QoS requirements, consider these edits to run.properties:

runDealerEntry = 1
dlrAgentMinHeapMB = 1280
dlrAgentMaxHeapMB = 1280

Add memory to the physical client if you fail SPECjAppServer2004 QoS with the above settings or encounter either of the following errors in the client log files:

error adding cars to cart for customer - Part 2 473289  java.lang.NullPointerException
expired max-wait-time. Cannot allocate more connections.
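
Since memory pressure is the usual cause of the errors above, a quick pre-run check can help. The following is a rough sketch only: the 2 GB-per-process figure is the estimate from this section, not a measured value, and the "available" column of free requires a reasonably recent Linux procps:

# Rough memory sanity check: nine clientmgr.jar processes at ~2 GB each
REQUIRED_GB=$((9 * 2))
AVAIL_GB=$(free -g | awk '/^Mem:/ {print $7}')
echo "Need roughly ${REQUIRED_GB} GB, available ${AVAIL_GB} GB"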