Compliant Run For Result Submission

What is a Compliant Run?

A Compliant Run is a test run in which all components of the Cloud SUT and benchmark harness comply with all run and reporting rules. All hardware and software configuration details needed to reproduce the test should be collected using the cloud and instance configuration gathering scripts referenced below. Sample scripts have been included with the kit to use as examples. The tester is responsible for writing or revising these scripts to ensure that data for their test environment is collected, and a copy of their configuration gathering scripts must be included in the submission. Configuration data that cannot be collected by scripts but is required for the full disclosure report can be collected manually and included in the submission package.

Setting Up Parameters

Please make sure that the following parameters are correctly set in the osgcloud_rules.yaml file.

Set the results directory. The recommendation is to keep the default results directory and use an appropriate SPECRUNID for a compliant run. If the results directory needs to be changed, the following parameter should be updated:

results_dir: HOMEDIR/results

Instance support evidence flag must be set to true:

instance_support_evidence: true

The Linux user ID for instances and the SSH keys that are used must be correctly set:

instance_user: cbuser
instance_keypath: HOMEDIR/osgcloud/spec_ssh_keys/spec_ssh
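
Before a run, the configured user and key pair can be verified manually with a quick check similar to the following sketch; INSTANCE_IP is a placeholder (not part of the kit) for the address of any test instance booted from the benchmark image:

# Hypothetical check: log in to a test instance with the configured user and SSH key
ssh -i HOMEDIR/osgcloud/spec_ssh_keys/spec_ssh -o StrictHostKeyChecking=no cbuser@INSTANCE_IP hostname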

Cloud config supporting evidence flag must be set to true:

cloud_config_support_evidence: true

Ensure that appropriate cloud configuration scripts that invoke cloud APIs have been written and tested. The OpenStack scripts are provided in the following directory. The details of these scripts are described in Cloud Config Scripts from a Cloud Consumer Perspective:

HOMEDIR/osgcloud/driver/support_script/cloud_config/openstack/
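
For illustration only (these are not the kit's provided scripts), a cloud configuration gathering script for an OpenStack cloud might capture output along the following lines, assuming the openstack CLI is installed and credentials are exported in the environment:

#!/bin/bash
# Illustrative sketch of cloud-level configuration gathering via the OpenStack CLI.
# The output directory and the set of commands are assumptions; adapt them to your cloud.
OUTDIR=${1:-cloud_config_output}
mkdir -p "$OUTDIR"
openstack flavor list --long > "$OUTDIR/flavors.txt"
openstack image list --long > "$OUTDIR/images.txt"
openstack network list > "$OUTDIR/networks.txt"
openstack server list --long > "$OUTDIR/servers.txt"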

The YCSB thread count must be set to the desired value. The default is 8:

thread_count: 8

Iteration count for baseline phase must be set to five:

iteration_count: 5

The timeserver field, if uncommented, must specify the same NTP server used by the benchmark harness machine. Please ensure that the NTP server is running on the machine specified in this parameter, that it is reachable from the benchmark harness and instance machines, and that the benchmark harness and instance machines can resolve its hostname (if a hostname is specified):

#timeserver: 0.ubuntu.pool.ntp.org
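
NTP reachability and name resolution can be checked from the harness (and, over SSH, from an instance) with standard tools; this is only a sketch, and the server name below is the example value from the parameter above:

# Check local NTP synchronization and that the configured time server resolves and responds
ntpq -p
host 0.ubuntu.pool.ntp.org
ntpdate -q 0.ubuntu.pool.ntp.org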

QoS must not be ignored for a compliant run:

ignore_qos_when_max_is_set: false

maximum_ais (the maximum number of AIs that can be provisioned with success or error) and reported_ais (the maximum number of AIs that can report results from one or more AI runs) should be set to reasonable values for your cloud. Typically, reported_ais is set to a value smaller than maximum_ais; a good rule of thumb is to set it to half of maximum_ais:

maximum_ais: 24
reported_ais: 12

Parameters for checking successful provisioning must be appropriately set for your cloud:

vm_defaults:
    update_attempts: 60
    update_frequency: 5

The values for update_attempts and update_frequency determine how long the benchmark waits before it considers provisioning to have failed. A tester can set them to small values to force provisioning failures for testing.

During a compliant run, the value for update_attempts is automatically calculated based on the maximum of the average AI provisioning times of YCSB and KMeans.
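
For example, with the values shown above, the benchmark waits at most update_attempts times update_frequency, i.e., 60 * 5 = 300 seconds, for an instance to be provisioned; this window should comfortably exceed the provisioning times observed in your baseline tests. A trivial sketch of that arithmetic:

# Provisioning wait window implied by the settings above (values are the examples shown)
UPDATE_ATTEMPTS=60
UPDATE_FREQUENCY=5
echo "Provisioning timeout: $((UPDATE_ATTEMPTS * UPDATE_FREQUENCY)) seconds"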

Pre-run Check List

Please make sure that you have done the following:

  • Tested creation of an instance in CBTOOL by executing the commands in Launch a VM and Test It
  • Tested that instance support evidence collection works. Follow the instructions in Testing Instance Supporting Evidence Collection for testing gathering of instance supporting evidence prior to a run.
  • Implemented and tested scripts for gathering cloud configuration in HOMEDIR/osgcloud/driver/support_script/cloud_config directory. See section Cloud Configuration Gathering Scripts Through Cloud APIs for details.
  • Measured baseline provisioning time and set update_attempts to a value such that update_attempts times update_frequency is unlikely to be exceeded by the maximum AI provisioning time during a compliant run (otherwise provisioning will be reported as failed).
  • Set the instance_support_evidence and cloud_config_support_evidence flags to true in osgcloud_rules.yaml
  • Ensure that other parameters are set up correctly as described in the earlier section (a quick verification sketch follows this list).
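
The flags and key parameters can be spot-checked with a simple grep, as sketched below; run it from the directory that contains osgcloud_rules.yaml:

# Spot-check the rules file before starting a compliant run
grep -E 'instance_support_evidence|cloud_config_support_evidence|ignore_qos_when_max_is_set' osgcloud_rules.yaml
grep -E 'iteration_count|maximum_ais|reported_ais|instance_user|instance_keypath' osgcloud_rules.yaml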

Running

Before running the benchmark, please perform a hard reset on CBTOOL:

cd ~/osgcloud/cbtool
./cb --hard_reset

Then, in another terminal, run the entire benchmark as follows:

cd ~/osgcloud/driver
./all_run.sh SPECRUNID

The script will delete any existing SPECRUNID results directory and then start the run.

If an error is encountered during the baseline or elasticity + scalability phase, the benchmark will terminate, and the tester must rerun the entire benchmark to obtain a compliant run.

As part of all_run.sh, supporting evidence from instances created during the baseline phase is automatically collected, since these instances are terminated after each run of the baseline phase. Supporting evidence from instances created during the elasticity + scalability phase is not collected as part of the all_run.sh script, because a large number of instances are created during an elasticity run (e.g., > 100) and it may take a while to gather supporting evidence from all of them.

Post-Run

As soon as the all_run.sh script ends, the tester must verify the following:

  • A submission file has been generated (a quick check sketch follows this list).
  • Instance supporting evidence has been automatically collected from all instances. This information includes standard machine details as well as Hadoop- and Cassandra-specific configuration. It takes 1-2 minutes to gather supporting evidence from all instances within an application instance, so the total gathering time depends on how many application instances were present.
  • Results look reasonable.
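
A minimal post-run spot check, assuming the default results directory and the file names shown later in this document:

# Confirm the submission file exists and that baseline evidence was collected
ls -l ~/results/SPECRUNID/perf/sub_file_SPECRUNID.txt
find ~/results/SPECRUNID/instance_evidence_dir/baseline -maxdepth 1 -type d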

The tester can choose to update the osgcloud_environment.yaml at the end of a compliant run. If so, the tester must populate the appropriate values in osgcloud_environment.yaml and rerun the osgcloud_fdr.py script to generate the submission file:

python osgcloud_fdr.py --baseline_yaml "PATHTOPERFDIR/baseline_SPECRUNID.yaml" --elasticity_yaml "PATHTOPERFDIR/elasticity_SPECRUNID.yaml"  --elasticity_results_path PATHTOPERFDIR/ELASTICITYDIR --exp_id SPECRUNID --runrules_yaml osgcloud_rules.yaml --environment_yaml osgcloud_environment.yaml

The tester must consider the following when updating osgcloud_environment.yaml (a sanity-check sketch follows this list):

  • Indentation must be preserved, or parsing will fail.
  • Semicolons must not be deleted.
  • Keys must not be changed or added except where explicitly directed.
  • Comments begin with a #; if the hash is removed, the entry must be valid.
  • “Notes:” sections are free format but MUST have a matching “End_Notes:” key.
  • No tabs are allowed; all indentation must be done with spaces.
  • SUT_type: must be either blackbox or whitebox.
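
A small sanity-check sketch based on the constraints above; it only looks for tabs and for matching Notes:/End_Notes: counts, and is not a full validator:

# Reject tabs and verify that every Notes: section has a matching End_Notes: key
grep -nP '\t' osgcloud_environment.yaml && echo "ERROR: tabs found"
notes=$(grep -c '^ *Notes:' osgcloud_environment.yaml)
end_notes=$(grep -c '^ *End_Notes:' osgcloud_environment.yaml)
[ "$notes" -eq "$end_notes" ] || echo "ERROR: Notes:/End_Notes: mismatch ($notes vs $end_notes)"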

Finally, the tester verifies the generation of the submission file by generating the HTML report:

python osgcloud_fdr_html.py --exp_id SPECRUNID --networkfile arch.png

where arch.png contains the instance configuration diagram (for whitebox and blackbox clouds) as well as the whitebox architecture diagram. See the sample FDR for reference.

Results Automatically Collected by Scripts Shipped With Kit

If the results directory is not changed in osgcloud_rules.yaml, the entire benchmark results will be present in:

~/results/SPECRUNID/

The directory structure will resemble this:

|-- cloud_config
|   |-- blackbox
|   |-- image_instructions
|   |-- instance_arch_diag
|   `-- whitebox
|       |-- arch_diagram
|       |-- cloud_mgmt_software
|       |-- compute_nodes
|       `-- controller_nodes
|-- code
|   |-- cloud_config_scripts
|   |-- harness_scripts
|   `-- white_box_sup_evid_scripts
|-- harness
|   |-- config_files
|   |-- machine_info
|   `-- software
|-- instance_evidence_dir
|   |-- baseline
|   |-- baseline_pre
|   |-- elasticity
|   |-- elasticity_post
|   |-- kmeans_baseline_post
|   `-- ycsb_baseline_post
`-- perf

The following files will be present in the perf directory after a successful run:

baseline_SPECRUNID.yaml
elasticity_SPECRUNID.yaml
fdr_ai_SPECRUNID.yaml
osgcloud_elasticity_SPECRUNID-20150811234204UTC.log
osgcloud_fdr_SPECRUNID-20150811234502UTC.log
osgcloud_kmeans_baseline_SPECRUNID-20150811233302UTC.log
osgcloud_rules.yaml
osgcloud_ycsb_baseline_SPECRUNID-20150811233732UTC.log
SPECRUNIDELASTICITY20150811234204UTC
SPECRUNIDKMEANSBASELINE020150811233302UTC
SPECRUNIDKMEANSBASELINE120150811233302UTC
SPECRUNIDKMEANSBASELINE220150811233302UTC
SPECRUNIDKMEANSBASELINE320150811233302UTC
SPECRUNIDKMEANSBASELINE420150811233302UTC
SPECRUNIDYCSBBASELINE020150811233732UTC
SPECRUNIDYCSBBASELINE120150811233732UTC
SPECRUNIDYCSBBASELINE220150811233732UTC
SPECRUNIDYCSBBASELINE320150811233732UTC
SPECRUNIDYCSBBASELINE420150811233732UTC
run_SPECRUNID.log
sub_file_SPECRUNID.txt

The baseline supporting evidence will be collected in the following directory:

~/results/SPECRUNID/instance_evidence_dir/baseline

It will look similar to:

instance_evidence_dir/baseline/
|-- SPECRUNIDKMEANSBASELINE020150816002546UTC
|   `-- ai_1
|       |-- cb-root-CLOUDUNDERTEST-vm1-hadoopmaster-ai-1
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm2-hadoopslave-ai-1
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm3-hadoopslave-ai-1
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm4-hadoopslave-ai-1
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm5-hadoopslave-ai-1
|       |   |-- INSTANCE
|       |   `-- SW
|       `-- cb-root-CLOUDUNDERTEST-vm6-hadoopslave-ai-1
|           |-- INSTANCE
|           `-- SW
|-- SPECRUNIDKMEANSBASELINE120150816002546UTC
|   `-- ai_2
|       |-- cb-root-CLOUDUNDERTEST-vm10-hadoopslave-ai-2
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm11-hadoopslave-ai-2
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm12-hadoopslave-ai-2
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm7-hadoopslave-ai-2
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm8-hadoopslave-ai-2
|       |   |-- INSTANCE
|       |   `-- SW
|       `-- cb-root-CLOUDUNDERTEST-vm9-hadoopmaster-ai-2
|           |-- INSTANCE
|           `-- SW
|-- SPECRUNIDKMEANSBASELINE220150816002546UTC
|   `-- ai_3
|       |-- cb-root-CLOUDUNDERTEST-vm13-hadoopmaster-ai-3
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm14-hadoopslave-ai-3
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm15-hadoopslave-ai-3
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm16-hadoopslave-ai-3
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm17-hadoopslave-ai-3
|       |   |-- INSTANCE
|       |   `-- SW
|       `-- cb-root-CLOUDUNDERTEST-vm18-hadoopslave-ai-3
|           |-- INSTANCE
|           `-- SW
|-- SPECRUNIDKMEANSBASELINE320150816002546UTC
|   `-- ai_4
|       |-- cb-root-CLOUDUNDERTEST-vm19-hadoopmaster-ai-4
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm20-hadoopslave-ai-4
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm21-hadoopslave-ai-4
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm22-hadoopslave-ai-4
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm23-hadoopslave-ai-4
|       |   |-- INSTANCE
|       |   `-- SW
|       `-- cb-root-CLOUDUNDERTEST-vm24-hadoopslave-ai-4
|           |-- INSTANCE
|           `-- SW
|-- SPECRUNIDKMEANSBASELINE420150816002546UTC
|   `-- ai_5
|       |-- cb-root-CLOUDUNDERTEST-vm25-hadoopslave-ai-5
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm26-hadoopmaster-ai-5
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm27-hadoopslave-ai-5
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm28-hadoopslave-ai-5
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm29-hadoopslave-ai-5
|       |   |-- INSTANCE
|       |   `-- SW
|       `-- cb-root-CLOUDUNDERTEST-vm30-hadoopslave-ai-5
|           |-- INSTANCE
|           `-- SW
|-- SPECRUNIDYCSBBASELINE020150816014230UTC
|   `-- ai_6
|       |-- cb-root-CLOUDUNDERTEST-vm31-ycsb-ai-6
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm32-seed-ai-6
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm33-seed-ai-6
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm34-seed-ai-6
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm35-seed-ai-6
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm36-seed-ai-6
|       |   |-- INSTANCE
|       |   `-- SW
|       `-- cb-root-CLOUDUNDERTEST-vm37-seed-ai-6
|           |-- INSTANCE
|           `-- SW
|-- SPECRUNIDYCSBBASELINE120150816014230UTC
|   `-- ai_7
|       |-- cb-root-CLOUDUNDERTEST-vm38-ycsb-ai-7
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm39-seed-ai-7
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm40-seed-ai-7
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm41-seed-ai-7
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm42-seed-ai-7
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm43-seed-ai-7
|       |   |-- INSTANCE
|       |   `-- SW
|       `-- cb-root-CLOUDUNDERTEST-vm44-seed-ai-7
|           |-- INSTANCE
|           `-- SW
|-- SPECRUNIDYCSBBASELINE220150816014230UTC
|   `-- ai_8
|       |-- cb-root-CLOUDUNDERTEST-vm45-seed-ai-8
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm46-ycsb-ai-8
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm47-seed-ai-8
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm48-seed-ai-8
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm49-seed-ai-8
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm50-seed-ai-8
|       |   |-- INSTANCE
|       |   `-- SW
|       `-- cb-root-CLOUDUNDERTEST-vm51-seed-ai-8
|           |-- INSTANCE
|           `-- SW
|-- SPECRUNIDYCSBBASELINE320150816014230UTC
|   `-- ai_9
|       |-- cb-root-CLOUDUNDERTEST-vm52-ycsb-ai-9
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm53-seed-ai-9
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm54-seed-ai-9
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm55-seed-ai-9
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm56-seed-ai-9
|       |   |-- INSTANCE
|       |   `-- SW
|       |-- cb-root-CLOUDUNDERTEST-vm57-seed-ai-9
|       |   |-- INSTANCE
|       |   `-- SW
|       `-- cb-root-CLOUDUNDERTEST-vm58-seed-ai-9
|           |-- INSTANCE
|           `-- SW
`-- SPECRUNIDYCSBBASELINE420150816014230UTC
    `-- ai_10
        |-- cb-root-CLOUDUNDERTEST-vm59-ycsb-ai-10
        |   |-- INSTANCE
        |   `-- SW
        |-- cb-root-CLOUDUNDERTEST-vm60-seed-ai-10
        |   |-- INSTANCE
        |   `-- SW
        |-- cb-root-CLOUDUNDERTEST-vm61-seed-ai-10
        |   |-- INSTANCE
        |   `-- SW
        |-- cb-root-CLOUDUNDERTEST-vm62-seed-ai-10
        |   |-- INSTANCE
        |   `-- SW
        |-- cb-root-CLOUDUNDERTEST-vm63-seed-ai-10
        |   |-- INSTANCE
        |   `-- SW
        |-- cb-root-CLOUDUNDERTEST-vm64-seed-ai-10
        |   |-- INSTANCE
        |   `-- SW
        `-- cb-root-CLOUDUNDERTEST-vm65-seed-ai-10
            |-- INSTANCE
            `-- SW

The INSTANCE directory will contain files similar to:

INSTANCE/
|-- date.txt
|-- df.txt
|-- dpkg.txt
|-- etc
|-- hostname.txt
|-- ifconfig.txt
|-- lspci.txt
|-- mount.txt
|-- netstat.txt
|-- ntp.conf
|-- proc
|-- route.txt
`-- var
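
For illustration only (the kit's instance evidence scripts may differ), files like these are typically produced by commands along the following lines, run on each instance:

# Illustrative commands that produce instance evidence similar to the files listed above
mkdir -p INSTANCE
date > INSTANCE/date.txt
df -h > INSTANCE/df.txt
dpkg -l > INSTANCE/dpkg.txt
hostname > INSTANCE/hostname.txt
ifconfig -a > INSTANCE/ifconfig.txt
lspci > INSTANCE/lspci.txt
mount > INSTANCE/mount.txt
netstat -an > INSTANCE/netstat.txt
route -n > INSTANCE/route.txt
cp /etc/ntp.conf INSTANCE/ntp.conf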

For a Hadoop AI, the SW directory will contain files similar to:

SW
|-- hadoop
|   |-- datahdfs
|   |-- dfsadmin_report
|   |-- du_datanodedir
|   |-- du_namenodedir
|   |-- input_hdfs_size
|   |-- output_hdfs_size
|   `-- version
|-- hadoop_conf
|   |-- capacity-scheduler.xml
|   |-- configuration.xsl
|   |-- container-executor.cfg
|   |-- core-site.xml
|   |-- hadoop-env.cmd
|   |-- hadoop-env.sh
|   |-- hadoop-metrics2.properties
|   |-- hadoop-metrics.properties
|   |-- hadoop-policy.xml
|   |-- hdfs-site.xml
|   |-- httpfs-env.sh
|   |-- httpfs-log4j.properties
|   |-- httpfs-signature.secret
|   |-- httpfs-site.xml
|   |-- kms-acls.xml
|   |-- kms-env.sh
|   |-- kms-log4j.properties
|   |-- kms-site.xml
|   |-- log4j.properties
|   |-- mapred-env.cmd
|   |-- mapred-env.sh
|   |-- mapred-queues.xml.template
|   |-- mapred-site.xml
|   |-- mapred-site.xml.template
|   |-- masters
|   |-- slaves
|   |-- ssl-client.xml.example
|   |-- ssl-server.xml.example
|   |-- yarn-env.cmd
|   |-- yarn-env.sh
|   `-- yarn-site.xml
|-- javaVersion.out
`-- role

For a Cassandra seed node, the SW directory will contain files similar to:

SW
|-- cassandra
|   |-- du_datadir
|   |-- du_datadir_cassandra
|   |-- du_datadir_cassandra_usertable
|   |-- nodetool_cfstats
|   `-- nodetool_status
|-- cassandra_conf
|   |-- cassandra-env.sh
|   |-- cassandra-rackdc.properties
|   |-- cassandra-topology.properties
|   |-- cassandra-topology.yaml
|   |-- cassandra.yaml
|   |-- commitlog_archiving.properties
|   |-- logback-tools.xml
|   |-- logback.xml
|   `-- triggers
|       `-- README.txt
|-- javaVersion.out
`-- role

The supporting evidence from instances in the elasticity + scalability phase is present at:

SPECRUNID/instance_evidence_dir/elasticity

In the example below, the supporting evidence for AIs 11-19 is present:

`-- SPECRUNIDELASTICITY20150901061549UTC
|-- ai_11
|   |-- cb-root-MYCLOUD-vm66-seed-ai-11
|   |-- cb-root-MYCLOUD-vm67-ycsb-ai-11
|   |-- cb-root-MYCLOUD-vm68-seed-ai-11
|   |-- cb-root-MYCLOUD-vm69-seed-ai-11
|   |-- cb-root-MYCLOUD-vm70-seed-ai-11
|   |-- cb-root-MYCLOUD-vm71-seed-ai-11
|   `-- cb-root-MYCLOUD-vm72-seed-ai-11
|-- ai_12
|   |-- cb-root-MYCLOUD-vm73-hadoopmaster-ai-12
|   |-- cb-root-MYCLOUD-vm74-hadoopslave-ai-12
|   |-- cb-root-MYCLOUD-vm75-hadoopslave-ai-12
|   |-- cb-root-MYCLOUD-vm76-hadoopslave-ai-12
|   |-- cb-root-MYCLOUD-vm77-hadoopslave-ai-12
|   `-- cb-root-MYCLOUD-vm78-hadoopslave-ai-12
|-- ai_13
|   |-- cb-root-MYCLOUD-vm79-hadoopmaster-ai-13
|   |-- cb-root-MYCLOUD-vm80-hadoopslave-ai-13
|   |-- cb-root-MYCLOUD-vm81-hadoopslave-ai-13
|   |-- cb-root-MYCLOUD-vm82-hadoopslave-ai-13
|   |-- cb-root-MYCLOUD-vm83-hadoopslave-ai-13
|   `-- cb-root-MYCLOUD-vm84-hadoopslave-ai-13
|-- ai_14
|   |-- cb-root-MYCLOUD-vm85-ycsb-ai-14
|   |-- cb-root-MYCLOUD-vm86-seed-ai-14
|   |-- cb-root-MYCLOUD-vm87-seed-ai-14
|   |-- cb-root-MYCLOUD-vm88-seed-ai-14
|   |-- cb-root-MYCLOUD-vm89-seed-ai-14
|   |-- cb-root-MYCLOUD-vm90-seed-ai-14
|   `-- cb-root-MYCLOUD-vm91-seed-ai-14
|-- ai_15
|   |-- cb-root-MYCLOUD-vm92-ycsb-ai-15
|   |-- cb-root-MYCLOUD-vm93-seed-ai-15
|   |-- cb-root-MYCLOUD-vm94-seed-ai-15
|   |-- cb-root-MYCLOUD-vm95-seed-ai-15
|   |-- cb-root-MYCLOUD-vm96-seed-ai-15
|   |-- cb-root-MYCLOUD-vm97-seed-ai-15
|   `-- cb-root-MYCLOUD-vm98-seed-ai-15
|-- ai_16
|   |-- cb-root-MYCLOUD-vm100-hadoopslave-ai-16
|   |-- cb-root-MYCLOUD-vm101-hadoopslave-ai-16
|   |-- cb-root-MYCLOUD-vm102-hadoopslave-ai-16
|   |-- cb-root-MYCLOUD-vm103-hadoopslave-ai-16
|   |-- cb-root-MYCLOUD-vm104-hadoopslave-ai-16
|   `-- cb-root-MYCLOUD-vm99-hadoopmaster-ai-16
|-- ai_17
|   |-- cb-root-MYCLOUD-vm105-seed-ai-17
|   |-- cb-root-MYCLOUD-vm106-seed-ai-17
|   |-- cb-root-MYCLOUD-vm107-ycsb-ai-17
|   |-- cb-root-MYCLOUD-vm108-seed-ai-17
|   |-- cb-root-MYCLOUD-vm109-seed-ai-17
|   |-- cb-root-MYCLOUD-vm110-seed-ai-17
|   `-- cb-root-MYCLOUD-vm111-seed-ai-17
|-- ai_18
|   |-- cb-root-MYCLOUD-vm112-hadoopslave-ai-18
|   |-- cb-root-MYCLOUD-vm113-hadoopmaster-ai-18
|   |-- cb-root-MYCLOUD-vm114-hadoopslave-ai-18
|   |-- cb-root-MYCLOUD-vm115-hadoopslave-ai-18
|   |-- cb-root-MYCLOUD-vm116-hadoopslave-ai-18
|   `-- cb-root-MYCLOUD-vm117-hadoopslave-ai-18
`-- ai_19
    |-- cb-root-MYCLOUD-vm118-ycsb-ai-19
    |-- cb-root-MYCLOUD-vm119-seed-ai-19
    |-- cb-root-MYCLOUD-vm120-seed-ai-19
    |-- cb-root-MYCLOUD-vm121-seed-ai-19
    |-- cb-root-MYCLOUD-vm122-seed-ai-19
    |-- cb-root-MYCLOUD-vm123-seed-ai-19
    `-- cb-root-MYCLOUD-vm124-seed-ai-19

The instance and cloud configuration gathered using cloud APIs/CLIs is present in the following directories:

SPECRUNID/instance_evidence_dir/baseline_pre
SPECRUNID/instance_evidence_dir/kmeans_baseline_post
SPECRUNID/instance_evidence_dir/ycsb_baseline_post
SPECRUNID/instance_evidence_dir/elasticity_post

The configuration parameters for CBTOOL for a compliant run are dumped into:

SPECRUNID/harness/harness_config.yaml

Log Paths for Cassandra, YCSB, and Hadoop

If Cassandra is configured as specified in the user guide, the logs are available at:

/var/log/cassandra

YCSB logs are present in /tmp. For each YCSB run, which includes a data load phase and a run phase, two logs are generated. These logs are automatically collected by CBTOOL. The log names will resemble the following:

tmp.CiM8PN2CUL
tmp.CiM8PN2CUL.run

Hadoop logs are present at:

/usr/local/hadoop/logs

Information to be Manually Added by the Tester

In the directory structure that is automatically created as part of a run, some information needs to be manually collected by the tester. This information is described in the Run Rules document and illustrated in the sample FDR for a whitebox cloud. The information to be collected is listed below:

|-- cloud_config (to be manually collected by the tester)
|   |-- blackbox (remove for whitebox submission)
|   |-- image_instructions
|   |-- instance_arch_diag
|   `-- whitebox (remove for blackbox submission)
|       |-- arch_diagram
|       |-- cloud_mgmt_software
|       |-- compute_nodes
|       `-- controller_nodes
|-- code (to be manually collected by the tester)
|   |-- cloud_config_scripts
|   |-- harness_scripts
|   `-- white_box_sup_evid_scripts
|-- harness (to be manually collected by the tester)
|   |-- config_files
|   |-- machine_info
|   `-- software
|-- instance_evidence_dir (automatically collected)
|   |-- baseline
|   |-- baseline_pre
|   |-- elasticity
|   |-- elasticity_post
|   |-- kmeans_baseline_post
|   `-- ycsb_baseline_post
`-- perf (automatically collected)

In particular, cloud_config for a whitebox submission needs to include the scripts or other documentation that were used to configure the physical infrastructure for the cloud under test.