Configure Your Cloud - Some Tips

Configuring CBTOOL For Your OpenStack Cloud

The instructions below describe how to configure CBTOOL, running on the benchmark harness machine, for your OpenStack cloud. They assume that the CBTOOL image was created with the Ubuntu Trusty distribution and that the Linux username of the image is ubuntu.

If an OpenStack cloud has different public and private API endpoints, ensure that CBTOOL running on the benchmark harness machine can reach both endpoints, either directly or through a jump box. Otherwise, CBTOOL will not be able to run experiments.

These instructions also assume that a network and virtual routers have been configured in the OpenStack cloud. The benchmark harness machine running CBTOOL must be able to reach the instances. For initial testing, it is recommended to configure a 'Flat' network in the OpenStack cloud. If the instances will have both a private and a public IP address, a floating IP address reachable from the CBTOOL machine must be assigned automatically to each instance upon creation.

For testing the benchmark, the benchmark harness machine running CBTOOL can be set up in the same network as the instances.

  1. Copy the spec_key files from the kit into the cbtool credentials directory. It is recommended to create a new set of ssh keys to copy into the cbtool credentials directory:

    cp /home/ubuntu/osgcloud/spec_ssh_keys/* /home/ubuntu/osgcloud/cbtool/credentials/
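The recommended fresh key pair can be generated with ssh-keygen. This is a sketch using a scratch directory; the path and key name are illustrative, not mandated by the kit:

```shell
# Remove any stale copies so ssh-keygen does not prompt to overwrite.
rm -f /tmp/cb_keys/spec_key /tmp/cb_keys/spec_key.pub
mkdir -p /tmp/cb_keys

# Generate an RSA key pair with an empty passphrase (-N "").
ssh-keygen -q -t rsa -b 2048 -N "" -f /tmp/cb_keys/spec_key
```

Afterwards, copy the resulting spec_key and spec_key.pub into /home/ubuntu/osgcloud/cbtool/credentials/ as shown above.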
    
  2. Add the hostname of the OpenStack controller to your /etc/hosts file:

    sudo vi /etc/hosts
    IPADDROSCONTROLLER    HOSTNAMEOSCONTROLLER
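
As a sanity check that the entry is in place, grep for the hostname. The sketch below uses a scratch file and placeholder values so it can run anywhere; on the harness, run the same grep against /etc/hosts itself and then ping the controller:

```shell
# Demo with a scratch hosts file; 10.0.0.5 and oscontroller stand in for
# IPADDROSCONTROLLER and HOSTNAMEOSCONTROLLER.
HOSTS=/tmp/hosts.demo
printf '%s\t%s\n' 10.0.0.5 oscontroller > "$HOSTS"
grep -q oscontroller "$HOSTS" && echo "entry present"
```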
    

If the Linux username used to log into the VM was ubuntu, a file named ubuntu_cloud_definitions.txt must be present. Edit this file to configure CBTOOL to talk to the OpenStack cloud. If the file does not exist, rerun the CBTOOL installation:

cd /home/ubuntu/osgcloud/cbtool/configs
ls
cloud_definitions.txt  ubuntu_cloud_definitions.txt  templates

Edit that file:

vi ubuntu_cloud_definitions.txt

and replace the section under CLOUDOPTION_MYOPENSTACK with the following. Make sure to:

  1. Configure the IP address of your OpenStack controller (public endpoint) in the OSK_ACCESS parameter.

  2. Replace RegionOne in the OSK_INITIAL_VMCS parameter with the name of the region configured in your OpenStack cloud.

  3. Replace public in the OSK_NETNAME parameter with the name of the network to which instances in your cloud will be attached. The network, and virtual routers if any, must be preconfigured in the OpenStack cloud.

  4. OSK_KEY_NAME is the name of the (public) key that will be "injected" into an image before it is booted. In OpenStack, the key is normally injected into /root/.ssh/authorized_keys, allowing a user to log in to a VM as root after boot. This attribute refers to a key that is managed directly by the OpenStack cloud. On the other hand, OSK_SSH_KEY_NAME is the name of the key used to log in to a VM as the (non-root) user specified by OSK_LOGIN. This key will not (necessarily) be injected into the image; the key and username specified by OSK_SSH_KEY_NAME and OSK_LOGIN are expected to be pre-defined on the VM. These attributes are not managed (or known at all) by the OpenStack cloud:

         [USER-DEFINED : CLOUDOPTION_MYOPENSTACK]
         OSK_ACCESS = http://PUBLICIP:5000/v2.0/                   # Address of controller node (where nova-api runs)
         OSK_CREDENTIALS =  admin-admin-admin                      # user-tenant-password
         OSK_SECURITY_GROUPS = default                             # Make sure that this group exists first
         OSK_INITIAL_VMCS = RegionOne                              # Change "RegionOne" accordingly
         OSK_LOGIN = cbuser                                        # The username that logins on the VMs
         OSK_KEY_NAME = spec_key                                   # SSH key for logging into workload VMs
         OSK_SSH_KEY_NAME = spec_key                               # SSH key for logging into workload VMs
         OSK_NETNAME = public
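
The OSK_CREDENTIALS triple is ordered user-tenant-password, per the comment above. A small sketch of splitting it (the values are the defaults shown; the ordering is taken from the comment, not from CBTOOL internals):

```shell
# Split the user-tenant-password triple from OSK_CREDENTIALS.
CRED="admin-admin-admin"
OSK_USER=${CRED%%-*}      # text before the first dash
REST=${CRED#*-}           # text after the first dash
OSK_TENANT=${REST%%-*}
OSK_PASS=${REST#*-}
echo "user=$OSK_USER tenant=$OSK_TENANT password=$OSK_PASS"
# user=admin tenant=admin password=admin
```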

Change :bash:`STARTUP_CLOUD` to :bash:`MYOPENSTACK` in :bash:`ubuntu_cloud_definitions.txt`.
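
If you prefer a non-interactive edit, the change can be scripted with sed. This sketch operates on a scratch file standing in for configs/ubuntu_cloud_definitions.txt, and the initial SIM value is only a placeholder:

```shell
# Scratch copy; run the same sed against the real definitions file.
CFG=/tmp/cloud_definitions.demo
printf 'STARTUP_CLOUD = SIM\n' > "$CFG"   # SIM is a placeholder default

# Point CBTOOL at the MYOPENSTACK cloud on startup.
sed -i 's/^STARTUP_CLOUD = .*/STARTUP_CLOUD = MYOPENSTACK/' "$CFG"
cat "$CFG"
# STARTUP_CLOUD = MYOPENSTACK
```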


If your cloud supports HTTPS, enter :bash:`OSK_ACCESS` as::

     OSK_ACCESS = https://PUBLICIP/v2.0/ (or the keystone URL)

  3. Floating IPs. If your cloud requires floating IP addresses, set the following in the cloud configuration file, after the OpenStack configuration shown above:

    [VM_DEFAULTS]
    USE_FLOATING_IP = $True
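
Appending the override can also be scripted; the scratch file below stands in for the real configs/ubuntu_cloud_definitions.txt:

```shell
# Append the floating-IP override; the single quotes keep $True literal.
CFG=/tmp/vm_defaults.demo
printf '[VM_DEFAULTS]\nUSE_FLOATING_IP = $True\n' >> "$CFG"
tail -n 2 "$CFG"
```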
    
  4. Start CBTOOL:

    cd /home/ubuntu/osgcloud/cbtool
    ./cb --hard_reset
    

The successful output will be similar to the following:

Cbtool version is "7b33da7"
Parsing "cloud definitions" file..... "/home/ubuntu/osgcloud/cbtool/lib/auxiliary//../..//configs/ubuntu_cloud_definitions.txt" opened and parsed successfully.
Checking "Object Store".....An Object Store of the kind "Redis" (shared) on node 9.47.240.202, TCP port 6379, database id "0" seems to be running.
Checking "Log Store".....A Log Store of the kind "rsyslog" (private) on node 9.47.240.202, UDP port 5114 seems to be running.
Checking "Metric Store".....A Metric Store of the kind "MongoDB" (shared) on node 9.47.240.202, TCP port 27017, database id "metrics" seems to be running.
Executing "hard" reset: (killing all running toolkit processes and flushing stores) before starting the experiment......
Killing all processes... done
Flushing Object Store... done
Flushing Metric Store... done
Flushing Log Store... done
Checking for a running API service daemon.....API Service daemon was successfully started. The process id is 16020 (http://9.47.240.202:7070).
Checking for a running GUI service daemon.....GUI Service daemon was successfully started. The process id is 16044, listening on port 8080. Full url is "http://9.47.240.202:8080".
 status: Checking if the ssh key pair "spec_ssh" is created on VMC RegionOne....
 status: Checking if the security group "default" is created on VMC RegionOne....
 status: Checking if the network "public" can be found on VMC RegionOne...
 status: Checking if the imageids associated to each "VM role" are registered on VMC RegionOne....
 status: WARNING Image id for VM roles "giraphmaster,giraphslave": "cb_giraph" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "mongos,redis,mongo_cfg_server,mongodb": "cb_ycsb" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "coremark,driver_coremark": "cb_coremark" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "specjbb": "cb_specjbb" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "driver_netperf,hadoopslave,hadoopmaster,driver_hadoop": "cb_hadoop" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "filebench,driver_filebench": "cb_filebench" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "tinyvm": "cb_nullworkload" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "windows,client_windows": "cb_windows" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "ddgen": "cb_ddgen" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "driver_tradelite,client_tradelite": "cb_tradelite" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "iperfserver,iperfclient": "cb_iperf" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "cn_hpc,fen_hpc": "cb_hpcc" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "lb,db2,driver_daytrader,geronimo,mysql,client_daytrader,was": "cb_daytrader" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "netserver,netclient": "cb_netperf" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "fioclient,fioserver,driver_fio": "cb_fio" is NOT registered (attaching VMs with any of these roles will result in error)
 status: VMC "RegionOne" was successfully tested.
The "osk" cloud named "MYOPENSTACK" was successfully attached to this experiment.
The experiment identifier is EXP-02-04-2015-04-55-50-PM-UTC

 status: Removing all VMs previously created on VMC "RegionOne" (only VM names starting with "cb-ubuntu-MYOPENSTACK").....
 status: Removing all VVs previously created on VMC "RegionOne" (only VV names starting with "cb-ubuntu-MYOPENSTACK").....
 status: The host list for VMC "RegionOne" is empty ("discover_hosts" was set to "false"). Skipping Host OS performance monitor daemon startup
 status: Attribute "collect_from_host" was set to "false". Skipping Host OS performance monitor daemon startup
All VMCs successfully attached to this experiment.
(MYOPENSTACK)

Here is an example of unsuccessful output:

status: VMC "default" did not pass the connection test." : OpenStack connection failure: ('Connection aborted.', BadStatusLine("''",))
The "osk" cloud named "MYOPENSTACK" could not be attached to this experiment: VMC "default" did not pass the connection test." : OpenStack connection failure: ('Connection aborted.', BadStatusLine("''",))
Usage: vmcattach <cloud_name> <identifier> [temp_attr_list = empty=empty] [mode]
()

  5. The cloud name must also be entered in the osgcloud_rules.yaml file. If the instructions have been followed, the file is located in:

cd ~/osgcloud/driver

Configuring CBTOOL For Amazon Elastic Compute Cloud

The instructions below describe how to connect CBTOOL to Amazon Elastic Compute Cloud (EC2). They assume that the tester connects to EC2 as a cloud consumer normally would.

Connecting to EC2 requires the AWS access key ID, the AWS secret access key, and the name of the current security group. The access key ID and secret access key can be obtained from the security dashboard on AWS.

  1. If the Linux username used to log into the VM was ubuntu, a file named ubuntu_cloud_definitions.txt must be present. Edit this file to configure CBTOOL to talk to EC2. If the file does not exist, rerun the CBTOOL installation:

    cd /home/ubuntu/osgcloud/configs
    ls
    cloud_definitions.txt  ubuntu_cloud_definitions.txt  templates
    

Edit that file:

vi ubuntu_cloud_definitions.txt

and replace the section under CLOUDOPTION_MYAMAZON with the following:

[USER-DEFINED : CLOUDOPTION_MYAMAZON]

EC2_ACCESS = AKIAJ36T4WERTSWEUQIA                          # This is the AWS access key id
EC2_SECURITY_GROUPS = mWeb                                 # Make sure that this group exists first
EC2_CREDENTIALS = GX/idfgw/GqjVeUl9PzWeIOIwpFhAyAOdq0v1C1R # This is the AWS secret access key
EC2_KEY_NAME = YOURSSHKEY                                          # Make sure that this key exists first
EC2_INITIAL_VMCS = us-west-2:sut                           # Change "us-west-2:sut" accordingly
EC2_SSH_KEY_NAME = cbtool_rsa                                  # SSH key for logging into workload VMs
EC2_LOGIN = ubuntu                                         # The username that logins on the VMs
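
EC2_INITIAL_VMCS takes a region:name pair, as in us-west-2:sut above. A sketch of that split (the meaning of the second field is inferred from the example value, not from CBTOOL documentation):

```shell
# Split the "region:name" pair from EC2_INITIAL_VMCS.
VMC="us-west-2:sut"
REGION=${VMC%%:*}   # EC2 region, e.g. us-west-2
NAME=${VMC#*:}      # VMC label used by CBTOOL
echo "region=$REGION vmc_name=$NAME"
# region=us-west-2 vmc_name=sut
```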

Change :bash:`STARTUP_CLOUD` to :bash:`MYAMAZON` in :bash:`ubuntu_cloud_definitions.txt`.

  2. Start CBTOOL:

    cd /home/ubuntu/osgcloud/cbtool
    sudo ./cb --hard_reset
    

The successful output looks like:

ubuntu@cbtool-new:~/osgcloud/cbtool$ sudo ./cb --hard_reset

Cbtool version is "7b33da7"
Parsing "cloud definitions" file..... "/home/ubuntu/osgcloud/cbtool/lib/auxiliary//../..//configs/cloud_definitions.txt" opened and parsed successfully.
Checking "Object Store".....An Object Store of the kind "Redis" (shared) on node 172.30.0.172, TCP port 6379, database id "0" seems to be running.
Checking "Log Store".....A Log Store of the kind "rsyslog" (private) on node 172.30.0.172, UDP port 5114 seems to be running.
Checking "Metric Store".....A Metric Store of the kind "MongoDB" (shared) on node 172.30.0.172, TCP port 27017, database id "metrics" seems to be running.
Executing "hard" reset: (killing all running toolkit processes and flushing stores) before starting the experiment......
Killing all processes... done
Flushing Object Store... done
Flushing Metric Store... done
Flushing Log Store... done
Checking for a running API service daemon.....API Service daemon was successfully started. The process id is 1686 (http://172.30.0.172:7070).
Checking for a running GUI service daemon.....GUI Service daemon was successfully started. The process id is 1710, listening on port 8080. Full url is "http://172.30.0.172:8080".
 status: Checking if the ssh key pair "mW" is created on VMC us-west-2....
 status: Checking if the security group "mWeb" is created on VMC us-west-2....
 status: Checking if the imageids associated to each "VM role" are registered on VMC us-west-2....
 status: VMC "us-west-2" was successfully tested.
The "ec2" cloud named "MYAMAZON" was successfully attached to this experiment.
The experiment identifier is EXP-02-17-2015-09-59-09-PM-UTC

 status: Removing all VMs previously created on VMC "us-west-2" (only VMs names starting with "cb-root-MYAMAZON").....
 status: The host list for VMC "us-west-2" is empty ("discover_hosts" was set to "false"). Skipping Host OS performance monitor daemon startup
 status: Attribute "collect_from_host" was set to "false". Skipping Host OS performance monitor daemon startup
All VMCs successfully attached to this experiment.
(MYAMAZON)
  3. The cloud name must also be entered in the osgcloud_rules.yaml file. If the instructions have been followed, the file is located in:

    cd ~/osgcloud/driver
    
