Hello,
I am currently writing my master's thesis and have developed a Chauffeur worklet. The worklet runs native C++ code through JNI.
During my measurements I came across what I suspect is a bug in Chauffeur: the load-level scaling did not work properly. At the 100% load level the worklet reached the calibrated transaction throughput, and all other load levels are reported as accurate, yet the transaction counts do not fit. It seems that for load levels below 100%, a different calibration value, lower than the reported one, is used.
I am going to rerun the worklet to see whether the problem can be reproduced, but I would appreciate advice on where to start looking for the root cause. The native code ran fine in measurements before and after this specific one, and the transactions block until they are executed. Could the problem be related to the large number of transactions?
| Phase       | Interval           | Actual Load | Score         | Host CV | Client CV | Elapsed Measurement Time (s) | Transaction    | Transaction Count | Transaction Time (s) |
| ----------- | ------------------ | ----------- | ------------- | ------- | --------- | ---------------------------- | -------------- | ----------------- | -------------------- |
| warmup      | max                |             | 1,711,211.135 | 0.0%    | 3.1%      | 120.052                      | PetTransaction | 205,432,535       | 852.048              |
| calibration | max                |             | 1,544,903.586 | 0.0%    | 3.2%      | 120.052                      | PetTransaction | 185,467,074       | 857.551              |
|             | max                |             | 1,706,569.464 | 0.0%    | 2.0%      | 120.044                      | PetTransaction | 204,862,882       | 855.405              |
|             | max                |             | 1,693,478.034 | 0.0%    | 2.0%      | 120.048                      | PetTransaction | 203,298,379       | 855.923              |
|             | Calibration Result |             | 1,700,021.825 |         |           |                              |                |                   |                      |
| measurement | 100%               | 100.0%      | 1,695,528.229 | 0.0%    | 2.1%      | 120.040                      | PetTransaction | 203,530,810       | 856.197              |
|             | 90%                | 90.0%       | 1,006,323.644 | 0.0%    | 0.0%      | 120.064                      | PetTransaction | 120,812,465       | 718.304              |
|             | 80%                | 80.0%       | 894,418.676   | 0.0%    | 0.0%      | 120.044                      | PetTransaction | 107,369,451       | 614.566              |
|             | 70%                | 70.0%       | 782,627.988   | 0.0%    | 0.0%      | 120.043                      | PetTransaction | 93,949,039        | 584.892              |
|             | 60%                | 60.0%       | 670,896.028   | 0.0%    | 0.0%      | 120.044                      | PetTransaction | 80,536,603        | 579.594              |
|             | 50%                | 50.0%       | 559,069.424   | 0.0%    | 0.0%      | 120.044                      | PetTransaction | 67,112,856        | 574.692              |
|             | 40%                | 40.0%       | 447,211.301   | 0.0%    | 0.0%      | 120.044                      | PetTransaction | 53,684,693        | 582.760              |
|             | 30%                | 30.0%       | 335,443.047   | 0.0%    | 0.0%      | 120.040                      | PetTransaction | 40,266,545        | 464.538              |
|             | 20%                | 20.0%       | 223,659.301   | 0.0%    | 0.0%      | 120.044                      | PetTransaction | 26,848,700        | 287.977              |
|             | 10%                | 10.0%       | 111,755.848   | 0.0%    | 0.1%      | 120.044                      | PetTransaction | 13,415,605        | 152.997              |
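To illustrate the mismatch, here is a quick sanity check I did over the numbers in the table above (my own arithmetic, not Chauffeur code): dividing each measured score by its target load fraction should recover the calibration result of 1,700,021.825 if the scaling were applied correctly. For every level below 100% it instead recovers a value around 1,118,000, which is what makes me suspect a second, lower calibration value is being used internally.

```java
// Sanity check over the results table: score / load should equal the
// calibration result if load-level scaling uses the reported calibration.
public class ImpliedCalibration {
    public static void main(String[] args) {
        double calibrationResult = 1700021.825;
        double[] loads  = {1.0, 0.9, 0.8, 0.7, 0.6,
                           0.5, 0.4, 0.3, 0.2, 0.1};
        double[] scores = {1695528.229, 1006323.644, 894418.676,
                           782627.988,  670896.028,  559069.424,
                           447211.301,  335443.047,  223659.301,
                           111755.848};
        for (int i = 0; i < loads.length; i++) {
            // Implied calibration value: the throughput this interval was
            // apparently scaled from.
            double implied = scores[i] / loads[i];
            System.out.printf("%3.0f%% load: implied calibration %,.0f (reported %,.0f)%n",
                              loads[i] * 100, implied, calibrationResult);
        }
    }
}
```

Only the 100% interval comes out near 1,700,021; the 90% through 10% intervals all imply roughly 1,118,000.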
The config.xml:
<?xml version="1.0" encoding="UTF-8"?>
<chauffeur xmlns="http://spec.org/power_chauffeur" xmlns:xi="http://www.w3.org/2001/XInclude">
<xi:include href="test-environment.xml"/>
<definitions>
<!-- Interval length for calibration and measurement intervals. -->
<interval-length id="testIntervalLength">
<premeasurement>30s</premeasurement>
<measurement>120s</measurement>
<postmeasurement>10s</postmeasurement>
</interval-length>
<!-- Interval length for warmup intervals. Experience shows that
web server loads (such as BUNGEE or HTTP_DS2) profit from long
warmup periods. These increase stability for the measurements. -->
<interval-length id="warmupLength">
<premeasurement>30s</premeasurement>
<measurement>120s</measurement>
<postmeasurement>10s</postmeasurement>
</interval-length>
</definitions>
<warmup-phase id="petwarmup">
<sequence>
<interval-series className="NoDelaySeries">
<scenario-mix-factory>javaScenarioMix</scenario-mix-factory>
<interval-count>1</interval-count>
</interval-series>
<interval-length ref="warmupLength"/>
</sequence>
</warmup-phase>
<calibration-phase id="petcalibration">
<sequence>
<interval-series className="NoDelaySeries">
<scenario-mix-factory>javaScenarioMix</scenario-mix-factory>
<interval-count>3</interval-count>
</interval-series>
<interval-length ref="testIntervalLength"/>
</sequence>
<calibrator className="AverageThroughputCalibrator">
<average-intervals>2</average-intervals>
</calibrator>
</calibration-phase>
<measurement-phase id="petmeasurement">
<sequence>
<interval-series className="GraduatedMeasurementSeries">
<scenario-mix-factory>javaScenarioMix</scenario-mix-factory>
<interval-count>10</interval-count>
</interval-series>
<interval-length ref="testIntervalLength"/>
</sequence>
</measurement-phase>
<suite>
<client-configuration>
<clients key="PetConfig">
<count>logicalCores</count>
<option-set/>
</clients>
</client-configuration>
<description className="tools.descartes.power.chauffeur.pet.PetSuite" classpath="lib/pet.jar"/>
<xi:include href="listeners.xml"/>
<workload enabled="true">
<name>Pet</name>
<worklet enabled="true">
<name>Pet_Native</name>
<launch-definition id="launchDef">
<configuration-key>PetConfig</configuration-key>
</launch-definition>
<workletDefinition>
<location>tools/descartes/power/chauffeur/pet/pet.xml</location>
<classpath>
<entry>lib/pet.jar</entry>
</classpath>
</workletDefinition>
<!-- Sets the number of users per Client to 1. This setting really
doesn't matter, as BUNGEE users don't do anything of note. -->
<max-per-client-users>1</max-per-client-users>
<!-- Sets the number of transaction threads to 1. Set this if you
want BUNGEE to only use one driver thread that sends requests to
the internal webserver. Increase this number for more drivers.
Removing (or commenting) this line should set the amount of driver
threads to the amount of logical cores on the SUT. -->
<num-transaction-threads>1</num-transaction-threads>
<warmup-phase ref="petwarmup"/>
<calibration-phase ref="petcalibration"/>
<measurement-phase ref="petmeasurement"/>
</worklet>
</workload>
</suite>
</chauffeur>