Quanta Cloud Technology QuantaGrid D54Q-2U
SPECjbb2015-MultiJVM max-jOPS: 471309
SPECjbb2015-MultiJVM critical-jOPS: 275975
Tested by: Quanta Computer Inc. Test Sponsor: Quanta Computer Inc. Test location: Taoyuan, TW Test date: Nov 29, 2023
SPEC license #: 9050 Hardware Availability: Dec-2023 Software Availability: Oct-2023 Publication: Thu Dec 14 10:44:19 EST 2023
Benchmark Results Summary
 
Overall Throughput RT curve
Overall SUT (System Under Test) Description
Vendor: Quanta Cloud Technology
Vendor URL: https://www.qct.io/
System Source: Single Supplier
System Designation: Rack Mount Chassis
Total Systems: 1
All SUT Systems Identical: YES
Total Nodes: 1
All Nodes Identical: YES
Nodes Per System: 1
Total Chips: 2
Total Cores: 128
Total Threads: 256
Total Memory Amount (GB): 1024
Total OS Images: 1
SW Environment: Non-virtual
 
Hardware hw_1
Name: QuantaGrid D54Q-2U
Vendor: Quanta Cloud Technology
Vendor URL: https://www.qct.io/
Available: Dec-2023
Model: QuantaGrid D54Q-2U
Form Factor: 2U
CPU Name: Intel Xeon Platinum 8592+
CPU Characteristics: 64 Cores, 1.90 GHz, 320 MB L3 Cache (Turbo Boost Technology up to 3.90 GHz)
Number of Systems: 1
Nodes Per System: 1
Chips Per System: 2
Cores Per System: 128
Cores Per Chip: 64
Threads Per System: 256
Threads Per Core: 2
Version: 3B05.QCT4T1 11/06/2023
CPU Frequency (MHz): 1900
Primary Cache: 32 KB(I) + 48 KB(D) on chip per core
Secondary Cache: 2 MB (I+D) on chip per core
Tertiary Cache: 320 MB (I+D) on chip per chip
Other Cache: None
Disk: 1 x 256 GB NVMe M.2 SSD
File System: ext4
Memory Amount (GB): 1024
# and size of DIMM(s): 16 x 64 GB
Memory Details: 64 GB 2Rx4 PC5-5600B-R
# and type of Network Interface Cards (NICs): 1 x 10 GbE two-port OCP 3.0 PCIe NIC
Power Supply Quantity and Rating (W): 1 x 2600 W
Other Hardware: None
Cabinet/Housing/Enclosure: None
Shared Description: None
Shared Comment: None
Notes
  • Yes: The test sponsor attests, as of date of publication, that CVE-2017-5754 (Meltdown) is mitigated in the system as tested and documented.
  • Yes: The test sponsor attests, as of date of publication, that CVE-2017-5753 (Spectre variant 1) is mitigated in the system as tested and documented.
  • Yes: The test sponsor attests, as of date of publication, that CVE-2017-5715 (Spectre variant 2) is mitigated in the system as tested and documented.
Other Hardware network_1
Name: None
Vendor: None
Vendor URL: None
Version: None
Available: None
Bitness: None
Notes: None
Operating System os_1
Name: Ubuntu 22.04.3 LTS
Vendor: Ubuntu
Vendor URL: http://ubuntu.com/
Version: 5.15.0-89-generic.x86_64
Available: Aug-2023
Bitness: 64
Notes: None
Java Virtual Machine jvm_1
Name: Oracle Java SE 17.0.9
Vendor: Oracle
Vendor URL: http://www.oracle.com/
Version: Java HotSpot 64-bit Server VM, version 17.0.9
Available: Oct-2023
Bitness: 64
Notes: None
Other Software other_1
Name: None
Vendor: None
Vendor URL: None
Version: None
Available: None
Bitness: None
Notes: None
Hardware
OS Images os_Image_1(1)
Hardware Description hw_1
Number of Systems 1
SW Environment non-virtual
Tuning BIOS Configuration:
  • Sub NUMA Cluster set to Enabled, SNC2 (2 clusters)
  • Patrol Scrub set to Disabled
Notes None
OS Image os_Image_1
JVM Instances jvm_Ctr_1(1), jvm_Backend_1(8), jvm_TxInjector_1(8)
OS Image Description os_1
Tuning

  • cpupower -c all frequency-set -g performance
  • tuned-adm profile throughput-performance
  • echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
  • echo always > /sys/kernel/mm/transparent_hugepage/defrag
  • echo always > /sys/kernel/mm/transparent_hugepage/enabled
  • echo 300 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
  • echo 8000 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  • systemctl stop systemd-update-utmp-runlevel.service
  • echo 10000 > /proc/sys/kernel/sched_cfs_bandwidth_slice_us
  • echo 0 > /proc/sys/kernel/sched_child_runs_first
  • echo 56000000 > /sys/kernel/debug/sched/latency_ns
  • echo 1000 > /sys/kernel/debug/sched/migration_cost_ns
  • echo 16000000 > /sys/kernel/debug/sched/min_granularity_ns
  • echo 100 > /proc/sys/kernel/sched_rr_timeslice_ms
  • echo 1000000 > /proc/sys/kernel/sched_rt_period_us
  • echo 990000 > /proc/sys/kernel/sched_rt_runtime_us
  • echo 0 > /proc/sys/kernel/sched_schedstats
  • echo 1 > /sys/kernel/debug/sched/tunable_scaling
  • echo 50000000 > /sys/kernel/debug/sched/wakeup_granularity_ns
  • echo 3000 > /proc/sys/vm/dirty_expire_centisecs
  • echo 500 > /proc/sys/vm/dirty_writeback_centisecs
  • echo 40 > /proc/sys/vm/dirty_ratio
  • echo 10 > /proc/sys/vm/dirty_background_ratio
  • echo 10 > /proc/sys/vm/swappiness
  • echo 0 > /proc/sys/kernel/numa_balancing
  • ulimit -n 1024000
  • ulimit -v 800000000
  • ulimit -m 800000000
  • ulimit -l 800000000
  • echo 274877906944 > /proc/sys/kernel/shmmax
  • echo 274877906944 > /proc/sys/kernel/shmall
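Taken together, the two huge-page settings above reserve a fixed memory pool for the JVM heaps. A minimal sanity-check sketch (heap sizes are taken from the JVM command lines in this report; the variable names are illustrative, and the split between 1 GiB and 2 MiB pages is an assumption about how the heaps map onto them):

```shell
# Sketch: memory reserved by the huge-page settings versus total JVM heap.
# Heaps from this report's JVM command lines: 8 Backends x 29 GiB (-Xmx29g),
# plus 2 GiB (-Xmx2g) each for the Controller and the 8 TxInjectors.
huge_1g_gib=300                    # 300 x 1 GiB pages (hugepages-1048576kB)
huge_2m_gib=$((8000 * 2 / 1024))   # 8000 x 2 MiB pages (hugepages-2048kB)
reserved_gib=$((huge_1g_gib + huge_2m_gib))
heap_gib=$((8 * 29 + 9 * 2))       # Backends + Controller + TxInjectors
echo "reserved=${reserved_gib} GiB  heaps=${heap_gib} GiB"
```

The 315 GiB reserved comfortably covers the 250 GiB of configured heap plus large-page metaspace/code-cache use.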

Notes None
JVM Instance jvm_Ctr_1
Parts of Benchmark Controller
JVM Instance Description jvm_1
Command Line -server -Xms2g -Xmx2g -Xmn1536m -XX:+UseLargePages -XX:LargePageSizeInBytes=1G -XX:+UseParallelGC -XX:ParallelGCThreads=2
Tuning numactl used to interleave the Controller amongst all available nodes, e.g.:
  • numactl --interleave=all
Notes None
JVM Instance jvm_Backend_1
Parts of Benchmark Backend
JVM Instance Description jvm_1
Command Line -XX:+UseParallelGC -XX:+UseLargePages -XX:+AlwaysPreTouch -XX:-UseAdaptiveSizePolicy -XX:MaxTenuringThreshold=15 -XX:InlineSmallCode=10k -verbose:gc -XX:-UseCountedLoopSafepoints -XX:LoopUnrollLimit=20 -server -XX:TargetSurvivorRatio=95 -XX:SurvivorRatio=28 -XX:LargePageSizeInBytes=1G -XX:MaxGCPauseMillis=500 -XX:AdaptiveSizeMajorGCDecayTimeScale=12 -XX:AdaptiveSizeDecrementScaleFactor=2 -XX:AllocatePrefetchLines=3 -XX:AllocateInstancePrefetchLines=2 -XX:AllocatePrefetchStepSize=128 -XX:AllocatePrefetchDistance=384 -Xms29g -Xmx29g -Xmn27g -XX:UseAVX=0 -XX:ParallelGCThreads=32 -XX:+UseHugeTLBFS
Tuning Used numactl to affinitize each Backend JVM to physical cores in a NUMA node:
    • Group1: numactl --physcpubind=0-15,128-143 --localalloc
    • Group2: numactl --physcpubind=16-31,144-159 --localalloc
    • Group3: numactl --physcpubind=32-47,160-175 --localalloc
    • Group4: numactl --physcpubind=48-63,176-191 --localalloc
    • Group5: numactl --physcpubind=64-79,192-207 --localalloc
    • Group6: numactl --physcpubind=80-95,208-223 --localalloc
    • Group7: numactl --physcpubind=96-111,224-239 --localalloc
    • Group8: numactl --physcpubind=112-127,240-255 --localalloc
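The eight bindings follow a regular pattern: 16 physical cores per group, with the SMT sibling range offset by 128 (threads 128-255 are the hyperthread siblings of cores 0-127). A sketch that regenerates the list (variable names are illustrative, not from the report):

```shell
# Sketch: regenerate the 8 per-group CPU bindings.
# Assumes 16 physical cores per group and SMT siblings at core_id + 128.
for g in 0 1 2 3 4 5 6 7; do
  start=$((g * 16)); end=$((start + 15))
  smt_start=$((start + 128)); smt_end=$((end + 128))
  echo "Group$((g + 1)): numactl --physcpubind=${start}-${end},${smt_start}-${smt_end} --localalloc"
done
```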
Notes None
JVM Instance jvm_TxInjector_1
Parts of Benchmark TxInjector
JVM Instance Description jvm_1
Command Line -server -Xms2g -Xmx2g -Xmn1536m -XX:+UseLargePages -XX:LargePageSizeInBytes=1G -XX:+UseParallelGC -XX:ParallelGCThreads=2
Tuning Used numactl to affinitize each TxInjector JVM to physical cores in a NUMA node:
    • Group1: numactl --physcpubind=0-15,128-143 --localalloc
    • Group2: numactl --physcpubind=16-31,144-159 --localalloc
    • Group3: numactl --physcpubind=32-47,160-175 --localalloc
    • Group4: numactl --physcpubind=48-63,176-191 --localalloc
    • Group5: numactl --physcpubind=64-79,192-207 --localalloc
    • Group6: numactl --physcpubind=80-95,208-223 --localalloc
    • Group7: numactl --physcpubind=96-111,224-239 --localalloc
    • Group8: numactl --physcpubind=112-127,240-255 --localalloc
Notes None
max-jOPS = jOPS passed before the First Failure
Pass/Fail Pass Pass Fail Fail Fail
jOPS 465630 471309 476987 482666 488344
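The max-jOPS rule above can be applied mechanically: walk the RT-curve steps in order and keep the last passing jOPS before the first failure. A small sketch using the five steps from this table (variable names are illustrative):

```shell
# Sketch: max-jOPS = last passing step before the first failure.
steps="465630:Pass 471309:Pass 476987:Fail 482666:Fail 488344:Fail"
max_jops=0
for s in $steps; do
  jops=${s%%:*}; verdict=${s##*:}
  [ "$verdict" = "Fail" ] && break   # stop at the first failure
  max_jops=$jops
done
echo "max-jOPS = $max_jops"
```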
critical-jOPS = Geomean ( jOPS @ 10000; 25000; 50000; 75000; 100000; SLAs )
Response time percentile is 99-th
SLA (us) 10000 25000 50000 75000 100000 Geomean
jOPS 184548 224297 298117 343544 377615 275975
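The critical-jOPS figure is the geometric mean of the jOPS columns at the five SLAs. A quick arithmetic check (plain geometric mean; SPEC's internal rounding may differ from this by a few jOPS):

```shell
# Sketch: geometric mean of the jOPS values at the 10/25/50/75/100 ms SLAs.
jops="184548 224297 298117 343544 377615"
geomean=$(printf '%s\n' "$jops" | awk '{ s = 0
  for (i = 1; i <= NF; i++) s += log($i)
  printf "%.0f", exp(s / NF) }')
echo "critical-jOPS ~= $geomean"   # close to the reported 275975
```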
SLA \ Percentile 10-th 50-th 90-th 95-th 99-th 100-th
500us 5678 / 11357 5678 / 11357 - / 5678 - / 5678 - / 5678 - / 5678
1000us 295278 / 300956 22714 / 28392 11357 / 17035 11357 / 17035 5678 / 11357 - / 5678
5000us 454274 / 459952 408846 / 414525 306635 / 312313 215780 / 221458 90855 / 96533 11357 / 5678
10000us 459952 / 465630 420203 / 425881 340705 / 346384 295278 / 300956 181709 / 187388 11357 / 5678
25000us 465630 / 471309 442917 / 448595 391811 / 397489 363419 / 369097 221458 / 227137 11357 / 5678
50000us 471309 / - 448595 / 454274 408846 / 414525 386133 / 391811 295278 / 300956 11357 / 17035
75000us 471309 / - 454274 / 459952 420203 / 425881 397489 / 403168 340705 / 346384 96533 / 34071
100000us 471309 / - 459952 / 465630 425881 / 431560 414525 / 420203 374776 / 380454 176031 / 56784
200000us 471309 / - 471309 / - 448595 / 454274 442917 / 448595 425881 / 431560 323670 / 317992
500000us 471309 / - 471309 / - 471309 / - 471309 / - 459952 / 465630 425881 / 408846
1000000us 471309 / - 471309 / - 471309 / - 471309 / - 471309 / - 454274 / 459952
Probes jOPS / Total jOPS
Request Mix Accuracy
Note
(Actual % in the Mix - Expected % in the Mix) must be within:
'Main Tx' limit of +/-5.0% for the requests whose expected % in the mix is >= 10.0%
'Minor Tx' limit of +/-1.0% for the requests whose expected % in the mix is < 10.0%
There were no non-critical failures in Response Time curve building
Delay between status pings
IR/PR Accuracy
This section lists properties only set by user
Property Name Default Controller Group1.Backend.beJVM Group1.TxInjector.txiJVM1 Group2.Backend.beJVM Group2.TxInjector.txiJVM2 Group3.Backend.beJVM Group3.TxInjector.txiJVM3 Group4.Backend.beJVM Group4.TxInjector.txiJVM4 Group5.Backend.beJVM Group5.TxInjector.txiJVM5 Group6.Backend.beJVM Group6.TxInjector.txiJVM6 Group7.Backend.beJVM Group7.TxInjector.txiJVM7 Group8.Backend.beJVM Group8.TxInjector.txiJVM8
specjbb.comm.connect.client.pool.size 256 232 256 256 256 256 256 256 256 256 256 256 256 256 256 256 256 256
specjbb.comm.connect.selector.runner.count 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
specjbb.comm.connect.timeouts.connect 60000 600000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000
specjbb.comm.connect.timeouts.read 60000 600000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000
specjbb.comm.connect.timeouts.write 60000 600000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000
specjbb.comm.connect.worker.pool.max 256 81 256 256 256 256 256 256 256 256 256 256 256 256 256 256 256 256
specjbb.comm.connect.worker.pool.min 1 24 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
specjbb.controller.host localhost localhost
specjbb.controller.port 24000 24000
specjbb.controller.type HBIR_RT HBIR_RT
specjbb.customerDriver.threads 64 {=75, probe=69, saturate=85}
specjbb.forkjoin.workers 256 {Tier1=310, Tier2=7, Tier3=32}
specjbb.group.count 1 8
specjbb.mapreducer.pool.size 256 223
specjbb.txi.pergroup.count 1 1
 
Level: COMPLIANCE
Check Agent Result
Check properties on compliance All PASSED
 
Level: CORRECTNESS
Check Agent Result
Compare SM and HQ Inventory All PASSED
High-bound (max attempted) is 567842 IR
High-bound (settled) is 498893 IR