SPEC CPU2017 Platform Settings for ZTE Systems
- cpupower:
-
The OS 'cpupower' utility is used to change CPU power governor settings. Available settings are:
- performance: Run the CPU at the maximum frequency.
- ondemand (default): Dynamically adjusts the CPU frequency on demand, scaling up to the maximum frequency under load.
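For example, assuming the cpufreq driver on the system exposes these governors, the current policy can be queried and the governor changed with commands like:
  cpupower frequency-info -p              # show the currently active policy and governor
  cpupower frequency-set -g performance   # run all CPUs at the maximum frequency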
- tuned-adm:
-
The 'tuned' daemon provides a number of predefined profiles for typical use cases. The 'tuned-adm' command is used to change settings of the tuned daemon; it can query current settings, list available profiles, recommend a tuning profile for the system, change profiles directly, or turn off tuning. Available profiles are:
- accelerator-performance: Throughput-performance based tuning with disabled higher latency STOP states.
- balanced: General non-specialized tuned profile.
- desktop: Optimize for the desktop use-case.
- hpc-compute: Optimize for HPC compute workloads.
- intel-sst: Configure for Intel Speed Select Base Frequency.
- latency-performance: Optimize for deterministic performance at the cost of increased power consumption.
- network-latency: Optimize for deterministic performance at the cost of increased power consumption, focused on low latency network performance.
- network-throughput: Optimize for streaming network throughput, generally only necessary on older CPUs or 40G+ networks.
- optimize-serial-console: Optimize for serial console use.
- powersave: Optimize for low power consumption.
- throughput-performance: Broadly applicable tuning that provides excellent performance across a variety of common server workloads.
- virtual-guest: Optimize for running inside a virtual guest.
- virtual-host: Optimize for running KVM guests.
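As a sketch (assuming the tuned daemon is installed and running), typical tuned-adm invocations are:
  tuned-adm active                          # query the currently active profile
  tuned-adm recommend                       # show the profile tuned recommends for this system
  tuned-adm profile throughput-performance  # switch to the throughput-performance profile
  tuned-adm off                             # turn tuning off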
- ulimit -s <n>:
-
Sets the stack size to n kbytes, or to 'unlimited' to allow the stack size to grow without limit.
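For example, in a bash shell (the setting applies to the current shell and the processes it starts):
  ulimit -s unlimited   # remove the stack size limit
  ulimit -s 1048576     # limit the stack to 1048576 kbytes (1 GB)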
- drop_caches:
-
Writing to this will cause the kernel to drop clean caches, as well as reclaimable slab objects like dentries and inodes. Once dropped, their memory becomes free.
- To free pagecache: "echo 1 > /proc/sys/vm/drop_caches" or "sysctl -w vm.drop_caches=1"
- To free reclaimable slab objects (includes dentries and inodes): "echo 2 > /proc/sys/vm/drop_caches" or "sysctl -w vm.drop_caches=2"
- To free slab objects and pagecache: "echo 3 > /proc/sys/vm/drop_caches" or "sysctl -w vm.drop_caches=3"
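Because drop_caches frees only clean caches, it is common to run 'sync' first so that dirty pages are written back and more memory can actually be dropped, for example:
  sync; echo 3 > /proc/sys/vm/drop_caches   # flush dirty pages, then drop pagecache and slab objects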
- Transparent Hugepages (THP)
-
THP is an abstraction layer that automates most aspects of creating, managing,
and using huge pages. It is designed to hide much of the complexity in using
huge pages from system administrators and developers. Huge pages
increase the memory page size from 4 kilobytes to 2 megabytes. This provides
significant performance advantages on systems with highly contended resources
and large memory workloads. If memory utilization is too high, or memory is so badly
fragmented that huge pages cannot be allocated, the kernel will fall back to
smaller 4k pages instead. Most recent Linux OS releases have THP enabled by default.
THP usage is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/enabled.
Possible values:
- never: entirely disable THP usage.
- madvise: enable THP usage only inside regions marked MADV_HUGEPAGE using madvise(2).
- always: enable THP usage system-wide. This is the default.
THP creation is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/defrag.
Possible values:
- never: if no THP are available to satisfy a request, do not attempt to make any.
- defer: an allocation requesting THP when none are available gets normal pages while requesting THP creation in the background.
- defer+madvise: acts like "always", but only for allocations in regions marked MADV_HUGEPAGE using madvise(2); for all other regions it's like "defer".
- madvise: acts like "always", but only for allocations in regions marked MADV_HUGEPAGE using madvise(2). This is the default.
- always: an allocation requesting THP when none are available will stall until some are made.
An application that requests THP with "always" can often benefit from waiting for the allocation until those huge pages can be assembled.
For more information see the Linux transparent hugepage documentation.
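For example, the current values (shown in brackets) can be inspected and changed as root with:
  cat /sys/kernel/mm/transparent_hugepage/enabled            # e.g. "[always] madvise never"
  echo always > /sys/kernel/mm/transparent_hugepage/enabled  # enable THP system-wide
  echo madvise > /sys/kernel/mm/transparent_hugepage/defrag  # defrag only for MADV_HUGEPAGE regions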
- kernel.randomize_va_space (ASLR)
-
This setting can be used to select the type of process address space randomization. Defaults differ based on whether the architecture supports ASLR, whether the kernel was built with the CONFIG_COMPAT_BRK option or not, or the kernel boot options used.
Possible settings:
- 0: Turn process address space randomization off.
- 1: Randomize addresses of mmap base, stack, and VDSO pages.
- 2: Additionally randomize the heap. (This is probably the default.)
Disabling ASLR can make process execution more deterministic and runtimes more consistent.
For more information see the randomize_va_space entry in the Linux sysctl documentation.
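For example, to query the current setting and disable ASLR:
  sysctl kernel.randomize_va_space        # query the current value
  sysctl -w kernel.randomize_va_space=0   # disable ASLR (equivalent to "echo 0 > /proc/sys/kernel/randomize_va_space")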
- dirty_ratio:
-
A percentage value. When this percentage of total system memory is modified (dirty), the system begins writing the modifications to disk in the background. The default value is 20 percent. This can be set through a command like "echo 8 > /proc/sys/vm/dirty_ratio" or "sysctl -w vm.dirty_ratio=8".
- swappiness:
-
This control defines how aggressively the kernel swaps out anonymous memory relative to pagecache and other caches. Increasing the value increases the amount of swapping. The default value is 60. A value of 1 tells the kernel to swap only if absolutely necessary. This can be set through a command like "echo 1 > /proc/sys/vm/swappiness" or "sysctl -w vm.swappiness=1".
- zone reclaim mode:
-
This parameter controls whether memory reclaim is performed on a local NUMA node even if there is plenty of memory free on other nodes. This parameter is automatically turned on on machines with more pronounced NUMA characteristics. To tell the kernel to free local node memory rather than grabbing free memory from remote nodes, use a command like "echo 1 > /proc/sys/vm/zone_reclaim_mode" or "sysctl -w vm.zone_reclaim_mode=1".
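Note that values written through /proc or 'sysctl -w' do not survive a reboot. To make any of the vm.* settings above persistent, they can be placed in /etc/sysctl.conf or a drop-in file under /etc/sysctl.d/ (the file name below is only an example) and reloaded:
  echo "vm.zone_reclaim_mode=1" >> /etc/sysctl.d/99-tuning.conf   # example file name
  sysctl -p /etc/sysctl.d/99-tuning.conf                          # apply the file immediately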
- SMT Control:
-
This feature allows enabling or disabling of simultaneous multithreading (SMT) on processors. Values for this BIOS option can be: Auto, Enabled, Disabled. Default is Auto.
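From Linux, the effective SMT state can be verified with, for example:
  lscpu | grep 'Thread(s) per core'         # 2 = SMT enabled, 1 = SMT disabled
  cat /sys/devices/system/cpu/smt/control   # on kernels that expose runtime SMT control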
- SR-IOV Support:
-
In virtualization, single root input/output virtualization (SR-IOV) is a specification that allows the isolation of PCI Express resources for manageability and performance reasons.
A single physical PCI Express device can be shared in a virtual environment using the SR-IOV specification.
If the system has SR-IOV capable PCIe devices, this option enables or disables Single Root IO Virtualization support. Values for this BIOS option can be: Enabled/Disabled. Default is Enabled.
- Determinism Control:
-
This option allows a customized determinism slider mode to control performance. Default is Auto.
- Auto: Use the fused determinism slider mode.
- Manual: Lets the user specify a customized determinism slider mode.
- Determinism Enable:
-
This option allows for AGESA determinism to control performance.
- Performance: Provides predictable performance across all processors of the same type.
- Power: Maximizes performance within the power limits defined by cTDP and PPT.
- TDP Control:
-
This feature allows the user to set a customized TDP value. Available settings are:
- Auto (default setting): Use the fused TDP value.
- Manual: Lets the user specify a customized TDP value.
- TDP:
-
TDP is an acronym for "Thermal Design Power." TDP is the recommended target for power used when designing the cooling capacity for a server.
EPYC processors are able to control this target power consumption within certain limits. This capability is referred to as "configurable TDP" or "cTDP."
TDP can be used to reduce power consumption for greater efficiency, or in some cases, increase power consumption above the default value to provide additional performance.
TDP is controlled using a BIOS option.
The default EPYC TDP value corresponds with the microprocessor's nominal TDP.
The default TDP value is set at a good balance between performance and energy efficiency.
The EPYC 9654 TDP can be reduced as low as 320W, which will minimize the power consumption for the processor under load, but at the expense of peak performance.
Increasing the EPYC 9654 TDP to 400W will maximize peak performance by allowing the CPU to maintain higher dynamic clock speeds, but will make the microprocessor less energy efficient.
Note that at maximum TDP, the CPU thermal solution must be capable of dissipating at least 400W or the EPYC 9654 processor might engage in thermal throttling under load.
The available TDP ranges for each EPYC model are in the table below:
Model      | Minimum TDP (W) | Maximum TDP (W)
EPYC 9654  | 320             | 400
EPYC 9654P | 320             | 400
EPYC 9554  | 320             | 400
EPYC 9554P | 320             | 400
EPYC 9534  | 240             | 300
EPYC 9474F | 320             | 400
EPYC 9374F | 320             | 400
EPYC 9354  | 240             | 300
EPYC 9354P | 240             | 300
EPYC 9334  | 200             | 240
EPYC 9224  | 200             | 240
EPYC 9174F | 320             | 400
EPYC 9124  | 200             | 240
* TDP must remain below the thermal solution design parameters or thermal throttling could be frequently encountered.
- PPT Control:
-
This BIOS option allows the user to set a customized processor Package Power Limit (PPT) value. Values for this BIOS option can be: Auto, Manual.
- Auto: Use the fused PPT value.
- Manual: Lets the user specify a customized PPT value.
- PPT:
-
Sets a customized processor Package Power Limit (PPT) value to be used on all populated processors in the system. The current default value is 0.
- NUMA nodes per socket:
-
This BIOS feature specifies the desired number of NUMA nodes per populated socket in the system. Available settings are:
- NPS1: Each physical processor is a NUMA node, and memory accesses are interleaved across all memory channels directly connected to the physical processor.
- NPS2: Each physical processor is two NUMA nodes, and memory accesses are interleaved across 4 memory channels.
- NPS4: Each physical processor is four NUMA nodes, and memory accesses are interleaved across 2 memory channels.
- Auto (default setting): Use the AGESA default value. Current default is NPS1.
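The resulting NUMA topology (which is also affected by the 'ACPI SRAT L3 Cache as NUMA Domain' option below) can be verified from Linux with, for example:
  numactl --hardware    # list NUMA nodes, their CPUs, and memory sizes
  lscpu | grep -i numa  # summary of NUMA node count and CPU mapping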
- ACPI SRAT L3 Cache as NUMA Domain:
-
Each L3 cache will be exposed as a NUMA node when ACPI SRAT L3 Cache as NUMA Domain is enabled.
On a dual processor system with up to 8 L3 caches per processor, this setting will expose 16 NUMA domains. Available settings are:
- Auto (default setting): Disables this function.
- Enabled: Enables this function.
- L1 Stream HW Prefetcher:
-
Enables/Disables the L1 Stream HW Prefetcher. Most workloads will benefit from the L1 stream hardware prefetcher gathering data and keeping the core pipeline busy. By default, the L1 Stream HW Prefetcher is enabled.
- L2 Stream HW Prefetcher:
-
Enables/Disables the L2 Stream HW Prefetcher. Most workloads will benefit from the L2 stream hardware prefetcher gathering data and keeping the core pipeline busy. By default, the L2 Stream HW Prefetcher is enabled.
- APBDIS:
-
Application Power Management (APM) allows the processor to provide maximum performance while remaining within the specified power delivery and removal envelope.
APM dynamically monitors processor activity and generates an approximation of power consumption. If power consumption exceeds a defined power limit,
a P-state limit is applied by APM hardware to reduce power consumption. APM ensures that average power consumption over a thermally significant time period remains at or below the defined power limit.
Setting APBDIS=1 disables Data Fabric APM and fixes the SoC P-state. Default is "Auto".
- 0: Disable APBDIS.
- 1: Enable APBDIS.
- Auto: Use default value for APBDIS. The default value is 0.
- IOMMU:
-
The Input-Output Memory Management Unit (IOMMU) provides several benefits and is required when using x2APIC. Enabling the IOMMU allows devices (such as the EPYC integrated SATA controller) to present separate IRQs for each attached device instead of one IRQ for the subsystem.
The IOMMU also allows operating systems to provide additional protection for DMA capable I/O devices.
Values for this BIOS option can be: Auto/Enabled/Disabled. The default value is Enabled.
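From Linux, whether the IOMMU is active can be checked with, for example:
  dmesg | grep -i 'AMD-Vi'   # AMD IOMMU initialization messages
  ls /sys/class/iommu/       # populated when an IOMMU is enabled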
- DRAM Scrub time:
-
DRAM scrubbing is a mechanism for the memory controller to periodically read all memory locations and write back corrected data.
The time interval for scrubbing the entire memory can be: Disabled/1 hour/4 hours/8 hours/16 hours/24 hours/48 hours/Auto. The current default is Auto (the AGESA default value).
- SVM Mode:
-
This is the CPU virtualization function (AMD Secure Virtual Machine, SVM). With SVM enabled, you will be able to install a virtual machine on your system.
Values for this BIOS option can be: Enabled/Disabled. Current default is Enabled.
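From Linux, AMD-V/SVM availability can be confirmed with, for example:
  grep -c svm /proc/cpuinfo        # non-zero when the svm CPU flag is exposed
  lscpu | grep -i virtualization   # shows "AMD-V" when SVM is enabled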