SPEC CPU2017 Platform Settings for GIGA-BYTE
- kernel.randomize_va_space (ASLR)
-
This setting can be used to select the type of process address space
randomization. Defaults differ based on whether the architecture supports
ASLR, whether the kernel was built with the CONFIG_COMPAT_BRK
option or not, or the kernel boot options used.
Possible settings:
- 0: Turn process address space randomization off.
- 1: Randomize addresses of mmap base, stack, and VDSO pages.
- 2: Additionally randomize the heap. (This is probably the default.)
Disabling ASLR can make process execution more deterministic and runtimes more consistent.
For more information see the randomize_va_space entry in the Linux sysctl documentation.
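For benchmarking, ASLR is sometimes disabled to make run-to-run timings more repeatable. A minimal sketch of how this is typically done (assuming root privileges; the setting reverts at reboot unless persisted in /etc/sysctl.conf or /etc/sysctl.d/):
sysctl -w kernel.randomize_va_space=0      # turn address space randomization off
cat /proc/sys/kernel/randomize_va_space    # verify the current value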
- drop_caches:
-
Writing to this will cause the kernel to drop clean caches, as well as reclaimable slab objects like dentries and inodes. Once dropped, their memory becomes free.
- To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
- To free reclaimable slab objects (includes dentries and inodes):
echo 2 > /proc/sys/vm/drop_caches
- To free slab objects and pagecache:
echo 3 > /proc/sys/vm/drop_caches
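As a hedged usage sketch (root required), dirty pages are commonly written back with sync first, so that more of the page cache is clean and therefore reclaimable:
sync                                 # flush dirty pages to storage first
echo 3 > /proc/sys/vm/drop_caches    # then drop pagecache and reclaimable slab objects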
- Transparent Hugepages (THP)
-
THP is an abstraction layer that automates most aspects of creating, managing,
and using huge pages. It is designed to hide much of the complexity in using
huge pages from system administrators and developers. Huge pages
increase the memory page size from 4 kilobytes to 2 megabytes. This provides
significant performance advantages on systems with highly contended resources
and large memory workloads. If memory utilization is too high, or memory is so badly
fragmented that huge pages cannot be allocated, the kernel falls back to
smaller 4 KB pages instead. Most recent Linux OS releases have THP enabled by default.
THP usage is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/enabled.
Possible values:
- never: entirely disable THP usage.
- madvise: enable THP usage only inside regions marked MADV_HUGEPAGE using madvise(2).
- always: enable THP usage system-wide. This is the default.
THP creation is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/defrag.
Possible values:
- never: if no THP are available to satisfy a request, do not attempt to make any.
- defer: an allocation requesting THP when none are available gets normal pages while THP creation is requested in the background.
- defer+madvise: acts like "always", but only for allocations in regions marked MADV_HUGEPAGE using madvise(2); for all other regions it's like "defer".
- madvise: acts like "always", but only for allocations in regions marked MADV_HUGEPAGE using madvise(2). This is the default.
- always: an allocation requesting THP when none are available will stall until some are made.
An application that requests THP with "always" can often benefit from waiting for an allocation until those huge pages can be assembled.
For more information see the Linux transparent hugepage documentation.
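As an illustrative sketch (root required; sysfs paths as named above), the THP mode can be inspected and changed at runtime, and /proc/meminfo gives a rough view of how much anonymous memory is currently backed by huge pages:
cat /sys/kernel/mm/transparent_hugepage/enabled      # active value is shown in brackets, e.g. [always] madvise never
echo always > /sys/kernel/mm/transparent_hugepage/enabled
echo madvise > /sys/kernel/mm/transparent_hugepage/defrag
grep AnonHugePages /proc/meminfo                     # anonymous memory currently backed by THP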
- SEV Control: (Default = Enable)
-
Used to disable or enable SEV (Secure Encrypted Virtualization).
- TSME:(Default = Auto)
-
Used to disable or enable TSME (Transparent Secure Memory Encryption).
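SEV and TSME are firmware (BIOS) options; as a rough, hedged check from the OS side (output varies by kernel and platform), the CPU feature flags and kernel log can indicate whether memory-encryption support is visible to Linux:
grep -o -w -e sme -e sev /proc/cpuinfo | sort -u    # CPU advertises SME/SEV support
dmesg | grep -i -e "memory encryption" -e sev       # related kernel messages, if any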
- TDP Control:(Default = Auto)
-
Auto = Use the fused TDP, Manual = User can set customized TDP.
Values for this BIOS option can be: Auto/Manual.
Configures the maximum power that the CPU will consume, up to the
Package Power Limit (PPT). Valid values vary by CPU model. If a value
outside the valid range is set, the CPU will automatically adjust the
value so that it falls within the valid range. When increasing TDP,
additional power will only be consumed up to the Package Power Limit
(PPT), which may be less than the TDP setting.
- TDP:(Default = Auto)
-
Specifies the maximum power that each CPU package may consume in the system. The actual power limit is the maximum of the TDP.
Valid settings are:
Model      | Minimum TDP (W) | Maximum TDP (W)
EPYC 9684X | 320             | 400
EPYC 9384X | 320             | 400
EPYC 9184X | 320             | 400
- PPT Control:(Default = Auto)
-
Auto = Use the fused PPT, Manual = User can set customized PPT.
Values for this BIOS option can be: Auto/Manual.
Configures the maximum power that the CPU will consume, up to the
Package Power Limit (PPT). Valid values vary by CPU model. If a value
outside the valid range is set, the CPU will automatically adjust the
value so that it falls within the valid range. When increasing TDP,
additional power will only be consumed up to the Package Power Limit
(PPT), which may be less than the TDP setting.
- PPT:(Default = Auto)
-
Specifies the maximum power that each CPU package may consume in the system. The actual power limit is the maximum of the PPT.
Valid settings are:
Model      | Minimum PPT (W) | Maximum PPT (W)
EPYC 9684X | 320             | 400
EPYC 9384X | 320             | 400
EPYC 9184X | 320             | 400
- NUMA nodes per socket:(Default = NPS4)
-
Specifies the number of desired NUMA nodes per populated socket in the system:
- NPS1: Each physical processor is a NUMA node, and memory accesses are interleaved across all memory channels directly connected to the physical processor.
- NPS2: Each physical processor is split into two NUMA nodes, and memory accesses are interleaved across 4 memory channels.
- NPS4: Each physical processor is split into four NUMA nodes, and memory accesses are interleaved across 2 memory channels.
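After changing this option, the resulting NUMA layout can be confirmed from the OS; a brief sketch (numactl assumes the numactl package is installed):
numactl --hardware      # lists NUMA nodes with their CPUs and memory
lscpu | grep -i numa    # NUMA node count and CPU-to-node mapping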
- SMT Mode: (Default = Enabled)
-
Can be used to disable simultaneous multithreading (SMT). To re-enable SMT, a POWER CYCLE is needed after selecting the 'Auto' option. WARNING - S3 is NOT SUPPORTED on systems where SMT is disabled.
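SMT is normally toggled through this BIOS option; as an illustrative alternative on recent kernels (availability depends on the kernel version; root required), it can also be controlled at runtime via sysfs:
cat /sys/devices/system/cpu/smt/control           # reports on, off, forceoff, or notsupported
echo off > /sys/devices/system/cpu/smt/control    # disable SMT until the next reboot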
- IOMMU: (Default = Enabled)
-
Enable: Enables the I/O Memory Management Unit (IOMMU), which extends the AMD64 system architecture by adding support for address translation and system memory access protection on DMA transfers from peripheral devices.
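As a rough check from a running system (messages depend on kernel configuration), the AMD IOMMU driver identifies itself as AMD-Vi and, when active, populates IOMMU groups in sysfs:
dmesg | grep -i -e AMD-Vi -e iommu       # driver initialization messages
ls /sys/kernel/iommu_groups | wc -l      # number of IOMMU groups when the IOMMU is enabled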
- 4-link xGMI max speed:(Default = Auto)
-
xGMI (Global Memory Interface) is the Socket SP3 processor socket-to-socket interconnect topology, composed of four x16 links.
Each x16 link consists of 16 lanes, and each lane consists of two unidirectional differential signals.
Since xGMI is the interconnection between processor sockets, these xGMI settings are not applicable for 1S platforms.
NUMA-unaware workloads may need maximum xGMI bandwidth/speed, while compute-efficient workloads may need to minimize xGMI power.
The xGMI speed can be lowered, lane width can be reduced from x16 to x8 (or x2), or an xGMI link can be disabled if power consumption is too high.
The default value for this option on Milan platforms is "Auto", which corresponds to "32Gbps". On platforms that support higher speeds, it can be raised to increase performance on workloads that benefit from higher cross-socket bandwidth, at the cost of some additional power consumption.
- ACPI SRAT L3 Cache as NUMA Domain:(Default = Auto)
-
Enables/disables reporting each L3 cache as a NUMA domain to the OS.
Options available: Auto, Enabled, Disabled.
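When this option is enabled, the OS sees one NUMA node per L3 cache, so the reported node count rises accordingly; a hedged way to verify from Linux:
cat /sys/devices/system/node/online    # range of online NUMA node IDs
lscpu | grep "NUMA node(s)"            # total NUMA node count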
- Memory Interleaving:(Default = Auto)
-
Memory interleaving is a technique that CPUs use to increase the memory bandwidth available for an application.
Without interleaving, consecutive memory blocks, often cache lines, are read from the same memory bank.
Because of this, software that reads consecutive memory must wait for a memory transfer to complete before starting the next memory access, reducing throughput and increasing latency.
By enabling memory interleaving, consecutive memory blocks are in different banks and can all contribute to the overall memory bandwidth, thus increasing throughput and lowering latency.
Values for this BIOS option can be: Auto/Disabled/Enabled.
- Determinism Control:(Default = Auto)
-
Auto = Use default performance determinism settings.
Manual = User can set custom performance determinism settings.
- Determinism Enable: (Default = Power)
-
Selects the determinism mode for the CPU:
- Power: Maximizes performance within the power limits defined by TDP and PPT.
- Performance: Provides predictable performance across all processors of the same type.