<?xml version="1.0"?>
<!DOCTYPE flagsdescription SYSTEM "http://www.spec.org/dtd/cpuflags2.dtd">
<flagsdescription>

<filename>Dell-Platform-Flags-PowerEdge-AMD-Milan-rev2.2</filename>

<title>Platform Settings for Dell PowerEdge Servers</title>

<os_tuning>
 <![CDATA[

   <dl>

    <dt><b>kernel.randomize_va_space</b> (ASLR)</dt>
   <dd>
     This setting selects the type of process address space
     randomization. The default depends on whether the architecture supports
     ASLR, whether the kernel was built with the CONFIG_COMPAT_BRK
     option, and on the kernel boot options used.<br />
     Possible settings:
     <ul>
        <li>0: Turn process address space randomization off.</li>
	<li>1: Randomize addresses of mmap base, stack, and VDSO pages.</li>
	<li>2: Additionally randomize the heap. (This is the default unless the kernel was built with CONFIG_COMPAT_BRK.)</li>
     </ul>
     Disabling ASLR can make process execution more deterministic and runtimes more consistent.
     For more information see the <tt>randomize_va_space</tt> entry in the
     <a href="https://www.kernel.org/doc/Documentation/sysctl/kernel.txt">Linux sysctl
	     documentation</a>.
    </dd>

    <dt><br/><b>Transparent Hugepages (THP)</b></dt>
    <dd>
      THP is an abstraction layer that automates most aspects of creating, managing,
      and using huge pages. It is designed to hide much of the complexity in using
      huge pages from system administrators and developers. Huge pages
      increase the memory page size from 4 kilobytes to 2 megabytes. This provides
      significant performance advantages on systems with highly contended resources
      and large memory workloads. If memory utilization is too high, or memory is too
      fragmented for huge pages to be allocated, the kernel falls back to regular
      4 KB pages instead. Most recent Linux releases have THP enabled by default.<br />
      THP usage is controlled by the sysfs setting <tt>/sys/kernel/mm/transparent_hugepage/enabled</tt>.
      Possible values:
      <ul>
         <li>never: entirely disable THP usage.</li>
	 <li>madvise: enable THP usage only inside regions marked MADV_HUGEPAGE using madvise(2).</li>
	 <li>always: enable THP usage system-wide. This is the default.</li>
      </ul>
      THP creation is controlled by the sysfs setting <tt>/sys/kernel/mm/transparent_hugepage/defrag</tt>.
      Possible values:
      <ul>
	 <li>never: if no THP are available to satisfy a request, do not attempt to make any.</li>
	 <li>defer: an allocation requesting THP when none are available gets normal pages while THP creation is requested in the background.</li>
	 <li>defer+madvise: acts like "always", but only for allocations in regions marked MADV_HUGEPAGE using madvise(2); for all other regions it acts like "defer".</li>
	 <li>madvise: acts like "always", but only for allocations in regions marked MADV_HUGEPAGE using madvise(2). This is the default.</li>
	 <li>always: an allocation requesting THP when none are available will stall until some can be made.</li>
       </ul>
       An application that requests THP under the "always" policy can often benefit from waiting for an allocation until those huge pages can be assembled.<br/>
       For more information see the <a href="https://www.kernel.org/doc/Documentation/vm/transhuge.txt">Linux transparent hugepage documentation</a>.
   </dd>

   </dl>
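As an illustrative sketch only (not part of this flags file), the OS tuning described above can be applied at runtime on a typical Linux system; the specific values shown are examples, and whether they help is workload-dependent:

```shell
# Illustrative sketch: applying the OS tuning described above.
# Requires root; values are examples, not universal recommendations.

# Disable ASLR for more deterministic run-to-run timings:
sysctl -w kernel.randomize_va_space=0

# Restrict THP use to madvise()'d regions and defer background creation:
echo madvise       > /sys/kernel/mm/transparent_hugepage/enabled
echo defer+madvise > /sys/kernel/mm/transparent_hugepage/defrag

# Verify (the active value appears in [brackets] in the sysfs files):
cat /proc/sys/kernel/randomize_va_space
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag
```

Settings made this way do not persist across reboots; a persistent configuration would go in sysctl.conf or a boot-time script.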

]]> 
</os_tuning>

<firmware>
 <![CDATA[

<dl>
  <dt><b>Logical Processor</b></dt>
  <dd>
    Default: Enabled
    <br />
    <br />
    Each processor core supports up to two logical processors. When set to Enabled, the BIOS
    reports all logical processors. When set to Disabled, the BIOS only reports one
    logical processor per core. Generally, a higher logical processor count increases
    performance for most multi-threaded workloads, so the recommendation is to keep this option Enabled.
    However, for some floating-point/scientific workloads, including HPC workloads,
    disabling this feature may result in higher performance.
  </dd>

  <dt><br/><b>Virtualization Technology</b></dt>
  <dd>
    Default: Enabled
    <br />
    <br />
    When set to Enabled, the BIOS will enable processor Virtualization features and provide the virtualization
    support to the Operating System (OS) through the DMAR table. In general, only virtualized environments
    such as VMware(r) ESX(tm), Microsoft Hyper-V(r), Red Hat(r) KVM, and other virtualized operating systems
    will take advantage of these features. Disabling this feature is not known to significantly alter the
    performance or power characteristics of the system, so leaving this option Enabled is advised for most cases.
  </dd>
 
  <dt><br/><b>Memory Interleaving</b></dt>
  <dd>
    Default: Auto
    <br />
    <br />
    Memory interleaving is supported if a symmetric memory configuration is installed. When this field is set to Disabled, the system supports Non-Uniform Memory Access (NUMA)
    (asymmetric) memory configurations. Channel interleaving is available with all configurations and is the intra-die memory interleave option.
    With channel interleaving, the memory behind each UMC (Unified Memory Controller) is interleaved and presented as one NUMA domain per die.
  </dd>

  <dt><br/><b>NUMA Nodes per Socket</b></dt>
  <dd>
    Default: 1 
    <br />
    <br />
    This field configures the number of memory NUMA domains per socket. The configuration can consist of one whole domain (NPS1), two domains (NPS2), or four domains (NPS4).
    On two-socket platforms, an additional profile (NPS0) is available that maps the whole system memory as a single NUMA domain.
  </dd>

  <dt><br/><b>L3 Cache as NUMA Domain </b></dt>
  <dd>
    Default: Disabled
    <br />
    <br />
    When Enabled, each CCX (core complex) within the processor is declared as its own NUMA domain.
  </dd>

  <dt><br/><b>DRAM Refresh Delay</b></dt>
  <dd>
    Default: Minimum
    <br />
    <br />
    Allowing the CPU memory controller to delay issuing the REFRESH command can improve performance for some workloads. Minimizing the delay time ensures that the
    memory controller issues the REFRESH command at regular intervals.
  </dd>

  <dt><br/><b>System Profile</b></dt>
  <dd>
    Default: Performance Per Watt (DAPC)
    <br />
    <br />
    When set to Custom, additional settings can be changed for Memory Patrol Scrub, CPU Power Management, C1E, C States, and Energy Efficiency Policy.
  </dd>

  <dt><br/><b>CPU Power Management</b></dt>
  <dd>
    Default: System DBPM (DAPC)
    <br />
    <br />
    Allows selection of CPU power management methodology. Maximum Performance is typically selected for performance-centric workloads where it is
    acceptable to consume additional power to achieve the highest possible performance for the computing environment. This mode drives processor
    frequency to the maximum across all cores (although idled cores can still be frequency reduced by C-state enforcement through BIOS or
    OS mechanisms if enabled). This mode also offers the lowest latency of the CPU Power Management Mode options, so it is preferred for
    latency-sensitive environments. OS DBPM is another performance-per-watt option that relies on the operating system to dynamically control
    individual cores in order to save power.
  </dd>

  <dt><br/><b>Memory Patrol Scrub</b></dt>
  <dd>
    Default: Standard
    <br />
    <br />
    Patrol Scrubbing searches the memory for errors and repairs correctable errors to prevent
    the accumulation of memory errors. When set to Disabled, no patrol scrubbing will occur.
    When set to Standard Mode, the entire memory array will be scrubbed once in a 24-hour period.
    When set to Extended Mode, the entire memory array will be scrubbed more frequently to further
    increase system reliability. 
  </dd>

  <dt><br/><b>PCI ASPM L1 Link Power Management</b></dt>
  <dd>
    Default: Enabled
    <br />
    <br />
    When enabled, PCIe Active State Power Management (ASPM) can reduce overall system power at the cost of a slight
    reduction in system performance. NOTE: Some devices may not perform properly (they may hang or cause the system to hang) when ASPM is
    enabled. For this reason, L1 is only enabled for validated, qualified cards.
  </dd>

  <dt><br/><b>CPU Interconnect Bus Link Power Management</b></dt>
  <dd>
    Default: Enabled
    <br />
    <br />
    When Enabled, CPU interconnect bus link power management can reduce overall system power a
    bit while slightly reducing system performance.
  </dd>

  <dt><br/><b>Algorithm Performance Boost Disable (ApbDis)</b></dt>
  <dd>
    Default: Disabled
    <br />
    <ul>
       <li>Enabled: a specific hard-fused Data Fabric (SoC) P-state is forced, optimizing workloads sensitive to latency or throughput. (For higher performance)</li>
       <li>Disabled: P-states are managed automatically by Application Power Management, allowing the processor to provide maximum performance while remaining within a specified power-delivery and thermal envelope. (For power savings)</li>
    </ul>
  </dd>

  <dt><br/><b>Fan Speed Offset</b></dt>
  <dd>
    Default: Off
    <br />
    <br />
    Configuring this option provides additional cooling to the server. If hardware is added (for example, new PCIe cards), additional cooling may be required.
    A fan speed offset causes fan speeds to increase (by the offset percentage) over the baseline fan speeds calculated by the Thermal Control algorithm.
  </dd>
</dl>
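The NUMA-related options above (Memory Interleaving, NUMA Nodes per Socket, L3 Cache as NUMA Domain) change the topology Linux sees at boot. As a read-only illustration, the resulting layout can be inspected from the OS:

```shell
# Read-only sketch: inspect the NUMA topology that the BIOS settings above
# expose to Linux. Output depends on the platform and the chosen settings.
lscpu | grep -i numa                    # NUMA node count and per-node CPU lists
cat /sys/devices/system/node/online     # online NUMA node IDs, e.g. "0-1"
# If the numactl package is installed, `numactl --hardware` additionally
# shows per-node memory sizes and inter-node distances.
```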
 
]]> 
</firmware>

</flagsdescription>
