git clone https://github.com/kubernetes-sigs/node-feature-discovery
cd node-feature-discovery
Docker build
Build the container image
See customizing the build below for altering the container image registry, for example.
make
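For instance, the target registry can be overridden at build time through the IMAGE_REGISTRY Makefile variable (a sketch; substitute your own registry URI, see customizing the build):
make IMAGE_REGISTRY=<my custom registry uri>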
Push the container image
Optional; this example uses Docker.
docker push <IMAGE_TAG>
Usage of nfd-topology-updater (excerpt):
  -watch-namespace string
        Namespace to watch pods (for testing/debugging purpose). Use * for all namespaces. (default "*")
NOTE:
NFD topology updater needs certain directories and/or files from the host mounted inside the NFD container. Thus, you need to provide Docker with the correct --volume options in order for them to work correctly when run stand-alone directly with docker run. See the template spec for up-to-date information about the required volume mounts.
PodResource API is a prerequisite for nfd-topology-updater. Prior to Kubernetes v1.23, the kubelet must be started with the following flag: --feature-gates=KubeletPodResourcesGetAllocatable=true. Starting with Kubernetes v1.23, GetAllocatableResources is enabled by default through the KubeletPodResourcesGetAllocatable feature gate.
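A minimal standalone invocation could look like the following sketch. The host-side paths and image tag are illustrative, the in-container paths assume the default -kubelet-config-file and -podresources-socket locations, and the template spec remains the authoritative reference for the required mounts:
export NFD_CONTAINER_IMAGE=gcr.io/k8s-staging-nfd/node-feature-discovery:master
docker run --rm \
  -v /var/lib/kubelet/config.yaml:/host-var/lib/kubelet/config.yaml:ro \
  -v /var/lib/kubelet/pod-resources/kubelet.sock:/host-var/lib/kubelet/pod-resources/kubelet.sock \
  ${NFD_CONTAINER_IMAGE} nfd-topology-updater -no-publish -oneshot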
Documentation
All documentation resides under the docs directory in the source tree. It is designed to be served as an HTML site by GitHub Pages.
Building the documentation is containerized in order to fix the build environment. The recommended way to develop documentation is to run:
make site-serve
This will build the documentation in a container and serve it under localhost:4000/, making it easy to verify the results. Any changes made to docs/ automatically trigger a rebuild and are reflected in the served content; inspect them with a simple browser refresh.
To just build the HTML documentation, run:
make site-build
This will generate HTML documentation under docs/_site/.
To quickly view available command line flags, execute nfd-master -help. In a Docker container:
docker run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-master -help
-h, -help
Print usage and exit.
-version
Print version and exit.
-prune
The -prune flag is a sub-command-like option for cleaning up the cluster. It causes nfd-master to remove all NFD-related labels, annotations and extended resources from all Node objects of the cluster and exit.
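Example (a one-off cleanup invocation, assuming the same cluster access as a normal nfd-master run):
nfd-master -prune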
-port
The -port flag specifies the TCP port that nfd-master listens on for incoming requests.
Default: 8080
Example:
nfd-master -port=443
-instance
The -instance flag makes it possible to run multiple NFD deployments in parallel. In practice, it separates the node annotations between deployments so that each of them can store metadata independently. The instance name must start and end with an alphanumeric character and may only contain alphanumeric characters, -, _ or ..
Default: empty
Example:
nfd-master -instance=network
-ca-file
The -ca-file is one of the three flags (together with -cert-file and -key-file) controlling master-worker mutual TLS authentication on the nfd-master side. This flag specifies the TLS root certificate that is used for authenticating incoming connections. NFD-Worker side needs to have matching key and cert files configured in order for the incoming requests to be accepted.
Default: empty
Note: Must be specified together with -cert-file and -key-file
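Example (the certificate paths are illustrative):
nfd-master -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/master.crt -key-file=/opt/nfd/master.key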
-label-whitelist
The -label-whitelist flag specifies a regular expression for filtering feature labels based on their name. Each label must match against the given regular expression in order to be published.
Note: The regular expression is only matched against the "basename" part of the label, i.e. the part of the name after '/'. The label namespace is omitted.
Default: empty
Example:
nfd-master -label-whitelist='.*cpuid\.'
-extra-label-ns
The -extra-label-ns flag specifies a comma-separated list of allowed feature label namespaces. By default, nfd-master only allows creating labels in the default feature.node.kubernetes.io and profile.node.kubernetes.io label namespaces and their sub-namespaces (e.g. vendor.feature.node.kubernetes.io and sub.ns.profile.node.kubernetes.io). This option can be used to allow other vendor or application specific namespaces for custom labels from the local and custom feature sources.
The same namespace control and this flag apply to Extended Resources (created with -resource-labels), too.
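Example (the vendor namespaces are hypothetical):
nfd-master -extra-label-ns=vendor-1.com,vendor-2.io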
-resource-labels
The -resource-labels flag specifies a comma-separated list of features to be advertised as extended resources instead of labels. Features that have integer values can be published as Extended Resources by listing them in this flag.
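Example (the feature names are hypothetical):
nfd-master -resource-labels=vendor-1.com/feature-1,vendor-2.io/feature-2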
-ca-file
The -ca-file is one of the three flags (together with -cert-file and -key-file) controlling the mutual TLS authentication on the topology-updater side. This flag specifies the TLS root certificate that is used for verifying the authenticity of nfd-master.
Default: empty
Note: Must be specified together with -cert-file and -key-file
-cert-file
The -cert-file is one of the three flags (together with -ca-file and -key-file) controlling mutual TLS authentication on the topology-updater side. This flag specifies the TLS certificate presented for authenticating outgoing requests.
Default: empty
Note: Must be specified together with -ca-file and -key-file
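Example (the certificate paths are illustrative):
nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key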
-watch-namespace
The -watch-namespace flag specifies the namespace to watch, ensuring that resource hardware topology examination only happens for pods running in the specified namespace. Pods that are not running in the specified namespace are not considered during resource accounting. This is particularly useful for testing/debugging purposes. A "*" value means that all pods are considered during the accounting process.
Default: "*"
Example:
nfd-topology-updater -watch-namespace=rte
-kubelet-config-file
The -kubelet-config-file specifies the path to the Kubelet's configuration file.
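Default: /host-var/lib/kubelet/config.yaml
Example (the host-side path is illustrative):
nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml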
-podresources-socket
The -podresources-socket flag specifies the path to the Unix socket where kubelet exports a gRPC service to enable discovery of in-use CPUs and devices, and to provide metadata for them.
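Default: /host-var/lib/kubelet/pod-resources/kubelet.sock
Example (the host-side path is illustrative):
nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock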
-options
The -options flag may be used to specify and override configuration file options directly from the command line. The required format is the same as in the config file, i.e. JSON or YAML. Configuration options specified via this flag will override those from the configuration file.
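A minimal sketch (the blacklisted CPUID flags are arbitrary; any option from the configuration file reference below can be used):
nfd-worker -options='{"sources": {"cpu": {"cpuid": {"attributeBlacklist": ["MMX","MMXEXT"]}}}}'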
-label-whitelist
The -label-whitelist flag specifies a regular expression for filtering feature labels based on their name. Each label must match against the given regular expression in order to be published.
Note: The regular expression is only matched against the "basename" part of the label, i.e. the part of the name after '/'. The label namespace is omitted.
Note: This flag takes precedence over the core.labelWhiteList configuration file option.
Default: empty
Example:
nfd-worker -label-whitelist='.*cpuid\.'
DEPRECATED: you should use the core.labelWhiteList option in the configuration file, instead.
-oneshot
The -oneshot flag causes nfd-worker to exit after one pass of feature detection.
Default: false
Example:
nfd-worker -oneshot -no-publish
-sleep-interval
The -sleep-interval specifies the interval between feature re-detection (and node re-labeling). A non-positive value implies infinite sleep interval, i.e. no re-detection or re-labeling is done.
Note: This flag takes precedence over the core.sleepInterval configuration file option.
Default: 60s
Example:
nfd-worker -sleep-interval=1h
DEPRECATED: you should use the core.sleepInterval option in the configuration file, instead.
Logging
The following logging-related flags are inherited from the klog package.
Note: The logger setup can also be specified via the core.klog configuration file options. However, the command line flags take precedence over any corresponding config file options specified.
-add_dir_header
If true, adds the file directory to the header of the log messages.
Default: false
-alsologtostderr
Log to standard error as well as files.
Default: false
-log_backtrace_at
When logging hits line file:N, emit a stack trace.
Default: empty
-log_dir
If non-empty, write log files in this directory.
Default: empty
-log_file
If non-empty, use this log file.
Default: empty
-log_file_max_size
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
Default: 1800
-logtostderr
Log to standard error instead of files.
Default: true
-skip_headers
If true, avoid header prefixes in the log messages.
Default: false
-skip_log_headers
If true, avoid headers when opening log files.
Default: false
-stderrthreshold
Logs at or above this threshold go to stderr.
Default: 2
-v
Number for the log level verbosity.
Default: 0
-vmodule
Comma-separated list of pattern=N settings for file-filtered logging.
core
The core section contains common configuration settings that are not specific to any particular feature source.
core.sleepInterval
core.sleepInterval specifies the interval between consecutive passes of feature (re-)detection, and thus also the interval between node re-labeling. A non-positive value implies infinite sleep interval, i.e. no re-detection or re-labeling is done.
Note: Overridden by the deprecated -sleep-interval command line flag (if specified).
Default: 60s
Example:
core:
  sleepInterval: 60s
core.sources
core.sources specifies the list of enabled feature sources. A special value all enables all feature sources.
Note: Overridden by the deprecated -sources command line flag (if specified).
Default: [all]
Example:
core:
  sources:
    - system
    - custom
core.labelWhiteList
core.labelWhiteList specifies a regular expression for filtering feature labels based on the label name. Non-matching labels are not published.
Note: The regular expression is only matched against the "basename" part of the label, i.e. the part of the name after '/'. The label prefix (or namespace) is omitted.
Note: Overridden by the deprecated -label-whitelist command line flag (if specified).
Default: null
Example:
core:
  labelWhiteList: '^cpu-cpuid'
core.noPublish
Setting core.noPublish to true disables all communication with the nfd-master. It is effectively a "dry-run" flag: nfd-worker runs feature detection normally, but no labeling requests are sent to nfd-master.
Note: Overridden by the -no-publish command line flag (if specified).
Default: false
Example:
core:
  noPublish: true
core.klog
The following options specify the logger configuration, most of which can be dynamically adjusted at run-time.
Note: The logger options can also be specified via command line flags which take precedence over any corresponding config file options.
core.klog.addDirHeader
If true, adds the file directory to the header of the log messages.
Default: false
Run-time configurable: yes
core.klog.alsologtostderr
Log to standard error as well as files.
Default: false
Run-time configurable: yes
core.klog.logBacktraceAt
When logging hits line file:N, emit a stack trace.
Default: empty
Run-time configurable: yes
core.klog.logDir
If non-empty, write log files in this directory.
Default: empty
Run-time configurable: no
core.klog.logFile
If non-empty, use this log file.
Default: empty
Run-time configurable: no
core.klog.logFileMaxSize
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
Default: 1800
Run-time configurable: no
core.klog.logtostderr
Log to standard error instead of files.
Default: true
Run-time configurable: yes
core.klog.skipHeaders
If true, avoid header prefixes in the log messages.
Default: false
Run-time configurable: yes
core.klog.skipLogHeaders
If true, avoid headers when opening log files.
Default: false
Run-time configurable: no
core.klog.stderrthreshold
Logs at or above this threshold go to stderr.
Default: 2
Run-time configurable: yes
core.klog.v
Number for the log level verbosity.
Default: 0
Run-time configurable: yes
core.klog.vmodule
Comma-separated list of pattern=N settings for file-filtered logging.
Default: empty
Run-time configurable: yes
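As a minimal sketch, these options can be set in the worker configuration file under core.klog (the values shown are arbitrary):
core:
  klog:
    addDirHeader: true
    v: 2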
sources
The sources section contains feature source specific configuration parameters.
sources.cpu
sources.cpu.cpuid
sources.cpu.cpuid.attributeBlacklist
Prevent publishing cpuid features listed in this option.
Note: Overridden by sources.cpu.cpuid.attributeWhitelist (if specified).
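A minimal sketch of this option in the worker configuration file (the listed CPUID flags are arbitrary examples):
sources:
  cpu:
    cpuid:
      attributeBlacklist: [MMX, MMXEXT, RDRAND]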
This is a SIG-node subproject, hosted under the Kubernetes SIGs organization in GitHub. The project was established in 2016 and was migrated to Kubernetes SIGs in 2018.
-[{"title":"Get started","layout":"default","sort":1,"content":"
Node Feature Discovery
\n\n
Welcome to Node Feature Discovery – a Kubernetes add-on for detecting hardware\nfeatures and system configuration!
This software enables node feature discovery for Kubernetes. It detects\nhardware features available on each node in a Kubernetes cluster, and\nadvertises those features using node labels.
\n\n
NFD consists of three software components:
\n\n\n
nfd-master
\n
nfd-worker
\n
nfd-topology-updater
\n\n\n
NFD-Master
\n\n
NFD-Master is the daemon responsible for communication towards the Kubernetes\nAPI. That is, it receives labeling requests from the worker and modifies node\nobjects accordingly.
\n\n
NFD-Worker
\n\n
NFD-Worker is a daemon responsible for feature detection. It then communicates\nthe information to nfd-master which does the actual node labeling. One\ninstance of nfd-worker is supposed to be running on each node of the cluster,
\n\n
NFD-Topology-Updater
\n\n
NFD-Topology-Updater is a daemon responsible for examining allocated\nresources on a worker node to account for resources available to be allocated\nto new pod on a per-zone basis (where a zone can be a NUMA node). It then\ncommunicates the information to nfd-master which does the\nNodeResourceTopology CR creation corresponding\nto all the nodes in the cluster. One instance of nfd-topology-updater is\nsupposed to be running on each node of the cluster.
\n\n
Feature Discovery
\n\n
Feature discovery is divided into domain-specific feature sources:
\n\n
\n
CPU
\n
IOMMU
\n
Kernel
\n
Memory
\n
Network
\n
PCI
\n
Storage
\n
System
\n
USB
\n
Custom (rule-based custom features)
\n
Local (hooks for user-specific features)
\n
\n\n
Each feature source is responsible for detecting a set of features which. in\nturn, are turned into node feature labels. Feature labels are prefixed with\nfeature.node.kubernetes.io/ and also contain the name of the feature source.\nNon-standard user-specific feature labels can be created with the local and\ncustom feature sources.
See customizing the build below for altering the\ncontainer image registry, for example.
\n\n
make\n
\n\n
Push the container image
\n\n
Optional, this example with Docker.
\n\n
docker push <IMAGE_TAG>\n
\n\n
Change the job spec to use your custom image (optional)
\n\n
To use your published image from the step above instead of the\nk8s.gcr.io/nfd/node-feature-discovery image, edit image\nattribute in the spec template(s) to the new location\n(<registry-name>/<image-name>[:<version>]).
\n\n
Deployment
\n\n
The yamls makefile generates a kustomization.yaml matching your locally\nbuilt image and using the deploy/overlays/default deployment. See\nbuild customization below for configurability, e.g.\nchanging the deployment namespace.
\n\n
K8S_NAMESPACE=my-ns make yamls\nkubectl apply -k.\n
\n\n
You can use alternative deployment methods by modifying the auto-generated\nkustomization file. For example, deploying worker and master in the same pod by\npointing to deployment/overlays/default-combined.
\n\n
Building locally
\n\n
You can also build the binaries locally
\n\n
make build\n
\n\n
This will compile binaries under bin/
\n\n
Customizing the build
\n\n
There are several Makefile variables that control the build process and the\nname of the resulting container image. The following are targeted targeted for\nbuild customization and they can be specified via environment variables or\nmakefile overrides.
\n\n
\n \n
\n
Variable
\n
Description
\n
Default value
\n
\n \n \n
\n
HOSTMOUNT_PREFIX
\n
Prefix of system directories for feature discovery (local builds)
\n
/ (local builds) /host- (container builds)
\n
\n
\n
IMAGE_BUILD_CMD
\n
Command to build the image
\n
docker build
\n
\n
\n
IMAGE_BUILD_EXTRA_OPTS
\n
Extra options to pass to build command
\n
empty
\n
\n
\n
IMAGE_PUSH_CMD
\n
Command to push the image to remote registry
\n
docker push
\n
\n
\n
IMAGE_REGISTRY
\n
Container image registry to use
\n
k8s.gcr.io/nfd
\n
\n
\n
IMAGE_TAG_NAME
\n
Container image tag name
\n
<nfd version>
\n
\n
\n
IMAGE_EXTRA_TAG_NAMES
\n
Additional container image tag(s) to create when building image
Non-empty value enables OpenShift specific support (currently only effective in e2e tests)
\n
empty
\n
\n
\n
BASE_IMAGE_FULL
\n
Container base image for target image full (–target full)
\n
debian:buster-slim
\n
\n
\n
BASE_IMAGE_MINIMAL
\n
Container base image for target image minimal (–target minimal)
\n
gcr.io/distroless/base
\n
\n \n
\n\n
For example, to use a custom registry:
\n\n
make IMAGE_REGISTRY=<my custom registry uri>\n
\n\n
Or to specify a build tool different from Docker, It can be done in 2 ways:
\n\n\n
\n
via environment
\n\n
IMAGE_BUILD_CMD=\"buildah bud\" make\n
\n
\n
\n
by overriding the variable value
\n\n
make IMAGE_BUILD_CMD=\"buildah bud\"\n
\n
\n\n\n
Testing
\n\n
Unit tests are automatically run as part of the container image build. You can\nalso run them manually in the source code tree by simply running:
\n\n
make test\n
\n\n
End-to-end tests are built on top of the e2e test framework of Kubernetes, and,\nthey required a cluster to run them on. For running the tests on your test\ncluster you need to specify the kubeconfig to be used:
\n\n
make e2e-test KUBECONFIG=$HOME/.kube/config\n
\n\n
Running locally
\n\n
You can run NFD locally, either directly on your host OS or in containers for\ntesting and development purposes. This may be useful e.g. for checking\nfeatures-detection.
\n\n
NFD-Master
\n\n
When running as a standalone container labeling is expected to fail because\nKubernetes API is not available. Thus, it is recommended to use -no-publish\ncommand line flag. E.g.
\n\n
$ export NFD_CONTAINER_IMAGE=gcr.io/k8s-staging-nfd/node-feature-discovery:master\n$ docker run --rm--name=nfd-test ${NFD_CONTAINER_IMAGE} nfd-master -no-publish\n2019/02/01 14:48:21 Node Feature Discovery Master <NFD_VERSION>\n2019/02/01 14:48:21 gRPC server serving on port: 8080\n
\n\n
NFD-Worker
\n\n
In order to run nfd-worker as a “stand-alone” container against your\nstandalone nfd-master you need to run them in the same network namespace:
If you just want to try out feature discovery without connecting to nfd-master,\npass the -no-publish flag to nfd-worker.
\n\n
NOTE Some feature sources need certain directories and/or files from the\nhost mounted inside the NFD container. Thus, you need to provide Docker with the\ncorrect --volume options in order for them to work correctly when run\nstand-alone directly with docker run. See the\ndefault deployment\nfor up-to-date information about the required volume mounts.
\n\n
NFD-Topology-Updater
\n\n
In order to run nfd-topology-updater as a “stand-alone” container against your\nstandalone nfd-master you need to run them in the same network namespace:
If you just want to try out feature discovery without connecting to nfd-master,\npass the -no-publish flag to nfd-topology-updater.
\n\n
Command line flags of nfd-topology-updater:
\n\n
$ docker run --rm${NFD_CONTAINER_IMAGE} nfd-topology-updater -help\ndocker run --rm quay.io/swsehgal/node-feature-discovery:v0.10.0-devel-64-g93a0a9f-dirty nfd-topology-updater -help\nUsage of nfd-topology-updater:\n -add_dir_header\n If true, adds the file directory to the header of the log messages\n -alsologtostderr\n log to standard error as well as files\n -ca-file string\n Root certificate for verifying connections\n -cert-file string\n Certificate used for authenticating connections\n -key-file string\n Private key matching -cert-file\n -kubeconfig string\n Kube config file.\n -kubelet-config-file string\n Kubelet config file path. (default \"/host-var/lib/kubelet/config.yaml\")\n -log_backtrace_at value\n when logging hits line file:N, emit a stack trace\n -log_dir string\n If non-empty, write log files in this directory\n -log_file string\n If non-empty, use this log file\n -log_file_max_size uint\n Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n -logtostderr\n log to standard error instead of files (default true)\n -no-publish\n Do not publish discovered features to the cluster-local Kubernetes API server.\n -one_output\n If true, only write logs to their native severity level (vs also writing to each lower severity level)\n -oneshot\n Update once and exit\n -podresources-socket string\n Pod Resource Socket path to use. (default \"/host-var/lib/kubelet/pod-resources/kubelet.sock\")\n -server string\n NFD server address to connecto to. (default \"localhost:8080\")\n -server-name-override string\n Hostname expected from server certificate, useful in testing\n -skip_headers\n If true, avoid header prefixes in the log messages\n -skip_log_headers\n If true, avoid headers when opening log files\n -sleep-interval duration\n Time to sleep between CR updates. Non-positive value implies no CR updatation (i.e. infinite sleep).[Default: 60s] (default 1m0s)\n -stderrthreshold value\n logs at or above this threshold go to stderr (default 2)\n -v value\n number for the log level verbosity\n -version\n Print version and exit.\n -vmodule value\n comma-separated list of pattern=N settings for file-filtered logging\n -watch-namespace string\n Namespace to watch pods (for testing/debugging purpose). Use *for all namespaces. (default \"*\")\n
\n\n
NOTE:
\n\n
NFD topology updater needs certain directories and/or files from the\nhost mounted inside the NFD container. Thus, you need to provide Docker with the\ncorrect --volume options in order for them to work correctly when run\nstand-alone directly with docker run. See the\ntemplate spec\nfor up-to-date information about the required volume mounts.
\n\n
PodResource API is a prerequisite for nfd-topology-updater.\nPreceding Kubernetes v1.23, the kubelet must be started with the following flag:\n--feature-gates=KubeletPodResourcesGetAllocatable=true.\nStarting Kubernetes v1.23, the GetAllocatableResources is enabled by default\nthrough KubeletPodResourcesGetAllocatablefeature gate.
\n\n
Documentation
\n\n
All documentation resides under the\ndocs\ndirectory in the source tree. It is designed to be served as a html site by\nGitHub Pages.
\n\n
Building the documentation is containerized in order to fix the build\nenvironment. The recommended way for developing documentation is to run:
\n\n
make site-serve\n
\n\n
This will build the documentation in a container and serve it under\nlocalhost:4000/ making it easy to verify the results.\nAny changes made to the docs/ will automatically re-trigger a rebuild and are\nreflected in the served content and can be inspected with a simple browser\nrefresh.
\n\n
In order to just build the html documentation run:
\n\n
make site-build\n
\n\n
This will generate html documentation under docs/_site/.
To quickly view available command line flags execute nfd-master -help.\nIn a docker container:
\n\n
docker run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-master -help\n
\n\n
-h, -help
\n\n
Print usage and exit.
\n\n
-version
\n\n
Print version and exit.
\n\n
-prune
\n\n
The -prune flag is a sub-command like option for cleaning up the cluster. It\ncauses nfd-master to remove all NFD related labels, annotations and extended\nresources from all Node objects of the cluster and exit.
\n\n
-port
\n\n
The -port flag specifies the TCP port that nfd-master listens for incoming requests.
\n\n
Default: 8080
\n\n
Example:
\n\n
nfd-master -port=443\n
\n\n
-instance
\n\n
The -instance flag makes it possible to run multiple NFD deployments in\nparallel. In practice, it separates the node annotations between deployments so\nthat each of them can store metadata independently. The instance name must\nstart and end with an alphanumeric character and may only contain alphanumeric\ncharacters, -, _ or ..
\n\n
Default: empty
\n\n
Example:
\n\n
nfd-master -instance=network\n
\n\n
-ca-file
\n\n
The -ca-file is one of the three flags (together with -cert-file and\n-key-file) controlling master-worker mutual TLS authentication on the\nnfd-master side. This flag specifies the TLS root certificate that is used for\nauthenticating incoming connections. NFD-Worker side needs to have matching key\nand cert files configured in order for the incoming requests to be accepted.
\n\n
Default: empty
\n\n
Note: Must be specified together with -cert-file and -key-file
The -cert-file is one of the three flags (together with -ca-file and\n-key-file) controlling master-worker mutual TLS authentication on the\nnfd-master side. This flag specifies the TLS certificate presented for\nauthenticating outgoing traffic towards nfd-worker.
\n\n
Default: empty
\n\n
Note: Must be specified together with -ca-file and -key-file
The -key-file is one of the three flags (together with -ca-file and\n-cert-file) controlling master-worker mutual TLS authentication on the\nnfd-master side. This flag specifies the private key corresponding the given\ncertificate file (-cert-file) that is used for authenticating outgoing\ntraffic.
\n\n
Default: empty
\n\n
Note: Must be specified together with -cert-file and -ca-file
The -verify-node-name flag controls the NodeName based authorization of\nincoming requests and only has effect when mTLS authentication has been enabled\n(with -ca-file, -cert-file and -key-file). If enabled, the worker node\nname of the incoming must match with the CN or a SAN in its TLS certificate. Thus,\nworkers are only able to label the node they are running on (or the node whose\ncertificate they present).
\n\n
Node Name based authorization is disabled by default.
The -no-publish flag disables updates to the Node objects in the Kubernetes\nAPI server, making a “dry-run” flag for nfd-master. No Labels, Annotations or\nExtendedResources of nodes are updated.
\n\n
Default: false
\n\n
Example:
\n\n
nfd-master -no-publish\n
\n\n
-featurerules-controller
\n\n
The -featurerules-controller flag controlers the processing of\nNodeFeatureRule objects, effectively enabling/disabling labels from these\ncustom labeling rules.
\n\n
Default: true
\n\n
Example:
\n\n
nfd-master -featurerules-controller=false\n
\n\n
-label-whitelist
\n\n
The -label-whitelist specifies a regular expression for filtering feature\nlabels based on their name. Each label must match against the given reqular\nexpression in order to be published.
\n\n
Note: The regular expression is only matches against the “basename” part of the\nlabel, i.e. to the part of the name after ‘/’. The label namespace is omitted.
\n\n
Default: empty
\n\n
Example:
\n\n
nfd-master -label-whitelist='.*cpuid\\.'\n
\n\n
-extra-label-ns
\n\n
The -extra-label-ns flag specifies a comma-separated list of allowed feature\nlabel namespaces. By default, nfd-master only allows creating labels in the\ndefault feature.node.kubernetes.io and profile.node.kubernetes.io label\nnamespaces and their sub-namespaces (e.g. vendor.feature.node.kubernetes.io\nand sub.ns.profile.node.kubernetes.io). This option can be used to allow\nother vendor or application specific namespaces for custom labels from the\nlocal and custom feature sources.
\n\n
The same namespace control and this flag applies Extended Resources (created\nwith -resource-labels), too.
The -resource-labels flag specifies a comma-separated list of features to be\nadvertised as extended resources instead of labels. Features that have integer\nvalues can be published as Extended Resources by listing them in this flag.
$ kubectl get po feature-dependent-pod -o wide\nNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES\nfeature-dependent-pod 1/1 Running 0 23s 10.36.0.4 node-2 <none> <none>\n
\n\n
Additional Optional Installation Steps
\n\n
In order to deploy nfd-master and nfd-topology-updater daemons\nuse topologyupdater overlay.
\n\n
Deploy with kustomize – creates a new namespace, service and required RBAC\nrules and nfd-master and nfd-topology-updater daemons.
The CR instances created can be used to gain insight into the allocatable\nresources along with the granularity of those resources at a per-zone level\n(represented by node-0 and node-1 in the above example) or can be used by an\nexternal entity (e.g. topology-aware scheduler plugin) to take an action based\non the gathered information.
This is a\nSIG-node\nsubproject, hosted under the\nKubernetes SIGs organization in Github.\nThe project was established in 2016 and was migrated to Kubernetes SIGs in 2018.
\n","dir":"/contributing/","name":"index.md","path":"contributing/index.md","url":"/contributing/"},{"title":"Deployment and usage","layout":"default","sort":3,"content":"
kubectl v1.21 or\nlater (properly set up and configured to work with your Kubernetes cluster)
\n\n\n
Image variants
\n\n
NFD currently offers two variants of the container image. The “full” variant is\ncurrently deployed by default.
\n\n
Full
\n\n
This image is based on\ndebian:buster-slim and contains a full Linux\nsystem for running shell-based nfd-worker hooks and doing live debugging and\ndiagnosis of the NFD images.
\n\n
Minimal
\n\n
This is a minimal image based on\ngcr.io/distroless/base\nand only supports running statically linked binaries.
\n\n
The container image tag has suffix -minimal\n(e.g. gcr.io/k8s-staging-nfd/node-feature-discovery:master-minimal)
Alternatively you can clone the repository and customize the deployment by\ncreating your own overlays. For example, to deploy the minimal\nimage. See kustomize for more information about managing\ndeployment configurations.
\n\n
Default overlays
\n\n
The NFD repository hosts a set of overlays for different usages and deployment\nscenarios under\ndeployment/overlays
\n\n
\n
default:\ndefault deployment of nfd-worker as a daemonset, descibed above
samples/custom-rules:\nan example for spicing up the default deployment with a separately managed\nconfigmap of custom labeling rules, see\nCustom feature source for more information about\ncustom node labels
\n
\n\n
Master-worker pod
\n\n
You can also run nfd-master and nfd-worker inside the same pod
This creates a DaemonSet that runs nfd-worker and nfd-master in the same Pod.\nIn this case no nfd-master is run on the master node(s), but, the worker nodes\nare able to label themselves which may be desirable e.g. in single-node setups.
\n\n
NOTE: nfd-topology-updater is not deployed by the default-combined overlay.\nTo enable nfd-topology-updater in this scenario,the users must customize the\ndeployment themselves.
\n\n
Worker one-shot
\n\n
Feature discovery can alternatively be configured as a one-shot job.\nThe default-job overlay may be used to achieve this:
\n\n
NUM_NODES=$(kubectl get no -ojsonpath='{.items[*].metadata.name}' | wc-w)\nkubectl kustomize https://github.com/kubernetes-sigs/node-feature-discovery/deployment/overlays/default-job?ref=master | \\\n sed s\"/NUM_NODES/$NUM_NODES/\" | \\\n kubectl apply -f -\n
\n\n
The example above launches as many jobs as there are non-master nodes. Note that\nthis approach does not guarantee running once on every node. For example,\ntainted, non-ready nodes or some other reasons in Job scheduling may cause some\nnode(s) will run extra job instance(s) to satisfy the request.
\n\n
Master Worker Topologyupdater
\n\n
NFD Master, NFD worker and NFD Topologyupdater can be configured to be deployed\nas separate pods. The master-worker-topologyupdater overlay may be used to\nachieve this:
NFD Topologyupdater can be configured along with the default overlay\n(which deploys NFD worker and NFD master) where all the software components\nare deployed as separate pods. The topologyupdater overlay may be used\nalong with default overlay to achieve this:
The command removes all the Kubernetes components associated with the chart and\ndeletes the release.
\n\n
Chart parameters
\n\n
In order to tailor the deployment of the Node Feature Discovery to your cluster needs\nWe have introduced the following Chart parameters.
\n\n
General parameters
\n\n
\n \n
\n
Name
\n
Type
\n
Default
\n
description
\n
\n \n \n
\n
image.repository
\n
string
\n
gcr.io/k8s-staging-nfd/node-feature-discovery
\n
NFD image repository
\n
\n
\n
image.tag
\n
string
\n
master
\n
NFD image tag
\n
\n
\n
image.pullPolicy
\n
string
\n
Always
\n
Image pull policy
\n
\n
\n
imagePullSecrets
\n
list
\n
[]
\n
ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. For example, in the case of docker, only DockerConfig type secrets are honored. More info
\n
\n
\n
serviceAccount.create
\n
bool
\n
true
\n
Specifies whether a service account should be created
\n
\n
\n
serviceAccount.annotations
\n
dict
\n
{}
\n
Annotations to add to the service account
\n
\n
\n
serviceAccount.name
\n
string
\n
\n
The name of the service account to use. If not set and create is true, a name is generated using the fullname template
Specifies whether the NFD Topology Updater should be created
\n
\n
\n
topologyUpdater.createCRDs
\n
bool
\n
false
\n
Specifies whether the NFD Topology Updater CRDs should be created
\n
\n
\n
topologyUpdater.serviceAccount.create
\n
bool
\n
true
\n
Specifies whether the service account for topology updater should be created
\n
\n
\n
topologyUpdater.serviceAccount.annotations
\n
dict
\n
{}
\n
Annotations to add to the service account for topology updater
\n
\n
\n
topologyUpdater.serviceAccount.name
\n
string
\n
\n
The name of the service account for topology updater to use. If not set and create is true, a name is generated using the fullname template and -topology-updater suffix
If you want to use the latest development version (master branch) you need to\nbuild your own custom image.\nSee the Developer Guide for instructions how to\nbuild images and deploy them on your cluster.
\n\n
Usage
\n\n
NFD-Master
\n\n
NFD-Master runs as a deployment (with a replica count of 1), by default\nit prefers running on the cluster’s master nodes but will run on worker\nnodes if no master nodes are found.
\n\n
For High Availability, you should simply increase the replica count of\nthe deployment object. You should also look into adding\ninter-pod\naffinity to prevent masters from running on the same node.\nHowever note that inter-pod affinity is costly and is not recommended\nin bigger clusters.
\n\n
NFD-Master listens for connections from nfd-worker(s) and connects to the\nKubernetes API server to add node labels advertised by them.
\n\n
If you have RBAC authorization enabled (as is the default e.g. with clusters\ninitialized with kubeadm) you need to configure the appropriate ClusterRoles,\nClusterRoleBindings and a ServiceAccount in order for NFD to create node\nlabels. The provided template will configure these for you.
\n\n
NFD-Worker
\n\n
NFD-Worker is preferably run as a Kubernetes DaemonSet. This assures\nre-labeling on regular intervals capturing changes in the system configuration\nand makes sure that new nodes are labeled as they are added to the cluster.\nWorker connects to the nfd-master service to advertise hardware features.
\n\n
When run as a daemonset, nodes are re-labeled at an default interval of 60s.\nThis can be changed by using the\ncore.sleepInterval\nconfig option (or\n-sleep-interval\ncommand line flag).
\n\n
The worker configuration file is watched and re-read on every change which\nprovides a simple mechanism of dynamic run-time reconfiguration. See\nworker configuration for more details.
\n\n
NFD-Topology-Updater
\n\n
NFD-Topology-Updater is preferably run as a Kubernetes DaemonSet. This assures\nre-examination (and CR updates) on regular intervals capturing changes in\nthe allocated resources and hence the allocatable resources on a per zone\nbasis. It makes sure that more CR instances are created as new nodes get\nadded to the cluster. Topology-Updater connects to the nfd-master service\nto create CR instances corresponding to nodes.
\n\n
When run as a daemonset, nodes are re-examined for the allocated resources\n(to determine the information of the allocatable resources on a per zone basis\nwhere a zone can be a NUMA node) at an interval specified using the\n-sleep-interval option. The default sleep interval is set to 60s which is the\n the value when no -sleep-interval is specified.
\n\n
Communication security with TLS
\n\n
NFD supports mutual TLS authentication between the nfd-master and nfd-worker\ninstances. That is, nfd-worker and nfd-master both verify that the other end\npresents a valid certificate.
\n\n
TLS authentication is enabled by specifying -ca-file, -key-file and\n-cert-file args, on both the nfd-master and nfd-worker instances.\nThe template specs provided with NFD contain (commented out) example\nconfiguration for enabling TLS authentication.
\n\n
The Common Name (CN) of the nfd-master certificate must match the DNS name of\nthe nfd-master Service of the cluster. By default, nfd-master only check that\nthe nfd-worker has been signed by the specified root certificate (-ca-file).\nAdditional hardening can be enabled by specifying -verify-node-name in\nnfd-master args, in which case nfd-master verifies that the NodeName presented\nby nfd-worker matches the Common Name (CN) or a Subject Alternative Name (SAN)\nof its certificate.
\n\n
Automated TLS certificate management using cert-manager
\n\n
cert-manager can be used to automate certificate\nmanagement between nfd-master and the nfd-worker pods.
\n\n
NFD source code repository contains an example kustomize overlay that can be\nused to deploy NFD with cert-manager supplied certificates enabled. The\ninstructions below describe steps how to generate a self-signed CA certificate\nand set up cert-manager’s\nCA Issuer to sign\nCertificate requests for NFD components in node-feature-discovery\nnamespace.
NFD-Worker supports dynamic configuration through a configuration file. The\ndefault location is /etc/kubernetes/node-feature-discovery/nfd-worker.conf,\nbut, this can be changed by specifying the-config command line flag.\nConfiguration file is re-read whenever it is modified which makes run-time\nre-configuration of nfd-worker straightforward.
\n\n
Worker configuration file is read inside the container, and thus, Volumes and\nVolumeMounts are needed to make your configuration available for NFD. The\npreferred method is to use a ConfigMap which provides easy deployment and\nre-configurability.
\n\n
The provided nfd-worker deployment templates create an empty configmap and\nmount it inside the nfd-worker containers. In kustomize deployments,\nconfiguration can be edited with:
In Helm deployments, Worker pod parameter\nworker.config can be used to edit the respective configuration.
\n\n
See\nnfd-worker configuration file reference\nfor more details.\nThe (empty-by-default)\nexample config\ncontains all available configuration options and can be used as a reference\nfor creating creating a configuration.
\n\n
Configuration options can also be specified via the -options command line\nflag, in which case no mounts need to be used. The same format as in the config\nfile must be used, i.e. JSON (or YAML). For example:
Configuration options specified from the command line will override those read\nfrom the config file.
\n\n
Using node labels
\n\n
Nodes with specific features can be targeted using the nodeSelector field. The\nfollowing example shows how to target nodes with Intel TurboBoost enabled.
See the node-feature-discovery-operator and OLM project\ndocumentation for instructions for uninstalling the operator and operator\nlifecycle manager, respectively.
\n\n
Manual
\n\n
Simplest way is to invoke kubectl delete on the deployment files you used.\nBeware that this will also delete the namespace that NFD is running in. For\nexample, in case the default deployment from the repo was used:
The -options flag may be used to specify and override configuration file\noptions directly from the command line. The required format is the same as in\nthe config file i.e. JSON or YAML. Configuration options specified via this\nflag will override those from the configuration file:
The -ca-file is one of the three flags (together with -cert-file and\n-key-file) controlling the mutual TLS authentication on the worker side.\nThis flag specifies the TLS root certificate that is used for verifying the\nauthenticity of nfd-master.
\n\n
Default: empty
\n\n
Note: Must be specified together with -cert-file and -key-file
The -cert-file is one of the three flags (together with -ca-file and\n-key-file) controlling mutual TLS authentication on the worker side. This\nflag specifies the TLS certificate presented for authenticating outgoing\nrequests.
\n\n
Default: empty
\n\n
Note: Must be specified together with -ca-file and -key-file
The -key-file is one of the three flags (together with -ca-file and\n-cert-file) controlling the mutual TLS authentication on the worker side.\nThis flag specifies the private key corresponding the given certificate file\n(-cert-file) that is used for authenticating outgoing requests.
\n\n
Default: empty
\n\n
Note: Must be specified together with -cert-file and -ca-file
The -server-name-override flag specifies the common name (CN) which to\nexpect from the nfd-master TLS certificate. This flag is mostly intended for\ndevelopment and debugging purposes.
\n\n
Default: empty
\n\n
Example:
\n\n
nfd-worker -server-name-override=localhost\n
\n\n
-sources
\n\n
The -sources flag specifies a comma-separated list of enabled feature\nsources. A special value all enables all feature sources.
\n\n
Note: This flag takes precedence over the core.sources configuration\nfile option.
\n\n
Default: all
\n\n
Example:
\n\n
nfd-worker -sources=kernel,system,local\n
\n\n
DEPRECATED: you should use the core.sources option in the\nconfiguration file, instead.
\n\n
-no-publish
\n\n
The -no-publish flag disables all communication with the nfd-master, making\nit a “dry-run” flag for nfd-worker. NFD-Worker runs feature detection normally,\nbut no labeling requests are sent to nfd-master.
\n\n
Default: false
\n\n
Example:
\n\n
nfd-worker -no-publish\n
\n\n
-label-whitelist
\n\n
The -label-whitelist specifies a regular expression for filtering feature\nlabels based on their name. Each label must match against the given reqular\nexpression in order to be published.
\n\n
Note: The regular expression is only matches against the “basename” part of the\nlabel, i.e. to the part of the name after ‘/’. The label namespace is omitted.
\n\n
Note: This flag takes precedence over the core.labelWhiteList configuration\nfile option.
\n\n
Default: empty
\n\n
Example:
\n\n
nfd-worker -label-whitelist='.*cpuid\\.'\n
\n\n
DEPRECATED: you should use the core.labelWhiteList option in the\nconfiguration file, instead.
\n\n
-oneshot
\n\n
The -oneshot flag causes nfd-worker to exit after one pass of feature\ndetection.
\n\n
Default: false
\n\n
Example:
\n\n
nfd-worker -oneshot-no-publish\n
\n\n
-sleep-interval
\n\n
The -sleep-interval specifies the interval between feature re-detection (and\nnode re-labeling). A non-positive value implies infinite sleep interval, i.e.\nno re-detection or re-labeling is done.
\n\n
Note: This flag takes precedence over the core.sleepInterval configuration\nfile option.
\n\n
Default: 60s
\n\n
Example:
\n\n
nfd-worker -sleep-interval=1h\n
\n\n
DEPRECATED: you should use the core.sleepInterval option in the\nconfiguration file, instead.
\n\n
Logging
\n\n
The following logging-related flags are inherited from the\nklog package.
\n\n
Note: The logger setup can also be specified via the core.klog configuration\nfile options. However, the command line flags take precedence over any\ncorresponding config file options specified.
\n\n
-add_dir_header
\n\n
If true, adds the file directory to the header of the log messages.
\n\n
Default: false
\n\n
-alsologtostderr
\n\n
Log to standard error as well as files.
\n\n
Default: false
\n\n
-log_backtrace_at
\n\n
When logging hits line file:N, emit a stack trace.
\n\n
Default: empty
\n\n
-log_dir
\n\n
If non-empty, write log files in this directory.
\n\n
Default: empty
\n\n
-log_file
\n\n
If non-empty, use this log file.
\n\n
Default: empty
\n\n
-log_file_max_size
\n\n
Defines the maximum size a log file can grow to. Unit is megabytes. If the\nvalue is 0, the maximum file size is unlimited.
\n\n
Default: 1800
\n\n
-logtostderr
\n\n
Log to standard error instead of files
\n\n
Default: true
\n\n
-skip_headers
\n\n
If true, avoid header prefixes in the log messages.
\n\n
Default: false
\n\n
-skip_log_headers
\n\n
If true, avoid headers when opening log files.
\n\n
Default: false
\n\n
-stderrthreshold
\n\n
Logs at or above this threshold go to stderr.
\n\n
Default: 2
\n\n
-v
\n\n
Number for the log level verbosity.
\n\n
Default: 0
\n\n
-vmodule
\n\n
Comma-separated list of pattern=N settings for file-filtered logging.
Feature discovery in nfd-worker is performed by a set of separate modules\ncalled feature sources. Most of them are specifically responsible for certain\ndomain of features (e.g. cpu). In addition there are two highly customizable\nfeature sources that work accross the system.
\n\n
Feature labels
\n\n
Each discovered feature is advertised a label in the Kubernetes Node object.\nThe published node labels encode a few pieces of information:
\n\n
\n
Namespace\n
\n
all built-in labels use feature.node.kubernetes.io
feature.node.kubernetes.io and profile.node.kubernetes.io plus their\nsub-namespaces (e.g. vendor.profile.node.kubernetes.io and\nsub.ns.profile.node.kubernetes.io) are allowed by default
\n
additional namespaces may be enabled with the\n--extra-label-ns\ncommand line flag of nfd-master
\n
\n
\n
\n
\n
The source for each label (e.g. cpu).
\n
The name of the discovered feature as it appears in the underlying\nsource, (e.g. cpuid.AESNI from cpu).
\n
The value of the discovered feature.
\n
\n\n
Feature label names adhere to the following pattern:
The last component (i.e. attribute-name) is optional, and only used if a\nfeature logically has sub-hierarchy, e.g. sriov.capable and\nsriov.configure from the network source.
\n\n
The -sources flag controls which sources to use for discovery.
\n\n
Note: Consecutive runs of nfd-worker will update the labels on a\ngiven node. If features are not discovered on a consecutive run, the corresponding\nlabel will be removed. This includes any restrictions placed on the consecutive run,\nsuch as restricting discovered features with the -label-whitelist option.
\n\n
Feature sources
\n\n
CPU
\n\n
The cpu feature source supports the following labels:
\n\n
\n \n
\n
Feature name
\n
Attribute
\n
Description
\n
\n \n \n
\n
cpuid
\n
<cpuid flag>
\n
CPU capability is supported
\n
\n
\n
hardware_multithreading
\n
\n
Hardware multithreading, such as Intel HTT, enabled (number of logical CPUs is greater than physical CPUs)
Set to ‘true’ if Intel SGX is enabled in BIOS (based a non-zero sum value of SGX EPC section sizes).
\n
\n \n
\n\n
The (sub-)set of CPUID attributes to publish is configurable via the\nattributeBlacklist and attributeWhitelist cpuid options of the cpu source.\nIf whitelist is specified, only whitelisted attributes will be published. With\nblacklist, only blacklisted attributes are filtered out. attributeWhitelist\nhas priority over attributeBlacklist. For examples and more information\nabout configurability, see\nconfiguration.\nBy default, the following CPUID flags have been blacklisted:\nBMI1, BMI2, CLMUL, CMOV, CX16, ERMS, F16C, HTT, LZCNT, MMX, MMXEXT, NX, POPCNT,\nRDRAND, RDSEED, RDTSCP, SGX, SSE, SSE2, SSE3, SSE4, SSE42 and SSSE3.
\n\n
NOTE The cpuid features advertise supported CPU capabilities, that is, a\ncapability might be supported but not enabled.
Integer divide instructions available in Thumb mode
\n
\n
\n
THUMB
\n
Thumb instructions
\n
\n
\n
FASTMUL
\n
Fast multiplication
\n
\n
\n
VFP
\n
Vector floating point instruction extension (VFP)
\n
\n
\n
VFPv3
\n
Vector floating point extension v3
\n
\n
\n
VFPv4
\n
Vector floating point extension v4
\n
\n
\n
VFPD32
\n
VFP with 32 D-registers
\n
\n
\n
HALF
\n
Half-word loads and stores
\n
\n
\n
EDSP
\n
DSP extensions
\n
\n
\n
NEON
\n
NEON SIMD instructions
\n
\n
\n
LPAE
\n
Large Physical Address Extensions
\n
\n \n
\n\n
Arm64 CPUID attribute (partial list)
\n\n
\n \n
\n
Attribute
\n
Description
\n
\n \n \n
\n
AES
\n
Announcing the Advanced Encryption Standard
\n
\n
\n
EVSTRM
\n
Event Stream Frequency Features
\n
\n
\n
FPHP
\n
Half Precision(16bit) Floating Point Data Processing Instructions
\n
\n
\n
ASIMDHP
\n
Half Precision(16bit) Asimd Data Processing Instructions
\n
\n
\n
ATOMICS
\n
Atomic Instructions to the A64
\n
\n
\n
ASIMRDM
\n
Support for Rounding Double Multiply Add/Subtract
\n
\n
\n
PMULL
\n
Optional Cryptographic and CRC32 Instructions
\n
\n
\n
JSCVT
\n
Perform Conversion to Match Javascript
\n
\n
\n
DCPOP
\n
Persistent Memory Support
\n
\n \n
\n\n
Custom
\n\n
The Custom feature source allows the user to define features based on a mix of\npredefined rules. A rule is provided input witch affects its process of\nmatching for a defined feature. The rules are specified in the\nnfd-worker configuration file. See\nconfiguration for instructions\nand examples how to set-up and manage the worker configuration.
\n\n
To aid in making Custom Features clearer, we define a general and a per rule\nnomenclature, keeping things as consistent as possible.
\n\n
Additional configuration directory
\n\n
Additionally to the rules defined in the nfd-worker configuration file, the\nCustom feature can read more configuration files located in the\n/etc/kubernetes/node-feature-discovery/custom.d/ directory. This makes more\ndynamic and flexible configuration easier. This directory must be available\ninside the NFD worker container, so Volumes and VolumeMounts must be used for\nmounting e.g. ConfigMap(s). The example deployment manifests provide an example\n(commented out) for providing Custom configuration with an additional\nConfigMap, mounted into the custom.d directory.
\n\n
General nomenclature & definitions
\n\n
Rule :Represents a matching logic that is used to match on a feature.\nRule Input :The input a Rule is provided. This determines how a Rule performs the match operation.\nMatcher :A composition of Rules, each Matcher may be composed of at most one instance of each Rule.\n
\n\n
Custom features format (using the nomenclature defined above)
\n\n
Rules are specified under sources.custom in the nfd-worker configuration\nfile.
Specifying Rules to match on a feature is done by providing a list of Matchers.\nEach Matcher contains one or more Rules.
\n\n
Logical OR is performed between Matchers and logical AND is performed\nbetween Rules of a given Matcher.
\n\n
Rules
\n\n
pciid rule
\n\n
Nomenclature
\n\n
Attribute :A PCI attribute.\nElement :An identifier of the PCI attribute.\n
\n\n
The PciId Rule allows matching the PCI devices in the system on the following\nAttributes: class,vendor and device. A list of Elements is provided for\neach Attribute.
Matching is done by performing a logical OR between Elements of an Attribute\nand logical AND between the specified Attributes for each PCI device in the\nsystem. At least one Attribute must be specified. Missing attributes will not\npartake in the matching process.
\n\n
UsbId rule
\n\n
Nomenclature
\n\n
Attribute :A USB attribute.\nElement :An identifier of the USB attribute.\n
\n\n
The UsbId Rule allows matching the USB devices in the system on the following\nAttributes: class,vendor, device and serial. A list of Elements is\nprovided for each Attribute.
Matching is done by performing a logical OR between Elements of an Attribute\nand logical AND between the specified Attributes for each USB device in the\nsystem. At least one Attribute must be specified. Missing attributes will not\npartake in the matching process.
\n\n
LoadedKMod rule
\n\n
Nomenclature
\n\n
Element :A kernel module\n
\n\n
The LoadedKMod Rule allows matching the loaded kernel modules in the system\nagainst a provided list of Elements.
\n\n
Format
\n\n
loadedKMod :[<kernel module>,...]\n
\n\n
Matching is done by performing logical AND for each provided Element, i.e\nthe Rule will match if all provided Elements (kernel modules) are loaded in the\nsystem.
\n\n
CpuId rule
\n\n
Nomenclature
\n\n
Element :A CPUID flag\n
\n\n
The Rule allows matching the available CPUID flags in the system against a\nprovided list of Elements.
\n\n
Format
\n\n
cpuId :[<CPUID flag string>,...]\n
\n\n
Matching is done by performing logical AND for each provided Element, i.e the\nRule will match if all provided Elements (CPUID flag strings) are available in\nthe system.
\n\n
Kconfig rule
\n\n
Nomenclature
\n\n
Element :A Kconfig option\n
\n\n
The Rule allows matching the kconfig options in the system against a provided\nlist of Elements.
\n\n
Format
\n\n
kConfig:[<kernel config option ('y' or 'm') or '=<value>'>,...]\n
\n\n
Matching is done by performing logical AND for each provided Element, i.e the\nRule will match if all provided Elements (kernel config options) are enabled\n(y or m) or matching =<value> in the kernel.
\n\n
Nodename rule
\n\n
Nomenclature
\n\n
Element :A nodename regexp pattern\n
\n\n
The Rule allows matching the node’s name against a provided list of Elements.
\n\n
Format
\n\n
nodename:[<nodename regexp pattern>,...]\n
\n\n
Matching is done by performing logical OR for each provided Element, i.e the\nRule will match if one of the provided Elements (nodename regexp pattern)\nmatches the node’s name.
In the example above:

- A node would contain the label feature.node.kubernetes.io/custom-my.kernel.feature=true if the node has the kmod1 AND kmod2 kernel modules loaded.
- A node would contain the label feature.node.kubernetes.io/custom-my.pci.feature=true if the node contains a PCI device with a PCI vendor ID of 15b3 AND a PCI device ID of 1014 OR 1017.
- A node would contain the label feature.node.kubernetes.io/custom-my.usb.feature=true if the node contains a USB device with a USB vendor ID of 1d6b AND a USB device ID of 0003.
- A node would contain the label feature.node.kubernetes.io/custom-my.combined.feature=true if the vendor_kmod1 AND vendor_kmod2 kernel modules are loaded AND the node contains a PCI device with a PCI vendor ID of 15b3 AND a PCI device ID of 1014 OR 1017.
- A node would contain the label vendor.feature.node.kubernetes.io/accumulated.feature=true if the some_kmod1 AND some_kmod2 kernel modules are loaded OR the node contains a PCI device with a PCI vendor ID of 15b3 AND a PCI device ID of 1014 OR 1017.
- A node would contain the label feature.node.kubernetes.io/custom-my.kernel.featureneedscpu=true if the KVM_INTEL kernel config option is enabled AND the node CPU supports VMX virtual machine extensions.
- A node would contain the label feature.node.kubernetes.io/custom-my.kernel.modulecompiler=true if the in-tree kmod1 kernel module is loaded AND it's built with GCC_VERSION=100101.
- A node would contain the label profile.node.kubernetes.io/my-datacenter=datacenter-1 if the node's name matches the node-datacenter1-rack.*-server.* pattern, e.g. node-datacenter1-rack2-server42.
\n\n
Statically defined features
\n\n
Some feature labels which are common and generic are defined statically in the\ncustom feature source. A user may add additional Matchers to these feature\nlabels by defining them in the nfd-worker configuration file.
\n\n
| Feature | Attribute | Description |
| ------- | --------- | ----------- |
| rdma    | capable   | The node has an RDMA capable Network adapter |
| rdma    | enabled   | The node has the needed RDMA modules loaded to run RDMA traffic |
\n\n
IOMMU
\n\n
The iommu feature source supports the following labels:
\n\n
| Feature name | Description |
| ------------ | ----------- |
| enabled      | IOMMU is present and enabled in the kernel |
\n\n
Kernel
\n\n
The kernel feature source supports the following labels:
\n\n
| Feature | Attribute     | Description |
| ------- | ------------- | ----------- |
| config  | <option name> | Kernel config option is enabled (set 'y' or 'm'). Default options are NO_HZ, NO_HZ_IDLE, NO_HZ_FULL and PREEMPT |
| selinux | enabled       | Selinux is enabled on the node |
| version | full          | Full kernel version as reported by /proc/sys/kernel/osrelease (e.g. '4.5.6-7-g123abcde') |
|         | major         | First component of the kernel version (e.g. '4') |
|         | minor         | Second component of the kernel version (e.g. '5') |
|         | revision      | Third component of the kernel version (e.g. '6') |
\n\n
The kernel config file to use, and the set of config options to be detected, are configurable. See configuration for more information.
\n\n
Memory
\n\n
The memory feature source supports the following labels:
\n\n
| Feature | Attribute | Description |
| ------- | --------- | ----------- |
| numa    |           | Multiple memory nodes i.e. NUMA architecture detected |
| nv      | present   | NVDIMM device(s) are present |
| nv      | dax       | NVDIMM region(s) configured in DAX mode are present |
\n\n
Network
\n\n
The network feature source supports the following labels:

| Feature | Attribute  | Description |
| ------- | ---------- | ----------- |
| sriov   | capable    | Single Root Input/Output Virtualization (SR-IOV) enabled Network Interface Card(s) present |
|         | configured | SR-IOV virtual functions have been configured |
PCI

The pci feature source supports the following labels:

| Feature        | Attribute     | Description |
| -------------- | ------------- | ----------- |
| <device label> | present       | PCI device is detected |
| <device label> | sriov.capable | Single Root Input/Output Virtualization (SR-IOV) enabled PCI device present |

<device label> is composed of raw PCI IDs, separated by underscores. The set of fields used in <device label> is configurable, valid fields being class, vendor, device, subsystem_vendor and subsystem_device. Defaults are class and vendor. An example label using the default label fields:

feature.node.kubernetes.io/pci-1200_8086.present=true

Also the set of PCI device classes that the feature source detects is configurable. By default, device classes (0x)03, (0x)0b40 and (0x)12, i.e. GPUs, co-processors and accelerator cards, are detected.
\n\n
USB
\n\n
The usb feature source supports the following labels:
\n\n
| Feature        | Attribute | Description |
| -------------- | --------- | ----------- |
| <device label> | present   | USB device is detected |
\n\n
<device label> is composed of raw USB IDs, separated by underscores. The set of fields used in <device label> is configurable, valid fields being class, vendor, device and serial. Defaults are class, vendor and device. An example label using the default label fields:

feature.node.kubernetes.io/usb-fe_1a6e_089a.present=true
See configuration for more\ninformation on NFD config.
\n\n
Storage
\n\n
The storage feature source supports the following labels:
\n\n
| Feature name      | Description |
| ----------------- | ----------- |
| nonrotationaldisk | Non-rotational disk, like SSD, is present in the node |
\n\n
System
\n\n
The system feature source supports the following labels:
\n\n
| Feature    | Attribute        | Description |
| ---------- | ---------------- | ----------- |
| os_release | ID               | Operating system identifier |
|            | VERSION_ID       | Operating system version identifier (e.g. '6.7') |
|            | VERSION_ID.major | First component of the OS version id (e.g. '6') |
|            | VERSION_ID.minor | Second component of the OS version id (e.g. '7') |
\n\n
Local – user-specific features
\n\n
NFD has a special feature source named local which is designed for getting labels from user-specific feature detectors. It provides a mechanism for users to implement custom feature sources in a pluggable way, without modifying the nfd source code or Docker images. The local feature source can be used to advertise new user-specific features and to override labels created by the other feature sources.
\n\n
The local feature source gets its labels in two different ways:
\n\n
\n
It tries to execute files found under the /etc/kubernetes/node-feature-discovery/source.d/ directory. The hook files must be executable and they are supposed to print all discovered features to stdout, one feature per line. For ELF binaries, static linking is recommended, as the selection of system libraries available in the NFD release image is very limited. Other runtimes currently supported by the NFD stock image are bash and perl.
\n
It reads files found under\n/etc/kubernetes/node-feature-discovery/features.d/ directory. The file\ncontent is expected to be similar to the hook output (described above).
\n
\n\n
NOTE: The minimal image variant only\nsupports running statically linked binaries.
\n\n
These directories must be available inside the Docker image, so Volumes and VolumeMounts must be used if the standard NFD images are used. The given template files mount the source.d and features.d directories by default from /etc/kubernetes/node-feature-discovery/source.d/ and /etc/kubernetes/node-feature-discovery/features.d/ on the host, respectively. You should update them to match your needs.
\n\n
In both cases, the labels can be binary or non-binary, using either the <name> or the <name>=<value> format.
\n\n
Unlike with the other feature sources, the name of the file, instead of the name of the feature source (which would be local in this case), is normally used as a prefix in the label name. However, if the <name> of the label starts with a slash (/), it is used as the label name as is, without any additional prefix. This makes it possible for the user to fully control the feature label names, e.g. for overriding labels created by other feature sources.
\n\n
You can also override the default namespace of your labels using the format <namespace>/<name>[=<value>]. If you use anything other than [<sub-ns>.]feature.node.kubernetes.io or [<sub-ns>.]profile.node.kubernetes.io, you must whitelist your namespace using the -extra-label-ns option on the master. In this case, the name of the file will not be added to the label name. For example, if you want to add the label my.namespace.org/my-label=value, your hook output or file must contain my.namespace.org/my-label=value and you must add -extra-label-ns=my.namespace.org to the master command line.
\n\n
The stderr output of the hooks is propagated to the NFD log, so it can be used for debugging and logging.
\n\n
Injecting labels from other pods
\n\n
One use case for the hooks and/or feature files is detecting features in other Pods outside NFD, e.g. in Kubernetes device plugins. It is possible to mount the source.d and/or features.d directories shared with the NFD Pod and deploy the custom hooks/features there. NFD will periodically scan the directories, run any hooks and read any feature files it finds. The default deployments contain hostPath mounts for the source.d and features.d directories. By using the same mounts in the secondary Pod (e.g. a device plugin) you have created a shared area for delivering hooks and feature files to NFD.
\n\n
A hook example
\n\n
Suppose the user has a shell script /etc/kubernetes/node-feature-discovery/source.d/my-source which has the following stdout output (the feature names below are illustrative):
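```plaintext
MY_FEATURE_1
MY_FEATURE_2=myvalue
/override_source-OVERRIDE_BOOL
/override_source-OVERRIDE_VALUE=123
override.namespace/value=456
```

which, in turn, will translate into the following node labels:

```plaintext
feature.node.kubernetes.io/my-source-MY_FEATURE_1=true
feature.node.kubernetes.io/my-source-MY_FEATURE_2=myvalue
feature.node.kubernetes.io/override_source-OVERRIDE_BOOL=true
feature.node.kubernetes.io/override_source-OVERRIDE_VALUE=123
override.namespace/value=456
```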
NFD tries to run any regular files found in the hooks directory. Any additional data files your hook might need (e.g. a configuration file) should be placed in a separate directory in order to avoid NFD unnecessarily trying to execute them. You can use a subdirectory under the hooks directory, for example /etc/kubernetes/node-feature-discovery/source.d/conf/.
\n\n
NOTE! NFD will blindly run any executables placed/mounted in the hooks\ndirectory. It is the user’s responsibility to review the hooks for e.g.\npossible security implications.
\n\n
NOTE! Be careful when creating and/or updating hook or feature files while\nNFD is running. In order to avoid race conditions you should write into a\ntemporary file (outside the source.d and features.d directories), and,\natomically create/update the original file by doing a filesystem move\noperation.
\n\n
Extended resources
\n\n
This feature is experimental and by no means a replacement for the usage of\ndevice plugins.
\n\n
Labels which have integer values can be promoted to Kubernetes extended resources by listing them in the master -resource-labels command line flag. These labels won't then show up in the node label section; they will appear only as extended resources.
\n\n
An example use-case for the extended resources could be based on a hook which creates a label for the node SGX EPC memory section size. By giving the name of that label in the -resource-labels flag, that value will then turn into an extended resource of the node, allowing Pods to request that resource and the Kubernetes scheduler to schedule such Pods only to those nodes which have a sufficient capacity of said resource left.
\n\n
Similar to labels, the default namespace feature.node.kubernetes.io is\nautomatically prefixed to the extended resource, if the promoted label doesn’t\nhave a namespace.
\n\n
Example usage of the command line arguments, using a new namespace:\nnfd-master -resource-labels=my_source-my.feature,sgx.some.ns/epc -extra-label-ns=sgx.some.ns
\n\n
The above would result in the following extended resources, provided that the related labels exist:
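Assuming both labels exist and carry integer values, the node status would then contain something along these lines (a sketch; the values shown are placeholders taken from the corresponding labels):

```yaml
status:
  capacity:
    feature.node.kubernetes.io/my_source-my.feature: "4"
    sgx.some.ns/epc: "8589934592"
```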
core

The core section contains common configuration settings that are not specific to any particular feature source.
\n\n
core.sleepInterval
\n\n
core.sleepInterval specifies the interval between consecutive passes of\nfeature (re-)detection, and thus also the interval between node re-labeling. A\nnon-positive value implies infinite sleep interval, i.e. no re-detection or\nre-labeling is done.
\n\n
Note: Overridden by the deprecated --sleep-interval command line flag (if\nspecified).
\n\n
Default: 60s
\n\n
Example:
\n\n
core:
  sleepInterval: 60s
\n\n
core.sources
\n\n
core.sources specifies the list of enabled feature sources. A special value\nall enables all feature sources.
\n\n
Note: Overridden by the deprecated --sources command line flag (if\nspecified).
\n\n
Default: [all]
\n\n
Example:
\n\n
core:
  sources:
    - system
    - custom
\n\n
core.labelWhiteList
\n\n
core.labelWhiteList specifies a regular expression for filtering feature\nlabels based on the label name. Non-matching labels are not published.
\n\n
Note: The regular expression is only matched against the "basename" part of the label, i.e. the part of the name after '/'. The label prefix (or namespace) is omitted.
\n\n
Note: Overridden by the deprecated --label-whitelist command line flag (if\nspecified).
\n\n
Default: null
\n\n
Example:
\n\n
core:
  labelWhiteList: '^cpu-cpuid'
\n\n
core.noPublish
\n\n
Setting core.noPublish to true disables all communication with the\nnfd-master. It is effectively a “dry-run” flag: nfd-worker runs feature\ndetection normally, but no labeling requests are sent to nfd-master.
\n\n
Note: Overridden by the --no-publish command line flag (if specified).
\n\n
Default: false
\n\n
Example:
\n\n
core:
  noPublish: true
\n\n
core.klog
\n\n
The following options specify the logger configuration, most of which can be dynamically adjusted at run-time.
\n\n
Note: The logger options can also be specified via command line flags which\ntake precedence over any corresponding config file options.
\n\n
core.klog.addDirHeader
\n\n
If true, adds the file directory to the header of the log messages.
\n\n
Default: false
\n\n
Run-time configurable: yes
\n\n
core.klog.alsologtostderr
\n\n
Log to standard error as well as files.
\n\n
Default: false
\n\n
Run-time configurable: yes
\n\n
core.klog.logBacktraceAt
\n\n
When logging hits line file:N, emit a stack trace.
\n\n
Default: empty
\n\n
Run-time configurable: yes
\n\n
core.klog.logDir
\n\n
If non-empty, write log files in this directory.
\n\n
Default: empty
\n\n
Run-time configurable: no
\n\n
core.klog.logFile
\n\n
If non-empty, use this log file.
\n\n
Default: empty
\n\n
Run-time configurable: no
\n\n
core.klog.logFileMaxSize
\n\n
Defines the maximum size a log file can grow to. Unit is megabytes. If the\nvalue is 0, the maximum file size is unlimited.
\n\n
Default: 1800
\n\n
Run-time configurable: no
\n\n
core.klog.logtostderr
\n\n
Log to standard error instead of files
\n\n
Default: true
\n\n
Run-time configurable: yes
\n\n
core.klog.skipHeaders
\n\n
If true, avoid header prefixes in the log messages.
\n\n
Default: false
\n\n
Run-time configurable: yes
\n\n
core.klog.skipLogHeaders
\n\n
If true, avoid headers when opening log files.
\n\n
Default: false
\n\n
Run-time configurable: no
\n\n
core.klog.stderrthreshold
\n\n
Logs at or above this threshold go to stderr (default 2)
\n\n
Run-time configurable: yes
\n\n
core.klog.v
\n\n
Number for the log level verbosity.
\n\n
Default: 0
\n\n
Run-time configurable: yes
\n\n
core.klog.vmodule
\n\n
Comma-separated list of pattern=N settings for file-filtered logging.
\n\n
Default: empty
\n\n
Run-time configurable: yes
\n\n
sources
\n\n
The sources section contains feature source specific configuration parameters.
\n\n
sources.cpu
\n\n
sources.cpu.cpuid
\n\n
sources.cpu.cpuid.attributeBlacklist
\n\n
Prevent publishing cpuid features listed in this option.
\n\n
Note: overridden by sources.cpu.cpuid.attributeWhitelist (if specified)
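Example (a minimal sketch; the blacklisted attribute names here are illustrative):

```yaml
sources:
  cpu:
    cpuid:
      attributeBlacklist: [MMX, MMXEXT]
```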
sources.kernel.configOpts

Kernel configuration options to publish as feature labels.
\n\n
Default: [NO_HZ, NO_HZ_IDLE, NO_HZ_FULL, PREEMPT]
\n\n
Example:
\n\n
sources:
  kernel:
    configOpts: [NO_HZ, X86, DMI]
\n\n
sources.pci
\n\n
sources.pci.deviceClassWhitelist
\n\n
List of PCI device class IDs for which to publish a label. Can be specified as a main class only (e.g. 03) or as a full class-subclass combination (e.g. 0300) - the former implies that all subclasses are accepted. The format of the labels can be further configured with deviceLabelFields.
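Example (a minimal sketch; 0200 is the Ethernet controller class-subclass and 03 covers all display controllers):

```yaml
sources:
  pci:
    deviceClassWhitelist: ["0200", "03"]
```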
sources.pci.deviceLabelFields

The set of PCI ID fields to use when constructing the name of the feature label. Valid fields are class, vendor, device, subsystem_vendor and subsystem_device.

sources.usb.deviceLabelFields

The set of USB ID fields to use when constructing the name of the feature label. With class and vendor selected as the label fields, NFD would publish labels like: feature.node.kubernetes.io/usb-<class-id>_<vendor-id>.present=true
\n\n
sources.custom
\n\n
List of rules to process in the custom feature source to create user-specific\nlabels. Refer to the documentation of the\ncustom feature source for details of\nthe available rules and their configuration.
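For example, a single rule in the worker configuration could look like the following (the feature name and PCI IDs are illustrative):

```yaml
sources:
  custom:
    - name: "my.custom.feature"
      matchOn:
        - pciId:
            vendor: ["15b3"]
            device: ["1014"]
```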
-ca-file

The -ca-file is one of the three flags (together with -cert-file and -key-file) controlling the mutual TLS authentication on the topology-updater side. This flag specifies the TLS root certificate that is used for verifying the authenticity of nfd-master.
\n\n
Default: empty
\n\n
Note: Must be specified together with -cert-file and -key-file
-cert-file

The -cert-file is one of the three flags (together with -ca-file and -key-file) controlling mutual TLS authentication on the topology-updater side. This flag specifies the TLS certificate presented for authenticating outgoing requests.
\n\n
Default: empty
\n\n
Note: Must be specified together with -ca-file and -key-file
-key-file

The -key-file is one of the three flags (together with -ca-file and -cert-file) controlling the mutual TLS authentication on the topology-updater side. This flag specifies the private key corresponding to the given certificate file (-cert-file) that is used for authenticating outgoing requests.
\n\n
Default: empty
\n\n
Note: Must be specified together with -cert-file and -ca-file
-server-name-override

The -server-name-override flag specifies the common name (CN) to expect from the nfd-master TLS certificate. This flag is mostly intended for development and debugging purposes.
-no-publish

The -no-publish flag disables all communication with the nfd-master, making it a "dry-run" flag for nfd-topology-updater. NFD-Topology-Updater runs resource hardware topology detection normally, but no CR requests are sent to nfd-master.
\n\n
Default: false
\n\n
Example:
\n\n
nfd-topology-updater -no-publish\n
\n\n
-oneshot
\n\n
The -oneshot flag causes nfd-topology-updater to exit after one pass of\nresource hardware topology detection.
\n\n
Default: false
\n\n
Example:
\n\n
nfd-topology-updater -oneshot -no-publish
\n\n
-sleep-interval
\n\n
The -sleep-interval specifies the interval between resource hardware\ntopology re-examination (and CR updates). A non-positive value implies\ninfinite sleep interval, i.e. no re-detection is done.
\n\n
Default: 60s
\n\n
Example:
\n\n
nfd-topology-updater -sleep-interval=1h\n
\n\n
-watch-namespace
\n\n
The -watch-namespace specifies the namespace to ensure that resource\nhardware topology examination only happens for the pods running in the\nspecified namespace. Pods that are not running in the specified namespace\nare not considered during resource accounting. This is particularly useful\nfor testing/debugging purpose. A “*” value would mean that all the pods would\nbe considered during the accounting process.
\n\n
Default: “*”
\n\n
Example:
\n\n
nfd-topology-updater -watch-namespace=rte\n
\n\n
-kubelet-config-file
\n\n
The -kubelet-config-file specifies the path to the Kubelet’s configuration\nfile.
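Example (the path shown is a common kubelet default; adjust it to match your cluster):

nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml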
-podresources-socket

The -podresources-socket specifies the path to the Unix socket where kubelet exports a gRPC service to enable discovery of in-use CPUs and devices, and to provide metadata for them.
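Example (the socket path shown is the usual kubelet default; adjust it if your kubelet is configured differently):

nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock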
Examples and demos
A demo on the benefits of using node feature discovery can be found in the\nsource code repository under\ndemo/.