Developer guide
Table of contents
Building from source
Download the source code
git clone https://github.com/kubernetes-sigs/node-feature-discovery
cd node-feature-discovery
Docker build
Build the container image
See customizing the build below for altering the container image registry, for example.
make
Push the container image
This step is optional; the example below uses Docker.
docker push <IMAGE_TAG>
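To change where the image goes, the build can be customized through Makefile variables. The variable names below (IMAGE_REGISTRY, IMAGE_TAG_NAME) and the image name under the registry are assumptions for illustration; see customizing the build for the authoritative list:
# Build with a custom registry and tag (placeholder values)
make IMAGE_REGISTRY=<my-registry> IMAGE_TAG_NAME=<my-tag>
# Push the resulting image with Docker
docker push <my-registry>/node-feature-discovery:<my-tag>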
NOTE Some feature sources need certain directories and/or files from the host mounted inside the NFD container. Thus, you need to provide Docker with the correct --volume options in order for them to work correctly when run stand-alone directly with docker run. See the template spec for up-to-date information about the required volume mounts.
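For illustration only, a stand-alone run could look roughly like the following; the host paths and their in-container mount points here are assumptions, so take the actual list from the template spec:
# Hypothetical mounts - check the worker daemonset template for the real ones
docker run --rm \
  -v /boot:/host-boot:ro \
  -v /etc/os-release:/host-etc/os-release:ro \
  -v /sys:/host-sys:ro \
  k8s.gcr.io/nfd/node-feature-discovery:v0.7.0 nfd-worker --oneshot --no-publish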
Documentation
All documentation resides under the docs directory in the source tree. It is designed to be served as an HTML site by GitHub Pages.
Building the documentation is containerized in order to fix the build environment. The recommended way for developing documentation is to run:
make site-serve
This will build the documentation in a container and serve it under localhost:4000/, making it easy to verify the results. Any changes made to docs/ will automatically re-trigger a rebuild and are reflected in the served content, which can be inspected with a simple browser refresh.
In order to just build the HTML documentation, run:
make site-build
This will generate HTML documentation under docs/_site/.
Advanced topics and reference.
Nfd-master commandline flags
Table of contents
- -h, --help
- --version
- --prune
- --port
- --ca-file
- --cert-file
- --key-file
- --verify-node-name
- --no-publish
- --label-whitelist
- --extra-label-ns
- --resource-labels
To quickly view available command line flags execute nfd-master --help. In a docker container:
docker run k8s.gcr.io/nfd/node-feature-discovery:v0.7.0 nfd-master --help
-h, --help
Print usage and exit.
--version
Print version and exit.
--prune
The --prune flag is a sub-command-like option for cleaning up the cluster. It causes nfd-master to remove all NFD-related labels, annotations and extended resources from all Node objects of the cluster and exit.
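Example (a plain invocation; in a cluster this is typically wrapped in a one-off job such as the prune job template referenced in the deployment instructions):
nfd-master --prune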
--port
The --port flag specifies the TCP port that nfd-master listens on for incoming requests.
Default: 8080
Example:
nfd-master --port=443
--ca-file
The --ca-file
is one of the three flags (together with --cert-file
and --key-file
) controlling master-worker mutual TLS authentication on the nfd-master side. This flag specifies the TLS root certificate that is used for authenticating incoming connections. NFD-Worker side needs to have matching key and cert files configured in order for the incoming requests to be accepted.
Default: empty
Note: Must be specified together with --cert-file
and --key-file
Example:
nfd-master --ca-file=/opt/nfd/ca.crt --cert-file=/opt/nfd/master.crt --key-file=/opt/nfd/master.key
--cert-file
The --cert-file
is one of the three flags (together with --ca-file
and --key-file
) controlling master-worker mutual TLS authentication on the nfd-master side. This flag specifies the TLS certificate presented for authenticating outgoing traffic towards nfd-worker.
Default: empty
Note: Must be specified together with --ca-file
and --key-file
Example:
nfd-master --cert-file=/opt/nfd/master.crt --key-file=/opt/nfd/master.key --ca-file=/opt/nfd/ca.crt
--label-whitelist
The --label-whitelist flag specifies a regular expression for filtering feature labels based on their name. Each label must match against the given regular expression in order to be published.
Note: The regular expression is only matched against the "basename" part of the label, i.e. the part of the name after '/'. The label namespace is omitted.
Default: empty
Example:
nfd-master --label-whitelist='.*cpuid\.'
--extra-label-ns
The --extra-label-ns flag specifies a comma-separated list of allowed feature label namespaces. By default, nfd-master only allows creating labels in the default feature.node.kubernetes.io label namespace. This option can be used to allow vendor-specific namespaces for custom labels from the local and custom feature sources.
The same namespace control and this flag apply to Extended Resources (created with --resource-labels), too.
Default: empty
Example:
nfd-master --extra-label-ns=vendor-1.com,vendor-2.io
--resource-labels
The --resource-labels
flag specifies a comma-separated list of features to be advertised as extended resources instead of labels. Features that have integer values can be published as Extended Resources by listing them in this flag.
Default: empty
Example:
nfd-master --resource-labels=vendor-1.com/feature-1,vendor-2.io/feature-2
Nfd-worker commandline flags
Table of contents
- -h, --help
- --version
- --config
- --options
- --server
- --ca-file
- --cert-file
- --key-file
- --server-name-override
- --sources
- --no-publish
- --label-whitelist
- --oneshot
- --sleep-interval
To quickly view available command line flags execute nfd-worker --help. In a docker container:
docker run k8s.gcr.io/nfd/node-feature-discovery:v0.7.0 nfd-worker --help
-h, --help
Print usage and exit.
--version
Print version and exit.
--config
The --config
flag specifies the path of the nfd-worker configuration file to use.
Default: /etc/kubernetes/node-feature-discovery/nfd-worker.conf
Example:
nfd-worker --config=/opt/nfd/worker.conf
--options
The --options flag may be used to specify and override configuration file options directly from the command line. The required format is the same as in the config file, i.e. JSON or YAML. Configuration options specified via this flag will override those from the configuration file.
Default: empty
Example:
nfd-worker --options='{"sources":{"cpu":{"cpuid":{"attributeWhitelist":["AVX","AVX2"]}}}}'
--server
The --server flag specifies the address of the nfd-master endpoint to connect to.
Default: localhost:8080
Example:
nfd-worker --server=nfd-master.nfd.svc.cluster.local:443
--label-whitelist
The --label-whitelist flag specifies a regular expression for filtering feature labels based on their name. Each label must match against the given regular expression in order to be published.
Note: The regular expression is only matched against the "basename" part of the label, i.e. the part of the name after '/'. The label namespace is omitted.
Default: empty
Example:
nfd-worker --label-whitelist='.*cpuid\.'
--oneshot
The --oneshot
flag causes nfd-worker to exit after one pass of feature detection.
Default: false
Example:
nfd-worker --oneshot --no-publish
--sleep-interval
The --sleep-interval
specifies the interval between feature re-detection (and node re-labeling). A non-positive value implies infinite sleep interval, i.e. no re-detection or re-labeling is done.
Default: 60s
Example:
nfd-worker --sleep-interval=1h
You can reach us via the following channels:
This is a SIG-node subproject, hosted under the Kubernetes SIGs organization on GitHub. The project was established in 2016 and was migrated to Kubernetes SIGs in 2018.
This is open source software released under the Apache 2.0 License.
Deployment and usage
Table of contents
Requirements
- Linux (x86_64/Arm64/Arm)
- kubectl (properly set up and configured to work with your Kubernetes cluster)
Deployment options
Operator
Deployment using the Node Feature Discovery Operator is recommended to be done via operatorhub.io.
- You need to have OLM installed. If you don't, take a look at the latest release for detailed instructions.
- Install the operator:
kubectl create -f https://operatorhub.io/install/nfd-operator.yaml
- Create NodeFeatureDiscovery resource (in nfd namespace here):
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
Removing feature labels
NFD-Master has a special --prune
command line flag for removing all nfd-related node labels, annotations and extended resources from the cluster.
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.7.0/nfd-prune.yaml.template
kubectl -n node-feature-discovery wait job.batch/nfd-prune --for=condition=complete && \
kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.7.0/nfd-prune.yaml.template
NOTE: You must run prune before removing the RBAC rules (serviceaccount, clusterrole and clusterrolebinding).
This page contains usage examples and demos.
A demo on the benefits of using node feature discovery can be found in the source code repository under demo/.
Feature discovery
Table of contents
Feature discovery in nfd-worker is performed by a set of separate modules called feature sources. Most of them are specifically responsible for a certain domain of features (e.g. cpu). In addition there are two highly customizable feature sources that work across the system.
Feature labels
Each discovered feature is advertised as a label in the Kubernetes Node object. The published node labels encode a few pieces of information:
- Namespace (all built-in labels use feature.node.kubernetes.io)
- The source for each label (e.g. cpu)
- The name of the discovered feature as it appears in the underlying source (e.g. cpuid.AESNI from cpu)
- The value of the discovered feature
Feature label names adhere to the following pattern:
<namespace>/<source name>-<feature name>[.<attribute name>]
The last component (i.e. attribute-name) is optional, and only used if a feature logically has sub-hierarchy, e.g. sriov.capable and sriov.configure from the network source.
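For example, labels produced by the cpu and network sources following this pattern look like the following (values shown for illustration):
feature.node.kubernetes.io/cpu-cpuid.AESNI=true
feature.node.kubernetes.io/network-sriov.capable=true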
The --sources
flag controls which sources to use for discovery.
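For instance, discovery could be restricted to a few sources like this (the particular source names are only an example):
nfd-worker --sources=cpu,kernel,pci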
Note: Consecutive runs of nfd-worker will update the labels on a given node. If features are not discovered on a consecutive run, the corresponding label will be removed. This includes any restrictions placed on the consecutive run, such as restricting discovered features with the --label-whitelist option.
Feature sources
CPU
The cpu feature source supports the following labels:
Feature name | Attribute | Description
---|---|---
cpuid | <cpuid flag> | CPU capability is supported
hardware_multithreading | | Hardware multithreading, such as Intel HTT, enabled (number of logical CPUs is greater than physical CPUs)
power | sst_bf.enabled | Intel SST-BF (Intel Speed Select Technology - Base frequency) enabled
pstate | turbo | Set to 'true' if turbo frequencies are enabled in Intel pstate driver, set to 'false' if they have been disabled.
rdt | RDTMON | Intel RDT Monitoring Technology
rdt | RDTCMT | Intel Cache Monitoring (CMT)
rdt | RDTMBM | Intel Memory Bandwidth Monitoring (MBM)
rdt | RDTL3CA | Intel L3 Cache Allocation Technology
rdt | RDTL2CA | Intel L2 Cache Allocation Technology
rdt | RDTMBA | Intel Memory Bandwidth Allocation (MBA) Technology
The (sub-)set of CPUID attributes to publish is configurable via the attributeBlacklist and attributeWhitelist cpuid options of the cpu source. If whitelist is specified, only whitelisted attributes will be published. With blacklist, only blacklisted attributes are filtered out. attributeWhitelist has priority over attributeBlacklist. For examples and more information about configurability, see configuration. By default, the following CPUID flags have been blacklisted: BMI1, BMI2, CLMUL, CMOV, CX16, ERMS, F16C, HTT, LZCNT, MMX, MMXEXT, NX, POPCNT, RDRAND, RDSEED, RDTSCP, SGX, SSE, SSE2, SSE3, SSE4.1, SSE4.2 and SSSE3.
NOTE The cpuid features advertise supported CPU capabilities, that is, a capability might be supported but not enabled.
X86 CPUID attributes (partial list)
Attribute | Description
---|---
ADX | Multi-Precision Add-Carry Instruction Extensions (ADX)
AESNI | Advanced Encryption Standard (AES) New Instructions (AES-NI)
AVX | Advanced Vector Extensions (AVX)
AVX2 | Advanced Vector Extensions 2 (AVX2)
See the full list in github.com/klauspost/cpuid.
Arm CPUID attribute (partial list)
Attribute | Description
---|---
IDIVA | Integer divide instructions available in ARM mode
IDIVT | Integer divide instructions available in Thumb mode
THUMB | Thumb instructions
FASTMUL | Fast multiplication
VFP | Vector floating point instruction extension (VFP)
VFPv3 | Vector floating point extension v3
VFPv4 | Vector floating point extension v4
VFPD32 | VFP with 32 D-registers
HALF | Half-word loads and stores
EDSP | DSP extensions
NEON | NEON SIMD instructions
LPAE | Large Physical Address Extensions
Arm64 cpuid attribute (partial list)
Attribute | Description
---|---
AES | Announcing the Advanced Encryption Standard
EVSTRM | Event Stream Frequency Features
FPHP | Half Precision(16bit) Floating Point Data Processing Instructions
ASIMDHP | Half Precision(16bit) Asimd Data Processing Instructions
ATOMICS | Atomic Instructions to the A64
ASIMRDM | Support for Rounding Double Multiply Add/Subtract
PMULL | Optional Cryptographic and CRC32 Instructions
JSCVT | Perform Conversion to Match Javascript
DCPOP | Persistent Memory Support
Custom
The Custom feature source allows the user to define features based on a mix of predefined rules. A rule is provided input which determines how it matches a defined feature. The rules are specified in the nfd-worker configuration file. See configuration for instructions and examples on how to set up and manage the worker configuration.
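As a rough sketch of a single rule (the rule name and the loadedKMod match type are assumptions here, so check the configuration reference for the exact schema), the same structure can also be passed on the command line with --options:
nfd-worker --options='{"sources":{"custom":[{"name":"my.kernel.feature","matchOn":[{"loadedKMod":["dummy"]}]}]}}'
If the rule matches on a node, nfd-worker advertises it as a feature label of the custom source.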
To aid in making Custom Features clearer, we define a general and a per-rule nomenclature, keeping things as consistent as possible.
General nomenclature & definitions
Rule: Represents a matching logic that is used to match on a feature.
Rule Input: The input a Rule is provided. This determines how a Rule performs the match operation.
Matcher: A composition of Rules; each Matcher may be composed of at most one instance of each Rule.
feature.node.kubernetes.io/override_source-OVERRIDE_VALUE=123
override.namespace/value=456
NFD tries to run any regular files found in the hooks directory. Any additional data files your hook might need (e.g. a configuration file) should be placed in a separate directory in order to avoid NFD unnecessarily trying to execute them. You can use a subdirectory under the hooks directory, for example /etc/kubernetes/node-feature-discovery/source.d/conf/.
NOTE! NFD will blindly run any executables placed/mounted in the hooks directory. It is the user's responsibility to review the hooks for e.g. possible security implications.
NOTE! Be careful when creating and/or updating hook or feature files while NFD is running. In order to avoid race conditions you should write into a temporary file (outside the source.d and features.d directories), and then atomically create/update the original file by doing a filesystem move operation.
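To make the hook mechanics concrete, here is a minimal sketch; the file name and feature names are hypothetical, and the stdout format of one feature per line, optionally with =<value>, is the assumption being illustrated:
#!/bin/sh
# Hypothetical hook: /etc/kubernetes/node-feature-discovery/source.d/my-hook.sh
# nfd-worker executes it and parses its stdout, one feature per line.
echo "my-feature"               # boolean feature, label value defaults to "true"
echo "my-numeric-feature=123"   # feature with an explicit value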
Extended resources
This feature is experimental and by no means a replacement for the usage of device plugins.
Labels which have integer values can be promoted to Kubernetes extended resources by listing them in the master --resource-labels command line flag. These labels won't then show up in the node label section; they will appear only as extended resources.
An example use-case for the extended resources could be based on a hook which creates a label for the node SGX EPC memory section size. By giving the name of that label in the --resource-labels flag, that value will then turn into an extended resource of the node, allowing pods to request that resource and the Kubernetes scheduler to schedule such pods only to those nodes which have a sufficient capacity of said resource left.
Similar to labels, the default namespace feature.node.kubernetes.io
is automatically prefixed to the extended resource, if the promoted label doesn't have a namespace.
Example usage of the command line arguments, using a new namespace: nfd-master --resource-labels=my_source-my.feature,sgx.some.ns/epc --extra-label-ns=sgx.some.ns
The above would result in following extended resources provided that related labels exist:
sgx.some.ns/epc: <label value>
feature.node.kubernetes.io/my_source-my.feature: <label value>
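To illustrate the consumer side (the pod name and image below are placeholders; the resource name matches the sgx.some.ns/epc example above), a pod could request the extended resource like any other resource:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: epc-consumer
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:latest
    resources:
      limits:
        sgx.some.ns/epc: "10"
EOF
The scheduler then only considers nodes whose sgx.some.ns/epc capacity can accommodate the request.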
Node Feature Discovery
Welcome to Node Feature Discovery – a Kubernetes add-on for detecting hardware features and system configuration!
Continue to:
- Introduction for more details on the project.
- Quick start for quick step-by-step instructions on how to get NFD running on your cluster.
Quick-start – the short-short version
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.7.0/nfd-master.yaml.template
namespace/node-feature-discovery created
...
"feature.node.kubernetes.io/cpu-cpuid.AESNI": "true",
...
Introduction
Table of contents
This software enables node feature discovery for Kubernetes. It detects hardware features available on each node in a Kubernetes cluster, and advertises those features using node labels.
NFD consists of two software components:
- nfd-master
- nfd-worker
NFD-Master
NFD-Master is the daemon responsible for communication towards the Kubernetes API. That is, it receives labeling requests from the worker and modifies node objects accordingly.
NFD-Worker
NFD-Worker is a daemon responsible for feature detection. It then communicates the information to nfd-master which does the actual node labeling. One instance of nfd-worker is supposed to be running on each node of the cluster.
Feature discovery
Feature discovery is divided into domain-specific feature sources:
- CPU
- IOMMU
- Kernel
- Memory
- Network
- PCI
- Storage
- System
- USB
- Custom (rule-based custom features)
- Local (hooks for user-specific features)
Each feature source is responsible for detecting a set of features which, in turn, are turned into node feature labels. Feature labels are prefixed with feature.node.kubernetes.io/ and also contain the name of the feature source. Non-standard user-specific feature labels can be created with the local and custom feature sources.
An overview of the default feature labels:
{
"feature.node.kubernetes.io/cpu-<feature-name>": "true",
"feature.node.kubernetes.io/custom-<feature-name>": "true",
"feature.node.kubernetes.io/iommu-<feature-name>": "true",
"feature.node.kubernetes.io/usb-<device label>.present": "<feature value>",
"feature.node.kubernetes.io/<file name>-<feature name>": "<feature value>"
}
Node annotations
NFD also annotates nodes it is running on:
Annotation | Description |
---|---|
nfd.node.kubernetes.io/master.version | Version of the nfd-master instance running on the node. Informative use only. |
nfd.node.kubernetes.io/worker.version | Version of the nfd-worker instance running on the node. Informative use only. |
nfd.node.kubernetes.io/feature-labels | Comma-separated list of node labels managed by NFD. NFD uses this internally so must not be edited by users. |
nfd.node.kubernetes.io/extended-resources | Comma-separated list of node extended resources managed by NFD. NFD uses this internally so must not be edited by users. |
Inapplicable annotations are not created; for example, master.version is only created on nodes running nfd-master.
Quick start
Minimal steps to deploy the latest released version of NFD in your cluster.
Installation
Deploy nfd-master – creates a new namespace, service and required RBAC rules
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.7.0/nfd-master.yaml.template
Deploy nfd-worker as a daemonset
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.7.0/nfd-worker-daemonset.yaml.template
Verify
Wait until NFD master and worker are running.
$ kubectl -n node-feature-discovery get ds,deploy
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
See that the pod is running on a desired node
$ kubectl get po feature-dependent-pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
feature-dependent-pod 1/1 Running 0 23s 10.36.0.4 node-2 <none> <none>
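If you also want to inspect the generated labels directly at this point (jq is used here purely for readability; any JSON tooling works):
kubectl get nodes -o json | jq '.items[].metadata.labels'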