Node feature discovery for Kubernetes
- Overview
- Command line interface
- Feature discovery
- Getting started
- Building from source
- Targeting nodes with specific features
- References
- License
- Demo
Overview
This software enables node feature discovery for Kubernetes. It detects hardware features available on each node in a Kubernetes cluster, and advertises those features using node labels.
NFD consists of two software components:
- nfd-master is responsible for labeling Kubernetes node objects
- nfd-worker detects features and communicates them to nfd-master. One instance of nfd-worker should run on each node of the cluster
Command line interface
You can run NFD in stand-alone Docker containers, e.g. for testing purposes. This is useful for checking feature detection.
NFD-Master
When running as a standalone container, labeling is expected to fail because the Kubernetes API is not available. Thus, it is recommended to use the --no-publish command line flag. E.g.
$ docker run --rm --name=nfd-test <NFD_CONTAINER_IMAGE> nfd-master --no-publish
2019/02/01 14:48:21 Node Feature Discovery Master <NFD_VERSION>
2019/02/01 14:48:21 gRPC server serving on port: 8080
Command line flags of nfd-master:
$ docker run --rm <NFD_CONTAINER_IMAGE> nfd-master --help
...
nfd-master.
Usage:
nfd-master [--no-publish] [--label-whitelist=<pattern>] [--port=<port>]
[--ca-file=<path>] [--cert-file=<path>] [--key-file=<path>]
[--verify-node-name]
nfd-master -h | --help
nfd-master --version
Options:
-h --help Show this screen.
--version Output version and exit.
--port=<port> Port on which to listen for connections.
[Default: 8080]
--ca-file=<path> Root certificate for verifying connections
[Default: ]
--cert-file=<path> Certificate used for authenticating connections
[Default: ]
--key-file=<path> Private key matching --cert-file
[Default: ]
--verify-node-name Verify worker node name against CN from the TLS
certificate. Only has effect when TLS authentication
has been enabled.
--no-publish Do not publish feature labels
--label-whitelist=<pattern> Regular expression to filter label names to
publish to the Kubernetes API server. [Default: ]
NFD-Worker
In order to run nfd-worker as a "stand-alone" container against your standalone nfd-master you need to run them in the same network namespace:
$ docker run --rm --network=container:nfd-test <NFD_CONTAINER_IMAGE> nfd-worker
2019/02/01 14:48:56 Node Feature Discovery Worker <NFD_VERSION>
...
If you just want to try out feature discovery without connecting to nfd-master, pass the --no-publish flag to nfd-worker.
Command line flags of nfd-worker:
$ docker run --rm <NFD_CONTAINER_IMAGE> nfd-worker --help
...
nfd-worker.
Usage:
nfd-worker [--no-publish] [--sources=<sources>] [--label-whitelist=<pattern>]
[--oneshot | --sleep-interval=<seconds>] [--config=<path>]
[--options=<config>] [--server=<server>] [--server-name-override=<name>]
[--ca-file=<path>] [--cert-file=<path>] [--key-file=<path>]
nfd-worker -h | --help
nfd-worker --version
Options:
-h --help Show this screen.
--version Output version and exit.
--config=<path> Config file to use.
[Default: /etc/kubernetes/node-feature-discovery/nfd-worker.conf]
--options=<config> Specify config options from command line. Config
options are specified in the same format as in the
config file (i.e. json or yaml). These options
will override settings read from the config file.
[Default: ]
--ca-file=<path> Root certificate for verifying connections
[Default: ]
--cert-file=<path> Certificate used for authenticating connections
[Default: ]
--key-file=<path> Private key matching --cert-file
[Default: ]
--server=<server> NFD server address to connect to.
[Default: localhost:8080]
--server-name-override=<name> Name (CN) to expect from server certificate, useful
in testing
[Default: ]
--sources=<sources> Comma separated list of feature sources.
[Default: cpu,cpuid,iommu,kernel,local,memory,network,pci,pstate,rdt,storage,system]
--no-publish Do not publish discovered features to the
cluster-local Kubernetes API server.
--label-whitelist=<pattern> Regular expression to filter label names to
publish to the Kubernetes API server. [Default: ]
--oneshot Label once and exit.
--sleep-interval=<seconds> Time to sleep between re-labeling. Non-positive
value implies no re-labeling (i.e. infinite
sleep). [Default: 60s]
NOTE Some feature sources need certain directories and/or files from the host mounted inside the NFD container. Thus, you need to provide Docker with the correct --volume options in order for them to work correctly when run stand-alone directly with docker run. See the template spec for up-to-date information about the required volume mounts.
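For example, running nfd-worker stand-alone with some host directories mounted in might look like the following. This is an illustrative sketch only: the host paths and mount points below are assumptions, and the template spec remains the authoritative reference for the required mounts.
$ docker run --rm --network=container:nfd-test \
    --volume /boot:/host-boot:ro \
    --volume /etc/os-release:/host-etc/os-release:ro \
    --volume /sys:/host-sys:ro \
    <NFD_CONTAINER_IMAGE> nfd-worker --no-publish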
Feature discovery
Feature sources
The current set of feature sources are the following:
- CPU
- CPUID for x86/Arm64 CPU details
- IOMMU
- Kernel
- Memory
- Network
- PCI
- Pstate (Intel P-State driver)
- RDT (Intel Resource Director Technology)
- Storage
- System
- Local (hooks for user-specific features)
Feature labels
The published node labels encode a few pieces of information:
- Namespace, i.e. feature.node.kubernetes.io
- The source for each label (e.g. cpuid).
- The name of the discovered feature as it appears in the underlying source (e.g. AESNI from cpuid).
- The value of the discovered feature.
Feature label names adhere to the following pattern:
<namespace>/<source name>-<feature name>[.<attribute name>]
The last component (i.e. attribute name) is optional, and only used if a feature logically has a sub-hierarchy, e.g. sriov.capable and sriov.configured from the network source.
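For example, the sriov.capable feature from the network source (a binary label) is published as:
feature.node.kubernetes.io/network-sriov.capable=true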
Note: only features that are available on a given node are labeled, so the only label value published for binary features is the string "true".
{
"feature.node.kubernetes.io/cpu-<feature-name>": "true",
"feature.node.kubernetes.io/cpuid-<feature-name>": "true",
"feature.node.kubernetes.io/iommu-<feature-name>": "true",
"feature.node.kubernetes.io/kernel-<feature name>": "<feature value>",
"feature.node.kubernetes.io/memory-<feature-name>": "true",
"feature.node.kubernetes.io/network-<feature-name>": "true",
"feature.node.kubernetes.io/pci-<device label>.present": "true",
"feature.node.kubernetes.io/pstate-<feature-name>": "true",
"feature.node.kubernetes.io/rdt-<feature-name>": "true",
"feature.node.kubernetes.io/storage-<feature-name>": "true",
"feature.node.kubernetes.io/system-<feature name>": "<feature value>",
"feature.node.kubernetes.io/<file name>-<feature name>": "<feature value>"
}
The --sources flag controls which sources to use for discovery.
Note: Consecutive runs of node-feature-discovery will update the labels on a given node. If features are not discovered on a consecutive run, the corresponding label will be removed. This includes any restrictions placed on the consecutive run, such as restricting discovered features with the --label-whitelist option.
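For example, to restrict discovery to the cpu and kernel sources only, using the stand-alone container invocation from above:
$ docker run --rm <NFD_CONTAINER_IMAGE> nfd-worker --no-publish --sources=cpu,kernel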
CPU Features
The CPU feature source differs from the CPUID feature source in that it discovers CPU related features that are actually enabled, whereas CPUID only reports supported CPU capabilities (i.e. a capability might be supported but not enabled) as reported by the cpuid instruction.
Feature | Attribute | Description |
---|---|---|
hardware_multithreading | | Hardware multithreading, such as Intel HTT, enabled (number of logical CPUs is greater than physical CPUs) |
power | sst_bf.enabled | Intel SST-BF (Intel Speed Select Technology - Base frequency) enabled |
X86 CPUID Features (Partial List)
Feature name | Description |
---|---|
ADX | Multi-Precision Add-Carry Instruction Extensions (ADX) |
AESNI | Advanced Encryption Standard (AES) New Instructions (AES-NI) |
AVX | Advanced Vector Extensions (AVX) |
AVX2 | Advanced Vector Extensions 2 (AVX2) |
BMI1 | Bit Manipulation Instruction Set 1 (BMI) |
BMI2 | Bit Manipulation Instruction Set 2 (BMI2) |
SSE4.1 | Streaming SIMD Extensions 4.1 (SSE4.1) |
SSE4.2 | Streaming SIMD Extensions 4.2 (SSE4.2) |
SGX | Software Guard Extensions (SGX) |
Arm64 CPUID Features (Partial List)
Feature name | Description |
---|---|
AES | Announcing the Advanced Encryption Standard |
EVSTRM | Event Stream Frequency Features |
FPHP | Half Precision(16bit) Floating Point Data Processing Instructions |
ASIMDHP | Half Precision(16bit) Asimd Data Processing Instructions |
ATOMICS | Atomic Instructions to the A64 |
ASIMRDM | Support for Rounding Double Multiply Add/Subtract |
PMULL | Optional Cryptographic and CRC32 Instructions |
JSCVT | Perform Conversion to Match Javascript |
DCPOP | Persistent Memory Support |
IOMMU Features
Feature name | Description |
---|---|
enabled | IOMMU is present and enabled in the kernel |
Kernel Features
Feature | Attribute | Description |
---|---|---|
config | <option name> | Kernel config option is enabled (set 'y' or 'm'). Default options are NO_HZ , NO_HZ_IDLE , NO_HZ_FULL and PREEMPT |
selinux | enabled | Selinux is enabled on the node |
version | full | Full kernel version as reported by /proc/sys/kernel/osrelease (e.g. '4.5.6-7-g123abcde') |
| major | First component of the kernel version (e.g. '4') |
| minor | Second component of the kernel version (e.g. '5') |
| revision | Third component of the kernel version (e.g. '6') |
The kernel config file to use, and the set of config options to be detected, are configurable. See configuration options for more information.
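For example, with the default config options a node could end up with labels such as:
feature.node.kubernetes.io/kernel-config.NO_HZ=true
feature.node.kubernetes.io/kernel-version.major=4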
P-State Features
Feature name | Description |
---|---|
turbo | Turbo frequencies are enabled in Intel pstate driver |
Memory Features
Feature | Attribute | Description |
---|---|---|
numa | | Multiple memory nodes, i.e. NUMA architecture detected |
nv | present | NVDIMM device(s) are present |
Network Features
Feature | Attribute | Description |
---|---|---|
sriov | capable | Single Root Input/Output Virtualization (SR-IOV) enabled Network Interface Card(s) present |
| configured | SR-IOV virtual functions have been configured |
PCI Features
Feature | Attribute | Description |
---|---|---|
<device label> | present | PCI device is detected |
<device label> is composed of raw PCI IDs, separated by underscores. The set of fields used in <device label> is configurable, valid fields being class, vendor, device, subsystem_vendor and subsystem_device. Defaults are class and vendor. An example label using the default label fields:
feature.node.kubernetes.io/pci-1200_8086.present=true
The set of PCI device classes that the feature source detects is also configurable. By default, device classes (0x)03, (0x)0b40 and (0x)12, i.e. GPUs, co-processors and accelerator cards, are detected.
See configuration options for more information on NFD config.
RDT (Intel Resource Director Technology) Features
Feature name | Description |
---|---|
RDTMON | Intel RDT Monitoring Technology |
RDTCMT | Intel Cache Monitoring (CMT) |
RDTMBM | Intel Memory Bandwidth Monitoring (MBM) |
RDTL3CA | Intel L3 Cache Allocation Technology |
RDTL2CA | Intel L2 Cache Allocation Technology |
RDTMBA | Intel Memory Bandwidth Allocation (MBA) Technology |
Storage Features
Feature name | Description |
---|---|
nonrotationaldisk | Non-rotational disk, like SSD, is present in the node |
System Features
Feature | Attribute | Description |
---|---|---|
os_release | ID | Operating system identifier |
| VERSION_ID | Operating system version identifier (e.g. '6.7') |
| VERSION_ID.major | First component of the OS version id (e.g. '6') |
| VERSION_ID.minor | Second component of the OS version id (e.g. '7') |
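As an illustration, a node whose OS reports VERSION_ID '6.7' could carry labels such as:
feature.node.kubernetes.io/system-os_release.VERSION_ID=6.7
feature.node.kubernetes.io/system-os_release.VERSION_ID.major=6
feature.node.kubernetes.io/system-os_release.VERSION_ID.minor=7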
Feature Detector Hooks (User-specific Features)
NFD has a special feature source named local which is designed for getting labels from user-specific feature detectors. It provides a mechanism for users to implement custom feature sources in a pluggable way, without modifying the nfd source code or Docker images. The local feature source can be used to advertise new user-specific features and to override labels created by the other feature sources.
The local feature source gets its labels in two different ways:
- It tries to execute files found under the /etc/kubernetes/node-feature-discovery/source.d/ directory. The hook files must be executable. When executed, the hooks are supposed to print all discovered features in stdout, one per line.
- It reads files found under the /etc/kubernetes/node-feature-discovery/features.d/ directory. The file content is expected to be similar to the hook output (described above).
These directories must be available inside the Docker image so Volumes and VolumeMounts must be used if standard NFD images are used. The given template files mount by default the source.d and the features.d directories from /etc/kubernetes/node-feature-discovery/source.d/ and /etc/kubernetes/node-feature-discovery/features.d/ on the host, respectively. You should update them to match your needs.
In both cases, the labels can be binary or non-binary, using either <name> or <name>=<value> format.
Unlike the other feature sources, the name of the file, instead of the name of the feature source (that would be local in this case), is normally used as a prefix in the label name. However, if the <name> of the label starts with a slash (/) it is used as the label name as is, without any additional prefix. This makes it possible for the user to fully control the feature label names, e.g. for overriding labels created by other feature sources.
The value of the label is either true (for binary labels) or <value> (for non-binary labels).
The stderr output of the hooks is propagated to the NFD log so it can be used for debugging and logging.
A hook example:
User has a shell script /etc/kubernetes/node-feature-discovery/source.d/my-source which has the following stdout output:
MY_FEATURE_1
MY_FEATURE_2=myvalue
/override_source-OVERRIDE_BOOL
/override_source-OVERRIDE_VALUE=123
which, in turn, will translate into the following node labels:
feature.node.kubernetes.io/my-source-MY_FEATURE_1=true
feature.node.kubernetes.io/my-source-MY_FEATURE_2=myvalue
feature.node.kubernetes.io/override_source-OVERRIDE_BOOL=true
feature.node.kubernetes.io/override_source-OVERRIDE_VALUE=123
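A minimal hook script producing the stdout output above could look like the following. This is a hypothetical sketch: any executable printing features in this format works. Remember to make the hook executable (e.g. chmod +x).
#!/bin/sh
# Print discovered features to stdout, one per line,
# in <name> or <name>=<value> format.
echo "MY_FEATURE_1"
echo "MY_FEATURE_2=myvalue"
echo "/override_source-OVERRIDE_BOOL"
echo "/override_source-OVERRIDE_VALUE=123"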
A file example:
User has a file /etc/kubernetes/node-feature-discovery/features.d/my-source which contains the following lines:
MY_FEATURE_1
MY_FEATURE_2=myvalue
/override_source-OVERRIDE_BOOL
/override_source-OVERRIDE_VALUE=123
which, in turn, will translate into the following node labels:
feature.node.kubernetes.io/my-source-MY_FEATURE_1=true
feature.node.kubernetes.io/my-source-MY_FEATURE_2=myvalue
feature.node.kubernetes.io/override_source-OVERRIDE_BOOL=true
feature.node.kubernetes.io/override_source-OVERRIDE_VALUE=123
NFD tries to run any regular files found from the hooks directory. Any additional data files your hook might need (e.g. a configuration file) should be placed in a separate directory in order to avoid NFD unnecessarily trying to execute these. You can use a subdirectory under the hooks directory, for example /etc/kubernetes/node-feature-discovery/source.d/conf/.
NOTE! NFD will blindly run any executables placed/mounted in the hooks directory. It is the user's responsibility to review the hooks for e.g. possible security implications.
Getting started
For a stable version with ready-built images see the latest released version (release notes).
If you want to use the latest development version (master branch) you need to build your own custom image.
System requirements
- Linux (x86_64/Arm64)
- kubectl (properly set up and configured to work with your Kubernetes cluster)
- Docker (only required to build and push docker images)
Usage
nfd-master
Nfd-master runs as a DaemonSet, by default on the master node(s) only. You can use the template spec provided to deploy nfd-master. You only need to update the template to use the correct image:
sed -E 's/^(\s*)image:.+$/\1image: <YOUR_IMAGE_REPO>:<YOUR_IMAGE_TAG>/' nfd-master.yaml.template > nfd-master.yaml
kubectl create -f nfd-master.yaml
Nfd-master listens for connections from nfd-worker(s) and connects to the Kubernetes API server to add node labels advertised by them.
If you have RBAC authorization enabled (as is the default e.g. with clusters initialized with kubeadm) you need to configure the appropriate ClusterRoles, ClusterRoleBindings and a ServiceAccount in order for NFD to create node labels. The provided template will configure these for you.
nfd-worker
Nfd-worker is preferably run as a Kubernetes DaemonSet. There is an example spec that can be used as a template, or as-is when just trying out the service. Similarly to nfd-master above, you need to update the template with the correct image:
sed -E 's/^(\s*)image:.+$/\1image: <YOUR_IMAGE_REPO>:<YOUR_IMAGE_TAG>/' nfd-worker-daemonset.yaml.template > nfd-worker-daemonset.yaml
kubectl create -f nfd-worker-daemonset.yaml
Nfd-worker connects to the nfd-master service to advertise hardware features.
When run as a daemonset, nodes are re-labeled at an interval specified using the --sleep-interval option. In the template the default interval is set to 60s, which is also the default when no --sleep-interval is specified.
Feature discovery can alternatively be configured as a one-shot job. There is an example script in this repo that demonstrates how to deploy the job in the cluster.
./label-nodes.sh <YOUR_IMAGE_REPO>:<YOUR_IMAGE_TAG>
The label-nodes.sh script tries to launch as many jobs as there are Ready nodes. Note that this approach does not guarantee running once on every node. For example, if some node is tainted NoSchedule or fails to start a job for some other reason, then some other node will run extra job instance(s) to satisfy the request and the tainted/failed node does not get labeled.
nfd-master and nfd-worker in the same Pod
You can also run nfd-master and nfd-worker inside a single pod:
sed -E 's/^(\s*)image:.+$/\1image: <YOUR_IMAGE_REPO>:<YOUR_IMAGE_TAG>/' nfd-daemonset-combined.yaml.template > nfd-daemonset-combined.yaml
kubectl apply -f nfd-daemonset-combined.yaml
Similar to the nfd-worker setup above, this creates a DaemonSet that schedules an NFD Pod on all worker nodes, with the difference that the Pod also contains an nfd-master instance. In this case no nfd-master service is run on the master node(s), but the worker nodes are able to label themselves.
This may be desirable e.g. in single-node setups.
TLS authentication
NFD supports mutual TLS authentication between the nfd-master and nfd-worker instances. That is, nfd-worker and nfd-master both verify that the other end presents a valid certificate.
TLS authentication is enabled by specifying the --ca-file, --key-file and --cert-file args, on both the nfd-master and nfd-worker instances. The template specs provided with NFD contain (commented out) example configuration for enabling TLS authentication.
The Common Name (CN) of the nfd-master certificate must match the DNS name of the nfd-master Service of the cluster. By default, nfd-master only checks that the nfd-worker certificate has been signed by the specified root certificate (--ca-file). Additional hardening can be enabled by specifying --verify-node-name in nfd-master args, in which case nfd-master verifies that the NodeName presented by nfd-worker matches the Common Name (CN) of its certificate. This means that each nfd-worker requires an individual node-specific TLS certificate.
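As a rough sketch of what the commented-out template configuration enables, the nfd-master container args could look like the following; the certificate paths here are assumptions and depend on how you mount your keys and certificates:
args:
  - "--ca-file=/etc/kubernetes/node-feature-discovery/certs/ca.crt"
  - "--key-file=/etc/kubernetes/node-feature-discovery/certs/tls.key"
  - "--cert-file=/etc/kubernetes/node-feature-discovery/certs/tls.crt"
  - "--verify-node-name"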
Usage demo
Configuration options
Nfd-worker supports a configuration file. The default location is /etc/kubernetes/node-feature-discovery/nfd-worker.conf, but this can be changed by specifying the --config command line flag. The file is read inside the container, and thus, Volumes and VolumeMounts are needed to make your configuration available for NFD. The preferred method is to use a ConfigMap.
For example, create a config map using the example config as a template:
cp nfd-worker.conf.example nfd-worker.conf
vim nfd-worker.conf # edit the configuration
kubectl create configmap nfd-worker-config --from-file=nfd-worker.conf
Then, configure Volumes and VolumeMounts in the Pod spec (just the relevant snippets shown below):
...
containers:
volumeMounts:
- name: nfd-worker-config
mountPath: "/etc/kubernetes/node-feature-discovery/"
...
volumes:
- name: nfd-worker-config
configMap:
name: nfd-worker-config
...
You could also use other types of volumes, of course; for example, a hostPath volume if different configuration for different nodes is required.
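For instance, a hostPath variant of the volume above might look like this (the host path is hypothetical; point it at wherever the per-node config lives on your hosts):
volumes:
  - name: nfd-worker-config
    hostPath:
      path: /etc/kubernetes/node-feature-discovery/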
The (empty-by-default) example config is used as a config in the NFD Docker image. Thus, this can be used as a default configuration in custom-built images.
Configuration options can also be specified via the --options command line flag, in which case no mounts need to be used. The same format as in the config file must be used, i.e. JSON (or YAML). For example:
--options='{"sources": { "pci": { "deviceClassWhitelist": ["12"] } } }'
Configuration options specified from the command line will override those read from the config file.
Currently, the only available configuration options are related to the PCI and Kernel feature sources.
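As an illustrative sketch, a config file touching both sources might look like the following. The option names follow nfd-worker.conf.example but should be treated as assumptions here; check the example file for the authoritative set of options.
sources:
  kernel:
    configOpts:
      - "NO_HZ"
      - "PREEMPT"
  pci:
    deviceClassWhitelist:
      - "12"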
Building from source
Download the source code:
git clone https://github.com/kubernetes-sigs/node-feature-discovery
Build the container image:
See customizing the build below for altering the container image registry, for example.
cd <project-root>
make
Push the container image:
Optional; this example uses Docker.
docker push <image registry>/<image-name>:<version>
Change the job spec to use your custom image (optional):
To use your published image from the step above instead of the quay.io/kubernetes_incubator/node-feature-discovery image, edit the image attribute in the spec template(s) to the new location (<quay-domain-name>/<registry-user>/<image-name>[:<version>]).
Customizing the Build
There are several Makefile variables that control the build process and the name of the resulting container image.
Variable | Description | Default value |
---|---|---|
IMAGE_BUILD_CMD | Command to build the image | docker build |
IMAGE_REGISTRY | Container image registry to use | quay.io/kubernetes_incubator |
IMAGE_NAME | Container image name | node-feature-discovery |
IMAGE_TAG_NAME | Container image tag name | <nfd version> |
IMAGE_REPO | Container image repository to use | <IMAGE_REGISTRY>/<IMAGE_NAME> |
IMAGE_TAG | Full image:tag to tag the image with | <IMAGE_REPO>:<IMAGE_TAG_NAME> |
For example, to use a custom registry:
make IMAGE_REGISTRY=<my custom registry uri>
Or to specify a build tool different from Docker:
make IMAGE_BUILD_CMD="buildah bud"
Targeting Nodes with Specific Features
Nodes with specific features can be targeted using the nodeSelector field. The following example shows how to target nodes with Intel TurboBoost enabled.
apiVersion: v1
kind: Pod
metadata:
labels:
env: test
name: golang-test
spec:
containers:
- image: golang
name: go1
nodeSelector:
feature.node.kubernetes.io/pstate-turbo: 'true'
For more details on targeting nodes, see node selection.
References
Github issues
Governance
This is a SIG-node subproject, hosted under the Kubernetes SIGs organization on GitHub. The project was established in 2016 as a Kubernetes Incubator project and migrated to Kubernetes SIGs in 2018.
License
This is open source software released under the Apache 2.0 License.
Demo
A demo on the benefits of using node feature discovery can be found in demo.