---
title: "Deployment and Usage"
layout: default
sort: 3
---
# Deployment and Usage
{: .no_toc }
## Table of Contents
{: .no_toc .text-delta }
1. TOC
{:toc}
---
## Requirements
1. Linux (x86_64/Arm64/Arm)
1. [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl)
   (properly set up and configured to work with your Kubernetes cluster)
## Usage
### NFD-Master
NFD-Master runs as a deployment (with a replica count of 1). By default
it prefers running on the cluster's master nodes but will run on worker
nodes if no master nodes are found.
For High Availability, you should simply increase the replica count of
the deployment object. You should also look into adding
[inter-pod anti-affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
to prevent masters from running on the same node. However, note that
inter-pod anti-affinity is costly and is not recommended in bigger clusters.
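As a rough sketch, these HA tweaks could look like the snippet below in the
nfd-master Deployment spec. The `app: nfd-master` label selector is an
assumption for illustration and must match whatever labels your pod template
actually carries:
```yaml
# Sketch only: increase replicas and spread nfd-master pods across nodes.
# The app: nfd-master selector is an assumption; adjust it to your pod labels.
spec:
  replicas: 3
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values: ["nfd-master"]
              topologyKey: kubernetes.io/hostname
```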
You can use the template spec provided to deploy nfd-master, or
use the `nfd-master.yaml` generated by the `Makefile`. The latter includes
`image:` and `namespace:` definitions that match the latest built
image. Example:
```
make IMAGE_TAG=<IMAGE_TAG>
docker push <IMAGE_TAG>
kubectl create -f nfd-master.yaml
```
NFD-Master listens for connections from nfd-worker(s) and connects to the
Kubernetes API server to add node labels advertised by them.
If you have RBAC authorization enabled (as is the default e.g. with clusters
initialized with kubeadm) you need to configure the appropriate ClusterRoles,
ClusterRoleBindings and a ServiceAccount in order for NFD to create node
labels. The provided template will configure these for you.
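For orientation, the RBAC objects in question look roughly like the sketch
below. The names, namespace and the exact set of rules here are illustrative
assumptions; the provided template is the authoritative source:
```yaml
# Illustrative sketch of the RBAC objects needed by nfd-master.
# Names, namespace and rules are assumptions; see the provided template
# for the authoritative definitions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfd-master
  namespace: node-feature-discovery
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfd-master
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfd-master
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nfd-master
subjects:
  - kind: ServiceAccount
    name: nfd-master
    namespace: node-feature-discovery
```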
### NFD-Worker
NFD-Worker is preferably run as a Kubernetes DaemonSet. There is an
example spec (`nfd-worker-daemonset.yaml.template`) that can be used
as a template, or as-is when just trying out the service. Similarly
to nfd-master above, the `Makefile` also generates
`nfd-worker-daemonset.yaml` from the template that you can use to
deploy the latest image. Example:
```
make IMAGE_TAG=<IMAGE_TAG>
docker push <IMAGE_TAG>
kubectl create -f nfd-worker-daemonset.yaml
```
NFD-Worker connects to the nfd-master service to advertise hardware features.
When run as a daemonset, nodes are re-labeled at an interval specified using
the `--sleep-interval` option. In the
[template](https://github.com/kubernetes-sigs/node-feature-discovery/blob/master/nfd-worker-daemonset.yaml.template#L26)
the default interval is set to 60s, which is also the default when no
`--sleep-interval` is specified. Also, the configuration file is re-read on
each iteration, providing a simple mechanism for run-time reconfiguration.
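For illustration, the relevant part of the worker container spec could look
roughly like this; the image reference is a placeholder and the argument list
is just an example:
```yaml
# Sketch: overriding the labeling interval in the nfd-worker DaemonSet.
# The image reference below is a placeholder.
containers:
  - name: nfd-worker
    image: <NFD_IMAGE>:<IMAGE_TAG>
    args:
      - "--sleep-interval=60s"
```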
Feature discovery can alternatively be configured as a one-shot job. There is
an example script in this repo that demonstrates how to deploy the job in the cluster.
```
./label-nodes.sh [<IMAGE_TAG>]
```
The `label-nodes.sh` script tries to launch as many jobs as there are Ready nodes.
Note that this approach does not guarantee running once on every node.
For example, if some node is tainted `NoSchedule` or fails to start a job for some
other reason, some other node will run extra job instance(s) to satisfy the request,
and the tainted/failed node does not get labeled.
### NFD-Master and NFD-Worker in the same Pod
You can also run nfd-master and nfd-worker inside a single pod (skip the `sed`
part if running the latest released version):
```
sed -E s',^(\s*)image:.+$,\1image: <YOUR_IMAGE_REPO>:<YOUR_IMAGE_TAG>,' nfd-daemonset-combined.yaml.template > nfd-daemonset-combined.yaml
kubectl apply -f nfd-daemonset-combined.yaml
```
Similar to the nfd-worker setup above, this creates a DaemonSet that schedules
an NFD Pod on all worker nodes, with the difference that the Pod also
contains an nfd-master instance. In this case no nfd-master service is run on
the master node(s), but the worker nodes are able to label themselves.
This may be desirable e.g. in single-node setups.
### TLS authentication
NFD supports mutual TLS authentication between the nfd-master and nfd-worker
instances. That is, nfd-worker and nfd-master both verify that the other end
presents a valid certificate.
TLS authentication is enabled by specifying `--ca-file`, `--key-file` and
`--cert-file` args, on both the nfd-master and nfd-worker instances.
The template specs provided with NFD contain (commented out) example
configuration for enabling TLS authentication.
The Common Name (CN) of the nfd-master certificate must match the DNS name of
the nfd-master Service of the cluster. By default, nfd-master only checks that
the nfd-worker has been signed by the specified root certificate (`--ca-file`).
Additional hardening can be enabled by specifying `--verify-node-name` in
nfd-master args, in which case nfd-master verifies that the NodeName presented
by nfd-worker matches the Common Name (CN) of its certificate. This means that
each nfd-worker requires an individual node-specific TLS certificate.
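As an illustration, the TLS-related arguments on the nfd-master side might look
along these lines; the certificate paths are assumptions that depend on how the
certificates are mounted into the container:
```yaml
# Sketch: TLS arguments for nfd-master. The file paths are assumptions and
# depend on where the certificates are mounted in the container.
args:
  - "--ca-file=/etc/kubernetes/node-feature-discovery/trust/ca.crt"
  - "--key-file=/etc/kubernetes/node-feature-discovery/certs/tls.key"
  - "--cert-file=/etc/kubernetes/node-feature-discovery/certs/tls.crt"
  - "--verify-node-name"
```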
## Deployment options
### Operator
*WORK IN PROGRESS...*
### Deployment Templates
For a stable version with ready-built images see the
[latest released version](https://github.com/kubernetes-sigs/node-feature-discovery/tree/v0.6.0) ([release notes](https://github.com/kubernetes-sigs/node-feature-discovery/releases/latest)).
*WORK IN PROGRESS...*
### Build Your Own
If you want to use the latest development version (master branch) you need to
build your own custom image.
See the [Developer Guide](/advanced/developer-guide) for instructions how to
build images and deploy them on your cluster.
## Configuration
NFD-Worker supports a configuration file. The default location is
`/etc/kubernetes/node-feature-discovery/nfd-worker.conf`, but this can be
changed by specifying the `--config` command line flag.
The configuration file is re-read on each labeling pass (determined by
`--sleep-interval`), which makes run-time re-configuration of nfd-worker
possible.
The worker configuration file is read inside the container, and thus Volumes
and VolumeMounts are needed to make your configuration available to NFD. The
preferred method is to use a ConfigMap, which provides easy deployment and
re-configurability. For example, create a config map using the example config
as a template:
```
cp nfd-worker.conf.example nfd-worker.conf
vim nfd-worker.conf # edit the configuration
kubectl create configmap nfd-worker-config --from-file=nfd-worker.conf
```
Then, configure Volumes and VolumeMounts in the Pod spec (just the relevant
snippets shown below):
```
...
containers:
    volumeMounts:
      - name: nfd-worker-config
        mountPath: "/etc/kubernetes/node-feature-discovery/"
...
volumes:
  - name: nfd-worker-config
    configMap:
      name: nfd-worker-config
...
```
Other volume types can of course be used as well, e.g. a hostPath volume if
different configuration is required on different nodes.
The (empty-by-default)
[example config](https://github.com/kubernetes-sigs/node-feature-discovery/blob/master/nfd-worker.conf.example)
is used as the config in the NFD Docker image, and can thus also serve as the
default configuration in custom-built images.
Configuration options can also be specified via the `--options` command line
flag, in which case no mounts need to be used. The same format as in the config
file must be used, i.e. JSON (or YAML). For example:
```
--options='{"sources": { "pci": { "deviceClassWhitelist": ["12"] } } }'
```
Configuration options specified from the command line will override those read
from the config file.
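For comparison, the same setting expressed in the configuration file would look
roughly like this (a sketch; see `nfd-worker.conf.example` for the full set of
available options):
```yaml
# Sketch: nfd-worker.conf equivalent of the --options example above.
sources:
  pci:
    deviceClassWhitelist:
      - "12"
```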
Currently, the only available configuration options are related to the
[CPU](#cpu-features), [PCI](#pci-features) and [Kernel](#kernel-features)
feature sources.
## Using Node Labels
Nodes with specific features can be targeted using the `nodeSelector` field. The
following example shows how to target nodes with Intel TurboBoost enabled.
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    env: test
  name: golang-test
spec:
  containers:
    - image: golang
      name: go1
  nodeSelector:
    feature.node.kubernetes.io/cpu-pstate.turbo: 'true'
```
For more details on targeting nodes, see
[node selection](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/).
## Uninstallation
*WORK IN PROGRESS...*