
docs: describe deployment using templates

Use the existing content as a base, but with heavy editing. Move local
examples involving make to the developer guide.

Drop the really hackish label-nodes.sh and replace it with command-line
examples in the documentation. If somebody really is dying for this,
write it from scratch and put it under scripts/hack.
Markus Lehtonen 2020-10-01 15:37:34 +03:00
parent 0c276b6298
commit 409ad01a1c
4 changed files with 103 additions and 116 deletions


@@ -49,6 +49,35 @@ To use your published image from the step above instead of the
attribute in the spec template(s) to the new location
(`<registry-name>/<image-name>[:<version>]`).
### Deployment
The `yamls` makefile target generates deployment specs matching your locally
built image. See [build customization](#customizing-the-build) below for
configurability, e.g. changing the deployment namespace.
```bash
K8S_NAMESPACE=my-ns make yamls
kubectl apply -f nfd-master.yaml
kubectl apply -f nfd-worker-daemonset.yaml
```
Alternatively, to deploy worker and master in the same pod:
```bash
K8S_NAMESPACE=my-ns make yamls
kubectl apply -f nfd-master.yaml
kubectl apply -f nfd-daemonset-combined.yaml
```
Or to run the worker as a one-shot job:
```bash
K8S_NAMESPACE=my-ns make yamls
kubectl apply -f nfd-master.yaml
NUM_NODES=$(kubectl get no -o jsonpath='{.items[*].metadata.name}' | wc -w)
sed s"/NUM_NODES/$NUM_NODES/" nfd-worker-job.yaml | kubectl apply -f -
```
### Building Locally
You can also build the binaries locally
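As a rough sketch of what that can look like (the `cmd/` package paths here
are assumptions, so check the repo's Makefile for the actually supported
build targets):

```bash
# Assumption: the binaries live under cmd/ as separate main packages.
go build ./cmd/...
```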


@@ -18,9 +18,65 @@ sort: 3
## Requirements
1. Linux (x86_64/Arm64/Arm)
1. [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl)
(properly set up and configured to work with your Kubernetes cluster)
## Deployment options
### Operator
*WORK IN PROGRESS...*
### Deployment Templates
The template specs provided in the repo can be used directly:
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/master/nfd-master.yaml.template
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/master/nfd-worker-daemonset.yaml.template
```
This will create the required RBAC rules and deploy nfd-master (as a
deployment) and nfd-worker (as a daemonset) in the `node-feature-discovery`
namespace.
Alternatively you can download the templates and customize the deployment
manually. For a stable version with ready-built images, see the
[latest released version](https://github.com/kubernetes-sigs/node-feature-discovery/tree/v0.6.0)
([release notes](https://github.com/kubernetes-sigs/node-feature-discovery/releases/latest)).
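As a quick sanity check after applying the templates, you can verify that the
objects came up in the expected namespace (plain kubectl, nothing
NFD-specific assumed):

```bash
# List everything the templates created in the NFD namespace.
kubectl -n node-feature-discovery get all

# Both nfd-master and nfd-worker pods should reach Running state.
kubectl -n node-feature-discovery get pods -o wide
```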
#### Master-Worker Pod
You can also run nfd-master and nfd-worker inside the same pod:
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/master/nfd-daemonset-combined.yaml.template
```
This creates a DaemonSet that runs both nfd-worker and nfd-master in the same
Pod. In this case no nfd-master is run on the master node(s), but the worker
nodes are able to label themselves, which may be desirable e.g. in single-node
setups.
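To see that the nodes have labeled themselves, inspect the node labels; the
`feature.node.kubernetes.io/` prefix used below is the default NFD label
namespace, so adjust it if you have customized the prefix:

```bash
# Show NFD-managed labels per node (requires jq).
kubectl get nodes -o json | \
  jq '.items[] | {name: .metadata.name,
      labels: (.metadata.labels | with_entries(
        select(.key | startswith("feature.node.kubernetes.io"))))}'
```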
#### Worker One-shot
Feature discovery can alternatively be configured as a one-shot job.
The Job template may be used to achieve this:
```bash
NUM_NODES=$(kubectl get no -o jsonpath='{.items[*].metadata.name}' | wc -w)
curl -fs https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/master/nfd-worker-job.yaml.template | \
sed s"/NUM_NODES/$NUM_NODES/" | \
kubectl apply -f -
```
The example above launches as many jobs as there are non-master nodes. Note
that this approach does not guarantee running once on every node. For example,
if some nodes are tainted or not ready, or Job scheduling fails for some other
reason, other node(s) may run extra job instance(s) to satisfy the request
while the problematic nodes are left unlabeled.
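One way to wait for the labeling to finish is to watch the Job for
completion; the Job and namespace names below come from the job template
shown at the end of this commit:

```bash
# Block until the one-shot labeling job has completed (or time out).
kubectl -n node-feature-discovery wait job/nfd-worker \
  --for=condition=complete --timeout=5m
```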
### Build Your Own
If you want to use the latest development version (master branch) you need to
build your own custom image.
See the [Developer Guide](/advanced/developer-guide) for instructions on how
to build images and deploy them on your cluster.
## Usage
@@ -31,20 +87,12 @@ it prefers running on the cluster's master nodes but will run on worker
nodes if no master nodes are found.
For High Availability, you should simply increase the replica count of
the deployment object. You should also look into adding
[inter-pod](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
affinity to prevent masters from running on the same node.
However, note that inter-pod affinity is costly and not recommended
in bigger clusters.
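As a sketch of what that could look like, the following patches a required
pod anti-affinity onto the master deployment; the deployment name and the
`app: nfd-master` label are assumptions here, so match them against your
actual specs:

```bash
# Hypothetical: spread nfd-master replicas across nodes by hostname.
kubectl -n node-feature-discovery patch deployment nfd-master \
  --type merge -p '{"spec": {"template": {"spec": {"affinity":
    {"podAntiAffinity": {"requiredDuringSchedulingIgnoredDuringExecution":
      [{"labelSelector": {"matchLabels": {"app": "nfd-master"}},
        "topologyKey": "kubernetes.io/hostname"}]}}}}}}'
```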
NFD-Master listens for connections from nfd-worker(s) and connects to the
Kubernetes API server to add node labels advertised by them.
@@ -53,22 +101,12 @@ initialized with kubeadm) you need to configure the appropriate ClusterRoles,
ClusterRoleBindings and a ServiceAccount in order for NFD to create node
labels. The provided template will configure these for you.
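One way to sanity-check the RBAC setup is to impersonate the service account
with `kubectl auth can-i`; the account name below is an assumption, take it
from the template you deployed (and the exact verbs NFD needs may differ):

```bash
# Can the (assumed) nfd-master service account update node objects?
kubectl auth can-i patch nodes \
  --as=system:serviceaccount:node-feature-discovery:nfd-master
```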
### NFD-Worker
NFD-Worker is preferably run as a Kubernetes DaemonSet. This ensures
re-labeling at regular intervals, capturing changes in the system
configuration, and makes sure that new nodes are labeled as they are added to
the cluster. The worker connects to the nfd-master service to advertise
hardware features.
When run as a daemonset, nodes are re-labeled at an interval specified using
the `--sleep-interval` option. In the
@@ -77,32 +115,6 @@ the default interval is set to 60s, which is also the default when no
`--sleep-interval` is specified. Also, the configuration file is re-read on
each iteration, providing a simple mechanism for run-time reconfiguration.
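For example, to re-label nodes every ten minutes instead of every 60s, the
flag from the paragraph above would be set on the worker (a sketch only; in a
daemonset deployment this goes into the container's args):

```bash
# Hypothetical invocation: lengthen the re-labeling interval.
nfd-worker --sleep-interval=10m
```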
### TLS authentication
NFD supports mutual TLS authentication between the nfd-master and nfd-worker
@@ -122,30 +134,6 @@ nfd-master args, in which case nfd-master verifies that the NodeName presented
by nfd-worker matches the Common Name (CN) of its certificate. This means that
each nfd-worker requires an individual node-specific TLS certificate.
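As a sketch of the node-specific certificate requirement, a worker's signing
request could be generated with its CN set to the node name (assuming you
operate your own CA; any CSR tooling works):

```bash
# Key + CSR whose CN matches this node's name, so that nfd-master's
# node name verification can pass.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout nfd-worker.key \
  -subj "/CN=$(hostname)" \
  -out nfd-worker.csr
```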
## Configuration
NFD-Worker supports a configuration file. The default location is
@@ -160,14 +148,17 @@ VolumeMounts are needed to make your configuration available for NFD. The
preferred method is to use a ConfigMap which provides easy deployment and
re-configurability. For example, create a config map using the example config
as a template:
```bash
cp nfd-worker.conf.example nfd-worker.conf
vim nfd-worker.conf # edit the configuration
kubectl create configmap nfd-worker-config --from-file=nfd-worker.conf
```
Then, configure Volumes and VolumeMounts in the Pod spec (just the relevant
snippets shown below):
```yaml
...
containers:
volumeMounts:
@ -180,6 +171,7 @@ snippets shown below):
name: nfd-worker-config
...
```
Of course, you can also use other volume types; a hostPath volume could be
used, for example, if different nodes require different configurations.
@@ -191,9 +183,11 @@ configuration in custom-built images.
Configuration options can also be specified via the `--options` command line
flag, in which case no mounts need to be used. The same format as in the config
file must be used, i.e. JSON (or YAML). For example:
```
--options='{"sources": { "pci": { "deviceClassWhitelist": ["12"] } } }'
```
Configuration options specified from the command line will override those read
from the config file.
@@ -201,7 +195,6 @@ Currently, the only available configuration options are related to the
[CPU](#cpu-features), [PCI](#pci-features) and [Kernel](#kernel-features)
feature sources.
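Tying the two mechanisms together, the same PCI whitelist shown for
`--options` above could instead live in the config file and be mounted via
the ConfigMap described earlier; the values here are illustrative only:

```bash
# Illustrative config: same deviceClassWhitelist as the --options example.
cat > nfd-worker.conf <<EOF
sources:
  pci:
    deviceClassWhitelist: ["12"]
EOF
kubectl create configmap nfd-worker-config --from-file=nfd-worker.conf
```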
## Using Node Labels
Nodes with specific features can be targeted using the `nodeSelector` field. The
@@ -223,8 +216,7 @@ spec:
```
For more details on targeting nodes, see
[node selection](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/).
## Uninstallation


@@ -1,34 +0,0 @@
#!/usr/bin/env bash
this=`basename $0`
if [ $# -gt 1 ] || [ "$1" == "-h" ] || [ "$1" == "--help" ]; then
echo Usage: $this [IMAGE[:TAG]]
exit 1
fi
IMAGE=$1
if [ -n "$IMAGE" ]; then
if [ ! -f nfd-worker-job.yaml ]; then
make IMAGE_TAG=$IMAGE nfd-worker-job.yaml
else
# Keep existing nfd-worker-job.yaml, only update image.
sed -E "s,^(\s*)image:.+$,\1image: $IMAGE," -i nfd-worker-job.yaml
fi
fi
if [ ! -f nfd-worker-job.yaml ]; then
# Missing image info for the labeling job.
echo "nfd-worker-job.yaml missing."
echo "Run 'make nfd-worker-job.yaml', use the template or provide IMAGE (see --help)."
exit 2
fi
# Get the number of nodes in Ready state in the Kubernetes cluster
NumNodes=$(kubectl get nodes | grep -i ' ready ' | wc -l)
# We set the .spec.completions and .spec.parallelism to the node count
# We set the NODE_NAME environment variable to get the Kubernetes node object.
sed -e "s/completions:.*$/completions: $NumNodes/" \
-e "s/parallelism:.*$/parallelism: $NumNodes/" \
-i nfd-worker-job.yaml
kubectl create -f nfd-worker-job.yaml


@@ -6,8 +6,8 @@ metadata:
name: nfd-worker
namespace: node-feature-discovery
spec:
completions: NUM_NODES
parallelism: NUM_NODES
template:
metadata:
labels: