
docs: markdown style fixes

Fix markdown syntax and style for content that was moved from README.md
to docs/:

Unify the spelling of master and worker in headings and at the beginning of
sentences.

Also, use an env variable for the container name in the developer's guide.
Markus Lehtonen 2020-10-30 17:01:04 +02:00
parent 5acfaf7ead
commit 1f492784c0
5 changed files with 164 additions and 116 deletions


## Building from source
### Download the source code
```bash
git clone https://github.com/kubernetes-sigs/node-feature-discovery
```
### Docker Build
#### Build the container image
See [customizing the build](#customizing-the-build) below for altering the
container image registry, for example.
```bash
cd <project-root>
make
```
#### Push the container image
This step is optional; the example uses Docker.
```bash
docker push <IMAGE_TAG>
```
#### Change the job spec to use your custom image (optional)
To use your published image from the step above instead of the
`k8s.gcr.io/nfd/node-feature-discovery` image, edit the `image`
attribute in the spec template(s) to the new location
(`<registry-name>/<image-name>[:<version>]`).
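For instance, a `sed` one-liner along the following lines could be used. The
target image URI here is purely illustrative:

```bash
# Point the worker daemonset template at a custom image (illustrative URI)
sed -i -E 's|image:.+$|image: quay.io/example/node-feature-discovery:custom|' \
    nfd-worker-daemonset.yaml.template
```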
## Customizing the build

There are several Makefile variables that control the build process and the
name of the resulting container image.
For example, to use a custom registry:
```bash
make IMAGE_REGISTRY=<my custom registry uri>
```
A build tool different from Docker can also be specified. This can be done in
two ways, either by pre-defining the variable:
```bash
IMAGE_BUILD_CMD="buildah bud" make
```
Or by overriding the variable value:
```bash
make IMAGE_BUILD_CMD="buildah bud"
```
Unit tests are automatically run as part of the container image build. You can
also run them manually in the source code tree by simply running:
```bash
make test
```
End-to-end tests are built on top of the e2e test framework of Kubernetes, and
they require a cluster to run them on. For running the tests on your test
cluster you need to specify the kubeconfig to be used:
```bash
make e2e-test KUBECONFIG=$HOME/.kube/config
```
## Running Locally
You can run NFD locally, either directly on your host OS or in containers for
testing and development purposes. This can be useful e.g. for checking
features-detection.

When running as a standalone container, labeling is expected to fail because
the Kubernetes API is not available. Thus, it is recommended to use the
`--no-publish` command line flag. E.g.
```bash
$ NFD_CONTAINER_IMAGE=k8s.gcr.io/nfd/node-feature-discovery:v0.6.0
$ docker run --rm --name=nfd-test ${NFD_CONTAINER_IMAGE} nfd-master --no-publish
2019/02/01 14:48:21 Node Feature Discovery Master <NFD_VERSION>
2019/02/01 14:48:21 gRPC server serving on port: 8080
```
Command line flags of nfd-master:
```bash
$ docker run --rm ${NFD_CONTAINER_IMAGE} nfd-master --help
...
nfd-master.
```
In order to run nfd-worker as a "stand-alone" container against your
standalone nfd-master you need to run them in the same network namespace:
```bash
$ docker run --rm --network=container:nfd-test ${NFD_CONTAINER_IMAGE} nfd-worker
2019/02/01 14:48:56 Node Feature Discovery Worker <NFD_VERSION>
...
```
If you just want to try out feature discovery without connecting to nfd-master,
pass the `--no-publish` flag to nfd-worker.
Command line flags of nfd-worker:
```bash
$ docker run --rm ${NFD_CONTAINER_IMAGE} nfd-worker --help
...
nfd-worker.
value implies no re-labeling (i.e. infinite
sleep). [Default: 60s]
```
**NOTE** Some feature sources need certain directories and/or files from the
host mounted inside the NFD container. Thus, you need to provide Docker with the
correct `--volume` options in order for them to work correctly when run
stand-alone directly with `docker run`. See the
[template spec](https://github.com/kubernetes-sigs/node-feature-discovery/blob/master/nfd-worker-daemonset.yaml.template)
for up-to-date information about the required volume mounts.
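As a sketch, a stand-alone nfd-worker invocation with host mounts might look
like the following. The mounts listed here are illustrative; the template spec
remains the authoritative reference:

```bash
# Illustrative host mounts for a stand-alone nfd-worker; check the template
# spec linked above for the authoritative, up-to-date list
docker run --rm --network=container:nfd-test \
  --volume /sys:/host-sys:ro \
  --volume /boot:/host-boot:ro \
  --volume /etc/os-release:/host-etc/os-release:ro \
  ${NFD_CONTAINER_IMAGE} nfd-worker
```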
## Documentation
*WORK IN PROGRESS...*


(properly set up and configured to work with your Kubernetes cluster)
1. [Docker][docker-down] (only required to build and push docker images)
## Usage
### NFD-Master
NFD-Master runs as a deployment (with a replica count of 1), by default
it prefers running on the cluster's master nodes but will run on worker
nodes if no master nodes are found.
For High Availability, you should simply increase the replica count of
the deployment object. You should also look into adding
[inter-pod](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
affinity to prevent masters from running on the same node.
However note that inter-pod affinity is costly and is not recommended
in bigger clusters.
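For example, assuming the nfd-master pods carry an `app: nfd-master` label (an
assumption about your deployment spec), a hard anti-affinity rule could look
like this:

```yaml
# Never co-schedule two nfd-master replicas on the same node
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: nfd-master
        topologyKey: kubernetes.io/hostname
```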
You can use the template spec provided to deploy nfd-master, or
use `nfd-master.yaml` generated by `Makefile`. The latter includes
`image:` and `namespace:` definitions that match the latest built
image. Example:
```bash
make IMAGE_TAG=<IMAGE_TAG>
docker push <IMAGE_TAG>
kubectl create -f nfd-master.yaml
```
NFD-Master listens for connections from nfd-worker(s) and connects to the
Kubernetes API server to add node labels advertised by them.
If you have RBAC authorization enabled (as is the default e.g. with clusters
initialized with kubeadm) you need to configure the appropriate ClusterRoles,
ClusterRoleBindings and a ServiceAccount in order for NFD to create node
labels. The provided template will configure these for you.
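As a rough sketch, the required ClusterRole boils down to something like the
following; the provided template contains the authoritative definitions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfd-master
rules:
  # nfd-master only needs to read and update node objects (labels)
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "patch", "update"]
```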
### NFD-Worker
NFD-Worker is preferably run as a Kubernetes DaemonSet. There is an
example spec (`nfd-worker-daemonset.yaml.template`) that can be used
as a template, or, as is when just trying out the service. Similarly
to nfd-master above, the `Makefile` also generates
`nfd-worker-daemonset.yaml` from the template that you can use to
deploy the latest image. Example:
```bash
make IMAGE_TAG=<IMAGE_TAG>
docker push <IMAGE_TAG>
kubectl create -f nfd-worker-daemonset.yaml
```
NFD-Worker connects to the nfd-master service to advertise hardware features.
When run as a daemonset, nodes are re-labeled at an interval specified using
the `--sleep-interval` option. In the
the default interval is set to 60s which is also the default when no
`--sleep-interval` is specified. Also, the configuration file is re-read on
each iteration providing a simple mechanism of run-time reconfiguration.
Feature discovery can alternatively be configured as a one-shot job. There is
an example script in this repo that demonstrates how to deploy the job in the
cluster.
```bash
./label-nodes.sh [<IMAGE_TAG>]
```
The label-nodes.sh script tries to launch as many jobs as there are Ready
nodes. Note that this approach does not guarantee running once on every node.
For example, if some node is tainted NoSchedule or fails to start a job for
some other reason, then some other node will run extra job instance(s) to
satisfy the request and the tainted/failed node does not get labeled.
### NFD-Master and NFD-Worker in the same Pod
You can also run nfd-master and nfd-worker inside a single pod (skip the `sed`
part if running the latest released version):
```bash
sed -E s',^(\s*)image:.+$,\1image: <YOUR_IMAGE_REPO>:<YOUR_IMAGE_TAG>,' nfd-daemonset-combined.yaml.template > nfd-daemonset-combined.yaml
kubectl apply -f nfd-daemonset-combined.yaml
```
Similar to the nfd-worker setup above, this creates a DaemonSet that schedules
an NFD Pod on all worker nodes, with the difference that the Pod also
contains an nfd-master instance. In this case no nfd-master service is run on
nfd-master args, in which case nfd-master verifies that the NodeName presented
by nfd-worker matches the Common Name (CN) of its certificate. This means that
each nfd-worker requires an individual node-specific TLS certificate.
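As a sketch, the TLS-related args of nfd-master could look like the following;
the certificate paths are illustrative and depend on how the secrets are
mounted:

```yaml
args:
  - "--ca-file=/etc/kubernetes/node-feature-discovery/certs/ca.crt"
  - "--cert-file=/etc/kubernetes/node-feature-discovery/certs/master.crt"
  - "--key-file=/etc/kubernetes/node-feature-discovery/certs/master.key"
  # Require that the NodeName presented by a worker matches its certificate CN
  - "--verify-node-name"
```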
## Deployment options
### Deployment Templates
For a stable version with ready-built images see the
[latest released version](https://github.com/kubernetes-sigs/node-feature-discovery/tree/v0.6.0) ([release notes](https://github.com/kubernetes-sigs/node-feature-discovery/releases/latest)).
### Build Your Own
If you want to use the latest development version (master branch) you need to
build your own custom image.
See the [Developer Guide](advanced-developer-guide.md) for instructions how to
build images and deploy them on your cluster.
## Configuration
NFD-Worker supports a configuration file. The default location is
`/etc/kubernetes/node-feature-discovery/nfd-worker.conf`, but this can be
changed by specifying the `--config` command line flag.
The configuration file is re-read on each labeling pass (determined by
`--sleep-interval`), providing a simple mechanism of run-time reconfiguration.
In the simplest case, create a ConfigMap from the config file:

```bash
kubectl create configmap nfd-worker-config --from-file=nfd-worker.conf
```
Then, configure Volumes and VolumeMounts in the Pod spec (just the relevant
snippets shown below):
```
```yaml
...
containers:
volumeMounts:
name: nfd-worker-config
...
```
You could also use other types of volumes, of course. For example, a hostPath
volume could be used if node-specific configuration is required.
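A minimal sketch of such a hostPath-based setup, replacing the configMap
volume shown above:

```yaml
volumes:
  - name: nfd-worker-config
    hostPath:
      # Assumes the config has been placed in the default location on each host
      path: "/etc/kubernetes/node-feature-discovery"
```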
Configuration options can also be specified via the `--options` command line
flag, in which case no mounts need to be used. The same format as in the config
file must be used, i.e. JSON (or YAML). For example:
```
--options='{"sources": { "pci": { "deviceClassWhitelist": ["12"] } } }'
```
Configuration options specified from the command line will override those read
from the config file.
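For reference, the config file equivalent of the `--options` example above
would be:

```yaml
sources:
  pci:
    deviceClassWhitelist: ["12"]
```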
For more details on targeting nodes, see [node selection][node-sel].
<!-- Links -->
[docker-down]: https://docs.docker.com/install


This page contains usage examples and demos.
[![asciicast](https://asciinema.org/a/247316.svg)](https://asciinema.org/a/247316)
### Demo Use Case
A demo on the benefits of using node feature discovery can be found in the


The published node labels encode a few pieces of information:
- The value of the discovered feature.
Feature label names adhere to the following pattern:
```
<namespace>/<source name>-<feature name>[.<attribute name>]
```
The last component (i.e. `attribute-name`) is optional, and only used if a
feature logically has sub-hierarchy, e.g. `sriov.capable` and
`sriov.configure` from the `network` source.
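For example, the `network` source's `sriov.capable` attribute mentioned above
is published as:

```
feature.node.kubernetes.io/network-sriov.capable=true
```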
### CPU Features
| Feature name | Attribute | Description |
| ----------------------- | ------------------ | ----------------------------- |
| cpuid | &lt;cpuid flag&gt; | CPU capability is supported
| hardware_multithreading | | Hardware multithreading, such as Intel HTT, enabled (number of logical CPUs is greater than physical CPUs)
| power | sst_bf.enabled | Intel SST-BF ([Intel Speed Select Technology][intel-sst] - Base frequency) enabled
| [pstate][intel-pstate] | turbo | Set to 'true' if turbo frequencies are enabled in Intel pstate driver, set to 'false' if they have been disabled.
| [rdt][intel-rdt] | RDTMON | Intel RDT Monitoring Technology
| | RDTCMT | Intel Cache Monitoring (CMT)
| | RDTMBM | Intel Memory Bandwidth Monitoring (MBM)
| | RDTL3CA | Intel L3 Cache Allocation Technology
| | RDTL2CA | Intel L2 Cache Allocation Technology
| | RDTMBA | Intel Memory Bandwidth Allocation (MBA) Technology
The (sub-)set of CPUID attributes to publish is configurable via the
`attributeBlacklist` and `attributeWhitelist` cpuid options of the cpu source.
The default attribute blacklist includes, among others, RDRAND, RDSEED,
RDTSCP, SGX, SSE, SSE2, SSE3, SSE4.1, SSE4.2 and SSSE3.
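As a sketch, restricting the published attributes in the worker config could
look like the following, assuming the options sit under the cpuid section of
the `cpu` source as their names suggest:

```yaml
sources:
  cpu:
    cpuid:
      # Publish only these cpuid attributes (names taken from the list above)
      attributeWhitelist: ["RDSEED", "RDTSCP", "SGX"]
```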
**NOTE** The cpuid features advertise *supported* CPU capabilities, that is, a
capability might be supported but not enabled.
#### X86 CPUID Attributes (Partial List)
| Attribute | Description |
### Custom Features
The Custom feature source allows the user to define features based on a mix of
predefined rules. A rule is provided input which affects its process of
matching for a defined feature.
To aid in making Custom Features clearer, we define a general and a per rule
nomenclature, keeping things as consistent as possible.
#### General Nomenclature & Definitions
Matcher :A composition of Rules, each Matcher may be composed of at most one
instance of each Rule.
Specifying Rules to match on a feature is done by providing a list of Matchers.
Each Matcher contains one or more Rules.
Logical _OR_ is performed between Matchers and logical _AND_ is performed
between Rules of a given Matcher.
#### Rules
##### PciId Rule
###### Nomenclature
```
Attribute :A PCI attribute.
Element :An identifier of the PCI attribute.
```
The PciId Rule allows matching the PCI devices in the system on the following
Attributes: `class`, `vendor` and `device`. A list of Elements is provided for
each Attribute.
###### Format
```yaml
pciId :
device: [<device id>, ...]
```
Matching is done by performing a logical _OR_ between Elements of an Attribute
and logical _AND_ between the specified Attributes for each PCI device in the
system. At least one Attribute must be specified. Missing attributes will not
partake in the matching process.
##### UsbId Rule
###### Format

```yaml
usbId :
device: [<device id>, ...]
```
Matching is done by performing a logical _OR_ between Elements of an Attribute
and logical _AND_ between the specified Attributes for each USB device in the
system. At least one Attribute must be specified. Missing attributes will not
partake in the matching process.
##### LoadedKMod Rule
###### Nomenclature

```
Element :A kernel module
```

The LoadedKMod Rule allows matching the loaded kernel modules in the system
against a provided list of Elements.

###### Format
```yaml
loadedKMod : [<kernel module>, ...]
```
Matching is done by performing logical _AND_ for each provided Element, i.e. the
Rule will match if all provided Elements (kernel modules) are loaded
in the system.
#### Example
```yaml
custom:
- loadedKMod : ["vendor_kmod1", "vendor_kmod2"]
pciId:
vendor: ["15b3"]
device: ["1014", "1017"]
- name: "my.accumulated.feature"
matchOn:
- loadedKMod : ["some_kmod1", "some_kmod2"]
```
__In the example above:__
- A node would contain the label:
`feature.node.kubernetes.io/custom-my.kernel.feature=true` if the node has
`kmod1` _AND_ `kmod2` kernel modules loaded.
- A node would contain the label:
`feature.node.kubernetes.io/custom-my.pci.feature=true` if the node contains
a PCI device with a PCI vendor ID of `15b3` _AND_ PCI device ID of `1014`
_OR_ `1017`.
- A node would contain the label:
`feature.node.kubernetes.io/custom-my.usb.feature=true` if the node contains
a USB device with a USB vendor ID of `1d6b` _AND_ USB device ID of `0003`.
- A node would contain the label:
`feature.node.kubernetes.io/custom-my.combined.feature=true` if
`vendor_kmod1` _AND_ `vendor_kmod2` kernel modules are loaded __AND__ the
node contains a PCI device with a PCI vendor ID of `15b3` _AND_ PCI device ID
of `1014` _or_ `1017`.
- A node would contain the label:
`feature.node.kubernetes.io/custom-my.accumulated.feature=true` if
`some_kmod1` _AND_ `some_kmod2` kernel modules are loaded __OR__ the node
contains a PCI device with a PCI vendor ID of `15b3` _AND_ PCI device ID of
`1014` _OR_ `1017`.
#### Statically defined features
Some feature labels which are common and generic are defined statically in the
`custom` feature source. A user may add additional Matchers to these feature
labels by defining them in the `nfd-worker` configuration file.
| Feature | Attribute | Description |
| ------- | --------- | -----------|
### IOMMU Features
| Feature name | Description |
| -------------- | ----------------------------------------------------------- |
| enabled | IOMMU is present and enabled in the kernel
### Kernel Features
| Feature | Attribute           | Description                                  |
| ------- | ------------------- | -------------------------------------------- |
| config | &lt;option name&gt; | Kernel config option is enabled (set 'y' or 'm').<br> Default options are `NO_HZ`, `NO_HZ_IDLE`, `NO_HZ_FULL` and `PREEMPT`
| selinux | enabled | Selinux is enabled on the node
| version | full | Full kernel version as reported by `/proc/sys/kernel/osrelease` (e.g. '4.5.6-7-g123abcde')
| | major | First component of the kernel version (e.g. '4')
| | minor | Second component of the kernel version (e.g. '5')
| | revision | Third component of the kernel version (e.g. '6')
Kernel config file to use, and, the set of config options to be detected are
configurable.
See [configuration options](#configuration-options) for more information.

### Memory Features
| Feature | Attribute | Description |
| ------- | --------- | ------------------------------------------------------ |
| numa | | Multiple memory nodes i.e. NUMA architecture detected
| nv | present | NVDIMM device(s) are present
| nv | dax | NVDIMM region(s) configured in DAX mode are present
### Network Features
| Feature | Attribute | Description |
| ------- | ---------- | ----------------------------------------------------- |
| sriov | capable | [Single Root Input/Output Virtualization][sriov] (SR-IOV) enabled Network Interface Card(s) present
| | configured | SR-IOV virtual functions have been configured
### PCI Features
| Feature | Attribute | Description |
| -------------------- | ------------- | ------------------------------------- |
| &lt;device label&gt; | present | PCI device is detected
| &lt;device label&gt; | sriov.capable | [Single Root Input/Output Virtualization][sriov] (SR-IOV) enabled PCI device present
The set of fields used in `<device label>` is configurable, valid fields being
`class`, `vendor`, `device`, `subsystem_vendor` and `subsystem_device`.
Defaults are `class` and `vendor`. An example label using the default
label fields:
```
feature.node.kubernetes.io/pci-1200_8086.present=true
```
### USB Features
| Feature | Attribute | Description |
| -------------------- | ------------- | ------------------------------------- |
| &lt;device label&gt; | present | USB device is detected
`<device label>` is composed of raw USB IDs, separated by underscores.
The set of fields used in `<device label>` is configurable, valid fields being
`class`, `vendor`, and `device`.
Defaults are `class`, `vendor` and `device`. An example label using the default
label fields:
```
feature.node.kubernetes.io/usb-fe_1a6e_089a.present=true
```
See [configuration options](#configuration-options) for more information on
NFD config.
### Storage Features
| Feature name | Description |
| ------------------ | ------------------------------------------------------- |
| nonrotationaldisk | Non-rotational disk, like SSD, is present in the node
### System Features
| Feature | Attribute | Description |
| ----------- | ---------------- | --------------------------------------------|
| os_release | ID | Operating system identifier
| | VERSION_ID | Operating system version identifier (e.g. '6.7')
| | VERSION_ID.major | First component of the OS version id (e.g. '6')
| | VERSION_ID.minor | Second component of the OS version id (e.g. '7')
### Feature Detector Hooks (User-specific Features)
The *local* feature source provides a mechanism for creating
new user-specific features, and, for overriding labels created by the other
feature sources.
The *local* feature source gets its labels by two different ways:
* It tries to execute files found under
`/etc/kubernetes/node-feature-discovery/source.d/` directory. The hook files
must be executable and they are supposed to print all discovered features in
`stdout`, one per line. With ELF binaries static linking is recommended as
the selection of system libraries available in the NFD release image is very
limited. Other runtimes currently supported by the NFD stock image are bash
and perl.
* It reads files found under
`/etc/kubernetes/node-feature-discovery/features.d/` directory. The file
content is expected to be similar to the hook output (described above).
These directories must be available inside the Docker image so Volumes and
VolumeMounts must be used if standard NFD images are used. The given template
contains `hostPath` mounts for `sources.d` and `features.d` directories. By
using the same mounts in the secondary Pod (e.g. device plugin) you have
created a shared area for delivering hooks and feature files to NFD.
#### A Hook Example
User has a shell script
`/etc/kubernetes/node-feature-discovery/source.d/my-source` which has the
following `stdout` output:
```
MY_FEATURE_1
MY_FEATURE_2=myvalue
/override_source-OVERRIDE_VALUE=123
override.namespace/value=456
```
which, in turn, will translate into the following node labels:
```
feature.node.kubernetes.io/my-source-MY_FEATURE_1=true
feature.node.kubernetes.io/my-source-MY_FEATURE_2=myvalue
```

#### A File Example
User has a file
`/etc/kubernetes/node-feature-discovery/features.d/my-source` which contains the
following lines:
```
MY_FEATURE_1
MY_FEATURE_2=myvalue
/override_source-OVERRIDE_VALUE=123
override.namespace/value=456
```
which, in turn, will translate into the following node labels:
```
feature.node.kubernetes.io/my-source-MY_FEATURE_1=true
feature.node.kubernetes.io/my-source-MY_FEATURE_2=myvalue
```

Example usage of the command line arguments, using a new namespace:

```bash
nfd-master --resource-labels=my_source-my.feature,sgx.some.ns/epc --extra-label-ns=sgx.some.ns
```
The above would result in following extended resources provided that related
labels exist:
```
sgx.some.ns/epc: <label value>
feature.node.kubernetes.io/my_source-my.feature: <label value>
```


NFD detects hardware features available on each node in a Kubernetes cluster, and
advertises those features using node labels.
NFD consists of two software components:
1. nfd-master
1. nfd-worker
## NFD-Master
NFD-Master is the daemon responsible for communication towards the Kubernetes
API. That is, it receives labeling requests from the worker and modifies node
objects accordingly.
## NFD-Worker
NFD-Worker is a daemon responsible for feature detection. It then communicates
the information to nfd-master which does the actual node labeling. One
instance of nfd-worker is supposed to be running on each node of the cluster,
## Feature Discovery
Feature discovery is divided into domain-specific feature sources:
- CPU
- IOMMU
- Kernel
Non-standard user-specific feature labels can be created with the local and
custom feature sources.
An overview of the default feature labels:
```json
{
"feature.node.kubernetes.io/cpu-<feature-name>": "true",
...
}
```
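To see which of these labels actually got created on a given node, you can,
for example, run:

```bash
kubectl get node <node-name> --show-labels
```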
## Node Annotations
NFD also annotates nodes it is running on:
| Annotation | Description |
| ---------- | ----------- |
| nfd.node.kubernetes.io/feature-labels | Comma-separated list of node labels managed by NFD. NFD uses this internally so must not be edited by users.
| nfd.node.kubernetes.io/extended-resources | Comma-separated list of node extended resources managed by NFD. NFD uses this internally so must not be edited by users.
Inapplicable annotations are not created, e.g. master.version is only created
on nodes running nfd-master.