Add a configuration option for controlling the enabled "raw" feature
sources. This is useful e.g. in testing and development, and it also
allows fully shutting down discovery of features that are not needed in
a deployment. It supplements core.labelSources, which controls the
enablement of label sources.
The goal is to make the name more descriptive, also keeping in mind a
possible future addition of a 'featureSources' option (or similar) for
controlling the feature discovery.
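A sketch of how the two options could sit side by side in
nfd-worker.conf (the source lists are illustrative, and 'featureSources'
is the possible future name discussed above, not a committed API):

  core:
    # control which label sources are enabled
    labelSources: ["cpu", "kernel", "pci"]
    # control which raw feature sources are enabled
    featureSources: ["cpu", "kernel", "pci"]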
Support templating of var names in a similar manner as labels. Add
support for a new 'varsTemplate' field to the feature rule spec which is
treated similarly to the 'labelsTemplate' field. The value of the field
is processed through the golang "text/template" template engine and the
expanded value must contain variables in <key>=<value> format, separated
by newlines i.e.:
  - name: <rule-name>
    varsTemplate: |
      <var-1>=<value-1>
      <var-2>=<value-2>
      ...
The same rules as for 'labelsTemplate' apply, i.e.
1. If matchAny is specified, the template is executed separately
   against each individual matchFeatures matcher.
2. The 'vars' field has priority over 'varsTemplate'.
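For illustration, a rule using 'varsTemplate' could look like this
sketch (the rule name and variable naming are made up for the example):

  - name: "my vars template rule"
    varsTemplate: |
      {{ range .cpu.cpuid }}has-{{ .Name }}=true
      {{ end }}
    matchFeatures:
      - feature: cpu.cpuid
        matchExpressions:
          AVX512F: {op: Exists}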
Support backreferencing of output values from previous rules. Enables
complex rule setups where custom features are further combined together
to form even more sophisticated higher level labels. The labels created
by preceding rules are available as a special 'rule.matched' feature
(for matchFeatures to use).
If referencing rules across multiple configs/CRDs, care must be taken
with the ordering. Processing order of rules in nfd-worker:
1. Static rules
2. Files from /etc/kubernetes/node-feature-discovery/custom.d/
in alphabetical order. Subdirectories are processed by reading their
files in alphabetical order.
3. Custom rules from main nfd-worker.conf
In nfd-master, NodeFeatureRule objects are processed in alphabetical
order (based on their metadata.name).
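For example, with a hypothetical custom.d layout of

  /etc/kubernetes/node-feature-discovery/custom.d/
  ├── 10-vendor-a.conf
  ├── 20-vendor-b.conf
  └── extra/
      └── 05-extra.conf

the files in step 2 would be read in the order 10-vendor-a.conf,
20-vendor-b.conf, extra/05-extra.conf (all names made up).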
This patch also adds a new 'vars' field to the rule spec. Like
'labels', it is a map of key-value pairs, but no labels are generated
from these. The values specified in 'vars' are only added for
backreferencing into the 'rule.matched' feature. This may be desired in
schemes where the output of certain rules is only used as intermediate
variables for other rules and no labels out of these are wanted.
An example setup:
- name: "kernel feature"
labels:
kernel-feature:
matchFeatures:
- feature: kernel.version
matchExpressions:
major: {op: Gt, value: ["4"]}
- name: "intermediate var feature"
vars:
nolabel-feature: "true"
matchFeatures:
- feature: cpu.cpuid
matchExpressions:
AVX512F: {op: Exists}
- feature: pci.device
matchExpressions:
vendor: {op: In, value: ["8086"]}
device: {op: In, value: ["1234", "1235"]}
- name: top-level-feature
matchFeatures:
- feature: rule.matched
matchExpressions:
kernel-feature: "true"
nolabel-feature: "true"
Support templating of label names in feature rules. It is available both
in NodeFeatureRule CRs and in custom rule configuration of nfd-worker.
This patch adds a new 'labelsTemplate' field to the rule spec, making it
possible to dynamically generate multiple labels per rule based on the
matched features. The feature relies on the golang "text/template"
package. When expanded, the template must contain labels in a raw
<key>[=<value>] format (where 'value' defaults to "true"), separated by
newlines i.e.:
  - name: <rule-name>
    labelsTemplate: |
      <label-1>[=<value-1>]
      <label-2>[=<value-2>]
      ...
All the matched features of 'matchFeatures' directives are available for
the templating engine in a nested data structure that can be described in
yaml as:
  .
    <domain-1>:
      <key-feature-1>:
        - Name: <matched-key>
        - ...
      <value-feature-1>:
        - Name: <matched-key>
          Value: <matched-value>
        - ...
      <instance-feature-1>:
        - <attribute-1-name>: <attribute-1-value>
          <attribute-2-name>: <attribute-2-value>
          ...
        - ...
    <domain-2>:
      ...
That is, the per-feature data available for templating depends on the
type of feature that was matched:
- "key features": only 'Name' is available
- "value features": 'Name' and 'Value' can be used
- "instance features": all attributes of the matched instance are
available
NOTE: If matchAny is specified, the template is executed
separately against each individual matchFeatures matcher and the
eventual set of labels is a superset of all these expansions. Consider
the following:
  - name: <name>
    labelsTemplate: <template>
    matchAny:
      - matchFeatures: <matcher#1>
      - matchFeatures: <matcher#2>
    matchFeatures: <matcher#3>
In the example above (assuming the overall result is a match) the
template would be executed on matcher#1 and/or matcher#2 (depending on
whether both or only one of them match), and finally on matcher#3, and
all the labels from these separate expansions would be created (i.e. the
end result would be a union of all the individual expansions).
NOTE 2: The 'labels' field has priority over 'labelsTemplate', i.e.
labels specified in the 'labels' field will override any labels
originating from the 'labelsTemplate' field.
A special case of an empty match expression set matches everything (i.e.
matches/returns all existing keys/values). This makes it simpler to
write templates that run over all values. It also makes it possible to
later implement support for templates that run over all _keys_ of a
feature.
Some example configurations:
- name: "my-pci-template-features"
labelsTemplate: |
{{ range .pci.device }}intel-{{ .class }}-{{ .device }}=present
{{ end }}
matchFeatures:
- feature: pci.device
matchExpressions:
class: {op: InRegexp, value: ["^06"]}
vendor: ["8086"]
- name: "my-system-template-features"
labelsTemplate: |
{{ range .system.osrelease }}system-{{ .Name }}={{ .Value }}
{{ end }}
matchFeatures:
- feature: system.osRelease
matchExpressions:
ID: {op: Exists}
VERSION_ID.major: {op: Exists}
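As a sketch of how the empty-match special case can be exploited, a
template ranging over all loaded kernel modules might look like this
(illustrative only, and likely to create a large number of labels):

  - name: "my-kmod-template-features"
    labelsTemplate: |
      {{ range .kernel.loadedmodule }}kmod-{{ .Name }}=present
      {{ end }}
    matchFeatures:
      - feature: kernel.loadedmodule
        matchExpressions: {}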
Imaginative template pipelines are possible, of course, but care must be
taken in order to produce understandable and maintainable rule sets.
Separate feature discovery and creation of feature labels.
Generalize the discovery of nvdimm devices so that they can be matched
in custom label rules in a similar fashion as pci and usb devices.
Available attributes for matching nvdimm devices are limited to:
- devtype
- mode
For numa we now detect the number of numa nodes, which can be matched
against in custom label rules.
Labels created by the memory feature source are unchanged. The new
features being detected are available in custom rules only.
Example custom rule:
- name: "my memory rule"
labels:
my-memory-feature: "true"
matchFeatures:
- feature: memory.numa
matchExpressions:
"node_count": {op: Gt, value: ["3"]}
- feature: memory.nv
matchExpressions:
"devtype" {op: In, value: ["nd_dax"]}
Also, add minimalist unit test.
Separate feature discovery and creation of feature labels. Generalize
the feature discovery so that network devices can be matched in custom
label rules in a similar fashion as pci and usb devices. Available
attributes for matching are:
- operstate
- speed
- sriov_numvfs
- sriov_totalvfs
Labels created by the network feature source are unchanged. The new
features being detected are available in custom rules only.
Example custom rule:
- name: "my network rule"
labels:
my-network-feature: "true"
matchFeatures:
- feature: network.device
matchExpressions:
"operstate": { op: In, value: ["up"] }
"sriov_numvfs": { op: Gt, value: ["9"] }
Also, add minimalist unit test.
Add a new command line flag for disabling/enabling the controller for
NodeFeatureRule objects. In practice, disabling the controller disables
all labels generated from rules in NodeFeatureRule objects.
Implement a simple controller stub that operates on NodeFeatureRule
objects. The controller does not yet have any functionality other than
logging changes in the (NodeFeatureRule) objects it is watching.
Also update the documentation on the -no-publish flag to match the new
functionality.
Separate feature discovery and creation of feature labels. Generalize
the feature discovery so that block devices can be matched in custom
label rules in a similar fashion as pci and usb devices. This extends
the discovery to other block queue attributes than 'rotational': now we
also detect 'dax', 'nr_zones' and 'zoned'.
Labels created by the storage feature source are unchanged. The new
features being detected are available in custom rules only.
Example custom rules:
- name: "my block rule 1"
labels:
my-block-feature-1: "true"
matchFeatures:
- feature: storage.block
"rotational": {op: In, value: ["0"]}
- name: "my block rule 2"
labels:
my-block-feature-2: "true"
matchFeatures:
- feature: storage.block
"zoned": {op: In, value: [“host-aware”, “host-managed”]}
Also, add minimalist unit test.
Add a cluster-scoped Custom Resource Definition for specifying labeling
rules. Nodes (node features, node objects) are cluster-level objects and
thus the natural and encouraged setup is to only have one NFD deployment
per cluster - the set of underlying features of the node stays the same
independent of how many parallel NFD deployments you have. Our extension
points (hooks, feature files and now CRs) can be used by multiple
actors simultaneously, independently of us. Having the CRD cluster-scoped
hopefully drives deployments in this direction. It also should make
deployment of vendor-specific labeling rules easy as there is no need to
worry about the namespace.
This patch virtually replicates the source.custom.FeatureSpec in a CRD
API (located in the pkg/apis/nfd/v1alpha1 package) with the notable
exception that "MatchOn" legacy rules are not supported. Legacy rules
are left out in order to keep the CRD simple and clean.
The duplicate functionality in source/custom will be dropped by upcoming
patches.
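For reference, a minimal object conforming to the new API could look
something like the sketch below (the apiVersion shown assumes the
nfd.k8s-sigs.io group of the v1alpha1 package; verify against the
generated CRD):

  apiVersion: nfd.k8s-sigs.io/v1alpha1
  kind: NodeFeatureRule
  metadata:
    name: my-rule
  spec:
    rules:
      - name: "my example rule"
        labels:
          my-example-feature: "true"
        matchFeatures:
          - feature: kernel.loadedmodule
            matchExpressions:
              e1000: {op: Exists}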
This patch utilizes controller-gen (from sigs.k8s.io/controller-tools)
for generating the CRD and deepcopy methods. Code can be (re-)generated
with "make generate". Install controller-gen with:
go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.7.0
Update kustomize and helm deployments to deploy the CRD.
The NodeResourceTopology API has been made cluster scoped. In the
current context a CR corresponds to a Node, and since Node is a
cluster-scoped resource it makes sense to make NRT cluster scoped as
well.
Ref: https://github.com/k8stopologyawareschedwg/noderesourcetopology-api/pull/18
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
Implement a new 'matchAny' directive in the new rule format, building on
top of the previously implemented 'matchFeatures' matcher. MatchAny
applies a logical OR over multiple matchFeatures directives. That is, it
allows specifying multiple alternative matchers (at least one of which
must match) in a single label rule.
The configuration format for the new matchers is
  matchAny:
    - matchFeatures:
        - feature: <domain>.<feature>
          matchExpressions:
            <attribute>:
              op: <operator>
              value:
                - <list-of-values>
    - matchFeatures:
        ...
A configuration example. In order to require a cpu feature, kernel
module and one of two specific PCI devices (making use of the shortform
notation):
  - name: multi-device-test
    labels:
      multi-device-feature: "true"
    matchFeatures:
      - feature: kernel.loadedmodule
        matchExpressions: [driver-module]
      - feature: cpu.cpuid
        matchExpressions: [AVX512F]
    matchAny:
      - matchFeatures:
          - feature: pci.device
            matchExpressions:
              vendor: "8086"
              device: "1234"
      - matchFeatures:
          - feature: pci.device
            matchExpressions:
              vendor: "8086"
              device: "abcd"
Implement generic feature matchers that cover all feature sources (that
implement the FeatureSource interface). The implementation relies on the
unified data model provided by the FeatureSource interface as well as
the generic expression-based rule processing framework that was added to
the source/custom/expression package.
With this patch any new features added will be automatically available
for custom rules, without any additional work. Rule hierarchy follows
the source/feature hierarchy by design.
This patch introduces a new format for custom rule specifications,
dropping the 'value' field and introducing a new 'labels' field which
makes it possible to specify multiple labels per rule. Also, in the new
format the 'name' field is just for reference and no matching label is
created. The new generic rules are available in this new rule format
under a 'matchFeatures' field. MatchFeatures implements a logical AND over
an array of per-feature matchers - i.e. a match for all of the matchers
is required. The goal of the new rule format is to make it better follow
K8s API design guidelines and make it extensible for future enhancements
(e.g. addition of templating, taints, annotations, extended resources
etc).
The old rule format (with cpuID, kConfig, loadedKMod, nodename, pciID,
usbID rules) is still supported. The rule format (new vs. old) is
determined at config parsing time based on the existence of the
'matchOn' field.
The new rule format and the configuration format for the new
matchFeatures field is
  - name: <rule-name>
    labels:
      <key>: <value>
      ...
    matchFeatures:
      - feature: <domain>.<feature>
        matchExpressions:
          <attribute>:
            op: <operator>
            value:
              - <list-of-values>
      - feature: <domain>.<feature>
        ...
Currently, "cpu", "kernel", "pci", "system", "usb" and "local" sources
are covered by the matchers/feature selectors. Thus, the following
features are available for matching with this patch:
  - cpu.cpuid:
      <cpuid-flag>: <exists/does-not-exist>
  - cpu.cstate:
      enabled: <bool>
  - cpu.pstate:
      status: <string>
      turbo: <bool>
      scaling_governor: <string>
  - cpu.rdt:
      <rdt-feature>: <exists/does-not-exist>
  - cpu.sst:
      bf.enabled: <bool>
  - cpu.topology:
      hardware_multithreading: <bool>
  - kernel.config:
      <flag-name>: <string>
  - kernel.loadedmodule:
      <module-name>: <exists/does-not-exist>
  - kernel.selinux:
      enabled: <bool>
  - kernel.version:
      major: <int>
      minor: <int>
      revision: <int>
      full: <string>
  - system.osrelease:
      <key-name>: <string>
      VERSION_ID.major: <int>
      VERSION_ID.minor: <int>
  - system.name:
      nodename: <string>
  - pci.device:
      <device-instance>:
        class: <string>
        vendor: <string>
        device: <string>
        subsystem_vendor: <string>
        subsystem_device: <string>
        sriov_totalvfs: <int>
  - usb.device:
      <device-instance>:
        class: <string>
        vendor: <string>
        device: <string>
        serial: <string>
  - local.label:
      <label-name>: <string>
The configuration also supports some "shortforms" for convenience:
  matchExpressions: [<attr-1>, <attr-2>=<val-2>]
  ---
  matchExpressions:
    <attr-3>:
    <attr-4>: <val-4>

is equal to:

  matchExpressions:
    <attr-1>: {op: Exists}
    <attr-2>: {op: In, value: [<val-2>]}
  ---
  matchExpressions:
    <attr-3>: {op: Exists}
    <attr-4>: {op: In, value: [<val-4>]}
In other words:
  - feature: kernel.config
    matchExpressions: ["X86", "INIT_ENV_ARG_LIMIT=32"]
  - feature: pci.device
    matchExpressions:
      vendor: "8086"

is the same as:

  - feature: kernel.config
    matchExpressions:
      X86: {op: Exists}
      INIT_ENV_ARG_LIMIT: {op: In, value: ["32"]}
  - feature: pci.device
    matchExpressions:
      vendor: {op: In, value: ["8086"]}
Some configuration examples below. In order to match a CPUID feature the
following snippet can be used:
  - name: cpu-test-1
    labels:
      cpu-custom-feature: "true"
    matchFeatures:
      - feature: cpu.cpuid
        matchExpressions:
          AESNI: {op: Exists}
          AVX: {op: Exists}
In order to match against a loaded kernel module and OS version:
  - name: kernel-test-1
    labels:
      kernel-custom-feature: "true"
    matchFeatures:
      - feature: kernel.loadedmodule
        matchExpressions:
          e1000: {op: Exists}
      - feature: system.osrelease
        matchExpressions:
          NAME: {op: InRegexp, value: ["^openSUSE"]}
          VERSION_ID.major: {op: Gt, value: ["14"]}
In order to require a kernel module and both of two specific PCI devices:
  - name: multi-device-test
    labels:
      multi-device-feature: "true"
    matchFeatures:
      - feature: kernel.loadedmodule
        matchExpressions:
          driver-module: {op: Exists}
      - feature: pci.device
        matchExpressions:
          vendor: "8086"
          device: "1234"
      - feature: pci.device
        matchExpressions:
          vendor: "8086"
          device: "abcd"
* Simplify NFD worker service configuration in Helm
Signed-off-by: Elias Koromilas <elias.koromilas@gmail.com>
* Update docs/get-started/deployment-and-usage.md
Co-authored-by: Markus Lehtonen <markus.lehtonen@intel.com>
Drop --sleep-interval from the template. We really don't want to do
that: first, it's the default value, so there is no use repeating it in
the template. More importantly, the command line flag will override
anything that will be provided in the worker config file, making it
impossible for users to specify the sleep interval (other than by
editing the template directly).
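With the flag dropped from the template, the interval can be set in the
worker config instead; a sketch, assuming the core.sleepInterval config
option:

  core:
    sleepInterval: 60s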
The base should really have the very bare minimum. Remove all redundant
(i.e. at-default-value) args and move the others to the specific
topologyupdater kustomize component. This also makes these settings
re-usable in user-specific overlays (that are not based on
topologyupdater-daemonset).
Align "topologyupdater" overlay with "topologyupdater-job". Both should
deploy topologyupdater as a standalone application. Previously the
topologyupdater overlay did not deploy nfd-master at all (but deployed
nfd-worker instead) causing the pods to end up in crashloopbackoff as
there was no master to communicate with.
- create an overlay for deployment of all components
- create an overlay for just topologyupdater deployment (to be deployed in
conjunction with the default overlay)
- create a separate overlay for deployment of master and topologyupdater-job
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
This commit makes the mount of /usr/src optional in the Helm chart, and
removes it from the kustomization. Reason is that some systems do not
have a /usr/src (such as Talos) *and* have a R/O filesystem. Since
/usr/src is optional per FHS 3.0, NFD should not assume its presence.
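A sketch of what the toggle could look like in the chart's values.yaml
(the exact value name is chart-specific; 'mountUsrSrc' here is
illustrative):

  worker:
    mountUsrSrc: false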
Signed-off-by: Jorik Jonker <jorik@kippendief.biz>
Replicates nfd-daemonset-combined.yaml.template.
In addition to the overlay we need to add a separate set of patches
under components/common in order to handle the double-container pod.
Implement functionality virtually replicating deployment templates for
nfd-master and nfd-worker daemonset (nfd-master.yaml.template and
nfd-worker-daemonset.yaml.template) by adding a kustomize overlay named
"default".
We split the resources into multiple bases (rbac, master and
worker-daemonset) so that relevant parts are re-usable in
other deployment scenarios added later (e.g. "one-shot job", and
"combined daemonset").
This patch adds one component (components/common) doing the required
kustomization for the example deployment.
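Deploying the example is then standard kustomize usage, e.g. (assuming
the overlay lives under deployment/ in the repo):

  kubectl apply -k deployment/overlays/default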
The naming changed with cpuid v2 (github.com/klauspost/cpuid/v2) and we
didn't catch this in NFD. There have been no issue reports about the
inadvertent naming change, so let's just adapt to
the updated naming in NFD configuration. The SSE4* labels are disabled
by default so they're not widely used, if at all.
Mount /usr/lib and /usr/src as /host-usr/lib and /host-usr/src inside the pod
to allow NFD to search for the kernel configuration file inside /usr.
This solves the problem of the kernel config file not being present in /boot
on s390x RHCOS.
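A sketch of the corresponding mounts in the worker pod spec (volume
names are illustrative):

  containers:
    - name: nfd-worker
      volumeMounts:
        - name: host-usr-lib
          mountPath: "/host-usr/lib"
          readOnly: true
        - name: host-usr-src
          mountPath: "/host-usr/src"
          readOnly: true
  volumes:
    - name: host-usr-lib
      hostPath:
        path: "/usr/lib"
    - name: host-usr-src
      hostPath:
        path: "/usr/src"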
Signed-off-by: Jan Schintag <jan.schintag@de.ibm.com>
There are cases when the only available metadata for discovering
features is the node's name. The "nodename" rule extends the custom
source and matches when the node's name matches one of the given
nodename regexp patterns.
It is also possible now to set an optional "value" on custom rules,
which overrides the default "true" label value in case the rule matches.
In order to allow more dynamic configurations without having to modify
the complete worker configuration, custom rules are additionally read
from a "custom.d" directory now. Typically that directory will be filled
by mounting one or more ConfigMaps.
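As an illustration, filling custom.d from a ConfigMap could look like
this sketch in the worker pod spec (names are made up):

  volumes:
    - name: custom-rules
      configMap:
        name: my-custom-rules
  containers:
    - name: nfd-worker
      volumeMounts:
        - name: custom-rules
          mountPath: "/etc/kubernetes/node-feature-discovery/custom.d/my-rules"
          readOnly: true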
Signed-off-by: Marc Sluiter <msluiter@redhat.com>
This commit adds a Helm chart for node-feature-discovery
Signed-off-by: Adrian Chiris <adrianc@nvidia.com>
Signed-off-by: Ivan Kolodiazhnyi <ikolodiazhny@nvidia.com>
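Installing the chart then follows standard Helm usage, e.g. (assuming
the chart lives under deployment/helm/ in the repo):

  helm install node-feature-discovery ./deployment/helm/node-feature-discovery \
      --namespace node-feature-discovery --create-namespace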