Simplify the code and reduce possible error scenarios by dropping
fsnotify-based reconfiguration from nfd-master and nfd-worker. This
also eliminates repeated re-configuration in scenarios where kubelet
continuously (every minute) touches the mounted file (configmap) on
the filesystem.
Also modify the Helm and kustomize deployments so that the nfd-master,
nfd-worker and nfd-topology-updater pods are restarted on configmap
updates. In kustomize, the slight downside of this is that the name of
the configmap(s) depends on the content, so every time a user
customizes the config data, the old unused configmap is left behind
and must be garbage-collected manually.
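For reference, this relies on kustomize's configMapGenerator, which
suffixes the generated configmap name with a content hash and rewrites
all references to it (generator and file names below are illustrative):
  configMapGenerator:
    - name: nfd-worker-conf
      files:
        - nfd-worker.conf=nfd-worker.conf.example
A change in the config data thus produces a new configmap name, rolling
the pods that mount it.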
Disable the AVX10 flag as unnecessary: AVX10_LEVEL is better suited for
checking AVX10 compatibility. There is no hardware with the feature
yet, so disabling it shouldn't cause problems for users.
Extend the format of feature matcher terms (the elements of the array
specified under the matchFeatures field) with a new matchName field.
The value of this field is an expression that is evaluated against the
names of feature elements instead of their values (values are matched
with the matchExpressions field, instead).
The matchName field is useful e.g. in template rules for creating
per-feature-element labels based on feature names (instead of values),
and in non-template rules for checking whether at least one of certain
feature element names is present.
If both matchExpressions and matchName are specified for a feature
matcher term, both must match in order to get an overall match. In
this case the list of matched features (used in templating) is the
union of the results from matchExpressions and matchName.
An example of creating an "avx512" label if any AVX512* CPUID feature is
present:
- name: "avx wildcard rule"
labels:
avx512: "true"
matchFeatures:
- feature: cpu.cpuid
matchName: {op: InRegexp, value: ["^AVX512"]}
An example of a template rule creating a dynamic set of labels based on
the existence of certain kconfig options:
  - name: "kconfig template rule"
    labelsTemplate: |
      {{ range .kernel.config }}kconfig-{{ .Name }}={{ .Value }}
      {{ end }}
    matchFeatures:
      - feature: kernel.config
        matchName: {op: In, value: ["SWAP", "X86", "ARM"]}
NOTE: this patch changes the corner case of nil/null match expressions
with instance features (i.e. "matchExpressions: null"). Previously, we
returned all instances for templating, but now a nil match expression
is not evaluated and no instances are returned for templating.
This patch creates an owner-dependent relationship between the
nfd-worker pod and the NodeFeature object that it creates. With this
change the orphaned NodeFeature object(s) get automatically
garbage-collected when the nfd-worker pod goes away, without the need
for manual clean-up actions.
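For illustration, the object metadata now looks roughly like the sketch
below (pod name and uid are placeholders); the Kubernetes garbage
collector removes the dependent object once its owner pod is gone:
  apiVersion: nfd.k8s-sigs.io/v1alpha1
  kind: NodeFeature
  metadata:
    name: <node-name>
    ownerReferences:
      - apiVersion: v1
        kind: Pod
        name: <nfd-worker-pod-name>
        uid: <nfd-worker-pod-uid>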
The "combined" overlay, deploying nfd-master and nfd-worker in the same
pod (with a daemonset) doesn't make sense anymore as we have enabled
NodeFeature API. There is no direct communication between nfd-master and
nfd-worker anymore, Moreover, the combined deployment can be seen as
broken as there is one NodeFeature controller (i.e. nfd-master) on each
node, causing them to race against each other, all processing all
NodeFeature objects.
Add a new autoDefaultNs config option (default "true") to nfd-master.
Setting the config option to false stops NFD from automatically adding
the "feature.node.kubernetes.io/" prefix to labels, annotations and
extended resources. Taints are not affected, as no prefix is
automatically added to them. The user-visible part of the change is
that NodeFeatureRules, local feature files, hooks and configuration of
the "custom" source may need to be altered (if the auto-prefixing is
relied on).
For now, the config option defaults to "true", meaning no change in
default behavior. However, the intent is to change the default to
"false" in a future release, deprecating the option and eventually
removing it (forcing it to "false").
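A minimal nfd-master configuration sketch opting out of the
auto-prefixing already now:
  # nfd-master.conf
  autoDefaultNs: false
With this set, e.g. a NodeFeatureRule label must spell out the full
name, i.e. "feature.node.kubernetes.io/foo" instead of plain "foo".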
The goal of dropping the "auto-prefixing" is to simplify operation (of
NFD and for users): make the naming more straightforward and easier to
understand and debug (kind of WYSIWYG), eliminating peculiar corner
cases:
1. Make validation simpler and unambiguous.
2. Remove "overloading" of names, i.e. mapping of two values to the
   same actual name. E.g. previously something like
     labels:
       feature.node.kubernetes.io/foo: bar
       foo: baz
   could actually result in the node label
     feature.node.kubernetes.io/foo: baz
3. Make the processing/usage of the "rule.matched" and "local.labels"
   features in NodeFeatureRules unambiguous and more understandable.
   E.g. previously you could have the node label
   "feature.node.kubernetes.io/local-foo: bar", but in the
   NodeFeatureRule you'd need to use the unprefixed name "local-foo" or
   the fully prefixed name, depending on what was specified in the
   feature file (or hook) on the node(s).
NOTE: setting autoDefaultNs to false is a breaking change for users who
rely on automatic prefixing with the default feature.node.kubernetes.io/
namespace. NodeFeatureRules, feature files, hooks and custom rules
(configuration of the "custom" source of nfd-worker) will need to be
altered. Unprefixed labels, annotations and extended resources will be
denied by nfd-master.
We deprecated hooks in v0.12.0 but kept them enabled by default.
Starting from v0.14 they are disabled by default, and we plan to fully
remove them in the near future.
Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
NFD already has the capability to discover whether baremetal / host
machines support Intel TDX. Now, the next step is to add support for
discovering whether a node is TDX protected (as in, a virtual machine
started using Intel TDX).
In order to do so, we've decided to go for a new `cpu-security.tdx`
property, called `protected` (`cpu-security.tdx.protected`).
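A hypothetical NodeFeatureRule snippet consuming the new attribute
(assuming it is exposed for rules as 'tdx.protected' under the
'cpu.security' feature):
  - name: "my tdx guest rule"
    labels:
      tdx-guest: "true"
    matchFeatures:
      - feature: cpu.security
        matchExpressions:
          tdx.protected: {op: In, value: ["true"]}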
Signed-off-by: Hairong Chen <hairong.chen@intel.com>
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This PR aims to optimize the process of updating nodes with their
corresponding features. Previously, we updated nodes sequentially even
though they are independent of each other. Therefore, we integrated
new components: a LabelersNodePool, which is responsible for spinning
up a goroutine whenever there is a request for updating nodes, and a
Workqueue, which is responsible for holding the names of the nodes
that should be updated.
Signed-off-by: AhmedGrati <ahmedgrati1999@gmail.com>
This allows NFD-master to be run in an active-passive way when running
multiple instances of NFD-master, preventing multiple components from
updating the same custom resources.
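A rough sketch of such a deployment (the replica count is illustrative
and the flag name below is an assumption based on this change): only
the replica holding the leader lease updates the custom resources,
while the others stand by:
  spec:
    replicas: 3
    template:
      spec:
        containers:
          - name: nfd-master
            args:
              - "-enable-leader-election"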
Signed-off-by: PiotrProkop <pprokop@nvidia.com>
Use a separate NODE_ADDRESS environment variable in the default value
of -kubelet-config-uri (instead of the NODE_NAME that was previously
used). Also change the kustomize and Helm deployments to set this
variable to the node IP address. This should make the default
deployment more robust, making it work in scenarios where the node
name does not resolve to the node IP, e.g. nodename != hostname.
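The variable is populated through the downward API; a daemonset
snippet sketch (the port and path in the URI reflect common kubelet
defaults and are illustrative here):
  env:
    - name: NODE_ADDRESS
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
  args:
    - "-kubelet-config-uri=https://${NODE_ADDRESS}:10250/configz"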
This PR adds a config option for setting the NFD API controller resync period.
The resync period is only activated when the NodeFeature API has been
enabled (with -enable-nodefeature-api).
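A minimal configuration sketch (the duration value is illustrative):
  # nfd-master.conf
  resyncPeriod: "2h"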
Signed-off-by: AhmedGrati <ahmedgrati1999@gmail.com>
This PR combines dynamic and builtin kernel modules into one feature
called `kernel.enabledmodule`. It's a superset of the
`kernel.loadedmodule` feature.
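A hypothetical custom rule matching against the new feature (the
module name is a placeholder); a builtin module matches here even
though it never appears in kernel.loadedmodule:
  - name: "my enabled module rule"
    labels:
      my-module-feature: "true"
    matchFeatures:
      - feature: kernel.enabledmodule
        matchExpressions:
          mymodule: {op: Exists}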
Signed-off-by: AhmedGrati <ahmedgrati1999@gmail.com>
Similar to nfd-worker, in this PR we want to support dynamic run-time
configurability through a config file for nfd-master. We'll use a JSON
or YAML configuration file along with fsnotify in order to watch for
changes in the config file. As a result, we're allowing dynamic
control of logging params, allowed namespaces, extended resources,
label whitelisting, and denied namespaces.
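A sketch of what such a config file could look like (the option names
follow the existing nfd-master command line flags and are illustrative
here):
  # nfd-master.conf
  #klog:
  #  v: 2
  denyLabelNs: ["denied-ns.example.com"]
  extraLabelNs: ["added-ns.example.com"]
  labelWhiteList: ""
  resourceLabels: []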
Signed-off-by: AhmedGrati <ahmedgrati1999@gmail.com>
Add a configuration option for controlling the enabled "raw" feature
sources. This is useful e.g. in testing and development, plus it also
allows fully shutting down discovery of features that are not needed in
a deployment. Supplements core.labelSources which controls the
enablement of label sources.
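A worker configuration sketch (assuming the new option accepts the
same "all"/deny-list syntax as core.labelSources):
  core:
    featureSources: ["all", "-pci"]
    labelSources: ["cpu", "kernel"]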
The goal is to make the name more descriptive. This also keeps in mind
a possible future addition of a 'featureSources' option (or similar)
for controlling the feature discovery.
Support backreferencing of output values from previous rules. This
enables complex rule setups where custom features are further combined
together to form even more sophisticated higher-level labels. The
labels created by preceding rules are available as a special
'rule.matched' feature (for matchFeatures to use).
If referencing rules across multiple configs/CRDs, care must be taken
with the ordering. Processing order of rules in nfd-worker:
1. Static rules
2. Files from /etc/kubernetes/node-feature-discovery/custom.d/
in alphabetical order. Subdirectories are processed by reading their
files in alphabetical order.
3. Custom rules from main nfd-worker.conf
In nfd-master, NodeFeatureRule objects are processed in alphabetical
order (based on their metadata.name).
This patch also adds a new 'vars' field to the rule spec. Like
'labels', it is a map of key-value pairs, but no labels are generated
from it. The values specified in 'vars' are only added for
backreferencing into the 'rule.matched' feature. This may be desired
in schemes where the output of certain rules is only used as
intermediate variables for other rules and no labels out of these are
wanted.
An example setup:
- name: "kernel feature"
labels:
kernel-feature:
matchFeatures:
- feature: kernel.version
matchExpressions:
major: {op: Gt, value: ["4"]}
- name: "intermediate var feature"
vars:
nolabel-feature: "true"
matchFeatures:
- feature: cpu.cpuid
matchExpressions:
AVX512F: {op: Exists}
- feature: pci.device
matchExpressions:
vendor: {op: In, value: ["8086"]}
device: {op: In, value: ["1234", "1235"]}
- name: top-level-feature
matchFeatures:
- feature: rule.matched
matchExpressions:
kernel-feature: "true"
nolabel-feature: "true"
Support templating of label names in feature rules. It is available both
in NodeFeatureRule CRs and in custom rule configuration of nfd-worker.
This patch adds a new 'labelsTemplate' field to the rule spec, making it
possible to dynamically generate multiple labels per rule based on the
matched features. The feature relies on the golang "text/template"
package. When expanded, the template must contain labels in a raw
<key>[=<value>] format (where 'value' defaults to "true"), separated by
newlines, i.e.:
  - name: <rule-name>
    labelsTemplate: |
      <label-1>[=<value-1>]
      <label-2>[=<value-2>]
      ...
All the matched features of 'matchFeatures' directives are available to
the templating engine in a nested data structure that can be described
in yaml as:
  .
    <domain-1>:
      <key-feature-1>:
        - Name: <matched-key>
        - ...
      <value-feature-1>:
        - Name: <matched-key>
          Value: <matched-value>
        - ...
      <instance-feature-1>:
        - <attribute-1-name>: <attribute-1-value>
          <attribute-2-name>: <attribute-2-value>
          ...
        - ...
    <domain-2>:
      ...
That is, the per-feature data available for matching depends on the type
of feature that was matched:
- "key features": only 'Name' is available
- "value features": 'Name' and 'Value' can be used
- "instance features": all attributes of the matched instance are
available
NOTE: In case matchAny is specified, the template is executed
separately against each individual matchFeatures matcher, and the
eventual set of labels is a superset of all these expansions. Consider
the following:
  - name: <name>
    labelsTemplate: <template>
    matchAny:
      - matchFeatures: <matcher#1>
      - matchFeatures: <matcher#2>
    matchFeatures: <matcher#3>
In the example above (assuming the overall result is a match) the
template would be executed on matcher#1 and/or matcher#2 (depending on
whether both or only one of them match), and finally on matcher#3, and
all the labels from these separate expansions would be created (i.e. the
end result would be a union of all the individual expansions).
NOTE 2: The 'labels' field has priority over 'labelsTemplate', i.e.
labels specified in the 'labels' field will override any labels
originating from the 'labelsTemplate' field.
A special case of an empty match expression set matches everything
(i.e. matches/returns all existing keys/values). This makes it simpler
to write templates that run over all values. It also makes it possible
to later implement support for templates that run over all _keys_ of a
feature.
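As an illustrative sketch of this special case (label names here are
hypothetical), a template enumerating every discovered cpuid flag:
  - name: "my-all-cpuid-flags"
    labelsTemplate: |
      {{ range .cpu.cpuid }}my-cpuid-{{ .Name }}=true
      {{ end }}
    matchFeatures:
      - feature: cpu.cpuid
        matchExpressions: {}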
Some example configurations:
- name: "my-pci-template-features"
labelsTemplate: |
{{ range .pci.device }}intel-{{ .class }}-{{ .device }}=present
{{ end }}
matchFeatures:
- feature: pci.device
matchExpressions:
class: {op: InRegexp, value: ["^06"]}
vendor: ["8086"]
- name: "my-system-template-features"
labelsTemplate: |
{{ range .system.osrelease }}system-{{ .Name }}={{ .Value }}
{{ end }}
matchFeatures:
- feature: system.osRelease
matchExpressions:
ID: {op: Exists}
VERSION_ID.major: {op: Exists}
Imaginative template pipelines are possible, of course, but care must be
taken in order to produce understandable and maintainable rule sets.
Separate feature discovery and creation of feature labels.
Generalize the discovery of nvdimm devices so that they can be matched
in custom label rules in a similar fashion as pci and usb devices.
Available attributes for matching nvdimm devices are limited to:
- devtype
- mode
For numa we now detect the number of numa nodes, which can be matched
against in custom label rules.
Labels created by the memory feature source are unchanged. The new
features being detected are available in custom rules only.
Example custom rule:
- name: "my memory rule"
labels:
my-memory-feature: "true"
matchFeatures:
- feature: memory.numa
matchExpressions:
"node_count": {op: Gt, value: ["3"]}
- feature: memory.nv
matchExpressions:
"devtype" {op: In, value: ["nd_dax"]}
Also, add minimalist unit test.
Separate feature discovery and creation of feature labels. Generalize
the feature discovery so that network devices can be matched in custom
label rules in a similar fashion as pci and usb devices. Available
attributes for matching are:
- operstate
- speed
- sriov_numvfs
- sriov_totalvfs
Labels created by the network feature source are unchanged. The new
features being detected are available in custom rules only.
Example custom rule:
- name: "my network rule"
labels:
my-network-feature: "true"
matchFeatures:
- feature: network.device
matchExpressions:
"operstate": { op: In, value: ["up"] }
"sriov_numvfs": { op: Gt, value: ["9"] }
Also, add minimalist unit test.
Separate feature discovery and creation of feature labels. Generalize
the feature discovery so that block devices can be matched in custom
label rules in a similar fashion as pci and usb devices. This extends
the discovery to other block queue attributes than 'rotational': now we
also detect 'dax', 'nr_zones' and 'zoned'.
Labels created by the storage feature source are unchanged. The new
features being detected are available in custom rules only.
Example custom rules:
- name: "my block rule 1"
labels:
my-block-feature-1: "true"
matchFeatures:
- feature: storage.block
"rotational": {op: In, value: ["0"]}
- name: "my block rule 2"
labels:
my-block-feature-2: "true"
matchFeatures:
- feature: storage.block
"zoned": {op: In, value: [“host-aware”, “host-managed”]}
Also, add minimalist unit test.
Implement a new 'matchAny' directive in the new rule format, building on
top of the previously implemented 'matchFeatures' matcher. MatchAny
applies a logical OR over multiple matchFeatures directives. That is, it
allows specifying multiple alternative matchers (at least one of which
must match) in a single label rule.
The configuration format for the new matchers is:
  matchAny:
    - matchFeatures:
        - feature: <domain>.<feature>
          matchExpressions:
            <attribute>:
              op: <operator>
              value:
                - <list-of-values>
    - matchFeatures:
      ...
A configuration example: in order to require a cpu feature, a kernel
module and one of two specific PCI devices (making use of the
shortform notation):
  - name: multi-device-test
    labels:
      multi-device-feature: "true"
    matchFeatures:
      - feature: kernel.loadedmodule
        matchExpressions: [driver-module]
      - feature: cpu.cpuid
        matchExpressions: [AVX512F]
    matchAny:
      - matchFeatures:
          - feature: pci.device
            matchExpressions:
              vendor: "8086"
              device: "1234"
      - matchFeatures:
          - feature: pci.device
            matchExpressions:
              vendor: "8086"
              device: "abcd"
Implement generic feature matchers that cover all feature sources (that
implement the FeatureSource interface). The implementation relies on the
unified data model provided by the FeatureSource interface as well as
the generic expression-based rule processing framework that was added to
the source/custom/expression package.
With this patch any new features added will be automatically available
for custom rules, without any additional work. Rule hierarchy follows
the source/feature hierarchy by design.
This patch introduces a new format for custom rule specifications,
dropping the 'value' field and introducing a new 'labels' field which
makes it possible to specify multiple labels per rule. Also, in the
new format the 'name' field is just for reference and no matching
label is created. The new generic rules are available in this new rule
format under a 'matchFeatures' field. MatchFeatures implements a
logical AND over an array of per-feature matchers, i.e. a match for
all of the matchers is required. The goal of the new rule format is to
make it better follow the K8s API design guidelines and make it
extensible for future enhancements (e.g. addition of templating,
taints, annotations, extended resources etc.).
The old rule format (with cpuID, kConfig, loadedKMod, nodename, pciID,
usbID rules) is still supported. The rule format (new vs. old) is
determined at config parsing time based on the existence of the
'matchOn' field.
The new rule format and the configuration format for the new
matchFeatures field is:
  - name: <rule-name>
    labels:
      <key>: <value>
      ...
    matchFeatures:
      - feature: <domain>.<feature>
        matchExpressions:
          <attribute>:
            op: <operator>
            value:
              - <list-of-values>
      - feature: <domain>.<feature>
        ...
Currently, "cpu", "kernel", "pci", "system", "usb" and "local" sources
are covered by the matshers/feature selectors. Thus, the following
features are available for matching with this patch:
  - cpu.cpuid:
      <cpuid-flag>: <exists/does-not-exist>
  - cpu.cstate:
      enabled: <bool>
  - cpu.pstate:
      status: <string>
      turbo: <bool>
      scaling_governor: <string>
  - cpu.rdt:
      <rdt-feature>: <exists/does-not-exist>
  - cpu.sst:
      bf.enabled: <bool>
  - cpu.topology:
      hardware_multithreading: <bool>
  - kernel.config:
      <flag-name>: <string>
  - kernel.loadedmodule:
      <module-name>: <exists/does-not-exist>
  - kernel.selinux:
      enabled: <bool>
  - kernel.version:
      major: <int>
      minor: <int>
      revision: <int>
      full: <string>
  - system.osrelease:
      <key-name>: <string>
      VERSION_ID.major: <int>
      VERSION_ID.minor: <int>
  - system.name:
      nodename: <string>
  - pci.device:
      <device-instance>:
        class: <string>
        vendor: <string>
        device: <string>
        subsystem_vendor: <string>
        subsystem_device: <string>
        sriov_totalvfs: <int>
  - usb.device:
      <device-instance>:
        class: <string>
        vendor: <string>
        device: <string>
        serial: <string>
  - local.label:
      <label-name>: <string>
The configuration also supports some "shortforms" for convenience:
  matchExpressions: [<attr-1>, <attr-2>=<val-2>]
  ---
  matchExpressions:
    <attr-3>:
    <attr-4>: <val-4>
is equal to:
  matchExpressions:
    <attr-1>: {op: Exists}
    <attr-2>: {op: In, value: [<val-2>]}
  ---
  matchExpressions:
    <attr-3>: {op: Exists}
    <attr-4>: {op: In, value: [<val-4>]}
In other words:
  - feature: kernel.config
    matchExpressions: ["X86", "INIT_ENV_ARG_LIMIT=32"]
  - feature: pci.device
    matchExpressions:
      vendor: "8086"
is the same as:
  - feature: kernel.config
    matchExpressions:
      X86: {op: Exists}
      INIT_ENV_ARG_LIMIT: {op: In, value: ["32"]}
  - feature: pci.device
    matchExpressions:
      vendor: {op: In, value: ["8086"]}
Some configuration examples below. In order to match a CPUID feature the
following snippet can be used:
  - name: cpu-test-1
    labels:
      cpu-custom-feature: "true"
    matchFeatures:
      - feature: cpu.cpuid
        matchExpressions:
          AESNI: {op: Exists}
          AVX: {op: Exists}
In order to match against a loaded kernel module and OS version:
  - name: kernel-test-1
    labels:
      kernel-custom-feature: "true"
    matchFeatures:
      - feature: kernel.loadedmodule
        matchExpressions:
          e1000: {op: Exists}
      - feature: system.osrelease
        matchExpressions:
          NAME: {op: InRegexp, value: ["^openSUSE"]}
          VERSION_ID.major: {op: Gt, value: ["14"]}
In order to require a kernel module and both of two specific PCI devices:
  - name: multi-device-test
    labels:
      multi-device-feature: "true"
    matchFeatures:
      - feature: kernel.loadedmodule
        matchExpressions:
          driver-module: {op: Exists}
      - feature: pci.device
        matchExpressions:
          vendor: "8086"
          device: "1234"
      - feature: pci.device
        matchExpressions:
          vendor: "8086"
          device: "abcd"
The base should really have only the very bare minimum. Remove all
redundant args (ones at their default values) and move the others to
the specific topologyupdater kustomize component. This also makes
these settings re-usable in user-specific overlays (that are not based
on topologyupdater-daemonset).
- create an overlay for deployment of all components
- create an overlay for just topologyupdater deployment (to be deployed in
conjunction with the default overlay)
- create a separate overlay for deployment of master and topologyupdater-job
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
This commit makes the mount of /usr/src optional in the Helm chart,
and removes it from the kustomization. The reason is that some systems
do not have /usr/src (such as Talos) *and* have a read-only
filesystem. Since /usr/src is optional per FHS 3.0, NFD should not
assume its presence.
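A values.yaml sketch re-enabling the mount on systems that do have
/usr/src (the value name worker.mountUsrSrc is an assumption here):
  worker:
    mountUsrSrc: true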
Signed-off-by: Jorik Jonker <jorik@kippendief.biz>
Replicates nfd-daemonset-combined.yaml.template.
In addition to the overlay we need to add a separate set of patches
under components/common in order to handle the double-container pod.
Implement functionality virtually replicating deployment templates for
nfd-master and nfd-worker daemonset (nfd-master.yaml.template and
nfd-worker-daemonset.yaml.template) by adding a kustomize overlay named
"default".
We split the resources into multiple bases (rbac, master and
worker-daemonset) so that relevant parts are re-usable in
other deployment scenarios added later (e.g. "one-shot job", and
"combined daemonset").
This patch adds one component (components/common) doing the required
kustomization for the example deployment.
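Putting it together, an overlay kustomization could look roughly like
the sketch below (paths illustrative), pulling in the bases and the
common component:
  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  namespace: node-feature-discovery
  resources:
    - ../../base/rbac
    - ../../base/master
    - ../../base/worker-daemonset
  components:
    - ../../components/common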