Extend the format of feature matcher terms (the elements of the array
specified under the matchFeatures field) with a new matchName field.
The value of this field is an expression that is evaluated against the
names of feature elements, instead of their values (values are matched
with the matchExpressions field).
The matchName field is useful e.g. in template rules for creating
per-feature-element labels based on feature names (instead of values)
and in non-template rules for checking if at least one of certain
feature element names is present.
If both matchExpressions and matchName are specified for a feature
matcher term, both must match in order to get an overall match. In that
case, the list of matched features (used in templating) is the union of
the results from matchExpressions and matchName.
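For example, a matcher term combining both fields could look like the
following sketch (the kconfig option names here are illustrative):

  - feature: kernel.config
    matchName: {op: InRegexp, value: ["^SWAP"]}
    matchExpressions:
      X86: {op: Exists}

This term matches only if some kconfig option name matches "^SWAP" and
the X86 option is present; the elements available for templating would
be the union of both result sets.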
An example of creating an "avx512" label if any AVX512* CPUID feature is
present:
- name: "avx wildcard rule"
labels:
avx512: "true"
matchFeatures:
- feature: cpu.cpuid
matchName: {op: InRegexp, value: ["^AVX512"]}
An example of a template rule creating a dynamic set of labels based on
the existence of certain kconfig options:

  - name: "kconfig template rule"
    labelsTemplate: |
      {{ range .kernel.config }}kconfig-{{ .Name }}={{ .Value }}
      {{ end }}
    matchFeatures:
      - feature: kernel.config
        matchName: {op: In, value: ["SWAP", "X86", "ARM"]}
NOTE: this patch changes the corner case of nil/null match expressions
with instance features (i.e. "matchExpressions: null"). Previously, we
returned all instances for templating, but now a nil match expression
is not evaluated and no instances are returned for templating.
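As an illustrative sketch, a matcher term like the following would
previously have returned all pci.device instances for templating, but
now returns none:

  - feature: pci.device
    matchExpressions: null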
Drop the private fields, intended for caching parsed templates, from
the Rule type. This keeps the API typedefs cleaner and simpler.
Moreover, the caching was never effective in practice, complicating the
code without any benefit: the way the types are used in nfd-master
creates a local copy of the Rule type and stores the cached template in
that copy, hiding it from any future users.
There are also other possible caveats in caching the way we tried to do
it. For example, the objects returned by the API lister are supposed to
be treated as read-only; in particular, if we were to modify them there
should at least be proper locking in place, as nfd-master potentially
processes the same rule (the same Go object) in parallel for multiple
nodes. If an optimization like this is pursued, it should be done
properly, probably with private type(s) at the consumer's end, not
contaminating the API types.
Drop the private field for caching the parsed regexp from the
MatchExpression type. This tidies up the API type definition and makes
it less tied to particular implementation details. The change also
eliminates potential concurrency problems, as no locking is in place in
the API types.
If caching is desired in the future, it's better done properly in a
separate package, not directly in the API types.
Fix NodeFeatureRule templating in cases where multiple matchFeatures
terms target the same feature. Previously, only matched feature
elements from the last matcher term were used as the input to the
template. However, the input should contain all matched elements from
all matcher terms.
For example, consider the rule snippet below:

    ...
    labelsTemplate: |
      {{ range .pci.device }}vendor.io/pci-device.{{ .class }}-{{ .device }}=exists
      {{ end }}
    matchFeatures:
      - feature: pci.device
        matchExpressions:
          class: {op: InRegexp, value: ["^03"]}
          vendor: {op: In, value: ["1234"]}
      - feature: pci.device
        matchExpressions:
          class: {op: InRegexp, value: ["^12"]}
This rule matches if both a pci device of class 03 from vendor 1234
exists and a pci device of class 12 (from any vendor) exists.
Previously, the template would only generate labels from the devices in
class 12 (as that's the last term). With this patch the template creates
device labels from devices in both classes 03 and 12.
Add support for management of Extended Resources via the
NodeFeatureRule CRD API.
There are usage scenarios where users want to advertise features
as extended resources instead of labels (or annotations).
This patch enables the creation of extended resources via the
NodeFeatureRule API: matched resources are tracked in an annotation and
patched into node.status.capacity and node.status.allocatable.
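A minimal sketch of such a rule, assuming an extendedResources field
analogous to 'labels' (the resource name and matched feature below are
illustrative):

  - name: "example extended resource rule"
    extendedResources:
      vendor.example/example-resource: "2"
    matchFeatures:
      - feature: cpu.cpuid
        matchExpressions:
          AVX512F: {op: Exists}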
Co-authored-by: Carlos Eduardo Arango Gutierrez <eduardoa@nvidia.com>
Co-authored-by: Markus Lehtonen <markus.lehtonen@intel.com>
Co-authored-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Carlos Eduardo Arango Gutierrez <eduardoa@nvidia.com>
Extend the NodeFeatureRule Spec with a taints field to allow users to
specify a list of taints to be set on the node if the rule matches.
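A sketch of what such a rule could look like, assuming the taints field
follows the Kubernetes taint schema (the taint key and matched feature
are illustrative):

  - name: "example taint rule"
    taints:
      - key: "example.com/special-node"
        value: "true"
        effect: NoSchedule
    matchFeatures:
      - feature: cpu.cpuid
        matchExpressions:
          AVX512F: {op: Exists}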
Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
Flatten the data structure that stores features, dropping the "domain"
level from the data model. That extra level of hierarchy brought little
benefit and just caused extra complexity instead. The new structure
nicely matches what we have in the NodeFeatureRule object: the
matchFeatures field uses the same flat structure, with the "feature"
field having a value of the form <domain>.<feature>, e.g.
"kernel.version".
This is pre-work for introducing a new "node feature" CRD that contains
the raw feature data. It makes the life of both users and developers
easier when both CRDs, plus our internal code, handle feature data in a
similar flat structure.
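An illustrative sketch of the change (the feature names and values here
are only examples):

  # before: separate "domain" level in the hierarchy
  kernel:
    version:
      major: "5"

  # after: flat <domain>.<feature> keys
  kernel.version:
    major: "5"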
Support backreferencing of output values from previous rules. This
enables complex rule setups where custom features are further combined
to form even more sophisticated higher-level labels. The labels created
by preceding rules are available as a special 'rule.matched' feature
(for matchFeatures to use).
If referencing rules across multiple configs/CRDs, care must be taken
with the ordering. Processing order of rules in nfd-worker:
  1. Static rules
  2. Files from /etc/kubernetes/node-feature-discovery/custom.d/
     in alphabetical order. Subdirectories are processed by reading
     their files in alphabetical order.
  3. Custom rules from main nfd-worker.conf
In nfd-master, NodeFeatureRule objects are processed in alphabetical
order (based on their metadata.name).
This patch also adds a new 'vars' field to the rule spec. Like
'labels', it is a map of key-value pairs, but no labels are generated
from it. The values specified in 'vars' are only added for
backreferencing via the 'rule.matched' feature. This may be desired in
schemes where the output of certain rules is only used as an
intermediate variable for other rules and no labels are wanted from it.
An example setup:
- name: "kernel feature"
labels:
kernel-feature:
matchFeatures:
- feature: kernel.version
matchExpressions:
major: {op: Gt, value: ["4"]}
- name: "intermediate var feature"
vars:
nolabel-feature: "true"
matchFeatures:
- feature: cpu.cpuid
matchExpressions:
AVX512F: {op: Exists}
- feature: pci.device
matchExpressions:
vendor: {op: In, value: ["8086"]}
device: {op: In, value: ["1234", "1235"]}
- name: top-level-feature
matchFeatures:
- feature: rule.matched
matchExpressions:
kernel-feature: "true"
nolabel-feature: "true"
Support templating of label names in feature rules. It is available both
in NodeFeatureRule CRs and in custom rule configuration of nfd-worker.
This patch adds a new 'labelsTemplate' field to the rule spec, making it
possible to dynamically generate multiple labels per rule based on the
matched features. The feature relies on the golang "text/template"
package. When expanded, the template must contain labels in a raw
<key>[=<value>] format (where 'value' defaults to "true"), separated by
newlines, i.e.:

  - name: <rule-name>
    labelsTemplate: |
      <label-1>[=<value-1>]
      <label-2>[=<value-2>]
      ...
All the matched features of 'matchFeatures' directives are available to
the templating engine in a nested data structure that can be described
in yaml as:
  .
    <domain-1>:
      <key-feature-1>:
        - Name: <matched-key>
        - ...
      <value-feature-1>:
        - Name: <matched-key>
          Value: <matched-value>
        - ...
      <instance-feature-1>:
        - <attribute-1-name>: <attribute-1-value>
          <attribute-2-name>: <attribute-2-value>
          ...
        - ...
    <domain-2>:
      ...
That is, the per-feature data available for templating depends on the
type of feature that was matched:
- "key features": only 'Name' is available
- "value features": 'Name' and 'Value' can be used
- "instance features": all attributes of the matched instance are
available
NOTE: If matchAny is specified, the template is executed separately
against each individual matchFeatures matcher, and the eventual set of
labels is a superset of all these expansions. Consider the following:
  - name: <name>
    labelsTemplate: <template>
    matchAny:
      - matchFeatures: <matcher#1>
      - matchFeatures: <matcher#2>
    matchFeatures: <matcher#3>
In the example above (assuming the overall result is a match) the
template would be executed on matcher#1 and/or matcher#2 (depending on
whether both or only one of them match), and finally on matcher#3, and
all the labels from these separate expansions would be created (i.e. the
end result would be a union of all the individual expansions).
NOTE 2: The 'labels' field has priority over 'labelsTemplate', i.e.
labels specified in the 'labels' field will override any labels
originating from the 'labelsTemplate' field.
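As a quick sketch (the label name is hypothetical), with a rule like:

  - name: "override example"
    labels:
      example-label: "static"
    labelsTemplate: |
      example-label=from-template
    matchFeatures:
      - feature: cpu.cpuid
        matchExpressions:
          AVX512F: {op: Exists}

the resulting value of example-label would be "static", as the 'labels'
field wins.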
A special case of an empty match expression set matches everything
(i.e. matches/returns all existing keys/values). This makes it simpler
to write templates that run over all values. It also makes it possible
to later implement support for templates that run over all _keys_ of a
feature.
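For instance, assuming the empty set is written as '{}', a term like
the following sketch would match (and return for templating) all
kconfig options:

  - feature: kernel.config
    matchExpressions: {}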
Some example configurations:
- name: "my-pci-template-features"
labelsTemplate: |
{{ range .pci.device }}intel-{{ .class }}-{{ .device }}=present
{{ end }}
matchFeatures:
- feature: pci.device
matchExpressions:
class: {op: InRegexp, value: ["^06"]}
vendor: ["8086"]
- name: "my-system-template-features"
labelsTemplate: |
{{ range .system.osrelease }}system-{{ .Name }}={{ .Value }}
{{ end }}
matchFeatures:
- feature: system.osRelease
matchExpressions:
ID: {op: Exists}
VERSION_ID.major: {op: Exists}
Imaginative template pipelines are possible, of course, but care must be
taken in order to produce understandable and maintainable rule sets.
Add auto-generated code for interfacing our CRD API. On top of this, a
CR controller can be implemented. This patch uses k8s/code-generator
for code generation. Run "make generate" in order to (re-)generate
everything. The path to the code-generator repository may need to be
specified:

  K8S_CODE_GENERATOR=path/to/code-generator make apigen
Code-generator version 0.20.7 was used to create this patch. Install
k8s code-generator tools and clone the repo with:

  git clone https://github.com/kubernetes/code-generator -b v0.20.7 <path/to/code-generator>
  go install k8s.io/code-generator/cmd/...@v0.20.7
Having a dedicated type makes it possible to specify deepcopy functions
for it. We need to do this manually as deepcopy-gen doesn't know how to
create copies of regexps.
Add a cluster-scoped Custom Resource Definition for specifying labeling
rules. Nodes (node features, node objects) are cluster-level objects
and thus the natural and encouraged setup is to only have one NFD
deployment per cluster - the set of underlying features of the node
stays the same independent of how many parallel NFD deployments you
have. Our extension points (hooks, feature files and now CRs) can be
used by multiple actors (that depend on us) simultaneously. Having the
CRD cluster-scoped hopefully drives deployments in this direction. It
should also make deployment of vendor-specific labeling rules easy, as
there is no need to worry about the namespace.
This patch virtually replicates the source.custom.FeatureSpec in a CRD
API (located in the pkg/apis/nfd/v1alpha1 package) with the notable
exception that "MatchOn" legacy rules are not supported. Legacy rules
are left out in order to keep the CRD simple and clean.
The duplicate functionality in source/custom will be dropped by upcoming
patches.
This patch utilizes controller-gen (from sigs.k8s.io/controller-tools)
for generating the CRD and deepcopy methods. Code can be (re-)generated
with "make generate". Install controller-gen with:
  go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.7.0
Update kustomize and helm deployments to deploy the CRD.