Support templating of label names in feature rules. It is available both
in NodeFeatureRule CRs and in custom rule configuration of nfd-worker.
This patch adds a new 'labelsTemplate' field to the rule spec, making it
possible to dynamically generate multiple labels per rule based on the
matched features. The feature relies on the golang "text/template"
package. When expanded, the template must contain labels in a raw
<key>[=<value>] format (where 'value' defaults to "true"), separated by
newlines, i.e.:
  - name: <rule-name>
    labelsTemplate: |
      <label-1>[=<value-1>]
      <label-2>[=<value-2>]
      ...
All the matched features of 'matchFeatures' directives are available
to the templating engine in a nested data structure that can be
described in yaml as:
  .
  <domain-1>:
    <key-feature-1>:
      - Name: <matched-key>
      - ...
    <value-feature-1>:
      - Name: <matched-key>
        Value: <matched-value>
      - ...
    <instance-feature-1>:
      - <attribute-1-name>: <attribute-1-value>
        <attribute-2-name>: <attribute-2-value>
        ...
      - ...
  <domain-2>:
    ...
That is, the per-feature data available for templating depends on the type
of feature that was matched:
- "key features": only 'Name' is available
- "value features": 'Name' and 'Value' can be used
- "instance features": all attributes of the matched instance are
available
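For example, a template iterating over matched key features could look
like the following sketch (assuming the cpu.cpuid key feature; only
'.Name' is available):
  - name: "my-cpuid-template-features"
    labelsTemplate: |
      {{ range .cpu.cpuid }}my-cpuid-{{ .Name }}=true
      {{ end }}
    matchFeatures:
      - feature: cpu.cpuid
        matchExpressions:
          AVX512F: {op: Exists}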
NOTE: If matchAny is specified, the template is executed
separately against each individual matchFeatures matcher and the
eventual set of labels is a superset of all these expansions. Consider
the following:
  - name: <name>
    labelsTemplate: <template>
    matchAny:
      - matchFeatures: <matcher#1>
      - matchFeatures: <matcher#2>
    matchFeatures: <matcher#3>
In the example above (assuming the overall result is a match) the
template would be executed on matcher#1 and/or matcher#2 (depending on
whether both or only one of them match), and finally on matcher#3, and
all the labels from these separate expansions would be created (i.e. the
end result would be a union of all the individual expansions).
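As a concrete sketch (the vendor IDs here are purely illustrative),
the following rule would create labels for the devices matched by
either or both of the matchers below, with the expansions unioned
together:
  - name: "my-matchany-example"
    labelsTemplate: |
      {{ range .pci.device }}my-pci-{{ .vendor }}-{{ .device }}=present
      {{ end }}
    matchAny:
      - matchFeatures:
          - feature: pci.device
            matchExpressions:
              vendor: {op: In, value: ["8086"]}
      - matchFeatures:
          - feature: pci.device
            matchExpressions:
              vendor: {op: In, value: ["10de"]}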
NOTE 2: The 'labels' field has priority over 'labelsTemplate', i.e.
labels specified in the 'labels' field will override any labels
originating from the 'labelsTemplate' field.
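For instance, in the following hypothetical rule the static label wins
and the rule produces my.ns/feature=static:
  - name: "my-override-example"
    labels:
      my.ns/feature: "static"
    labelsTemplate: |
      my.ns/feature=from-template
    matchFeatures:
      ...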
A special case of an empty match expression set matches everything (i.e.
matches/returns all existing keys/values). This makes it simpler to
write templates that run over all values. It also makes it possible to
later implement support for templates that run over all _keys_ of a
feature.
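For example, a sketch relying on the empty match expression set to
create a label from every os-release entry (assuming the empty set can
be written as '{}'):
  - name: "my-all-osrelease-example"
    labelsTemplate: |
      {{ range .system.osrelease }}my-system-{{ .Name }}={{ .Value }}
      {{ end }}
    matchFeatures:
      - feature: system.osrelease
        matchExpressions: {}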
Some example configurations:
- name: "my-pci-template-features"
labelsTemplate: |
{{ range .pci.device }}intel-{{ .class }}-{{ .device }}=present
{{ end }}
matchFeatures:
- feature: pci.device
matchExpressions:
class: {op: InRegexp, value: ["^06"]}
vendor: ["8086"]
- name: "my-system-template-features"
labelsTemplate: |
{{ range .system.osrelease }}system-{{ .Name }}={{ .Value }}
{{ end }}
matchFeatures:
- feature: system.osRelease
matchExpressions:
ID: {op: Exists}
VERSION_ID.major: {op: Exists}
Imaginative template pipelines are possible, of course, but care must be
taken in order to produce understandable and maintainable rule sets.
Separate feature discovery and creation of feature labels.
Generalize the discovery of nvdimm devices so that they can be matched
in custom label rules in a similar fashion to pci and usb devices.
Available attributes for matching nvdimm devices are limited to:
- devtype
- mode
For numa we now detect the number of numa nodes which can be matched
against in custom label rules.
Labels created by the memory feature source are unchanged. The new
features being detected are available in custom rules only.
Example custom rule:
- name: "my memory rule"
labels:
my-memory-feature: "true"
matchFeatures:
- feature: memory.numa
matchExpressions:
"node_count": {op: Gt, value: ["3"]}
- feature: memory.nv
matchExpressions:
"devtype" {op: In, value: ["nd_dax"]}
Also, add minimalist unit test.
Separate feature discovery and creation of feature labels. Generalize
the feature discovery so that network devices can be matched in custom
label rules in a similar fashion to pci and usb devices. Available
attributes for matching are:
- operstate
- speed
- sriov_numvfs
- sriov_totalvfs
Labels created by the network feature source are unchanged. The new
features being detected are available in custom rules only.
Example custom rule:
- name: "my network rule"
labels:
my-network-feature: "true"
matchFeatures:
- feature: network.device
matchExpressions:
"operstate": { op: In, value: ["up"] }
"sriov_numvfs": { op: Gt, value: ["9"] }
Also, add minimalist unit test.
Implement a simple controller stub that operates on NodeFeatureRule
objects. The controller does not yet have any functionality other than
logging changes in the (NodeFeatureRule) objects it is watching.
Also update the documentation on the -no-publish flag to match the new
functionality.
Separate feature discovery and creation of feature labels. Generalize
the feature discovery so that block devices can be matched in custom
label rules in a similar fashion to pci and usb devices. This extends
the discovery to other block queue attributes than 'rotational': now we
also detect 'dax', 'nr_zones' and 'zoned'.
Labels created by the storage feature source are unchanged. The new
features being detected are available in custom rules only.
Example custom rules:
- name: "my block rule 1"
labels:
my-block-feature-1: "true"
matchFeatures:
- feature: storage.block
"rotational": {op: In, value: ["0"]}
- name: "my block rule 2"
labels:
my-block-feature-2: "true"
matchFeatures:
- feature: storage.block
"zoned": {op: In, value: [“host-aware”, “host-managed”]}
Also, add minimalist unit test.
Add a cluster-scoped Custom Resource Definition for specifying labeling
rules. Nodes (node features, node objects) are cluster-level objects and
thus the natural and encouraged setup is to only have one NFD deployment
per cluster - the set of underlying features of the node stays the same
independent of how many parallel NFD deployments you have. Our extension
points (hooks, feature files and now CRs) can be used by multiple
actors (depending on us) simultaneously. Having the CRD cluster-scoped
hopefully drives deployments in this direction. It also should make
deployment of vendor-specific labeling rules easy as there is no need to
worry about the namespace.
This patch virtually replicates the source.custom.FeatureSpec in a CRD
API (located in the pkg/apis/nfd/v1alpha1 package) with the notable
exception that "MatchOn" legacy rules are not supported. Legacy rules
are left out in order to keep the CRD simple and clean.
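For illustration, a minimal CR following the new API could look like
the following sketch (assuming the nfd.k8s-sigs.io/v1alpha1
group/version and the kernel.loadedmodule key feature):
  apiVersion: nfd.k8s-sigs.io/v1alpha1
  kind: NodeFeatureRule
  metadata:
    name: my-rule
  spec:
    rules:
      - name: "my example rule"
        labels:
          "my-feature": "true"
        matchFeatures:
          - feature: kernel.loadedmodule
            matchExpressions:
              e1000: {op: Exists}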
The duplicate functionality in source/custom will be dropped by upcoming
patches.
This patch utilizes controller-gen (from sigs.k8s.io/controller-tools)
for generating the CRD and deepcopy methods. Code can be (re-)generated
with "make generate". Install controller-gen with:
go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.7.0
Update kustomize and helm deployments to deploy the CRD.
The NodeResourceTopology API has been made cluster-scoped: in the
current context a CR corresponds to a Node, and since Node is a
cluster-scoped resource it makes sense to make NRT cluster-scoped as
well.
Ref: https://github.com/k8stopologyawareschedwg/noderesourcetopology-api/pull/18
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
The base should really contain only the bare minimum. Remove all
redundant args (those at their default values) and move the others to
the topologyupdater-specific kustomize component. This also makes these
settings re-usable in user-specific overlays (that are not based on
topologyupdater-daemonset).
- create an overlay for deployment of all components
- create an overlay for just topologyupdater deployment (to be deployed in
conjunction with the default overlay)
- create a separate overlay for deployment of master and topologyupdater-job
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
Replicates nfd-daemonset-combined.yaml.template.
In addition to the overlay we need to add a separate set of patches
under components/common in order to handle the double-container pod.
Implement functionality virtually replicating deployment templates for
nfd-master and nfd-worker daemonset (nfd-master.yaml.template and
nfd-worker-daemonset.yaml.template) by adding a kustomize overlay named
"default".
We split the resources into multiple bases (rbac, master and
worker-daemonset) so that relevant parts are re-usable in
other deployment scenarios added later (e.g. "one-shot job", and
"combined daemonset").
This patch adds one component (components/common) doing the required
kustomization for the example deployment.
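As an example, a user-specific overlay building on these bases could be
as simple as the following sketch (the kustomization below and its
paths are hypothetical):
  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  namespace: node-feature-discovery
  bases:
    - ../../base/rbac
    - ../../base/master
    - ../../base/worker-daemonset
  components:
    - ../../components/common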