Update mocked implementation of
k8s.io/kubelet/pkg/apis/podresources/v1.PodResourcesListerClient. The
mocked implementation is moved to a separate "mocks" subpackage as it's
for an external interface.
This patch also adds support for auto-generating the mocked interface.
Change the NFD API handler to retry on node update failures. This works
around transient failures, making sure that failed nodes (i.e. nodes
that we failed to update) don't need to wait for the 1-hour resync
period before being retried.
This PR combines dynamic and builtin kernel modules into a single
feature called `kernel.enabledmodule`. It is a superset of the
`kernel.loadedmodule` feature.
Signed-off-by: AhmedGrati <ahmedgrati1999@gmail.com>
Increase the NFD API controller resync period from 5 minutes to 1 hour.
The resync causes nfd-master to replay all NodeFeature and
NodeFeatureRule objects, being effectively a "big hammer reset all"
button. This should only be needed as insurance to fix labels etc. in
case they have been manually tampered with (outside NFD), and as a
safeguard against certain bugs in NFD itself. NFD is not supposed to manage anything
fast-changing so 1 hour should be enough.
This change only affects behavior when the NodeFeature API has been
enabled (with -enable-nodefeature-api).
Add support for management of Extended Resources via the
NodeFeatureRule CRD API.
There are usage scenarios where users want to advertise features
as extended resources instead of labels (or annotations).
This patch enables the advertisement of extended resources via the
NodeFeatureRule API: matching features are recorded in a node annotation
and patched into node.status.capacity and node.status.allocatable.
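A hedged sketch of a rule advertising an extended resource (the field
layout follows the NodeFeatureRule API extension; names and values are
illustrative):

  apiVersion: nfd.k8s-sigs.io/v1alpha1
  kind: NodeFeatureRule
  metadata:
    name: my-extended-resource-rule
  spec:
    rules:
      - name: "my extended resource rule"
        extendedResources:
          vendor.example/static: "123"
        matchFeatures:
          - feature: cpu.cpuid
            matchExpressions:
              AVX512F: {op: Exists}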
Co-authored-by: Carlos Eduardo Arango Gutierrez <eduardoa@nvidia.com>
Co-authored-by: Markus Lehtonen <markus.lehtonen@intel.com>
Co-authored-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Carlos Eduardo Arango Gutierrez <eduardoa@nvidia.com>
Update node status before node metadata. This fixes a problem where we
lose track of NFD-managed extended resources in case patching node
status fails. Previously we removed all labels and annotations
(including the one listing our ERs) and only after that updated node
status. If the node status update failed we had already lost the
annotation but the extended resources were still there, leaving them
orphaned.
Disallow taints having a key with "kubernetes.io/" or "*.kubernetes.io/"
prefix. This is a precaution to protect the user from messing up the
"official" well-known taints from Kubernetes itself. There is one
exception: the NFD-specific namespace "feature.node.kubernetes.io" (and
its sub-namespaces) under the kubernetes.io domain is allowed and can be
used for NFD-managed taints.
Also disallow unprefixed taint keys. We don't add a default prefix to
unprefixed taints (like we do for labels) from NodeFeatureRules. This is
to prevent unpleasant surprises to users that need to manage matching
tolerations for their workloads.
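A minimal sketch of the resulting validation in Go, assuming the allowed
namespace is "feature.node.kubernetes.io" as described above (the helper
name is illustrative, not the actual NFD code):

package validate

import "strings"

// taintKeyAllowed sketches the checks described above: unprefixed keys
// are rejected, and keys under the kubernetes.io domain are rejected
// unless they are in the NFD-specific namespace (or a sub-namespace).
func taintKeyAllowed(key string) bool {
	i := strings.IndexByte(key, '/')
	if i < 0 {
		return false // unprefixed taint keys are not allowed
	}
	ns := key[:i]
	if ns == "kubernetes.io" || strings.HasSuffix(ns, ".kubernetes.io") {
		return ns == "feature.node.kubernetes.io" ||
			strings.HasSuffix(ns, ".feature.node.kubernetes.io")
	}
	return true
}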
Similar to nfd-worker, this PR adds support for dynamic run-time
configuration of nfd-master through a config file. A JSON or YAML
configuration file is watched with fsnotify so that changes are picked
up on the fly. As a result, logging parameters, allowed namespaces,
extended resources, label whitelisting, and denied namespaces can all be
controlled dynamically.
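A hedged sketch of what such an nfd-master config file could look like
(the option names here are illustrative, not authoritative):

  klog:
    v: "2"
  extraLabelNs: ["vendor.example.com"]
  denyLabelNs: ["denied.example.com"]
  resourceLabels: ["vendor-feature.value"]
  labelWhiteList: "^my-prefix"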
Signed-off-by: AhmedGrati <ahmedgrati1999@gmail.com>
Access to the kubelet state directory may raise concerns in some setups, so add an option to disable it.
The feature is enabled by default.
Signed-off-by: Talor Itzhak <titzhak@redhat.com>
When a message is received via the channel,
the main loop updates the `NodeResourceTopology` objects.
The notifier will send a message via the channel if:
1. The sleep timeout is reached.
2. A change is detected in the Kubelet state files.
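A minimal Go sketch of this pattern (illustrative only, not the actual
nfd-topology-updater code):

package main

import (
	"time"

	"github.com/fsnotify/fsnotify"
)

// notify sends an event when either the sleep timeout is reached or a
// kubelet state file changes, as described above.
func notify(kubeletStateDir string, interval time.Duration, events chan<- struct{}) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	if err := watcher.Add(kubeletStateDir); err != nil {
		return err
	}
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C: // 1. sleep timeout reached
			events <- struct{}{}
		case <-watcher.Events: // 2. kubelet state file modified
			events <- struct{}{}
		}
	}
}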
Signed-off-by: Talor Itzhak <titzhak@redhat.com>
On different Kubernetes flavors, like OpenShift for example,
the Kubelet state directory path is different. Make it configurable
for maximum flexibility.
Signed-off-by: Talor Itzhak <titzhak@redhat.com>
Enabling reactive update for nfd-topology-updater
by detecting changes in Kubelet state/checkpoint files,
and signaling to the main loop to update the NodeResourceTopology
objects.
This is of high value when scaling is an issue:
having multiple pods deployed between two consecutive updates
might result in incorrect resource accounting in the NRT CRs.
Example:
Time Interval = 5s
t0 - New update sent to NRT CRs
t1 - Schedule guaranteed podA
t2 - Schedule guaranteed podB
The time elapsed between t0 and t2 is less than 5 seconds, IOW the
update at t0 is still the most recent one. At t2 the resource
accounting reflected by NRT is not aligned with the actual state
because the NRT CRs don't reflect the change that happened at t1.
With this reactive update feature we expect an update to be triggered
between t1 and t2 so that the NRT objects reflect a more accurate
picture.
There might still be scenarios where the updates
aren't fast enough; addressing that is an additional
optimization planned for the future.
The notifier has two event types:
1. Time based - keeping the old behavior, trigger
an update per interval.
2. FS event - trigger an update when Kubelet state/checkpoint files are modified.
Signed-off-by: Talor Itzhak <titzhak@redhat.com>
The NodeResourceTopology (aka NRT) custom resource is used to enable
NUMA-aware scheduling in Kubernetes. As of now, node-feature-discovery
daemons are used to advertise those resources, but there is no service
responsible for removing obsolete objects (ones without a corresponding
Kubernetes node). This patch adds a new daemon called nfd-topology-gc
which removes stale NRTs.
Signed-off-by: PiotrProkop <pprokop@nvidia.com>
Don't require features to be specified. The creator possibly only wants
to create labels or only some types of features. No need to specify
empty structs for the unused fields.
Correctly handle the case where no NodeFeature objects exist for a
certain node (and the NodeFeature API has been enabled with
-enable-nodefeature-api). In this case all the labels should be removed.
We want to always update all nodes at startup. Without this patch we
don't get any update event from the controller if no NodeFeature or
NodeFeatureRule objects exist in the cluster. Thus all nodes would stay
untouched whereas we really want to remove all labels from all nodes in
this case.
Implement a naive ratelimiter for node update events originating from
the NFD API. We might get a ton of events in a short interval. The
simplest example is startup when we get a separate Add event for every
NodeFeature and NodeFeatureRule object. Without rate limiting we
run "update all nodes" separately for each NodeFeatureRule object, plus,
we would run "update node X" separately for each NodeFeature object
targeting node X. This is a huge amount of wasted work because in
principle just running "update all nodes" once should be enough.
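A naive Go sketch of the idea (names are illustrative; the actual
nfd-master implementation may differ):

package main

import "time"

// rateLimit coalesces bursts of update events: after the first event it
// waits for the interval, discarding further events, and then triggers
// a single "update all nodes" pass.
func rateLimit(events <-chan struct{}, interval time.Duration, updateAllNodes func()) {
	for range events {
		timeout := time.After(interval)
	drain:
		for {
			select {
			case _, ok := <-events:
				if !ok {
					break drain // channel closed
				}
				// swallow events arriving within the interval
			case <-timeout:
				break drain
			}
		}
		updateAllNodes()
	}
}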
Implement handling of multiple NodeFeature objects by merging all
objects (targeting a certain node) into one before processing the data.
This patch implements MergeInto() methods for all required data types.
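For illustration, the MergeInto() semantics for a simple value-feature
set might look like the following Go sketch (the concrete types in the
NFD API differ):

type ValueFeatureSet struct {
	Elements map[string]string
}

// MergeInto copies the elements of in into out; elements from objects
// merged later override earlier ones.
func (in *ValueFeatureSet) MergeInto(out *ValueFeatureSet) {
	if out.Elements == nil {
		out.Elements = make(map[string]string, len(in.Elements))
	}
	for k, v := range in.Elements {
		out.Elements[k] = v
	}
}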
With support for multiple NodeFeature objects per node, the "NFD API
workflow" can be easily demonstrated and tested from the command line.
Creating the following object (assuming node-n exists in the cluster):
  apiVersion: nfd.k8s-sigs.io/v1alpha1
  kind: NodeFeature
  metadata:
    labels:
      nfd.node.kubernetes.io/node-name: node-n
    name: my-features-for-node-n
  spec:
    # Features for NodeFeatureRule matching
    features:
      flags:
        vendor.domain-a:
          elements:
            feature-x: {}
      attributes:
        vendor.domain-b:
          elements:
            feature-y: "foo"
            feature-z: "123"
      instances:
        vendor.domain-c:
          elements:
          - attributes:
              name: "elem-1"
              vendor: "acme"
          - attributes:
              name: "elem-2"
              vendor: "acme"
    # Labels to be created
    labels:
      vendor-feature.enabled: "true"
      vendor-setting.value: "100"
will create two feature labels:
feature.node.kubernetes.io/vendor-feature.enabled: "true"
feature.node.kubernetes.io/vendor-setting.value: "100"
In addition it will advertise hidden/raw features that can be used for
custom rules in NodeFeatureRule objects. Now, creating a NodeFeatureRule
object:
  apiVersion: nfd.k8s-sigs.io/v1alpha1
  kind: NodeFeatureRule
  metadata:
    name: my-rule
  spec:
    rules:
      - name: "my feature rule"
        labels:
          "my-feature": "true"
        matchFeatures:
          - feature: vendor.domain-a
            matchExpressions:
              feature-x: {op: Exists}
          - feature: vendor.domain-c
            matchExpressions:
              vendor: {op: In, value: ["acme"]}
will match the features in the NodeFeature object above and cause one
more label to be created:
feature.node.kubernetes.io/my-feature: "true"
Deprecate the '-featurerules-controller' command line flag as the name
does not describe the functionality anymore: in practice it controls the
CRD controller handling both NodeFeature and NodeFeatureRule objects.
The patch introduces a duplicate, more generally named, flag
'-crd-controller'. A warning is printed in the log if
'-featurerules-controller' flag is encountered.
Add initial support for handling NodeFeature objects. With this patch
nfd-master watches NodeFeature objects in all namespaces and reacts to
changes in any of these. The node which a certain NodeFeature object
affects is determined by the "nfd.node.kubernetes.io/node-name"
label of the object. When a NodeFeature object targeting a certain
node is changed, nfd-master needs to process all other objects targeting
the same node, too, because there may be dependencies between them.
Add a new command line flag for selecting between gRPC and NodeFeature
CRD API as the source of feature requests. Enabling NodeFeature API
disables the gRPC interface.
  -enable-nodefeature-api  enable NodeFeature CRD API for incoming
                           feature requests, will disable the gRPC
                           interface (defaults to false)
It is not possible to serve gRPC and watch NodeFeature objects at the
same time. This is deliberate to avoid labeling races e.g. by nfd-worker
sending gRPC requests but NodeFeature objects in the cluster
"overriding" those changes (labels from the gRPC requests will get
overridden when NodeFeature objects are processed).
Support the new NodeFeature object of the NFD CRD API. Add two new
command line options to nfd-worker:

  -kubeconfig              specifies the kubeconfig to use for
                           connecting to the k8s api (defaults to empty
                           which implies in-cluster config)
  -enable-nodefeature-api  enable the NodeFeature CRD API for
                           communicating node features to nfd-master,
                           will also automatically disable gRPC
                           (defaults to false)
No config file option for selecting the API is available as there
should be no need for dynamically selecting between gRPC and CRD. The
nfd-master configuration must be changed in tandem, and it is safer (and
avoids awkward configuration races) to configure the whole NFD
deployment at once.
Default behavior of nfd-worker is not changed, i.e. NodeFeature object
creation is not enabled by default (but must be enabled with the
command line flag).
The patch also updates the kustomize and Helm deployment, adding RBAC
rules for nfd-worker and updating the example worker configuration.
Add a new NodeFeature CRD to the nfd Kubernetes API to communicate node
features over K8s api objects instead of gRPC. The new resource is
namespaced which will help the management of multiple NodeFeature
objects per node. This aims at enabling 3rd party detectors for custom
features.
In addition to communicating raw features the NodeFeature object also
has a field for directly requesting labels that should be applied on the
node object.
Rename the crd deployment file to nfd-api-crds.yaml so that it matches
the new content of the file. Also, rename the Helm subdir for CRDs to
match the expected chart directory structure.
Drop the gRPC communication to nfd-master and connect to the Kubernetes
API server directly when updating NodeResourceTopology objects.
Topology-updater already has a connection to the API server for listing
Pods so this is not that dramatic a change. It also simplifies the code
a lot as there is no need for the NFD gRPC client and no need for
managing TLS certs/keys.
This change aligns nfd-topology-updater with the future direction of
nfd-worker where the gRPC API is being dropped and replaced by a
CRD-based API.
This patch also updates deployment files and documentation to reflect
this change.
Implement detection of the Kubernetes namespace by reading the file
/var/run/secrets/kubernetes.io/serviceaccount/namespace
As a fallback (if the file is not accessible) we take the namespace from
the KUBERNETES_NAMESPACE environment variable. This is useful for e.g.
testing and development where you might run nfd-worker directly from the
command line on a host system.
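A sketch of the detection logic in Go:

package utils

import (
	"os"
	"strings"
)

// getKubernetesNamespace returns the namespace this pod runs in,
// falling back to the KUBERNETES_NAMESPACE environment variable.
func getKubernetesNamespace() string {
	const nsFile = "/var/run/secrets/kubernetes.io/serviceaccount/namespace"
	if data, err := os.ReadFile(nsFile); err == nil {
		return strings.TrimSpace(string(data))
	}
	return os.Getenv("KUBERNETES_NAMESPACE")
}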
This commit extends the NFD master code to support adding node taints
from the NodeFeatureRule CR. We also introduce a new annotation for
taints which helps to identify whether a taint set on the node is owned
by NFD or not. When a user deletes a taint entry from the
NodeFeatureRule CR, NFD will remove the taint from the node. But to
avoid accidental deletion of taints not owned by NFD, it needs to know
the owner: keeping track of NFD-set taints in the annotation allows
filtering by owner. Also, an enable-taints flag is added to allow users
to opt in/out of the node tainting feature. The flag takes precedence
over taints defined in the NodeFeatureRule CR. In other words, if
enable-taints is set to false (disabled) and a user still defines taints
in the CR, NFD will ignore those taints and skip setting them on the
node.
Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
Extend the NodeFeatureRule Spec with a taints field to allow users to
specify a list of taints to be set on the node if the
rule matches.
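For example, a rule with taints might look like this (a sketch; the
key, value, and match expression are illustrative):

  apiVersion: nfd.k8s-sigs.io/v1alpha1
  kind: NodeFeatureRule
  metadata:
    name: my-taint-rule
  spec:
    rules:
      - name: "my taint rule"
        taints:
          - effect: NoExecute
            key: "feature.node.kubernetes.io/special-node"
            value: "true"
        matchFeatures:
          - feature: cpu.cpuid
            matchExpressions:
              AVX512F: {op: Exists}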
Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
Drop the following flags that were deprecated already in v0.8.0:
-sleep-interval (replaced by core.sleepInterval config file option)
-label-whitelist (replaced by core.labelWhiteList config file option)
-sources (replaced by -label-sources flag)
The exclude-list allows filtering specific resource accounting
from NRT objects on a per-node basis.
The CRs created by the topology-updater are used by the scheduler-plugin
as a source of truth for making scheduling decisions.
As such, this feature allows hiding specific information
from the scheduler, which in turn
will affect the scheduling decision.
A common use case is when a user would like to make scheduling
decisions based on a specific resource.
In that case, we can exclude all the other resources
which we don't want the scheduler to examine.
The exclude-list is provided to the topology-updater via a ConfigMap.
Resource type names specified in the list should match the names
as shown here: https://pkg.go.dev/k8s.io/api/core/v1#ResourceName
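A hedged sketch of the exclude-list configuration (node and resource
names are illustrative):

  excludeList:
    node-a: [hugepages-2Mi]
    node-b: [memory]
    "*": [hugepages-1Gi]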
This is a resurrection of an old work started here:
https://github.com/kubernetes-sigs/node-feature-discovery/pull/545
Signed-off-by: Talor Itzhak <titzhak@redhat.com>
Fix handling of templates that got broken in
b907d07d7e when "flattening" the internal
data structure of features. That happened because the golang
text/template format uses dots to reference fields of a struct /
elements of a map (i.e. 'foo.bar' means that 'bar' must be a sub-element
of foo). Thus, using dots in our feature names (e.g. 'cpu.cpuid') means
that that hierarchy must be reflected in the data structure that is fed
to the templating engine. Thus, for templates we're now stuck with a
two-level hierarchy. It doesn't really matter for now as all our
features follow that naming pattern. We might be able to overcome this
limitation e.g. by using reflect but that's left as a future exercise.
Scanning podresources can temporarily fail; the previous code was
mistakenly not rearming the loop condition when this occurred,
effectively stopping the monitoring.
Rather, we should always poll, and bail out only on an unrecoverable
error or when asked to stop.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Flatten the data structure that stores features, dropping the "domain"
level from the data model. That extra level of hierarchy brought little
benefit but just caused some extra complexity, instead. The new
structure nicely matches what we have in the NodeFeatureRule object
(the matchFeatures field uses the same flat structure, with the
"feature" field taking a value of the form <domain>.<feature>, e.g.
"kernel.version").
This is pre-work for introducing a new "node feature" CRD that contains
the raw feature data. It makes the life of both users and developers
easier when both CRDs, plus our internal code, handle feature data in a
similar flat structure.
Move the previously-protobuf-only internal "feature api" over to the
public "nfd api" package. This is in preparation for introducing a new
CRD API for communicating features.
This patch carries no functional changes. Just moving code around.
Fix linter findings: error strings should not be capitalized (ST1005),
and remove the redundancy from array, slice or map composite literals.
Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
Make the NoPublish config flag a more direct control point for
whether to publish features. This patch is pre-work for adding
support for other clients (upcoming new CRD API) in nfd-worker.
Refactor the code so that the initialization and running of the gRPC
server is done in a separate function. The goal is to make the code more
maintainable in terms of disabling (and eventually removing) the gRPC
functionality in the future.
Refactor the code, moving the hostpath helper functionality to new
"pkg/utils/hostpath" package. This breaks odd-ish dependency
"pkg/utils" -> "source".
This patch adds a kubebuilder marker to add a short name "nfr" for
the NodeFeatureRule CRD.
Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
Remove the cleanup code that removes ancient NFD labels with the
node.alpha.kubernetes-incubator.io/ prefix. This label namespace was
deprecated/dropped already in v0.4.0 so it should be safe to drop this
code.
Replace deprecated grpc.WithInsecure() with
grpc.WithTransportCredentials and insecure.NewCredentials(). Makes
golangci-lint pass muster.
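The migration is mechanical; roughly:

package main

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func dial(address string) (*grpc.ClientConn, error) {
	// Deprecated form: grpc.Dial(address, grpc.WithInsecure())
	return grpc.Dial(address,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
}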
Test that NodeFeatureRule templates work with empty MatchFeatures, but
with MatchAny.
Without the fix in the next commit this test would fail, highlighting
the issue. See #864.
Signed-off-by: Viktor Oreshkin <imselfish@stek29.rocks>
Revert the hack that was a workaround for issues with k8s deepcopy-gen.
New deepcopy-gen is able to generate code correctly without issues so
this is not needed anymore.
Also, removing this hack solves issues with object validation when
creating NodeFeatureRules programmatically with nfd go-client. This is
needed later with NodeFeatureRules e2e-tests.
Logically reverts f3cc109f99.
This patch changes a rare corner case of custom label rules with an
empty set of matchexpressions. The patch removes a special case where an
empty match expression set matched everything and returned all feature
elements for templates to consume. With this patch the match expression
set logically evaluates all expressions in the set and returns all
matches - if there are no expressions there are no matches and no
matched features are returned. However, the overall match result
(determining if "non-template" labels will be created) in this special
case will be "true" as before as none of the zero match expressions
failed.
The former behavior was somewhat illogical and counterintuitive: having
1 to N expressions matched and returned 1 to N features (at most), but
having 0 expressions always matched everything and returned all
features. This was some leftover proof-of-concept functionality (for
some possible future extensions) that should have been removed before
merging.
It's possible for device plugins to advertise non-existent
numa node ids that cause topology updater to crash.
Signed-off-by: Tuomas Katila <tuomas.katila@intel.com>
* fix linter issues for a few files
* fix linter issue: exported const Name should have comment or be
  unexported
* fix name lint issue and resolve lints
* add changes to comments
Do not prefix label names from the new matchFeatures/matchAny custom
rules with "custom-". We want to have the same result (set of labels)
from a rule independent of whether it has been specified in worker
config or in a NodeFeatureRule CRs. Legacy matchOn rules (not available
in NodeFeatureRule CRs) are intact, i.e. still prefixed, in order to
retain backwards compatibility.
Add a configuration option for controlling the enabled "raw" feature
sources. This is useful e.g. in testing and development, plus it also
allows fully shutting down discovery of features that are not needed in
a deployment. Supplements core.labelSources which controls the
enablement of label sources.
Provide backwards compatibility via a deprecated 'core.sources' config
file option. This will override 'core.labelSources'. A warning is
printed in the log if this option is detected.
The goal is to make the name more descriptive. Also keeping in mind a
possible future addition of a 'featureSources' option (or similar) for
controlling the feature discovery.
Use the single-dash (i.e. '-option' instead of '--option') format
consistently across log messages and documentation. This is the format
that was mostly used, already, and shown by command line help of the
binaries, for example.
Support templating of var names in a similar manner as labels. Add
support for a new 'varsTemplate' field to the feature rule spec which is
treated similarly to the 'labelsTemplate' field. The value of the field
is processed through the golang "text/template" template engine and the
expanded value must contain variables in <key>=<value> format, separated
by newlines i.e.:
  - name: <rule-name>
    varsTemplate: |
      <label-1>=<value-1>
      <label-2>=<value-2>
      ...
Similar rules as for 'labelsTemplate' apply, i.e.
1. If matchAny is specified, the template is executed separately against
   each individual matchFeatures matcher.
2. The 'vars' field has priority over 'varsTemplate'.
Support backreferencing of output values from previous rules. Enables
complex rule setups where custom features are further combined together
to form even more sophisticated higher level labels. The labels created
by preceding rules are available as a special 'rule.matched' feature
(for matchFeatures to use).
If referencing rules across multiple configs/CRDs, care must be taken
with the ordering. Processing order of rules in nfd-worker:
1. Static rules
2. Files from /etc/kubernetes/node-feature-discovery/custom.d/
in alphabetical order. Subdirectories are processed by reading their
files in alphabetical order.
3. Custom rules from main nfd-worker.conf
In nfd-master, NodeFeatureRule objects are processed in alphabetical
order (based on their metadata.name).
This patch also adds new 'vars' fields to the rule spec. Like 'labels',
it is a map of key-value pairs but no labels are generated from these.
The values specified in 'vars' are only added for backreferencing into
the 'rule.matched' feature. This may be desired in schemes where the
output of certain rules is only used as intermediate variables for other
rules and no labels out of these are wanted.
An example setup:
- name: "kernel feature"
labels:
kernel-feature:
matchFeatures:
- feature: kernel.version
matchExpressions:
major: {op: Gt, value: ["4"]}
- name: "intermediate var feature"
vars:
nolabel-feature: "true"
matchFeatures:
- feature: cpu.cpuid
matchExpressions:
AVX512F: {op: Exists}
- feature: pci.device
matchExpressions:
vendor: {op: In, value: ["8086"]}
device: {op: In, value: ["1234", "1235"]}
- name: top-level-feature
matchFeatures:
- feature: rule.matched
matchExpressions:
kernel-feature: "true"
nolabel-feature: "true"
Require that the expanded LabelsTemplate has values. That is, the
(expanded) template must consist of key=value pairs separated by
newlines. No default value will be assigned and we now return an error
if a (non-empty) line not conforming with the key=value format is
encountered.
Commit c8d73666d described that the value defaults to "true" if not
specified. That was not the case and we defaulted to an empty string,
instead.
An example:

  - name: "my rule"
    labelsTemplate: |
      my.label.1=foo
      my.label.2=

Would create these labels:

  "my.label.1": "foo"
  "my.label.2": ""

Further, the following:

  - name: "my failing rule"
    labelsTemplate: |
      my.label.3

will cause an error in the rule processing.
Support templating of label names in feature rules. It is available both
in NodeFeatureRule CRs and in custom rule configuration of nfd-worker.
This patch adds a new 'labelsTemplate' field to the rule spec, making it
possible to dynamically generate multiple labels per rule based on the
matched features. The feature relies on the golang "text/template"
package. When expanded, the template must contain labels in a raw
<key>[=<value>] format (where 'value' defaults to "true"), separated by
newlines i.e.:
  - name: <rule-name>
    labelsTemplate: |
      <label-1>[=<value-1>]
      <label-2>[=<value-2>]
      ...
All the matched features of 'matchFeatures' directives are available to
the templating engine in a nested data structure that can be described in
yaml as:
  .
  <domain-1>:
    <key-feature-1>:
      - Name: <matched-key>
      - ...
    <value-feature-1>:
      - Name: <matched-key>
        Value: <matched-value>
      - ...
    <instance-feature-1>:
      - <attribute-1-name>: <attribute-1-value>
        <attribute-2-name>: <attribute-2-value>
        ...
      - ...
  <domain-2>:
    ...
That is, the per-feature data available for matching depends on the type
of feature that was matched:
- "key features": only 'Name' is available
- "value features": 'Name' and 'Value' can be used
- "instance features": all attributes of the matched instance are
available
NOTE: If matchAny is specified, the template is executed
separately against each individual matchFeatures matcher and the
eventual set of labels is a superset of all these expansions. Consider
the following:
  - name: <name>
    labelsTemplate: <template>
    matchAny:
      - matchFeatures: <matcher#1>
      - matchFeatures: <matcher#2>
    matchFeatures: <matcher#3>
In the example above (assuming the overall result is a match) the
template would be executed on matcher#1 and/or matcher#2 (depending on
whether both or only one of them match), and finally on matcher#3, and
all the labels from these separate expansions would be created (i.e. the
end result would be a union of all the individual expansions).
NOTE 2: The 'labels' field has priority over 'labelsTemplate', i.e.
labels specified in the 'labels' field will override any labels
originating from the 'labelsTemplate' field.
A special case of an empty match expression set matches everything (i.e.
matches/returns all existing keys/values). This makes it simpler to
write templates that run over all values. Also, makes it possible to
later implement support for templates that run over all _keys_ of a
feature.
Some example configurations:
- name: "my-pci-template-features"
labelsTemplate: |
{{ range .pci.device }}intel-{{ .class }}-{{ .device }}=present
{{ end }}
matchFeatures:
- feature: pci.device
matchExpressions:
class: {op: InRegexp, value: ["^06"]}
vendor: ["8086"]
- name: "my-system-template-features"
labelsTemplate: |
{{ range .system.osrelease }}system-{{ .Name }}={{ .Value }}
{{ end }}
matchFeatures:
- feature: system.osRelease
matchExpressions:
ID: {op: Exists}
VERSION_ID.major: {op: Exists}
Imaginative template pipelines are possible, of course, but care must be
taken in order to produce understandable and maintainable rule sets.
Implement a private helper type (nameTemplateHelper) for handling
(executing and caching) of templated names. DeepCopy methods are
manually implemented as controller-gen is not able to help with that.
Enable Custom Resource based label creation in nfd-master. This extends
the previously implemented controller stub for watching NodeFeatureRule
objects. NFD-master watches NodeFeatureRule objects in the cluster and
processes the rules on every incoming labeling request from workers.
The functionality relies on the "raw features" (identical to how
nfd-worker handles custom rules) submitted by nfd-worker, making it
independent of the label source configuration of the worker. This means
that the labeling functions as expected even if all sources in the
worker are disabled.
NOTE: nfd-master is stateless and re-labeling only happens on the
reception of SetLabelsRequest from the worker – i.e. on intervals
specified by the core.sleepInterval configuration option (or
-sleep-interval cmdline flag) of each nfd-worker instance. This means
that modification/creation of NodeFeatureRule objects does not
automatically update the node labels. Instead, the changes only come
visible when workers send their labeling requests.
Add a new command line flag for disabling/enabling the controller for
NodeFeatureRule objects. In practice, disabling the controller disables
all labels generated from rules in NodeFeatureRule objects.
Implement a simple controller stub that operates on NodeFeatureRule
objects. The controller does not yet have any functionality other than
logging changes in the (NodeFeatureRule) objects it is watching.
Also update the documentation on the -no-publish flag to match the new
functionality.
Add auto-generated code for interfacing our CRD API. On top of this, a
CR controller can be implemented. This patch uses k8s/code-generator
for code generation. Run "make generate" in order to (re-)generate
everything. Path to the code-generator repository may need to be
specified:
K8S_CODE_GENERATOR=path/to/code-generator make apigen
Code-generator version 0.20.7 was used to create this patch. Install
k8s code-generator tools and clone the repo with:
git clone https://github.com/kubernetes/code-generator -b v0.20.7 <path/to/code-generator>
go install k8s.io/code-generator/cmd/...@v0.20.7
Move the rule processing of matchFeatures and matchAny from
source/custom package over to pkg/apis/nfd, aiming for better integrity
and re-usability of the code. Does not change the CRD API as such, just
adds more supportive functions.
Having a dedicated type makes it possible to specify deepcopy functions
for it. We need to do this manually as deepcopy-gen doesn't know how to
create copies of regexps.
Add a cluster-scoped Custom Resource Definition for specifying labeling
rules. Nodes (node features, node objects) are cluster-level objects and
thus the natural and encouraged setup is to only have one NFD deployment
per cluster - the set of underlying features of the node stays the same
independent of how many parallel NFD deployments you have. Our extension
points (hooks, feature files and now CRs) can be used by multiple
actors (depending on us) simultaneously. Having the CRD cluster-scoped
hopefully drives deployments in this direction. It also should make
deployment of vendor-specific labeling rules easy as there is no need to
worry about the namespace.
This patch virtually replicates the source.custom.FeatureSpec in a CRD
API (located in the pkg/apis/nfd/v1alpha1 package) with the notable
exception that "MatchOn" legacy rules are not supported. Legacy rules
are left out in order to keep the CRD simple and clean.
The duplicate functionality in source/custom will be dropped by upcoming
patches.
This patch utilizes controller-gen (from sigs.k8s.io/controller-tools)
for generating the CRD and deepcopy methods. Code can be (re-)generated
with "make generate". Install controller-gen with:
go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.7.0
Update kustomize and helm deployments to deploy the CRD.
Create a new package pkg/apis/nfd/v1alpha1 and migrate the custom rule
expressions over there. This is the first step in creating a new CRD API
for custom rules.
Enable transmitting the discovered "raw" features over the gRPC API.
Extend pkg/api/feature with protobuf and gRPC code. In this, utilize
go-to-protobuf from k8s code-generator for auto-generating the gRPC
interface from golang code. The tool can be installed with:
go install k8s.io/code-generator/cmd/go-to-protobuf@v0.20.7
The auto-generated code is (re-)generated/updated with "make apigen".
The NodeResourceTopology API has been made cluster
scoped. In the current context a CR corresponds to
a Node, and since Node is a cluster-scoped resource it
makes sense to make NRT cluster scoped as well.
Ref: https://github.com/k8stopologyawareschedwg/noderesourcetopology-api/pull/18
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
The Kubernetes pod resources API now exposes memory and hugepages
information for guaranteed pods. We can use this information to update
the NodeResourceTopology resource with memory and hugepages data.
Signed-off-by: Artyom Lukianov <alukiano@redhat.com>
The methods are used during calculation of reserved memory for system workloads.
The calculation is `resourceCapacity - resourceAllocatable`.
Signed-off-by: Artyom Lukianov <alukiano@redhat.com>
Use 'go generate' for auto-generating code. Drop the old 'mock' and
'apigen' makefile targets. Those are replaced with a single
make generate
which (re-)generates everything.
There have been recent changes made to the noderesourcetopology API,
storing the proto file generated using the go-to-protobuf tool, and
this code imports the proto generated in the API into
topology-updater.proto.
The PRs corresponding to the changes are as follows:
https://github.com/k8stopologyawareschedwg/noderesourcetopology-api/pull/9
https://github.com/k8stopologyawareschedwg/noderesourcetopology-api/pull/13
Commands used to generate the topology-updater.pb.go file:
go install github.com/golang/protobuf/protoc-gen-go@v1.4.3
go mod vendor
protoc --go_opt=paths=source_relative --go_out=plugins=grpc:. pkg/topologyupdater/topology-updater.proto -I. -Ivendor
As part of the implementation of this patch, reserved (non-allocatable)
CPUs are evaluated by taking the difference between all the CPUs on the
system (determined using ghw) and the allocatable CPUs (determined by
querying the GetAllocatableResources podResources API endpoint).
When the aggregator creates the NUMA zones, it skips zone creation if
there are no allocatable resources. In this update we create those
missing zones with zero allocatable/available resources so we won't
have holes in the array of reported zones.
Co-Authored-by: Talor Itzhak <titzhak@redhat.com>
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
For accounting we should consider all guaranteed pods with
integral CPU requests and all pods with device requests.
This patch ensures that only such pods are considered
for accounting, disregarding non-guaranteed pods without any
device request.
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
- Files obtained after running `make mock`
- Run `go get github.com/vektra/mockery` and make sure that
  mockery is in your $PATH
- Run `make mock`
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
- This patch allows exposing Resource Hardware Topology information
  through CRDs in Node Feature Discovery.
- In order to do this we introduce another software component called
  nfd-topology-updater in addition to the already existing software
  components nfd-master and nfd-worker.
- nfd-master was enhanced to communicate with nfd-topology-updater
  over gRPC, followed by creation of CRs corresponding to the nodes
  in the cluster exposing the resource hardware topology information
  of that node.
- Pin the kubernetes dependency to one that includes the pod resources
  implementation.
- This code is responsible for obtaining hardware information from the
  system as well as pod resource information from the Pod Resources API
  in order to determine the allocatable resource information for each
  NUMA zone. This information, along with costs for NUMA zones
  (obtained by reading NUMA distances), is gathered by
  nfd-topology-updater running on all the nodes of the cluster and
  propagated to nfd-master in order to populate that information in the
  CRs corresponding to the nodes.
- We use GHW facilities for obtaining system information like CPUs,
  topology, NUMA distances etc.
- This also includes updates made to the Makefile, Dockerfile and
  manifests for deploying nfd-topology-updater.
- This patch includes unit tests.
- As part of the Topology Aware Scheduling work, this patch captures
  the configured Topology Manager scope in addition to the Topology
  Manager policy. Based on the value of both attributes, a single
  string will be populated to the CRD. The string value will be one of
  the following: {SingleNUMANodeContainerLevel, SingleNUMANodePodLevel,
  BestEffort, Restricted, None}
Co-Authored-by: Artyom Lukianov <alukiano@redhat.com>
Co-Authored-by: Francesco Romani <fromani@redhat.com>
Co-Authored-by: Talor Itzhak <titzhak@redhat.com>
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
Set up the topologyupdater API for gRPC communication between
nfd-topology-updater and nfd-master.
We generate the pb.go file to reflect the latest dependency changes
using github.com/golang/protobuf/protoc-gen-go and generate the gRPC
files via:
protoc pkg/topologyupdater/topology-updater.proto --go_out=plugins=grpc:.
Please refer to: https://github.com/k8stopologyawareschedwg/noderesourcetopology-api/blob/master/pkg/apis/topology/v1alpha1/types.go
Co-Authored-by: Artyom Lukianov <alukiano@redhat.com>
Co-Authored-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
Specify a new interface for managing "raw" feature data. This is the
first step to separate raw feature data from node labels. None of the
feature sources implement this interface, yet.
This patch unifies the data format of "raw" features by dividing them
into three different basic types.
- keys, a set of names without any associated values, e.g. CPUID flags
or loaded kernel modules
- values, a map of key-value pairs, for features with a single value,
e.g. kernel config flags or os version
- instances, a list of instances each of which has multiple attributes
(key-value pairs of their own), e.g. PCI or USB devices
The new feature data types are defined in a new "pkg/api/feature"
package, catering for decoupling and re-usability of code, e.g. within
future extensions of the NFD gRPC API.
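An illustrative Go sketch of the three types (the exact type names in
pkg/api/feature may differ):

package feature

// Features groups the three basic feature types described above.
type Features struct {
	Keys      map[string]Nil    // names only, e.g. CPUID flags or kernel modules
	Values    map[string]string // single-value features, e.g. kernel config flags
	Instances []InstanceFeature // multi-attribute instances, e.g. PCI devices
}

// Nil is an empty placeholder value for key features.
type Nil struct{}

// InstanceFeature carries the attributes of one discovered instance.
type InstanceFeature struct {
	Attributes map[string]string
}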
Rename the Discover() method of LabelSource interface to GetLabels().
Implement new registration infrastructure under the "source" package.
This change loosens the coupling between label sources and the
nfd-worker, making it easier to refactor and move the code around.
Also, create a separate interface (ConfigurableSource) for configurable
feature sources in order to eliminate boilerplate code.
Add safety checks to the sources that they actually implement the
interfaces they should.
For the sake of consistency and predictability of behavior, change all
methods of the sources to use pointer receivers.
Add simple unit tests for the new functionality and include source/...
into make test target.
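A hypothetical Go sketch of the registration pattern (simplified; the
real interfaces carry more methods):

package source

// LabelSource is implemented by all label sources.
type LabelSource interface {
	Name() string
	GetLabels() (map[string]string, error)
}

var sources = make(map[string]LabelSource)

// Register is typically called from each source package's init().
func Register(s LabelSource) {
	sources[s.Name()] = s
}

// GetLabelSource returns a registered source by name, or nil if unknown.
func GetLabelSource(name string) LabelSource {
	return sources[name]
}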
Refactor the worker code and split out gRPC client connection handling
into a separate base type. The intent is to promote re-usability of code
for other NFD clients, too.
Add a separate label namespace for profile labels, intended for
user-specified higher level "meta features". Also sub-namespaces of this
(i.e. <sub-ns>.profile.node.kubernetes.io) are allowed.
Allow <sub-ns>.feature.node.kubernetes.io label namespaces. Makes it
possible to have e.g. vendor-specific label namespaces without the need
to use -extra-label-ns.
For auto-generating API(s).
Also, re-generate/refresh the gRPC code with `make apigen` (with protoc
v3.17.3 and protoc-gen-go from github.com/golang/protobuf v1.5.2) to
sync things up.
Reduce default log verbosity. Only print out labels if log verbosity is
1 or higher ('core.klog.v: 1' config file option or '-v 1' on command
line). Also, dump the labels in a reproducible (sorted) format.
The code should be stable enough. If there are fatal bugs causing the
discovery to panic/segfault, that should be made visible instead of
semi-silently hiding it. Also, this caused one (negative) test case to
fail undetected, which is now fixed.
Changes the behaviour so that if the specified configuration file exists
it must be valid. Error out at startup if the config is invalid.
Similarly, exit with an error at runtime if the config file becomes
invalid. Bailing out, instead of just printing an error, was a
deliberate choice in order to make configuration mistakes evident.
Having no configuration file is tolerated, however. If the specified
configuration file does not exist, nfd-worker resorts to default
settings.
Add a config file option for controlling the enabled feature sources,
aimed at replacing the --sources command line flag which is now marked
as deprecated. The command line flag takes precedence over the config
file option.
Add a config file option for label whitelisting. Deprecate the
--label-whitelist command line flag. Note that the command line flag has
higher priority than the config file option.
Add a new config file option for (dynamically) controlling the sleep
interval. At the same time, deprecate the --sleep-interval command line
flag. The command line flag takes precedence over the config file option.
Allows dynamic (re-)configuration of most nfd-worker options. The goal
is to have most configuration parameters specified in the configuration
file and deprecate most of the command line flags. The priority is
intended to be such that command line flags override whatever is
specified in the configuration file. Thus, specifying something on the
command line effectively disables dynamic configurability of that
parameter.
This patch adds core.noPublish config file option to demonstrate how the
new mechanism is supposed to work. The --no-publish command line flag
takes precedence over this config file option.
Always do re-discovery and re-labeling after a configuration file
change. This way the new config comes into effect immediately, even if
the sleep interval is long (or infinite).
Add support for detecting configuration file changes via file system
notifications (fsnotify). Watches are added for the whole directory
chain (up to root directory) so that all changes (even directory
renames) affecting the given configuration file path are captured.
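A Go sketch of the directory-chain watching (illustrative; the idea is
simply to walk up to the filesystem root):

package main

import (
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

// watchConfigChain adds a watch on every ancestor directory of path so
// that renames anywhere in the chain are caught, too.
func watchConfigChain(w *fsnotify.Watcher, path string) error {
	for dir := filepath.Dir(path); ; dir = filepath.Dir(dir) {
		if err := w.Add(dir); err != nil {
			return err
		}
		if dir == filepath.Dir(dir) { // reached the filesystem root
			return nil
		}
	}
}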
Previously dynamic (re-)configuration of nfd-worker was implemented by
(re-)reading the configuration file on every labeling pass. This was
simple and effective, even if a bit wasteful. However, it didn't provide
asynchronous configuration updates that will be required for e.g.
controlling the "sleep-interval" parameter dynamically which will be
implemented by later patches.
This can be used to help running multiple parallel NFD deployments in
the same cluster. The flag changes the node annotation namespace to
<instance>.nfd.node.kubernetes.io, allowing different nfd-master
instances to store metadata in separate annotations.
Handle both creation and parsing of the "feature-labels" and
"extended-resources" annotations in the same function; it is more
logical to keep them together.
When updating node labels and annotations use JSON patches instead of
doing a read-modify-write on the whole node object. Patching is already
being used in managing extended resources so some of the existing code
was re-usable.
This patch should mitigate the problem of node update failures caused by
race conditions (a change in the node object between our read and write)
resulting e.g. in errors/restarts in nfd worker pods.
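For illustration, adding a single label then amounts to a small
targeted JSON patch like the following (the label is hypothetical; note
the RFC 6902 "~1" escaping of "/" in the path):

  [
    {"op": "add", "path": "/metadata/labels/feature.node.kubernetes.io~1my-feature", "value": "true"}
  ]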
For historical reasons the labels in the default nfd namespace have been
internally represented without the namespace part. I.e. instead of
"feature.node.kubernetes.io/foo" we just use "foo". NFD worker uses this
representation, too, both internally and over the gRPC requests. The
same scheme has been used for annotations.
This patch changes NFD master to use fully namespaced label and
annotation names internally. This hopefully makes the code a bit more
understandable. It also addresses some corner cases making the handling
of label names consistent, making it possible to use both "truncated"
and fully namespaced names over the gRPC interface (and in the
annotations).
A new special value 'all' is a shortcut for enabling all feature
sources. It should be the only name specified -- if any other names are
specified, 'all' does not take effect and only the listed feature
sources are enabled. E.g.

--sources=all      enables all sources, but
--sources=all,cpu  only enables the cpu source

Also, print a warning if unknown sources are specified.
A new sub-command like flag for cleaning up a cluster. When --prune is
specified nfd-master removes all NFD related labels, annotations and
extended resources from all nodes of the cluster and exits.
This should help undeployment of NFD and be useful for development.
Dumb re-read/re-parse of the configuration file on every round of
discovery. Probably not the most elegant solution to watch for config
file changes, but it works and doesn't cost much overhead.
Extend the FeatureSource interface with new methods for configuration
handling. This enables easier on-the-fly reconfiguration of the
feature sources. Further, it simplifies adding config support to feature
sources in the future. Stub methods are added to sources that do not
currently have any configurability.
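A hypothetical Go sketch of the added methods (the actual interface in
NFD may differ):

package source

// Config is an opaque per-source configuration blob.
type Config interface{}

// FeatureSource with the new configuration handling methods.
type FeatureSource interface {
	Name() string
	Discover() error

	NewConfig() Config // returns a default configuration
	GetConfig() Config // returns the current configuration
	SetConfig(Config)  // applies a new configuration on the fly
}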
The patch fixes some (corner) cases with the overrides (--options)
handling, too:
- Overrides were not applied if config file was missing or its parsing
failed
- Overrides for a certain source did not have effect if an empty config
for the source was specified in the config file. This was caused by
the first pass of parsing (config file) setting a nil pointer to the
source-specific config, effectively detaching it from the main config.
The second pass would then create a new instance of the source
specific config, but, this was not visible in the feature source, of
course.
Make the list of enabled sources and the label whitelist regexp members
of the nfdWorker instance. Get rid of the not-that-well-defined
configureParameters() function.
Unify handling of --label-whitelist in nfd-worker and nfd-master. That is,
in nfd-worker, apply the regexp filter on the non-namespaced part of
the label name.
Brief history:
1. Originally the whitelist regexp was applied on the full namespaced
label name (that would be e.g.
'feature.node.kubernetes.io/cpu-cpuid.AVX' in the current nfd version)
2. Commit 81752b2d changed the behavior so that the regexp was applied
on the non-namespaced part (that would be `cpu-cpuid.AVX`)
3. Commit 40918827 added support for custom label namespaces. With this
change, the label whitelist handling diverged between nfd-worker and
nfd-master. In nfd-master the whitelist regexp is always applied on
the non-namespaced label name. However, in nfd-worker the whitelist
handling is two-fold (and inconsistent): for labels in the standard
nfd namespace the regexp is applied on the non-namespaced part (e.g.
`cpu-cpuid.AVX`), but for labels in custom namespaces the regexp is
applied on the full name (e.g. `example.com/my-feature`).
This patch changes nfd-worker to behave similarly to nfd-master. The
namespace part is now always omitted, which should be easier for the
users to comprehend.
Also, fixes a bug in the label name prefixing so that the name of the
feature source is not prefixed into labels with custom label namespace
(effectively mangling the intended namespace). For example, previously
an 'example.com/feature' label from the 'custom' feature source would be
prefixed with the source name, mangling it to
'custom-example.com/feature'.
This builds on the PCI support to enable the discovery of USB devices.
This is primarily intended to be used for the discovery of Edge-based
heterogeneous accelerators that are connected via USB, such as the Coral
USB Accelerator and the Intel NCS2 - our main motivation for adding this
capability to NFD, and as part of our work in the SODALITE H2020
project.
USB devices may define their base class at either the device or
interface levels. In the case where no device class is set, the
per-device interfaces are enumerated instead. USB devices may
furthermore have multiple interfaces, which may or may not use the
identical class across each interface. We therefore report device
existence for each unique class definition to enable more fine-grained
labelling and node selection.
The default labelling format includes the class, vendor and device
(product) IDs, as follows:
feature.node.kubernetes.io/usb-fe_1a6e_089a.present=true
As with PCI, a subset of device classes are whitelisted for matching.
By default, the whitelist covers only the handful of device classes
under which accelerators tend to be mapped. These are:
- Video
- Miscellaneous
- Application Specific
- Vendor Specific
For those interested in matching other classes, this may be extended
by using the UsbId rule provided through the custom source. A full
list of class codes is provided by the USB-IF at:
https://www.usb.org/defined-class-codes
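A sketch of such a custom rule in the worker config (the class and
vendor values are illustrative):

  custom:
    - name: "my.usb.device"
      matchOn:
        - usbId:
            class: ["ff"]
            vendor: ["1a6e"]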
For the moment, owing to a lack of a demonstrable use case, neither
the subclass nor the protocol information are exposed. If this
becomes necessary, support for these attributes can be trivially
added.
Signed-off-by: Paul Mundt <paul.mundt@adaptant.io>
Just print a warning instead of exiting with an error if no version has
been specified at build-time. Erroring out was pointless and just
annoying at development time when doing builds with go directly.
This adds support for making selected labels extended resources.
Labels which have integer values can be promoted to Kubernetes extended
resources by listing them in the new command line flag
`--resource-labels`. These labels won't then show up in the node label
section; they will appear only as extended resources.
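For example, assuming the worker advertises an integer-valued label
named "vendor-feature.value" (an illustrative name), it could be
promoted with:

--resource-labels=vendor-feature.value

The label's integer value then becomes the quantity of the extended
resource.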
Signed-off-by: Ukri Niemimuukko <ukri.niemimuukko@intel.com>