Fix error strings so they are not capitalized (ST1005) and remove
redundant type declarations from array, slice and map composite literals.
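Hypothetical before/after snippets illustrating the two classes of fixes
(the actual diff touches real NFD code, these are just minimal examples):

```go
package main

import "errors"

// ST1005: error strings should not be capitalized.
var errBefore = errors.New("Failed to open device") // flagged by ST1005
var errAfter = errors.New("failed to open device")  // compliant

// gofmt -s: the element type is redundant when it can be inferred from
// the outer composite literal type.
var before = []map[string]string{
	map[string]string{"cpu": "8"}, // redundant inner type
}
var after = []map[string]string{
	{"cpu": "8"},
}

func main() {
	_, _ = errBefore, errAfter
	_, _ = before, after
}
```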
Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
Add tests covering the basic functionality of NodeFeatureRule objects,
exercising the different feature types ("flag features", "attribute
features" and "instance features") as well as backreferencing (using the
output of previously run rules) and templating. The tests rely on the
"fake" feature source and its default configuration.
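A simplified, illustrative model of the three feature types (the real
types live in the NFD API packages and differ in detail):

```go
package main

import "fmt"

// Flag features: a plain set of names, e.g. CPU capability flags.
type FlagFeatures map[string]struct{}

// Attribute features: key-value pairs, e.g. kernel version.
type AttributeFeatures map[string]string

// Instance features: a list of objects, each with its own attributes,
// e.g. one entry per discovered PCI device.
type InstanceFeatures []map[string]string

func main() {
	flags := FlagFeatures{"avx2": {}}
	attrs := AttributeFeatures{"version": "5.15"}
	instances := InstanceFeatures{{"vendor": "8086", "class": "0300"}}
	fmt.Println(flags, attrs, instances)
}
```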
We need the fix from https://github.com/kubernetes/kubernetes/pull/110875
to have reliable tests, but until we can bump the k/k dependencies to
1.25+, we can't consume it.
So borrow it from the k/k repo for the time being.
Signed-off-by: Francesco Romani <fromani@redhat.com>
In some cases (CI) it is useful to run NFD e2e tests using
ephemeral clusters. To save time and bandwidth, it is also useful
to prime the ephemeral cluster with the images under test.
In these circumstances there is no risk of running a stale image,
and a hardcoded `Always` PullPolicy actually defeats
the whole exercise.
So we add a new option, disabled by default, to make the e2e
manifest use the `IfNotPresent` pull policy, to effectively
cover this use case.
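A sketch of how such an option might be wired up; the flag name and the
plumbing into the generated manifests are assumptions, not necessarily
what the patch does:

```go
package e2e

import (
	"flag"

	corev1 "k8s.io/api/core/v1"
)

// Hypothetical test flag; disabled by default so the existing behavior
// (always pull) is preserved.
var pullIfNotPresent = flag.Bool("nfd.pull-if-not-present", false,
	"use IfNotPresent instead of Always as the pod image pull policy")

// pullPolicy returns the policy to put into the e2e pod specs.
func pullPolicy() corev1.PullPolicy {
	if *pullIfNotPresent {
		return corev1.PullIfNotPresent
	}
	return corev1.PullAlways
}
```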
Signed-off-by: Francesco Romani <fromani@redhat.com>
Change the part of the e2e-test configuration that contains
node-specific expected labels and annotations to a list, instead of a
map. This makes the parsing order deterministic and makes it possible to
e.g. have a default at the end of the list that captures "all the rest".
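An illustrative sketch of the list-based lookup; the field names are
assumptions, not necessarily the actual e2e config schema:

```go
package e2e

import "regexp"

// One entry of the node-specific expectations list.
type NodeExpectations struct {
	NodenameRegexp string            `yaml:"nodename"`
	Labels         map[string]string `yaml:"expectedLabels"`
	Annotations    map[string]string `yaml:"expectedAnnotations"`
}

// expectationsFor walks the list in order and returns the first entry
// whose regexp matches the node name, so a catch-all pattern like ".*"
// can be put last as a default.
func expectationsFor(nodeName string, cfg []NodeExpectations) *NodeExpectations {
	for i, e := range cfg {
		if ok, _ := regexp.MatchString(e.NodenameRegexp, nodeName); ok {
			return &cfg[i]
		}
	}
	return nil
}
```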
The test was broken twofold: Firstly, the annotation was not checked at
all because the name of the node where nfd-master is running was not
set. Secondly, the annotation prefix was used incorrectly.
Lift the restriction to run custom rule tests on a non-master node. Try
to find one but do not fail if that fails. This makes the end-to-end
tests runnable on single-node clusters such as simple minikube deployments.
- This patch allows exposing resource hardware topology information
  through CRDs in Node Feature Discovery.
- In order to do this we introduce another software component called
  nfd-topology-updater in addition to the already existing software
  components nfd-master and nfd-worker.
- nfd-master was enhanced to communicate with nfd-topology-updater
  over gRPC and to create CRs corresponding to the nodes in the
  cluster, exposing the resource hardware topology information of
  each node.
- Pin the kubernetes dependency to one that includes the pod resources
  implementation
- This code is responsible for obtaining hardware information from the
  system as well as pod resource information from the Pod Resources API
  in order to determine the allocatable resource information for each
  NUMA zone. This information, along with costs for NUMA zones (obtained
  by reading NUMA distances), is gathered by nfd-topology-updater running
  on all the nodes of the cluster and propagated to nfd-master in order
  to populate that information in the CRs corresponding to the nodes.
- We use GHW facilities for obtaining system information like CPUs, topology,
NUMA distances etc.
- This also includes updates to the Makefile, Dockerfile and manifests
  for deploying nfd-topology-updater.
- This patch includes unit tests
- As part of the Topology Aware Scheduling work, this patch captures
  the configured Topology Manager scope in addition to the Topology
  Manager policy. Based on the value of both attributes a single string
  is populated to the CRD (see the sketch below). The string value will
  be one of {SingleNUMANodeContainerLevel, SingleNUMANodePodLevel,
  BestEffort, Restricted, None}.
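A minimal sketch of the (policy, scope) mapping described above; the
kubelet config values are real, but the function itself is illustrative,
not the actual implementation:

```go
package main

import "fmt"

// topologyAttribute maps the kubelet Topology Manager policy and scope
// to the single string stored in the CRD.
func topologyAttribute(policy, scope string) string {
	if policy == "single-numa-node" {
		if scope == "pod" {
			return "SingleNUMANodePodLevel"
		}
		return "SingleNUMANodeContainerLevel"
	}
	switch policy {
	case "best-effort":
		return "BestEffort"
	case "restricted":
		return "Restricted"
	default:
		return "None"
	}
}

func main() {
	fmt.Println(topologyAttribute("single-numa-node", "container"))
}
```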
Co-Authored-by: Artyom Lukianov <alukiano@redhat.com>
Co-Authored-by: Francesco Romani <fromani@redhat.com>
Co-Authored-by: Talor Itzhak <titzhak@redhat.com>
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
Mount /usr/lib and /usr/src as /host-usr/lib and /host-usr/src inside the pod
to allow NFD to search for the kernel configuration file inside /usr.
This solves the problem of the kernel config file not being present in /boot
on s390x RHCOS.
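An illustrative probe over candidate kernel config locations; the exact
list and path layout NFD uses may differ, and the /host-usr paths below
only become visible with the new mounts:

```go
package main

import (
	"fmt"
	"os"
)

// findKernelConfig returns the first existing candidate path.
func findKernelConfig(kernelVersion string) (string, error) {
	candidates := []string{
		"/proc/config.gz",
		"/host-boot/config-" + kernelVersion,
		"/host-usr/lib/kernel/config-" + kernelVersion, // hypothetical layout
		"/host-usr/src/linux-" + kernelVersion + "/.config",
	}
	for _, p := range candidates {
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("kernel config not found")
}

func main() {
	fmt.Println(findKernelConfig("5.14.0"))
}
```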
Signed-off-by: Jan Schintag <jan.schintag@de.ibm.com>
There are cases when the only available metadata for discovering
features is the node's name. The "nodename" rule extends the custom
source and matches when the node's name matches one of the given
nodename regexp patterns.
It is also possible now to set an optional "value" on custom rules,
which overrides the default "true" label value in case the rule matches.
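A simplified sketch of the nodename matching and the optional value; the
real rule type in the custom source has more fields:

```go
package main

import (
	"fmt"
	"regexp"
)

type nodenameRule struct {
	Patterns []string // regexps matched against the node's name
	Value    string   // optional label value, defaults to "true"
}

// match reports whether any pattern matches and returns the label value.
func (r nodenameRule) match(nodeName string) (bool, string) {
	for _, p := range r.Patterns {
		if ok, _ := regexp.MatchString(p, nodeName); ok {
			value := r.Value
			if value == "" {
				value = "true"
			}
			return true, value
		}
	}
	return false, ""
}

func main() {
	r := nodenameRule{Patterns: []string{"^worker-rt-.*"}, Value: "realtime"}
	fmt.Println(r.match("worker-rt-01"))
}
```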
In order to allow more dynamic configurations without having to modify
the complete worker configuration, custom rules are additionally read
from a "custom.d" directory now. Typically that directory will be filled
by mounting one or more ConfigMaps.
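An illustrative reading of rule drop-ins from such a directory; the
directory path and filtering details below are assumptions:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// readDropIns returns the raw contents of each drop-in file, to be
// parsed as custom rule configuration.
func readDropIns(dir string) ([][]byte, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var contents [][]byte
	for _, e := range entries {
		// ConfigMap volumes contain dot-prefixed bookkeeping entries; skip them.
		if e.IsDir() || e.Name()[0] == '.' {
			continue
		}
		data, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			return nil, err
		}
		contents = append(contents, data)
	}
	return contents, nil
}

func main() {
	rules, err := readDropIns("/etc/kubernetes/node-feature-discovery/custom.d")
	fmt.Println(len(rules), err)
}
```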
Signed-off-by: Marc Sluiter <msluiter@redhat.com>
This can be used to help run multiple parallel NFD deployments in
the same cluster. The flag changes the node annotation namespace to
<instance>.nfd.node.kubernetes.io, allowing different nfd-master
instances to store metadata in separate annotations.
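A minimal sketch of the namespace selection (the flag plumbing is
omitted):

```go
package main

import "fmt"

// annotationNamespace returns the per-instance annotation namespace,
// falling back to the default when no instance name is set.
func annotationNamespace(instance string) string {
	if instance == "" {
		return "nfd.node.kubernetes.io"
	}
	return instance + ".nfd.node.kubernetes.io"
}

func main() {
	fmt.Println(annotationNamespace(""))       // default deployment
	fmt.Println(annotationNamespace("team-a")) // parallel deployment
}
```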
For historical reasons the labels in the default nfd namespace have been
internally represented without the namespace part. I.e. instead of
"feature.node.kubernetes.io/foo" we just use "foo". NFD worker uses this
representation, too, both internally and over the gRPC requests. The
same scheme has been used for annotations.
This patch changes NFD master to use fully namespaced label and
annotation names internally. This hopefully makes the code a bit more
understandable. It also addresses some corner cases making the handling
of label names consistent, making it possible to use both "truncated"
and fully namespaced names over the gRPC interface (and in the
annotations).
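A sketch of the normalization this enables for names received over gRPC:
"truncated" names get the default NFD namespace, fully namespaced names
pass through unchanged. The function name is illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

const defaultLabelNS = "feature.node.kubernetes.io"

// normalizeLabelName returns the fully namespaced form of a label name.
func normalizeLabelName(name string) string {
	if strings.Contains(name, "/") {
		return name // already fully namespaced
	}
	return defaultLabelNS + "/" + name
}

func main() {
	fmt.Println(normalizeLabelName("cpu-cpuid.AVX2"))
	fmt.Println(normalizeLabelName("vendor.io/feature"))
}
```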
This patch fixes a typo in a file name
Updates the default image in e2e tests
Adds an extra file that should be ignored to .gitignore
Use master instead of a versioned tag for the default test image
Interpret node names in the e2e test config as regexps instead of plain
strings. This makes it a lot easier to maintain a config file where
multiple nodes share configuration.
Implement an end-to-end test with all feature sources enabled. The new
test runs nfd-worker as a daemonset on all (schedulable) nodes of the
test cluster, which makes it possible to cover a wide range of features,
assuming the test cluster is heterogeneous, containing nodes with
varying system configurations.
The features available depend on the node(s) the e2e tests are run on.
Thus, some runtime parameterization of the tests is needed. The patch
adds a new command line test flag 'nfd.e2e-config' that is used to
specify the per-node feature labels and annotations that are expected to
be present in the cluster. An example configuration file is provided
with the patch. The pod spec of the nfd-worker deployment is changed to
better correspond to the default deployment and thus enable wider
feature discovery. This means using hostNetwork and adding mounts for
/sys, /boot and /etc/os-release.
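A sketch of the flag registration and config loading; the config struct
is the one assumed in the list-based lookup sketch earlier:

```go
package e2e

import (
	"flag"
	"os"

	"sigs.k8s.io/yaml"
)

// The per-node expectations are read from the file given via
// -nfd.e2e-config.
var e2eConfigFile = flag.String("nfd.e2e-config", "",
	"path to the e2e test configuration file")

// loadE2EConfig parses the configuration file into out.
func loadE2EConfig(out interface{}) error {
	data, err := os.ReadFile(*e2eConfigFile)
	if err != nil {
		return err
	}
	return yaml.Unmarshal(data, out)
}
```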
The patch changes node object management so that all nfd-related labels
are removed after each test (not just the ones the test is expected to
add). Also, all nfd-related annotations are now removed.
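A minimal sketch of that cleanup, assuming prefix-based matching against
the NFD label and annotation namespaces mentioned above:

```go
package e2e

import (
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// cleanupNode drops every nfd-managed label and annotation by prefix,
// not just the ones a particular test is expected to add.
func cleanupNode(node *corev1.Node) {
	for k := range node.Labels {
		if strings.HasPrefix(k, "feature.node.kubernetes.io/") {
			delete(node.Labels, k)
		}
	}
	for k := range node.Annotations {
		if strings.HasPrefix(k, "nfd.node.kubernetes.io/") {
			delete(node.Annotations, k)
		}
	}
}
```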