Reduce the time we wait for nfd-worker pods to reach the ready state (before
proceeding with tests) from five seconds to two. This makes the tests faster
to run; two seconds should be enough for the nfd-workers to do their job and
get the nodes labeled.
The docker image used during the e2e tests
is composed of the repo and tag flags that are
passed to the test itself.
The problem is that the docker image variable is initialized
before the flags are parsed. Hence, it always contains
the default flag values.
Moving the variable into a separate function fixes the issue.
Also, move the global variables to `e2e_test.go` since
they are commonly used by all tests.
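A minimal sketch of the fix (package and variable names are illustrative; the flag names match the usage example near the end of this log, the defaults are placeholders):
```
package e2e

import (
	"flag"
	"fmt"
)

var (
	dockerRepo = flag.String("nfd.repo", "<default-repo>", "image repository to use")
	dockerTag  = flag.String("nfd.tag", "<default-tag>", "image tag to use")
)

// Broken: a package-level variable is evaluated at init time, before the
// test framework parses the flags, so it always holds the default values:
//
//	var dockerImage = fmt.Sprintf("%s:%s", *dockerRepo, *dockerTag)

// Fixed: compute the image lazily, after flag parsing has happened.
func dockerImage() string {
	return fmt.Sprintf("%s:%s", *dockerRepo, *dockerTag)
}
```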
Signed-off-by: Talor Itzhak <titzhak@redhat.com>
Use the "single-dash" version of nfd command line flags in deployment
files and e2e-tests. No impact on functionality, just aligns with
documentation and other parts of the codebase.
Add an initial test set for the NodeFeature API. This is done simply by
running a second pass of the tests but with -enable-nodefeature-api
(i.e. NodeFeature API enabled and gRPC disabled). This should give basic
confidence that the API actually works and form a basis for further
improvements in testing the new CRD API.
Drop the pod-security.kubernetes.io/enforce label from the test
namespace, i.e. remove pod security admission enforcement. NFD-worker
uses restricted host mounts (/sys etc.), so pod creation fails even in
privileged mode if pod security admission enforcement is enabled.
Only generate CRDs once, at the beginning of the test run. Use the "Ordered"
option for the test container so that we can utilize ginkgo.BeforeAll to
do the setup only once, before the first test. Changing from unordered to
ordered shouldn't make a big difference here.
Add a cleanup function to remove stale NodeFeatureRule objects that are
cluster-scoped and not deleted with the test namespace.
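Roughly, the resulting structure looks like this (a sketch against the ginkgo v2 API; the setup and cleanup bodies are only hinted at):
```
package e2e

import "github.com/onsi/ginkgo/v2"

var _ = ginkgo.Describe("NFD master and worker", ginkgo.Ordered, func() {
	ginkgo.BeforeAll(func() {
		// Runs once, before the first spec of this Ordered container:
		// generate and create the NFD CRDs in the test cluster.
	})

	ginkgo.AfterEach(func() {
		// NodeFeatureRule objects are cluster-scoped and are not removed
		// together with the test namespace, so delete stale ones here.
	})

	ginkgo.It("...", func() {
		// actual specs
	})
})
```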
Use the RuntimeDefault seccomp profile in the nfd-worker and topology-updater
pod specs, similar to nfd-master.
Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
After introducing NodeFeatureRule we packed two CRD definitions into one
yaml file. Our e2e-tests were not prepared for that, and the file itself
was also renamed so it couldn't even be read by the test suite.
With this change the e2e-tests start to create the NodeFeature CRD in the
test cluster, preparing for the addition of e2e-tests for the NodeFeature
API.
Drop the gRPC communication to nfd-master and connect to the Kubernetes
API server directly when updating NodeResourceTopology objects.
Topology-updater already has a connection to the API server for listing
Pods, so this is not that dramatic a change. It also simplifies the code
a lot as there is no need for the NFD gRPC client and no need for
managing TLS certs/keys.
This change aligns nfd-topology-updater with the future direction of
nfd-worker where the gRPC API is being dropped and replaced by a
CRD-based API.
This patch also updates the deployment files and documentation to reflect
this change.
Fixes a stricter API check on the daemonset pod spec that started to cause e2e
test failures. The RestartPolicyNever that we previously set (by default)
isn't compatible with DaemonSets.
The new package should provide pod-related utilities,
hence let's move all the daemonset-related utilities
to their own package as well.
Signed-off-by: Talor Itzhak <titzhak@redhat.com>
By moving those utils into a separate package,
we can make the function names shorter and clearer.
For example, instead of:
```
testutils.NFDWorkerPod(opts...)
testutils.NFDMasterPod(opts...)
testutils.SpecWithContainerImage(...)
```
we'll have:
```
testpod.NFDWorker(opts...)
testpod.NFDMaster(opts...)
testpod.SpecWithContainerImage(...)
```
It will also make the package more isolated and portable.
Signed-off-by: Talor Itzhak <titzhak@redhat.com>
The master pod needs these `SecurityContext` configurations
in order to run inside a namespace with a restricted policy.
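For reference, a sketch of the kind of settings this means (the helper name is an assumption; the field values follow the restricted Pod Security Standard):
```
package testpod

import corev1 "k8s.io/api/core/v1"

// restrictedSecurityContext returns a container SecurityContext acceptable
// under the "restricted" Pod Security level: no privilege escalation, all
// capabilities dropped, non-root user, RuntimeDefault seccomp profile.
func restrictedSecurityContext() *corev1.SecurityContext {
	no, yes := false, true
	return &corev1.SecurityContext{
		AllowPrivilegeEscalation: &no,
		RunAsNonRoot:             &yes,
		Capabilities: &corev1.Capabilities{
			Drop: []corev1.Capability{"ALL"},
		},
		SeccompProfile: &corev1.SeccompProfile{
			Type: corev1.SeccompProfileTypeRuntimeDefault,
		},
	}
}
```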
Signed-off-by: Talor Itzhak <titzhak@redhat.com>
Change the pod spec generator functions to accept parameterization in
the form of more generic "mutator functions". This makes the addition of
new test-specific pod spec customizations a lot cleaner. Plus, it hopefully
makes the code a bit more readable as well.
Also, slightly simplify SpecWithConfigMap() by dropping one
redundant argument.
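In rough terms this is the functional-options pattern (a sketch; the option type and base pod spec are simplified, names approximate the testpod helpers mentioned earlier):
```
package testpod

import corev1 "k8s.io/api/core/v1"

// SpecOption mutates a generated pod spec.
type SpecOption func(spec *corev1.PodSpec)

// SpecWithContainerImage overrides the image of the first container.
func SpecWithContainerImage(image string) SpecOption {
	return func(spec *corev1.PodSpec) {
		spec.Containers[0].Image = image
	}
}

// NFDWorker builds the base nfd-worker pod and applies the given mutators.
func NFDWorker(opts ...SpecOption) *corev1.Pod {
	pod := &corev1.Pod{
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "nfd-worker"}},
		},
	}
	for _, o := range opts {
		o(&pod.Spec)
	}
	return pod
}
```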
Inspired by latest contributions by Talor Itzhak (titzhak@redhat.com).
Different tests require different configurations
of the topology-updater DaemonSet.
Here, we decouple the configuration from the creation part
using `JustBeforeEach` so that each test container
has its own configuration.
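In sketch form the split looks roughly like this (the DaemonSet generator is a hypothetical stand-in for the real helper):
```
package e2e

import (
	"github.com/onsi/ginkgo/v2"
	appsv1 "k8s.io/api/apps/v1"
)

// newTopologyUpdaterDaemonSet stands in for the real DaemonSet generator.
func newTopologyUpdaterDaemonSet(extraArgs ...string) *appsv1.DaemonSet {
	_ = extraArgs // appended to the container command line in the real helper
	return &appsv1.DaemonSet{}
}

var _ = ginkgo.Describe("topology-updater", func() {
	var extraArgs []string

	ginkgo.BeforeEach(func() {
		// Default configuration; individual test containers override
		// extraArgs in their own BeforeEach blocks.
		extraArgs = nil
	})

	ginkgo.JustBeforeEach(func() {
		// Creation runs only after all BeforeEach blocks have finished,
		// so the DaemonSet reflects the configuration of the current test.
		_ = newTopologyUpdaterDaemonSet(extraArgs...)
	})
})
```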
Additional reading:
https://onsi.github.io/ginkgo/#separating-creation-and-configuration-justbeforeeach
Signed-off-by: Talor Itzhak <titzhak@redhat.com>
It might take time for the CRD to get deleted,
which might cause some flakiness in the tests.
Now, before we create the CRD, we make sure to delete
the old object, wait for its deletion to complete,
and only then create a new CRD object.
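A trimmed sketch of the delete-wait-create sequence (the helper name is illustrative, error handling is abbreviated):
```
package e2e

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	extclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// recreateCRD deletes a possibly existing CRD, waits until it is really
// gone and only then creates the new object.
func recreateCRD(ctx context.Context, cli extclient.Interface, crd *apiextv1.CustomResourceDefinition) error {
	crds := cli.ApiextensionsV1().CustomResourceDefinitions()

	if err := crds.Delete(ctx, crd.Name, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	// Deletion is asynchronous; poll until the old object has disappeared.
	if err := wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
		_, err := crds.Get(ctx, crd.Name, metav1.GetOptions{})
		return apierrors.IsNotFound(err), nil
	}); err != nil {
		return err
	}
	_, err := crds.Create(ctx, crd, metav1.CreateOptions{})
	return err
}
```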
Signed-off-by: Talor Itzhak <titzhak@redhat.com>
We might not get the most up-to-date node topology
resource on the first `GET` call.
Hence, put the whole check inside `Eventually`,
and check the most up-to-date node topology resource on every
iteration.
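A trimmed sketch of the pattern with gomega (the validation helper is a hypothetical stand-in for the real GET plus checks):
```
package e2e

import (
	"time"

	"github.com/onsi/gomega"
)

// nodeTopologyIsValid stands in for fetching the NodeResourceTopology
// object and running the actual assertions on it.
func nodeTopologyIsValid(nodeName string) bool { return true }

func expectNodeTopologyBecomesValid(nodeName string) {
	// Re-fetch and re-check on every iteration, instead of trusting the
	// result of a single GET that may predate the updater's latest write.
	gomega.Eventually(func() bool {
		return nodeTopologyIsValid(nodeName)
	}, time.Minute, 5*time.Second).Should(gomega.BeTrue())
}
```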
Signed-off-by: Talor Itzhak <titzhak@redhat.com>
The tested pods have a somewhat lax spec wrt security,
hence a restricted podSecurity namespace won't allow running those pods.
In the topology-updater tests, the topology-updater pod
needs to run its container as root,
so change the namespace podSecurity from restricted to privileged.
In the node-feature-discovery tests, we don't need root access,
so add the required security context configuration instead.
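The namespace-level knob in question is the standard Pod Security admission label; a tiny sketch (the helper itself is illustrative):
```
package e2e

import corev1 "k8s.io/api/core/v1"

// setPodSecurityLevel labels a namespace for Pod Security admission:
// "privileged" for the topology-updater tests (root container),
// "restricted" for the plain node-feature-discovery tests.
func setPodSecurityLevel(ns *corev1.Namespace, level string) {
	if ns.Labels == nil {
		ns.Labels = map[string]string{}
	}
	ns.Labels["pod-security.kubernetes.io/enforce"] = level
}
```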
Signed-off-by: Talor Itzhak <titzhak@redhat.com>
Error strings should not be capitalized (ST1005) & remove the
redundancy from array, slice or map composite literals.
Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
Add tests covering the basic functionality of NodeFeatureRule objects,
covering different feature types ("flag features", "attribute features"
and "instance features") as well as backreferencing (using the output of
previously run rules) and templating. The test relies on the "fake"
feature source and its default configuration.
We need this fix https://github.com/kubernetes/kubernetes/pull/110875
to have reliable tests, but until we can bump the k/k deps to 1.25+
we can't consume it.
So borrow it from the k/k repo for the time being.
Signed-off-by: Francesco Romani <fromani@redhat.com>
In some cases (CI) it is useful to run the NFD e2e tests using
ephemeral clusters. To save time and bandwidth, it is also useful
to prime the ephemeral cluster with the images under test.
In these circumstances there is no risk of running a stale image,
and having an `Always` PullPolicy hardcoded actually defeats
the whole exercise.
So we add a new option, disabled by default, to make the e2e
manifest use the `IfNotPresent` pull policy, to effectively
cover this use case.
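A small sketch of what such an opt-in mutator could look like, reusing the pod spec option pattern from earlier (the name and the wiring to the new flag are assumptions):
```
package testpod

import corev1 "k8s.io/api/core/v1"

// SpecWithContainerImagePullPolicy sets the pull policy of all containers,
// e.g. corev1.PullIfNotPresent when the cluster is pre-primed with images.
func SpecWithContainerImagePullPolicy(policy corev1.PullPolicy) func(spec *corev1.PodSpec) {
	return func(spec *corev1.PodSpec) {
		for i := range spec.Containers {
			spec.Containers[i].ImagePullPolicy = policy
		}
	}
}
```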
Signed-off-by: Francesco Romani <fromani@redhat.com>
Change the part of the e2e-test configuration that contains
node-specific expected labels and annotations to a list, instead of a
map. This makes the parsing order deterministic and makes it possible to
e.g. have a default at the end of the list that captures "all the rest".
The test was broken in two ways: firstly, the annotation was not checked at
all because the name of the node where nfd-master is running was not
set. Secondly, the annotation prefix was used incorrectly.
Lift the restriction of running custom rule tests on a non-master node. Try to
find one, but do not fail if none is found. This makes the end-to-end tests
runnable on single-node clusters such as simple minikube deployments.
- This patch allows exposing Resource Hardware Topology information
through CRDs in Node Feature Discovery.
- In order to do this we introduce another software component called
nfd-topology-updater in addition to the already existing software
components nfd-master and nfd-worker.
- nfd-master was enhanced to communicate with nfd-topology-updater
over gRPC, followed by the creation of CRs corresponding to the nodes
in the cluster, exposing the resource hardware topology information
of those nodes.
- Pin the kubernetes dependency to one that includes the pod resources implementation
- This code is responsible for obtaining hardware information from the system
as well as pod resource information from the Pod Resource API in order to
determine the allocatable resource information for each NUMA zone. This
information, along with the costs for NUMA zones (obtained by reading NUMA distances),
is gathered by nfd-topology-updater running on all the nodes
of the cluster and propagated to the master in order to populate
that information in the CRs corresponding to the nodes.
- We use GHW facilities for obtaining system information like CPUs, topology,
NUMA distances etc.
- This also includes updates made to Makefile and Dockerfile and Manifests for
deploying nfd-topology-updater.
- This patch includes unit tests
- As part of the Topology Aware Scheduling work, this patch captures
the configured Topology manager scope in addition to the Topology manager policy.
Based on the value of both attributes a single string will be populated to the CRD.
The string value will be one of the following: {SingleNUMANodeContainerLevel,
SingleNUMANodePodLevel, BestEffort, Restricted, None}
Co-Authored-by: Artyom Lukianov <alukiano@redhat.com>
Co-Authored-by: Francesco Romani <fromani@redhat.com>
Co-Authored-by: Talor Itzhak <titzhak@redhat.com>
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
Mount /usr/lib and /usr/src as /host-usr/lib and /host-usr/src inside the pod
to allow NFD to search for the kernel configuration file inside /usr.
This solves the problem of the kernel config file not being present in /boot
on s390x RHCOS.
Signed-off-by: Jan Schintag <jan.schintag@de.ibm.com>
There are cases when the only available metadata for discovering
features is the node's name. The "nodename" rule extends the custom
source and matches when the node's name matches one of the given
nodename regexp patterns.
It is also possible now to set an optional "value" on custom rules,
which overrides the default "true" label value in case the rule matches.
In order to allow more dynamic configurations without having to modify
the complete worker configuration, custom rules are additionally read
from a "custom.d" directory now. Typically that directory will be filled
by mounting one or more ConfigMaps.
Signed-off-by: Marc Sluiter <msluiter@redhat.com>
This can be used to help run multiple parallel NFD deployments in
the same cluster. The flag changes the node annotation namespace to
<instance>.nfd.node.kubernetes.io, allowing different nfd-master instances
to store metadata in separate annotations.
For historical reasons the labels in the default nfd namespace have been
internally represented without the namespace part. I.e. instead of
"feature.node.kubernetes.io/foo" we just use "foo". NFD worker uses this
representation, too, both internally and over the gRPC requests. The
same scheme has been used for annotations.
This patch changes NFD master to use fully namespaced label and
annotation names internally. This hopefully makes the code a bit more
understandable. It also addresses some corner cases making the handling
of label names consistent, making it possible to use both "truncated"
and fully namespaced names over the gRPC interface (and in the
annotations).
This patch fixes a typo in a file name
Updates default image on e2e tests
Adds an extra file that should be ignored to gitignore
Use master instead of versioned tag for default test image
Interpret node names in the e2e test config as regexps instead of plain
strings. This makes it a lot easier to maintain a config file where
multiple nodes share configuration.
Implement an end-to-end test with all feature sources enabled. The new
test runs nfd-worker as a daemonset on all (schedulable) nodes of the
test cluster, which makes it possible to cover a wide range of features,
assuming the test cluster is heterogeneous, containing nodes with varying
system configurations.
The features available depend on the node(s) the e2e tests are run on.
Thus, some runtime parameterization of the tests is needed. The patch
adds a new command line test flag 'nfd.e2e-config' that is used to
specify the per-node feature labels and annotations that are expected to
be present in the cluster. An example configuration file is provided
with the patch. The pod spec of the nfd-worker deployment is changed to
better correspond to the default deployment and thus enable wider feature
discovery. This means using hostNetwork and adding mounts for /sys, /boot
and /etc/os-release.
The patch changes node object management so that all nfd-related labels
are removed after each test (not just the ones the test is expected to
add). Also, all nfd-related annotations are now removed.
Tests that nfd master-worker communication works and that the worker is
able to label the node with the labels from the 'fake' source.
An example of running the test suite with a custom image with user's
kubeconfig:
$ go test ./test/e2e/ -args -nfd.repo=<image-repo> -nfd.tag=<image-tag> \
-kubeconfig=$HOME/.kube/config
Partly based on some previous work done by Balaji Subramaniam.
Patch the (Kubernetes) e2e wireframe introduced in the previous commit with some minor
modifications, dropping some bits in order to simplify the code.
Also adds a dummy test stub for node feature discovery.
Also, adds a new method WaitForReady() to NfdMaster.
In practice, this quite widely tests nfd-master, too, as the tests
create an instance of NfdMaster and verify that the communication
between master and worker works.