Don't require features to be specified. The creator may only want to
create labels, or only some types of features, and should not need to
specify empty structs for the unused fields.
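Schematically, this means both spec fields become optional, roughly as in
the sketch below (illustrative shapes, not the verbatim NFD type
definitions):

package v1alpha1

// Both fields optional: a NodeFeature object that only creates labels
// can omit "features" entirely, and vice versa.
type NodeFeatureSpec struct {
	// Features are the raw features used for NodeFeatureRule matching.
	// +optional
	Features Features `json:"features,omitempty"`

	// Labels are the feature labels to be created.
	// +optional
	Labels map[string]string `json:"labels,omitempty"`
}

// Features stands in here for the flags/attributes/instances
// containers of the real API.
type Features struct{}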
Use the "single-dash" version of nfd command line flags in deployment
files and e2e-tests. No impact in functionality, just aligns with
documentation and other parts of the codebase.
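The reason this is purely cosmetic: Go's standard flag package, which the
nfd binaries' command lines are built on, accepts one or two dashes
interchangeably, as this minimal example shows:

package main

import (
	"flag"
	"fmt"
)

func main() {
	// "-enable-nodefeature-api" and "--enable-nodefeature-api" parse
	// identically, so switching the deployment files to single-dash
	// flags changes nothing at runtime.
	enable := flag.Bool("enable-nodefeature-api", false, "enable the NodeFeature API")
	flag.Parse()
	fmt.Println("enable-nodefeature-api:", *enable)
}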
Add an initial test set for the NodeFeature API. This is done simply by
running a second pass of the tests but with -enable-nodefeature-api
(i.e. NodeFeature API enabled and gRPC disabled). This should give basic
confidence that the API actually works and form a basis for further
improvements in testing the new CRD API.
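Schematically, the suite now runs the same specs once per API mode, along
the lines of the sketch below (the loop and the runTestPass helper are
illustrative, not the actual e2e wiring):

package main

import "fmt"

// runTestPass is a hypothetical stand-in for deploying nfd-master and
// nfd-worker with the given extra flags and running the test specs.
func runTestPass(extraArgs []string) {
	fmt.Println("running e2e pass with extra args:", extraArgs)
}

func main() {
	for _, extraArgs := range [][]string{
		nil,                         // default mode: gRPC enabled
		{"-enable-nodefeature-api"}, // NodeFeature API enabled, gRPC disabled
	} {
		runTestPass(extraArgs)
	}
}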
Correctly handle the case where no NodeFeature objects exist for a
certain node (and the NodeFeature API has been enabled with
-enable-nodefeature-api). In this case all the labels should be removed.
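The gist of the handling is to treat an empty result from the lister as
an empty label set rather than a no-op, roughly as below (all helpers are
hypothetical stand-ins for nfd-master internals):

package main

import "fmt"

// Hypothetical stand-ins for the lister, merge and update steps.
func listNodeFeatures(node string) ([]string, error) { return nil, nil }
func mergeAndProcess(objs []string) map[string]string {
	return map[string]string{"feature.node.kubernetes.io/example": "true"}
}
func updateNodeLabels(node string, labels map[string]string) error {
	fmt.Printf("reconciling %s with labels %v\n", node, labels)
	return nil
}

// The fix: an empty list of NodeFeature objects yields an empty label
// set, which prunes all previously created feature labels, instead of
// skipping the node entirely.
func nfdAPIUpdateNode(node string) error {
	objs, err := listNodeFeatures(node)
	if err != nil {
		return err
	}
	labels := map[string]string{}
	if len(objs) > 0 {
		labels = mergeAndProcess(objs)
	}
	return updateNodeLabels(node, labels)
}

func main() {
	_ = nfdAPIUpdateNode("node-n") // prints an empty label map
}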
Drop the pod-security.kubernetes.io/enforce label from the test
namespace, i.e. remove pod security admission enforcement. NFD-worker
uses restricted host mounts (/sys etc.), so pod creation fails even in
privileged mode if pod security admission enforcement is enabled.
Document the usage of the NodeFeature CRD API. Also re-organize the
documentation a bit, moving the description of the NodeFeatureRule
controller from the customization guide to the nfd-master usage page.
We want to always update all nodes at startup. Without this patch we
don't get any update event from the controller if no NodeFeature or
NodeFeatureRule objects exist in the cluster. Thus all nodes would stay
untouched whereas we really want to remove all labels from all nodes in
this case.
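A minimal sketch of the idea (field and method names are illustrative):
fire one unconditional "update all nodes" event once the informer caches
have synced, even when they are empty:

package main

import "fmt"

// Illustrative controller stub: queue one unconditional "update all
// nodes" event after cache sync, even if no NodeFeature or
// NodeFeatureRule objects exist and thus no Add events arrive.
type nfdController struct {
	updateAllNodesChan chan struct{}
}

func (c *nfdController) start(cachesSynced func() bool) {
	if !cachesSynced() {
		return
	}
	c.updateAllNodesChan <- struct{}{} // unconditional initial update
}

func main() {
	c := &nfdController{updateAllNodesChan: make(chan struct{}, 1)}
	c.start(func() bool { return true })
	<-c.updateAllNodesChan
	fmt.Println("initial update-all-nodes event queued")
}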
Only generate CRDs once at the beginning of the test run. Use the "Ordered"
option for the test container so that we can use ginkgo.BeforeAll to
run one-time setup before the first test. Changing from unordered to
ordered shouldn't make a big difference here.
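The ginkgo v2 pattern looks roughly like this (a sketch; the actual suite
has more setup):

package e2e_test

import (
	. "github.com/onsi/ginkgo/v2"
)

// The Ordered decorator makes the container's specs run sequentially,
// which is what enables BeforeAll: setup that runs exactly once,
// before the first spec only.
var _ = Describe("NFD e2e", Ordered, func() {
	BeforeAll(func() {
		// e.g. create the CRDs once for the whole container
	})

	It("runs the gRPC test pass", func() { /* ... */ })
	It("runs the NodeFeature API test pass", func() { /* ... */ })
})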
Add a cleanup function to remove stale NodeFeatureRule objects that are
cluster-scoped and not deleted with the test namespace.
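A cleanup helper along these lines (the clientset import path and method
layout are assumptions, shown only to illustrate the idea):

package e2e_test

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	nfdclient "sigs.k8s.io/node-feature-discovery/pkg/generated/clientset/versioned"
)

// NodeFeatureRule is cluster-scoped, so deleting the test namespace
// does not remove stale objects; delete them explicitly instead.
func cleanupNodeFeatureRules(ctx context.Context, cli nfdclient.Interface) error {
	return cli.NfdV1alpha1().NodeFeatureRules().DeleteCollection(
		ctx, metav1.DeleteOptions{}, metav1.ListOptions{})
}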
Use the RuntimeDefault seccomp profile in the nfd-worker and
topology-updater pod specs, similar to nfd-master.
Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
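Expressed with the Kubernetes Go API types for illustration, the added
securityContext corresponds to:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Equivalent of adding to the container spec:
	//   securityContext:
	//     seccompProfile:
	//       type: RuntimeDefault
	sc := corev1.SecurityContext{
		SeccompProfile: &corev1.SeccompProfile{
			Type: corev1.SeccompProfileTypeRuntimeDefault,
		},
	}
	fmt.Println("seccomp profile:", sc.SeccompProfile.Type)
}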
Implement a naive rate limiter for node update events originating from
the nfd API. We might get a ton of events in a short interval. The
simplest example is startup, when we get a separate Add event for every
NodeFeature and NodeFeatureRule object. Without rate limiting, we would
run "update all nodes" separately for each NodeFeatureRule object, plus
"update node X" separately for each NodeFeature object targeting node X.
This is a huge amount of wasted work because, in principle, running
"update all nodes" once should be enough.
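A minimal sketch of the coalescing idea (not the actual nfd-master code):
drain whatever arrives within an interval and satisfy the whole burst
with a single update pass:

package main

import (
	"fmt"
	"time"
)

// rateLimitedUpdater coalesces bursts of events: after the first
// event it keeps draining for one interval, then runs update() once
// for the whole burst.
func rateLimitedUpdater(events <-chan struct{}, interval time.Duration, update func()) {
	for range events {
		timeout := time.After(interval)
	drain:
		for {
			select {
			case <-events:
			case <-timeout:
				break drain
			}
		}
		update()
	}
}

func main() {
	events := make(chan struct{}, 100)
	go rateLimitedUpdater(events, 50*time.Millisecond, func() {
		fmt.Println("update all nodes")
	})
	for i := 0; i < 100; i++ { // a startup-like burst of 100 events...
		events <- struct{}{}
	}
	time.Sleep(200 * time.Millisecond) // ...coalesces into one update pass
}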
Implement handling of multiple NodeFeature objects by merging all
objects (targeting a certain node) into one before processing the data.
This patch implements MergeInto() methods for all required data types.
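The pattern is straightforward for the map-based types; a sketch for an
attribute feature set (simplified from the real API types):

package main

import "fmt"

// AttributeFeatureSet is a simplified stand-in for the corresponding
// v1alpha1 API type: a set of key/value attributes.
type AttributeFeatureSet struct {
	Elements map[string]string
}

// MergeInto copies the receiver's elements into dst, overriding
// duplicate keys so that later NodeFeature objects win.
func (in *AttributeFeatureSet) MergeInto(dst *AttributeFeatureSet) {
	if dst.Elements == nil {
		dst.Elements = make(map[string]string, len(in.Elements))
	}
	for k, v := range in.Elements {
		dst.Elements[k] = v
	}
}

func main() {
	a := AttributeFeatureSet{Elements: map[string]string{"feature-y": "foo"}}
	b := AttributeFeatureSet{Elements: map[string]string{"feature-z": "123"}}
	merged := AttributeFeatureSet{}
	a.MergeInto(&merged)
	b.MergeInto(&merged)
	fmt.Println(merged.Elements) // map[feature-y:foo feature-z:123]
}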
With support for multiple NodeFeature objects per node, the "nfd api
workflow" can be easily demonstrated and tested from the command line.
Creating the following object (assuming node-n exists in the cluster):
apiVersion: nfd.k8s-sigs.io/v1alpha1
kind: NodeFeature
metadata:
  labels:
    nfd.node.kubernetes.io/node-name: node-n
  name: my-features-for-node-n
spec:
  # Features for NodeFeatureRule matching
  features:
    flags:
      vendor.domain-a:
        elements:
          feature-x: {}
    attributes:
      vendor.domain-b:
        elements:
          feature-y: "foo"
          feature-z: "123"
    instances:
      vendor.domain-c:
        elements:
        - attributes:
            name: "elem-1"
            vendor: "acme"
        - attributes:
            name: "elem-2"
            vendor: "acme"
  # Labels to be created
  labels:
    vendor-feature.enabled: "true"
    vendor-setting.value: "100"
will create two feature labels:
feature.node.kubernetes.io/vendor-feature.enabled: "true"
feature.node.kubernetes.io/vendor-setting.value: "100"
In addition, it will advertise hidden/raw features that can be used for
custom rules in NodeFeatureRule objects. Now, creating a NodeFeatureRule
object:
apiVersion: nfd.k8s-sigs.io/v1alpha1
kind: NodeFeatureRule
metadata:
  name: my-rule
spec:
  rules:
  - name: "my feature rule"
    labels:
      "my-feature": "true"
    matchFeatures:
    - feature: vendor.domain-a
      matchExpressions:
        feature-x: {op: Exists}
    - feature: vendor.domain-c
      matchExpressions:
        vendor: {op: In, value: ["acme"]}
will match the features in the NodeFeature object above and cause one
more label to be created:
feature.node.kubernetes.io/my-feature: "true"