Point to the latest release in the README, and point out that a
user-built custom image is required to run the latest development
version. Update the deployment instructions to reflect the need to
specify the container image when using the deployment spec template(s).
Also, update the Job deployment script to set a user-defined container
image.
Remove 'cpuid', 'pstate' and 'rdt' feature sources and move their
functionality under the 'cpu' source. The goal is to have a more
systematic organization of feature sources and labels. After this change
there is essentially one source per type of hardware, one for the
kernel, and one for userspace software.
The related feature labels change correspondingly; the new labels are:
feature.node.k8s.io/cpu-cpuid.<cpuid flag>
feature.node.k8s.io/cpu-pstate.turbo
feature.node.k8s.io/cpu-rdt.<rdt feature>
Also, add a new WaitForReady() method to NfdMaster.
In practice, this also exercises nfd-master quite broadly, as the tests
create an instance of NfdMaster and verify that the communication
between master and worker works.
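A minimal sketch of how a test could use the new method; the
constructor, the Args type, the import path and the WaitForReady()
signature below are assumptions for illustration, not the exact NFD API:

    package nfdmaster_test

    import (
        "testing"
        "time"

        master "sigs.k8s.io/node-feature-discovery/pkg/nfd-master" // assumed import path
    )

    func TestMasterWorkerCommunication(t *testing.T) {
        // Create and start an in-process nfd-master instance (hypothetical API).
        m, err := master.NewNfdMaster(&master.Args{Port: 8080})
        if err != nil {
            t.Fatal(err)
        }
        go m.Run()
        defer m.Stop()

        // WaitForReady() blocks until the gRPC server accepts connections
        // (or the timeout expires), so the worker side of the test does
        // not race against server startup.
        if !m.WaitForReady(5 * time.Second) {
            t.Fatal("nfd-master did not become ready in time")
        }

        // ... run nfd-worker against the master and verify the resulting labels ...
    }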
Move most of the code under cmd/nfd-master and cmd/nfd-worker into new
packages pkg/nfd-master and pkg/nfd-worker, respectively. This makes it
easier to extend unit tests to the "main" functions.
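To illustrate the intended structure (the import path, the ParseArgs
helper and the constructor name below are assumptions), the binaries
under cmd/ become thin wrappers around the new packages, which unit
tests can then call into directly:

    // cmd/nfd-worker/main.go -- illustrative sketch only
    package main

    import (
        "log"
        "os"

        worker "sigs.k8s.io/node-feature-discovery/pkg/nfd-worker" // assumed import path
    )

    func main() {
        // All real logic lives in pkg/nfd-worker; main only parses the
        // command line and delegates, so tests can exercise the package
        // without running the binary.
        args := worker.ParseArgs(os.Args[1:]) // hypothetical helper
        w, err := worker.NewNfdWorker(args)   // hypothetical constructor
        if err != nil {
            log.Fatalf("failed to create nfd-worker: %v", err)
        }
        if err := w.Run(); err != nil {
            log.Fatalf("nfd-worker failed: %v", err)
        }
    }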
Change the structure and naming of the make variables that control the
container image name/tag that gets created. Default values and behavior
stay the same, but this change aims to make it easier to customize the
build from the command line.
Also, document all the relevant make variables in the README.
Move the documentation of the feature detection hooks (i.e. 'local'
feature source) after all the other feature sources. It is more logical
to document custom, user-specific functionality after the built-in
features. Also, adjust the title slightly.
Detect whether Intel SST-BF (Speed Select Technology - Base Frequency)
has been enabled.
Add one new feature label:
feature.node.kubernetes.io/cpu-power.sst_bf.enabled=true
Based on a patch from kuralamudhan.ramakrishnan@intel.com
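A minimal sketch of one way such a check could work, assuming SST-BF
shows up as differing per-CPU base_frequency values under sysfs; the
sysfs attribute and the heuristic actually used by NFD may differ:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strconv"
        "strings"
    )

    // sstBfEnabled reports whether per-CPU base frequencies differ, one
    // possible indication that SST-BF has been enabled (assumption).
    func sstBfEnabled() (bool, error) {
        paths, err := filepath.Glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/base_frequency")
        if err != nil || len(paths) == 0 {
            return false, err
        }
        freqs := map[int]struct{}{}
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil {
                return false, err
            }
            f, err := strconv.Atoi(strings.TrimSpace(string(data)))
            if err != nil {
                return false, err
            }
            freqs[f] = struct{}{}
        }
        // More than one distinct base frequency -> some cores are "high priority".
        return len(freqs) > 1, nil
    }

    func main() {
        enabled, err := sstBfEnabled()
        fmt.Println(enabled, err)
    }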
Make NodeName-based authorization of the workers optional (off by
default). This makes it possible for all nfd-worker pods in the cluster
to use one shared secret, making NFD deployment much easier. However,
this also makes it possible for an nfd-worker to label nodes other than
the one it is running on.
Add a command line option for overriding the Common Name (CN) expected from
the nfd-master TLS certificate. This can be especially handy in
testing/development.
Implement TLS client certificate authentication. It is enabled by
specifying --ca-file, --key-file and --cert-file on both the nfd-master
and nfd-worker side. When enabled, nfd-master verifies that the client
(worker) presents a valid certificate signed by the root certificate
(--ca-file). In addition, nfd-master performs authorization based on the Common Name
(CN) of the client certificate: CN must match the node name specified in
the labeling request. This ensures (assuming that the worker
certificates are correctly deployed) that nfd-worker is only able to label
the node it is running on, i.e. prevents it from labeling other nodes.
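A sketch of how this check can be implemented on the server side with
gRPC: pull the verified client certificate out of the request context
and compare its CN against the node name carried in the labeling
request (the function name and error texts are illustrative):

    package nfdmaster

    import (
        "context"
        "fmt"

        "google.golang.org/grpc/credentials"
        "google.golang.org/grpc/peer"
    )

    // verifyNodeName checks that the CN of the verified client certificate
    // matches the node name given in the labeling request.
    func verifyNodeName(ctx context.Context, nodeName string) error {
        p, ok := peer.FromContext(ctx)
        if !ok {
            return fmt.Errorf("failed to get peer information from request context")
        }
        tlsInfo, ok := p.AuthInfo.(credentials.TLSInfo)
        if !ok {
            return fmt.Errorf("client did not authenticate with TLS")
        }
        if len(tlsInfo.State.VerifiedChains) == 0 || len(tlsInfo.State.VerifiedChains[0]) == 0 {
            return fmt.Errorf("no verified client certificate presented")
        }
        cn := tlsInfo.State.VerifiedChains[0][0].Subject.CommonName
        if cn != nodeName {
            return fmt.Errorf("certificate CN %q does not match node name %q", cn, nodeName)
        }
        return nil
    }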
Add support for TLS authentication. When enabled, nfd-worker verifies
that nfd-master has a valid certificate, i.e. one that is signed by the
given root certificate and whose Common Name (CN) matches the DNS name
of the nfd-master service being used. TLS authentication is enabled by
specifying --key-file and --cert-file on nfd-master, and --ca-file on
nfd-worker.
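A sketch of the corresponding client-side setup in Go, assuming the
flags map onto a tls.Config roughly like this; the service name, port
and file path are illustrative:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "os"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials"
    )

    func main() {
        // Root certificate given to nfd-worker via --ca-file (path is illustrative).
        caCert, err := os.ReadFile("/etc/nfd/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caCert) {
            log.Fatal("failed to add CA certificate to pool")
        }

        creds := credentials.NewTLS(&tls.Config{
            RootCAs: pool,
            // ServerName must match the CN/DNS name in nfd-master's certificate.
            // Overriding it here corresponds to the CN-override option described earlier.
            ServerName: "nfd-master.node-feature-discovery.svc",
        })

        conn, err := grpc.Dial("nfd-master:8080", grpc.WithTransportCredentials(creds))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        // ... create the labeler client and send the labeling request over conn ...
    }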
Refactor NFD into a simple server-client system. Labeling is now done by
a separate 'nfd-master' server. It is a simple service with a small
codebase, designed for easy isolation. The feature discovery part is
implemented in an 'nfd-worker' client which sends labeling requests to
nfd-master and thus requires no access/permissions to the Kubernetes
API itself.
Client-server communication is implemented by using gRPC. The protocol
currently consists of only one request, i.e. the labeling request.
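A sketch of what the worker-side call looks like with generated gRPC
stubs; the client type, the request fields and the import path are
illustrative assumptions:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"

        pb "sigs.k8s.io/node-feature-discovery/pkg/labeler" // assumed import path of the generated stubs
    )

    func main() {
        // Plain connection for brevity; see the TLS setup sketched below.
        conn, err := grpc.Dial("nfd-master:8080", grpc.WithInsecure())
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // The protocol's single request: the node name plus the discovered
        // feature labels (type and field names are illustrative).
        client := pb.NewLabelerClient(conn)
        _, err = client.SetLabels(ctx, &pb.SetLabelsRequest{
            NodeName: "worker-node-1",
            Labels:   map[string]string{"feature.node.kubernetes.io/cpu-cpuid.AVX512F": "true"},
        })
        if err != nil {
            log.Fatalf("labeling request failed: %v", err)
        }
    }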
The spec templates are converted to the new scheme. The nfd-master
server can be deployed using nfd-master.yaml.template, which now also
contains the necessary RBAC configuration. NFD workers can be deployed
using either nfd-worker-daemonset.yaml.template or
nfd-worker-job.yaml.template (the latter most easily used with the
label-nodes.sh script).
Only nfd-worker currently supports a config file and config options. The (default)
NFD config file is renamed to nfd-worker.conf.