Remove the 'selinux' feature source and move the functionality under the
'kernel' feature source. The selinux feature label changes accordingly to
feature.node.kubernetes.io/kernel-selinux.enabled
The selinux feature source was rather narrow in scope, and the sole
feature it advertised naturally falls under the kernel feature source.
The cpu feature source currently detects only one feature, i.e. hardware
multithreading (such as Intel Hyper-Threading Technology). The
corresponding feature label is:
feature.node.kubernetes.io/cpu-hardware_multithreading=true
However, this (architecture/platform-dependent) feature is not detected
directly, and the heuristic can be misled. Detection works by checking
the thread siblings of each logical (and online) cpu in the system. If
any cpu has thread siblings, the feature label is set to true. Thus,
hardware multithreading can effectively be disabled, e.g. by taking all
sibling cpus offline (even if the technology is enabled in hardware).
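As a manual illustration (not necessarily the exact code path used), the
sibling information is visible in sysfs; a list with more than one cpu
indicates hardware multithreading:
$ cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
0,4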
Implement a new 'system' feature source. It detects OS release
information from the os-release file, expected to be found at
/host-etc/os-release. It currently creates two labels (provided that the
corresponding fields are present in the os-release file), with example
values:
feature.node.kubernetes.io/system-os_release.ID=opensuse
feature.node.kubernetes.io/system-os_release.VERSION_ID=42.3
Also, update the template spec to mount the host's /etc/os-release file
inside the container.
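For reference, the labels above would result from an os-release file
with entries like these (illustrative values):
$ grep -E '^(ID|VERSION_ID)=' /etc/os-release
ID=opensuse
VERSION_ID="42.3"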
Change links in README.md and RELEASE.md to point to the new repo
location under kubernetes-sigs. Also, remove some outdated references to
the Kubernetes Incubator project.
Add detection of kernel configuration (kconfig) options under the kernel
feature source. This implementation only detects the kconfig options
"NO_HZ", "NO_HZ_IDLE", "NO_HZ_FULL" and "PREEMPT". The corresponding
node labels will be
node.alpha.kubernetes-incubator.io/nfd-kernel-config.<option name>
Currently, only bool and tristate (i.e. '=y' or '=m') kernel config
options are supported. Other kconfig types (e.g. string or int) are
simply ignored. If a kconfig flag is set to '=y' or '=m', the
corresponding node label will be present and its value will be 'true'.
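As a hand-run illustration (the exact location nfd reads the kernel
config from is not specified here), the flags can be checked from the
running kernel, e.g.:
$ zcat /proc/config.gz | grep -E '^CONFIG_(NO_HZ|NO_HZ_IDLE|NO_HZ_FULL|PREEMPT)='
CONFIG_NO_HZ=y
CONFIG_NO_HZ_IDLE=y
which would result in labels like
node.alpha.kubernetes-incubator.io/nfd-kernel-config.NO_HZ=true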
Docker v17.07 and later supports configuring proxy servers via the
docker client configuration (https://docs.docker.com/network/proxy/).
This is better than using --build-arg for passing the proxy settings to
the build environment. Previously, we could end up with empty variable
values, which could cause the build to fail. E.g. if you had
http_proxy=<myproxy> defined but HTTP_PROXY unset in the host
environment, you ended up with http_proxy=<myproxy> and HTTP_PROXY=""
(i.e. an empty value) inside the build, which caused problems in some
cases.
In addition, this makes builds via make and directly with docker more
similar.
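For reference, a minimal client configuration of this kind (proxy
addresses below are placeholders) looks like:
$ cat ~/.docker/config.json
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:8080",
      "httpsProxy": "http://proxy.example.com:8080",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}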
Make it possible for the hooks to fully define the label name to be used
(i.e. without the '<hook name>-' prefix) by prefixing the printed
feature names with a slash ('/'). This makes it possible to e.g.
override labels created by other sources.
For example, having the following output from a hook:
/override_source-override_bool
/override_source-override_value=my value
will translate into the following feature labels:
feature.node.kubernetes.io/override_source-override_bool = true
feature.node.kubernetes.io/override_source-override_value = my value
Make the feature detector hooks, run by the 'local' feature source,
support non-binary label values. Hooks can advertise non-binary values
by using the <name>=<value> format.
For example, /etc/kubernetes/node-feature-discovery/source.d/myhook
having the following stdout:
LABEL_1
LABEL_2=foobar
Would translate into the following labels:
feature.node.kubernetes.io/myhook-LABEL_1 = true
feature.node.kubernetes.io/myhook-LABEL_2 = foobar
Implement a new feature source named 'local' whose only purpose is to
run feature source hooks found under
/etc/kubernetes/node-feature-discovery/source.d/. It tries to execute
all files found in that directory, in alphabetical order.
This feature source provides users with a mechanism to implement custom
feature sources in a pluggable way, without modifying the nfd source
code or Docker images.
The hooks are supposed to print all discovered features to stdout, one
feature per line. The stdout output is used in the node label as is.
The full node label name will have the following format:
feature.node.kubernetes.io/<hook name>-<feature name>
Stderr from the hooks is propagated to the nfd log.
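As a sketch, a hypothetical hook (names here are purely illustrative)
could look like:
$ cat /etc/kubernetes/node-feature-discovery/source.d/example-hook
#!/bin/sh
echo "MY_FEATURE"
which would result in the node label:
feature.node.kubernetes.io/example-hook-MY_FEATURE = true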
Drop the NFD-specific prefix from the feature label names. The prefix is
no longer needed, as annotations are used to keep track of the labels
managed by nfd. This also makes the feature labels more generic, with no
traces of the NFD project name, and makes way for "standardizing" the
node feature labels in a larger architectural scope, should that ever be
pursued.
Add a new 'nfd.node.kubernetes.io/feature-labels' annotation to store
all the feature labels added by NFD. NFD uses this annotation to clean
up old labels when re-labeling a node.
In this scheme NFD does not need to rely on an NFD-specific prefix in
the feature label names, and is always able to reliably clean up old
labels.
Remove labels in the old, deprecated node.alpha.kubernetes-incubator.io
namespace. We want to clean up the deprecated labels when deploying a
new version of NFD.
Add a new 'nfd.node.kubernetes.io/version' annotation for advertising
the version of NFD that created the feature labels on the node. This
introduces the new 'nfd.node.kubernetes.io' namespace, which is supposed
to be used by all future NFD annotations. The old
'node.alpha.kubernetes-incubator.io/node-feature-discovery.version'
label is dropped in favor of the new annotation.
Annotations are better suited for this kind of metadata: the NFD version
should not be used for pod scheduling, especially because all the nodes
in the cluster should normally run the same version of NFD.
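For illustration, the annotations created by NFD can be inspected with
kubectl (values omitted here):
$ kubectl get node <node-name> -o jsonpath='{.metadata.annotations}'
The output should include the nfd.node.kubernetes.io/version and
nfd.node.kubernetes.io/feature-labels keys.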
This commit fixes the broken demo video link (from png to svg image) and
updates the link to the `node-feature-discovery-job.yaml.template` file.
Signed-off-by: Obed N Munoz <obed.n.munoz@intel.com>
Add a new 'kernel' feature source, detecting the kernel version. The
kernel version is split into multiple labels in order to make this more
usable in label selectors. Kernel version in the format X.Y.Z-patch will
be presented as
node.alpha.kubernetes-incubator.io/nfd-kernel-version.full=X.Y.Z-patch
node.alpha.kubernetes-incubator.io/nfd-kernel-version.major=X
node.alpha.kubernetes-incubator.io/nfd-kernel-version.minor=Y
node.alpha.kubernetes-incubator.io/nfd-kernel-version.revision=Z
The '.full' label will always be available. The other labels are created
only if the corresponding components can be parsed from the kernel
version number.
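For example, a kernel version string of 4.19.0-6-amd64 (an illustrative
value) would yield:
node.alpha.kubernetes-incubator.io/nfd-kernel-version.full=4.19.0-6-amd64
node.alpha.kubernetes-incubator.io/nfd-kernel-version.major=4
node.alpha.kubernetes-incubator.io/nfd-kernel-version.minor=19
node.alpha.kubernetes-incubator.io/nfd-kernel-version.revision=0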
Add a new config option for specifying the device label, i.e. the
<device label> part in
node.alpha.kubernetes-incubator.io/nfd-pci-<device label>.present
The option is a list of field names:
"class" PCI device class
"vendor" Vendor ID
"device" Device ID
"subsystem_vendor" Subsystem vendor ID
"subsystem_device" Subsystem device ID
E.g. the following command line flag can be used to use all of the
above:
--options='{"sources": {"pci": {"deviceLabelFields": ["class", "vendor", "device", "subsystem_vendor", "subsystem_device"] } } }'
Users can now configure the list of PCI device classes to detect, either
via the configuration file or by using the --options command line flag.
An example of a command line flag to detect all network controllers
("main" class 0x02) and VGA display controllers ("main" class 0x03,
subclass 0x00) would be:
--options='{"sources": {"pci": {"deviceClassWhitelist": ["02", "0300"] } } }'
Add a new 'pci' feature source that detects the presence of PCI devices.
At the moment, it only advertises GPUs and accelerator cards, i.e.
device classes 0x03, 0x0b40 and 0x12.
The label format is:
node.alpha.kubernetes-incubator.io/nfd-pci-<device label>.present
where <device label> is composed of raw PCI IDs:
<class id>_<vendor id>
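For example, an NVIDIA GPU (device class 0x0300, vendor ID 0x10de) would
be advertised with a label like:
node.alpha.kubernetes-incubator.io/nfd-pci-0300_10de.present=true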
Implement new '--options' command line flag that can be used to specify
config options from command line. Options specified via this command
line flag will override those read from the config file. The same format
as in the config file must be used, that is, the flag value must be
valid YAML or JSON.
Support yaml/json based config file for nfd. This commit does not add
any actual consumers for the config file, yet.
By default, nfd tries to read
/etc/kubernetes/node-feature-discovery/node-feature-discovery.conf.
This can be changed by specifying the --config command line flag.
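As a sketch, a config file using the same option structure as the
--options examples above could look like (purely illustrative, as
nothing consumes it yet):
$ cat /etc/kubernetes/node-feature-discovery/node-feature-discovery.conf
sources:
  pci:
    deviceClassWhitelist: ["02", "0300"]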
Introduce a new scheme where features may have logical sub-components.
Rename sriov labels from the network source according to the new
pattern:
sriov -> sriov.capable
sriov-configured -> sriov.configured
Also, document this new labeling scheme in the README.
Signed-off-by: Markus Lehtonen <markus.lehtonen@intel.com>
Convert resource templates from json to yaml
Yaml is easier and less error-prone to modify by hand. It also allows
comments, which can be especially useful in the templates.
Use Debian stretch-slim as the base for the production image, as the
golang docker images use stretch as their base. This cuts the image size
further, down to about 75MB.
This reduces the size of the Docker image from ca. 1.2GB down to about
750MB.
Also, move unit tests from .travis.yml to the Dockerfile. The final
production image is no longer able to run unit tests, as the sources are
not included in it.
Make it possible to specify an image build tool other than docker - a
limitation is that the build tool must be compatible with docker files,
of course. This makes it possible to build an NFD image without the
Docker daemon, for example.
The image build command is specified in a makefile variable and can be
overridden from command line, for example:
$ make IMAGE_BUILD_CMD="buildah bud"
Thanks to Zvonko Kosic for suggesting this.
Counting nodes in Ready state was too fragile, matching
entries like:
Ready,SchedulingDisabled
NotReady
By requiring whitespace on both sides, we accept only a clean 'Ready'.
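A sketch of the stricter match, assuming the count is taken from
'kubectl get nodes' output (the actual check may be done elsewhere):
$ kubectl get nodes | grep -c ' Ready '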
* Arrange feature sources alphabetically
Just a cosmetic change, but a small readability improvement.
* Implement detection of IOMMU (#136)
Add a new feature source, i.e. 'iommu', which detects if an IOMMU is
present and enabled in the kernel. The new node label is
node.alpha.kubernetes-incubator.io/nfd-iommu-present
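As a manual illustration (not necessarily the exact path the source
inspects), an enabled IOMMU typically shows up as entries under sysfs:
$ ls /sys/class/iommu/
dmar0  dmar1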
A "newbie style" deployment attempt was made on a recent
cluster, and some added notes in README and in label-nodes.sh could help
someone who has just started with node-feature-discovery.
Also correct the description of label-nodes.sh, which does not deal with
unlabeled nodes as the README promised.
The intel-cmt-cat repo is now located under github.com/intel/; update
links accordingly, and correct some source file names referenced from
files under rdt-discovery/. Also update the reference to intel-cmt-cat
in the Dockerfile.
No functional changes.