In certain Prometheus Operator deployment scenarios it is desirable to
manage CRD creation outside of the operator. Likewise, it can be
desirable to scope the permissions of the Prometheus Operator so that it
does not have cluster-level access. This commit enables operation in
these situations by adding a flag to configure whether or not the
Prometheus Operator should try to create CRDs itself.
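A hedged sketch of the resulting invocation (the flag name `--manage-crds` is an assumption here, not confirmed by this commit; consult the operator's `--help` output for the real name):

```sh
# hypothetical: let an out-of-band process with cluster-level access
# install the CRDs, and run the operator with CRD creation disabled
operator --manage-crds=false
```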
As requested, this updates the resource specification to live directly in config.kubeStateMetrics.
It also clarifies the config variables. These names are what Google uses in some of their tooling.
(And a slight tweak to the way collectors are specified.)
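A sketch of what such a values layout might look like (the key names and resource figures are illustrative, not taken from the chart):

```yaml
# hypothetical values.yaml fragment: the resource specification sits
# directly under config.kubeStateMetrics rather than in a nested sub-key
config:
  kubeStateMetrics:
    collectors:
      - pods
      - deployments
    resources:
      requests:
        cpu: 100m
        memory: 150Mi
      limits:
        cpu: 100m
        memory: 150Mi
```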
* helm: Use CRDs for rules for operator 0.20.0+
Changed rules ConfigMaps to be PrometheusRule resources instead
Deprecated `additionalRulesConfigMapLabels` in favor of `additionalRulesLabels`
Fixes #1523, #1576, #1595
* helm: Rename configmap files to prometheusrule
* helm: Remove alert-rules labels from rules
Since rules are now sourced from CRDs, and rules can also be recording rules rather than alerts, the alert-rules labels no longer apply.
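With operator 0.20.0+, rules that previously lived in ConfigMaps can be expressed as PrometheusRule objects. A minimal sketch (names, labels, and expressions are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rules
  labels:
    # labels here are matched by the Prometheus ruleSelector,
    # replacing the old alert-rules ConfigMap label convention
    role: alert-rules
spec:
  groups:
    - name: example.rules
      rules:
        - alert: ExampleAlert
          expr: up == 0
          for: 5m
        # recording rules live in the same resource kind
        - record: job:up:count
          expr: count(up) by (job)
```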
* helm: Bump chart versions
As I work with kube-state-metrics in a large cluster, I found I needed to make some adjustments.
- Expose the collectors, allowing one to configure exclusions.
- Expose the addon_resizer parameters, facilitating resource adjustments
- Allow adjusting scrapeTimeout and scrapeInterval
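The three knobs above could look roughly like this in the chart values (key names and figures are illustrative, not the chart's actual schema):

```yaml
# hypothetical values fragment
kubeStateMetrics:
  collectors:
    pods: true
    secrets: false        # exclude high-cardinality collectors
  addonResizer:
    cpu: 100m
    memory: 150Mi
    extraCpu: 2m          # per-node increment applied by addon_resizer
    extraMemory: 2Mi
  scrapeInterval: 30s
  scrapeTimeout: 25s
```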
The e2e framework overrides the Prometheus config reloader argument of
the Prometheus Operator. Instead of overriding the correct argument, it
had been overriding the config reloader argument, resulting in two
specifications of the Prometheus config reloader argument, one shadowing
the other.
This has only now caused problems as previously v0.21.0 would win, which
is present on quay.io. With the new release (v0.22.0) it fails as the
Prometheus config reloader v0.22.0 is not yet present on quay.io.
This patch resolves the wrong override and thereby fixes the e2e tests.
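The operator takes two distinct reloader-related flags, which look roughly like this (image tags illustrative); the e2e framework needs to replace the first, not the second:

```sh
# sidecar image for Prometheus pods -- the one the e2e tests must pin
--prometheus-config-reloader=quay.io/coreos/prometheus-config-reloader:v0.22.0
# generic ConfigMap reloader image -- the one mistakenly overridden,
# which left --prometheus-config-reloader specified twice
--config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1
```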
The config map data limit seems to differ between environments. This
might result from different metadata sizes across environments. Adding
a big buffer fixes the problem.
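The headroom idea can be sketched as a simple guard (the numbers are illustrative, not the operator's actual limits):

```python
# Hypothetical sizing check: etcd caps object sizes at roughly 1 MiB, and
# ConfigMap metadata (name, labels, annotations) counts against that cap,
# so keep the data payload well below the hard limit.
MAX_OBJECT_SIZE = 1 * 1024 * 1024  # assumed etcd request size limit
BUFFER = 100 * 1024                # generous headroom for metadata

def fits_in_configmap(data: bytes) -> bool:
    """Return True if the payload leaves enough headroom for metadata."""
    return len(data) <= MAX_OBJECT_SIZE - BUFFER
```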
Similar to how `exporter-kube-etcd` supports monitoring of etcd clusters
running outside the Kubernetes cluster, this patch makes it possible to
configure `exporter-node` to *not* deploy `node_exporter` as a `DaemonSet` in the
cluster, and instead fetch metrics from pre-provisioned `node_exporter`
instances, e.g. deployed as part of the host OS.
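Such a configuration might look like this in the chart values (the key names are hypothetical), mirroring the external-endpoints pattern of `exporter-kube-etcd`:

```yaml
# hypothetical values fragment
exporter-node:
  # when false, no node_exporter DaemonSet is deployed; metrics are
  # scraped from the pre-provisioned instances listed below instead
  deployNodeExporter: false
  endpoints:
    - 10.0.0.10
    - 10.0.0.11
  port: 9100
```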