kyverno

Kubernetes Native Policy Management

Version: 2.7.5 Type: application AppVersion: v1.9.5

About

Kyverno is a Kubernetes Native Policy Management engine.

It allows you to:

  • Manage policies as Kubernetes resources (no new language required).
  • Validate, mutate, and generate resource configurations.
  • Select resources based on labels and wildcards.
  • View policy enforcement as events.
  • Scan existing resources for violations.

This chart bootstraps a Kyverno deployment on a Kubernetes cluster using the Helm package manager.

Access the complete user documentation and guides at: https://kyverno.io.

Installing the Chart

Add the Kyverno Helm repository:

$ helm repo add kyverno https://kyverno.github.io/kyverno/

Create a namespace:

You can install Kyverno in any namespace. The examples use kyverno as the namespace.

$ kubectl create namespace kyverno

Install the Kyverno chart:

$ helm install kyverno --namespace kyverno kyverno/kyverno

The command deploys Kyverno on the Kubernetes cluster with default configuration. The installation guide lists the parameters that can be configured during installation.
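
For example, configurable parameters (see the Values section below) can be overridden at install time with --set flags or a custom values file; the values shown here are purely illustrative:

$ helm install kyverno --namespace kyverno kyverno/kyverno \
    --set replicaCount=3 \
    --set grafana.enabled=true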

The Kyverno ClusterRole/ClusterRoleBinding that manages webhook configurations must have the suffix :webhook, for example *:webhook or kyverno:webhook. Other ClusterRole/ClusterRoleBinding names are configurable.

Notes on using ArgoCD:

When deploying this chart with ArgoCD you will need to enable Replace in the syncOptions, and you will probably want to ignore differences in aggregated cluster roles.

You can do so by following the instructions on sync options and diffing customization in the ArgoCD documentation.

ArgoCD uses Helm only for templating but applies the results with kubectl.

Unfortunately, kubectl adds metadata that can exceed the size limit allowed by Kubernetes. Using Replace overcomes this limitation.

Another option is to use server-side apply; this will be supported in ArgoCD v2.5.

Finally, we introduced new CRDs in 1.8 to manage resource-level reports. Those reports are associated with parent resources using an ownerReference object.

As a consequence, ArgoCD will show those reports in the UI, but as they are managed dynamically by Kyverno they can pollute your dashboard.

You can tell ArgoCD to ignore reports globally by adding them under the resource.exclusions stanza in the ArgoCD ConfigMap.

    resource.exclusions: |
      - apiGroups:
          - kyverno.io
        kinds:
          - AdmissionReport
          - BackgroundScanReport
          - ClusterAdmissionReport
          - ClusterBackgroundScanReport
        clusters:
          - '*'      
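
For reference, here is a minimal sketch of where this stanza lives, assuming a standard ArgoCD installation with its argocd-cm ConfigMap in the argocd namespace:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cm
      namespace: argocd
    data:
      resource.exclusions: |
        # ... the stanza shown above ...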

Below is an example of an ArgoCD Application manifest that should work with this chart.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kyverno
  namespace: argocd
spec:
  destination:
    namespace: kyverno
    server: https://kubernetes.default.svc
  project: default
  source:
    chart: kyverno
    repoURL: https://kyverno.github.io/kyverno
    targetRevision: 2.6.0
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - Replace=true
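
To also ignore differences in aggregated cluster roles, as suggested earlier, an ignoreDifferences block along the following lines can be added to the Application spec. This is a sketch based on the ArgoCD diffing customization documentation:

spec:
  ignoreDifferences:
    - group: rbac.authorization.k8s.io
      kind: ClusterRole
      jqPathExpressions:
        - .rules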

Uninstalling the Chart

To uninstall/delete the kyverno deployment:

$ helm delete -n kyverno kyverno

The command removes all the Kubernetes components associated with the chart and deletes the release.

Values

Key Type Default Description
nameOverride string nil Override the name of the chart
fullnameOverride string nil Override the expanded name of the chart
namespace string nil Namespace the chart deploys to
customLabels object {} Additional labels
rbac.create bool true Create ClusterRoles, ClusterRoleBindings, and ServiceAccount
rbac.serviceAccount.create bool true Create a ServiceAccount
rbac.serviceAccount.name string nil The ServiceAccount name
rbac.serviceAccount.annotations object {} Annotations for the ServiceAccount
image.registry string nil Image registry
image.repository string "ghcr.io/kyverno/kyverno" Image repository
image.tag string nil Image tag. Defaults to appVersion in Chart.yaml if omitted.
image.pullPolicy string "IfNotPresent" Image pull policy
image.pullSecrets list [] Image pull secrets
initImage.registry string nil Image registry
initImage.repository string "ghcr.io/kyverno/kyvernopre" Image repository
initImage.tag string nil Image tag. If initImage.tag is missing, defaults to image.tag.
initImage.pullPolicy string nil Image pull policy. If initImage.pullPolicy is missing, defaults to image.pullPolicy.
initContainer.extraArgs list ["--loggingFormat=text"] Extra arguments to give to the kyvernopre binary.
testImage.registry string nil Image registry
testImage.repository string "busybox" Image repository
testImage.tag float 1.35 Image tag. Defaults to latest if omitted.
testImage.pullPolicy string nil Image pull policy. Defaults to image.pullPolicy if omitted.
replicaCount int nil Desired number of pods
podLabels object {} Additional labels to add to each pod
podAnnotations object {} Additional annotations to add to each pod
podSecurityContext object {} Security context for the pod
securityContext object {"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"privileged":false,"readOnlyRootFilesystem":true,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}} Security context for the containers
testSecurityContext object {"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"privileged":false,"readOnlyRootFilesystem":true,"runAsGroup":65534,"runAsNonRoot":true,"runAsUser":65534,"seccompProfile":{"type":"RuntimeDefault"}} Security context for the test containers
priorityClassName string "" Optional priority class to be used for kyverno pods
antiAffinity.enable bool true Pod antiAffinities toggle. Enabled by default but can be disabled if you want to schedule pods to the same node.
podAntiAffinity object See values.yaml Pod anti affinity constraints.
podAffinity object {} Pod affinity constraints.
nodeAffinity object {} Node affinity constraints.
podDisruptionBudget.minAvailable int 1 Configures the minimum available pods for kyverno disruptions. Cannot be used if maxUnavailable is set.
podDisruptionBudget.maxUnavailable string nil Configures the maximum unavailable pods for kyverno disruptions. Cannot be used if minAvailable is set.
nodeSelector object {} Node labels for pod assignment
tolerations list [] List of node taints to tolerate
hostNetwork bool false Change hostNetwork to true when you want the Kyverno pod to share its host's network namespace. Useful for situations such as using a custom CNI on Amazon EKS. Update the dnsPolicy accordingly as well to suit the host network mode.
dnsPolicy string "ClusterFirst" dnsPolicy determines the manner in which DNS resolution happens in the cluster. In case of hostNetwork: true, usually, the dnsPolicy is suitable to be ClusterFirstWithHostNet. For further reference: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy.
envVarsInit object {} Env variables for initContainers.
envVars object {} Env variables for containers.
extraArgs list ["--loggingFormat=text","--exceptionNamespace={{ include \"kyverno.namespace\" . }}"] Extra arguments to give to the binary.
extraInitContainers list [] Array of extra init containers
extraContainers list [] Array of extra containers to run alongside kyverno
imagePullSecrets object {} Image pull secrets for image verify and imageData policies. This will define the --imagePullSecrets Kyverno argument.
existingImagePullSecrets list [] Existing image pull secrets for image verify and imageData policies. This will define the --imagePullSecrets Kyverno argument.
resources.limits object {"memory":"384Mi"} Pod resource limits
resources.requests object {"cpu":"100m","memory":"128Mi"} Pod resource requests
initResources.limits object {"cpu":"100m","memory":"256Mi"} Pod resource limits
initResources.requests object {"cpu":"10m","memory":"64Mi"} Pod resource requests
testResources.limits object {"cpu":"100m","memory":"256Mi"} Pod resource limits
testResources.requests object {"cpu":"10m","memory":"64Mi"} Pod resource requests
startupProbe object See values.yaml Startup probe. The block is directly forwarded into the deployment, so you can use whatever startupProbes configuration you want. ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
livenessProbe object See values.yaml Liveness probe. The block is directly forwarded into the deployment, so you can use whatever livenessProbe configuration you want. ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
readinessProbe object See values.yaml Readiness Probe. The block is directly forwarded into the deployment, so you can use whatever readinessProbe configuration you want. ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
generatecontrollerExtraResources list [] Additional resources to be added to controller RBAC permissions.
excludeKyvernoNamespace bool true Determines whether the default Kyverno namespace exclusion is enabled for webhooks and resourceFilters.
resourceFiltersExcludeNamespaces list [] Namespaces to exclude from the default resourceFilters.
config.resourceFilters list See values.yaml Resource types to be skipped by the Kyverno policy engine. Make sure to surround each entry in quotes so that it doesn't get parsed as a nested YAML list. These are joined together without spaces, run through tpl, and the result is set in the config map.
config.existingConfig string "" Name of an existing config map (ignores default/provided resourceFilters)
config.annotations object {} Additional annotations to add to the configmap
config.excludeGroupRole string nil Exclude group role
config.excludeUsername string nil Exclude username
config.webhooks string nil Defines the namespaceSelector in the webhook configurations. Note that it takes a list of namespaceSelector and/or objectSelector in the JSON format, and only the first element will be forwarded to the webhook configurations. The Kyverno namespace is excluded if excludeKyvernoNamespace is true (default)
config.generateSuccessEvents bool false Generate success events.
config.metricsConfig object {"annotations":{},"namespaces":{"exclude":[],"include":[]}} Metrics config.
config.metricsConfig.annotations object {} Additional annotations to add to the metrics configmap
config.webhookAnnotations object {} Defines annotations to set on webhook configurations.
updateStrategy object See values.yaml Deployment update strategy. Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
service.port int 443 Service port.
service.type string "ClusterIP" Service type.
service.nodePort string nil Service node port. Only used if service.type is NodePort.
service.annotations object {} Service annotations.
topologySpreadConstraints list [] Topology spread constraints.
metricsService.create bool true Create service.
metricsService.port int 8000 Service port. Kyverno's metrics server will be exposed at this port.
metricsService.type string "ClusterIP" Service type.
metricsService.nodePort string nil Service node port. Only used if metricsService.type is NodePort.
metricsService.annotations object {} Service annotations.
serviceMonitor.enabled bool false Create a ServiceMonitor to collect Prometheus metrics.
serviceMonitor.additionalLabels string nil Additional labels
serviceMonitor.namespace string nil Override namespace (default is the same as kyverno)
serviceMonitor.interval string "30s" Interval to scrape metrics
serviceMonitor.scrapeTimeout string "25s" Timeout if metrics can't be retrieved in given time interval
serviceMonitor.secure bool false Is TLS required for endpoint
serviceMonitor.tlsConfig object {} TLS Configuration for endpoint
createSelfSignedCert bool false Kyverno requires a certificate key pair and corresponding certificate authority to properly register its webhooks. This can be done in one of 3 ways: 1) Use kube-controller-manager to generate a CA-signed certificate (preferred). 2) Provide your own CA and cert. In this case, you will need to create secrets with a specific name and data structure; as long as you follow the naming scheme, they will be automatically picked up: kyverno-svc.(namespace).svc.kyverno-tls-ca (with data entries named tls.key and tls.crt) and kyverno-svc.(namespace).svc.kyverno-tls-pair (with data entries named tls.key and tls.crt). 3) Let Helm generate a self-signed cert by setting createSelfSignedCert to true. If letting Kyverno create its own CA or providing your own, make sure createSelfSignedCert is false.
installCRDs bool true Whether to have Helm install the Kyverno CRDs. If the CRDs are not installed by Helm, they must be added before policies can be created.
crds.annotations object {} Additional CRDs annotations.
networkPolicy.enabled bool false When true, use a NetworkPolicy to allow ingress to the webhook This is useful on clusters using Calico and/or native k8s network policies in a default-deny setup.
networkPolicy.ingressFrom list [] A list of valid from selectors according to https://kubernetes.io/docs/concepts/services-networking/network-policies.
webhooksCleanup.enable bool false Create a helm pre-delete hook to cleanup webhooks.
webhooksCleanup.image string "bitnami/kubectl:latest" kubectl image to run commands for deleting webhooks.
tufRootMountPath string "/.sigstore" A writable volume to use for the TUF root initialization.
grafana.enabled bool false Enable grafana dashboard creation.
grafana.namespace string nil Namespace to create the grafana dashboard configmap. If not set, it will be created in the same namespace where the chart is deployed.
grafana.annotations object {} Grafana dashboard configmap annotations.
cleanupController.enabled bool true Enable cleanup controller.
cleanupController.rbac.create bool true Create RBAC resources
cleanupController.rbac.serviceAccount.name string nil Service account name
cleanupController.rbac.clusterRole.extraResources list [] Extra resource permissions to add in the cluster role
cleanupController.createSelfSignedCert bool false Create self-signed certificates at deployment time. The certificates won't be automatically renewed if this is set to true.
cleanupController.image.registry string nil Image registry
cleanupController.image.repository string "ghcr.io/kyverno/cleanup-controller" Image repository
cleanupController.image.tag string nil Image tag. Defaults to appVersion in Chart.yaml if omitted.
cleanupController.image.pullPolicy string "IfNotPresent" Image pull policy
cleanupController.image.pullSecrets list [] Image pull secrets
cleanupController.replicas int nil Desired number of pods
cleanupController.updateStrategy object See values.yaml Deployment update strategy. Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
cleanupController.priorityClassName string "" Optional priority class
cleanupController.hostNetwork bool false Change hostNetwork to true when you want the pod to share its host's network namespace. Useful for situations like when you end up dealing with a custom CNI over Amazon EKS. Update the dnsPolicy accordingly as well to suit the host network mode.
cleanupController.dnsPolicy string "ClusterFirst" dnsPolicy determines the manner in which DNS resolution happens in the cluster. In case of hostNetwork: true, usually, the dnsPolicy is suitable to be ClusterFirstWithHostNet. For further reference: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy.
cleanupController.extraArgs list [] Extra arguments passed to the container on the command line
cleanupController.resources.limits object {"memory":"128Mi"} Pod resource limits
cleanupController.resources.requests object {"cpu":"100m","memory":"64Mi"} Pod resource requests
cleanupController.startupProbe object See values.yaml Startup probe. The block is directly forwarded into the deployment, so you can use whatever startupProbes configuration you want. ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
cleanupController.livenessProbe object See values.yaml Liveness probe. The block is directly forwarded into the deployment, so you can use whatever livenessProbe configuration you want. ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
cleanupController.readinessProbe object See values.yaml Readiness Probe. The block is directly forwarded into the deployment, so you can use whatever readinessProbe configuration you want. ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
cleanupController.nodeSelector object {} Node labels for pod assignment
cleanupController.tolerations list [] List of node taints to tolerate
cleanupController.antiAffinity.enabled bool true Pod antiAffinities toggle. Enabled by default but can be disabled if you want to schedule pods to the same node.
cleanupController.podAntiAffinity object See values.yaml Pod anti affinity constraints.
cleanupController.podAffinity object {} Pod affinity constraints.
cleanupController.nodeAffinity object {} Node affinity constraints.
cleanupController.topologySpreadConstraints list [] Topology spread constraints.
cleanupController.podSecurityContext object {} Security context for the pod
cleanupController.securityContext object {"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"privileged":false,"readOnlyRootFilesystem":true,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}} Security context for the containers
cleanupController.podDisruptionBudget.minAvailable int 1 Configures the minimum available pods for disruptions. Cannot be used if maxUnavailable is set.
cleanupController.podDisruptionBudget.maxUnavailable string nil Configures the maximum unavailable pods for disruptions. Cannot be used if minAvailable is set.
cleanupController.service.port int 443 Service port.
cleanupController.service.type string "ClusterIP" Service type.
cleanupController.service.nodePort string nil Service node port. Only used if service.type is NodePort.
cleanupController.service.annotations object {} Service annotations.
cleanupController.metricsService.create bool true Create service.
cleanupController.metricsService.port int 8000 Service port. Metrics server will be exposed at this port.
cleanupController.metricsService.type string "ClusterIP" Service type.
cleanupController.metricsService.nodePort string nil Service node port. Only used if metricsService.type is NodePort.
cleanupController.metricsService.annotations object {} Service annotations.
cleanupController.serviceMonitor.enabled bool false Create a ServiceMonitor to collect Prometheus metrics.
cleanupController.serviceMonitor.additionalLabels string nil Additional labels
cleanupController.serviceMonitor.namespace string nil Override namespace (default is the same as kyverno)
cleanupController.serviceMonitor.interval string "30s" Interval to scrape metrics
cleanupController.serviceMonitor.scrapeTimeout string "25s" Timeout if metrics can't be retrieved in given time interval
cleanupController.serviceMonitor.secure bool false Is TLS required for endpoint
cleanupController.serviceMonitor.tlsConfig object {} TLS Configuration for endpoint
cleanupController.tracing.enabled bool false Enable tracing
cleanupController.tracing.address string nil Traces receiver address
cleanupController.tracing.port string nil Traces receiver port
cleanupController.tracing.creds string "" Traces receiver credentials
cleanupController.logging.format string "text" Logging format
cleanupController.metering.disabled bool false Disable metrics export
cleanupController.metering.config string "prometheus" Otel configuration, can be prometheus or grpc
cleanupController.metering.port int 8000 Prometheus endpoint port
cleanupController.metering.collector string "" Otel collector endpoint
cleanupController.metering.creds string "" Otel collector credentials
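
As an illustration, several of the keys above can be combined in a custom values file. The settings below are examples only, not recommendations, and my-values.yaml is a placeholder file name:

    replicaCount: 3
    grafana:
      enabled: true
    serviceMonitor:
      enabled: true
    resources:
      limits:
        memory: 512Mi

$ helm install kyverno --namespace kyverno kyverno/kyverno -f my-values.yaml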

TLS Configuration

If createSelfSignedCert is true, Helm will take care of creating the external self-signed certificate described in option 2 of the installation documentation.

If createSelfSignedCert is false, Kyverno will generate a self-signed CA and a certificate, or you can provide your own TLS CA and signed-key pair and create the secret yourself as described in the documentation.
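
If providing your own CA and signed-key pair, the secrets can be created along these lines. This is a sketch that assumes the kyverno namespace and locally available ca.crt/ca.key and tls.crt/tls.key files; kubectl create secret tls stores the files under data entries named tls.crt and tls.key, matching the naming scheme described in the createSelfSignedCert entry above:

$ kubectl -n kyverno create secret tls kyverno-svc.kyverno.svc.kyverno-tls-ca \
    --cert=ca.crt --key=ca.key
$ kubectl -n kyverno create secret tls kyverno-svc.kyverno.svc.kyverno-tls-pair \
    --cert=tls.crt --key=tls.key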

Default resource filters

Kyverno resource filters are used to exclude resources from processing by the Kyverno policy engine.

This chart comes with default resource filters that apply exclusions to a number of namespaces and resource kinds:

  • all resources in kube-system, kube-public and kube-node-lease namespaces
  • all resources in all namespaces for the following resource kinds:
    • Event
    • Node
    • APIService
    • TokenReview
    • SubjectAccessReview
    • SelfSubjectAccessReview
    • Binding
    • ReplicaSet
    • AdmissionReport
    • ClusterAdmissionReport
    • BackgroundScanReport
    • ClusterBackgroundScanReport
  • all resources created by this chart itself

Those default exclusions are there to prevent disruptions as much as possible. Under the hood, Kyverno installs an admission controller for critical cluster resources. A cluster can become unresponsive if Kyverno is not up and running, ultimately preventing pods from being scheduled in the cluster.

You can however override the default resource filters by setting the config.resourceFilters stanza. It contains an array of string templates that are passed through the tpl Helm function and joined together to produce the final resourceFilters written in the Kyverno config map.

Please consult the values.yaml file before overriding config.resourceFilters, and use the appropriate templates to build your desired exclusions list, as in the sketch below.
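
For instance, a minimal override that only keeps the Event exclusion plus the kube-system and chart namespaces could look like this. Entries follow the [kind,namespace,name] pattern used in values.yaml and are passed through tpl; treat this as an illustration, not a recommended list:

    config:
      resourceFilters:
        - '[Event,*,*]'
        - '[*,kube-system,*]'
        - '[*,{{ include "kyverno.namespace" . }},*]'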

High availability

Running a highly-available Kyverno installation is crucial in a production environment.

In order to run Kyverno in high availability mode, you should set replicaCount to 3 or more. You should also pay attention to anti-affinity rules, spreading pods across nodes and availability zones.
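
A sketch of the relevant values for such a setup follows; the topology spread constraint (and its label selector in particular) is an illustrative assumption, while node-level anti-affinity is already on by default via antiAffinity.enable:

    replicaCount: 3
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: kyverno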

Please see https://kyverno.io/docs/installation/#security-vs-operability for more information.

Source Code

  • https://github.com/kyverno/kyverno

Requirements

Kubernetes: >=1.16.0-0

Maintainers

Name Email Url
Nirmata https://kyverno.io/

Autogenerated from chart metadata using helm-docs v1.11.0