mirror of https://github.com/kyverno/kyverno.git synced 2024-12-14 11:57:48 +00:00

remove docs and update README.md

This commit is contained in:
Jim Bugwadia 2020-10-14 17:39:45 -07:00
parent 7d7919d7d9
commit 4ea1126fce
23 changed files with 90 additions and 4013 deletions

README.md

@@ -4,176 +4,22 @@
![logo](documentation/images/Kyverno_Horizontal.png)
Kyverno is a policy engine built for Kubernetes:
* policies as Kubernetes resources (no new language to learn!)
* validate, mutate, or generate any resource
* match resources using label selectors and wildcards
* validate and mutate using overlays (like Kustomize!)
* generate and synchronize defaults across namespaces
* block or report violations
* test using kubectl
Watch a 3 minute video review of Kyverno on Coffee and Cloud Native with Adrian Goins:

[![Kyverno review on Coffee and Cloud Native](https://img.youtube.com/vi/DW2u6LhNMh0/0.jpg)](https://www.youtube.com/watch?v=DW2u6LhNMh0&feature=youtu.be&t=116)

> Kyverno is a policy engine designed for Kubernetes. It can validate, mutate, and generate configurations using admission controls and background scans. Kyverno policies are Kubernetes resources and do not require learning a new language. Kyverno is designed to work nicely with tools you already use like `kubectl`, `kustomize`, and `Git`.
## Quick Start
**NOTE**: Your Kubernetes cluster version must be above v1.14, which adds webhook timeouts.
To check the version, enter `kubectl version`.
Install Kyverno:
```console
kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/master/definitions/release/install.yaml
```
You can also install Kyverno using a [Helm chart](https://github.com/kyverno/kyverno/blob/master/documentation/installation.md#install-kyverno-using-helm).
Add the policy below. It contains a single validation rule that requires all pods to have
an `app.kubernetes.io/name` label. Kyverno supports different rule types to validate,
mutate, and generate configurations. The policy attribute `validationFailureAction` is set
to `enforce` to block API requests that are non-compliant (the default value `audit`
reports violations but does not block requests).
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: enforce
  rules:
  - name: check-for-labels
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "label `app.kubernetes.io/name` is required"
      pattern:
        metadata:
          labels:
            app.kubernetes.io/name: "?*"
```
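The `"?*"` wildcard in the pattern means the label value must be a non-empty string. As a rough illustration, ordinary shell glob matching behaves analogously (this is not Kyverno's code, just an analogy):

```sh
# Illustrative only: Kyverno's "?*" pattern requires a value of one or more
# characters, similar to a shell glob where "?" matches exactly one character
# and "*" matches any run of characters, including none.
check_label() {
  case "$1" in
    ?*) echo "valid" ;;    # at least one character: satisfies "?*"
    *)  echo "missing" ;;  # empty value: fails the pattern
  esac
}
check_label "nginx"   # prints "valid"
check_label ""        # prints "missing"
```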
Try creating a deployment without the required label:
```console
kubectl create deployment nginx --image=nginx
```
You should see an error:
```console
Error from server: admission webhook "nirmata.kyverno.resource.validating-webhook" denied the request:
resource Deployment/default/nginx was blocked due to the following policies
require-labels:
autogen-check-for-labels: 'Validation error: label `app.kubernetes.io/name` is required;
Validation rule autogen-check-for-labels failed at path /spec/template/metadata/labels/app.kubernetes.io/name/'
```
Create a pod with the required label, for example using this YAML:
```yaml
kind: "Pod"
apiVersion: "v1"
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: nginx
spec:
  containers:
  - name: "nginx"
    image: "nginx:latest"
```
This pod configuration complies with the policy rules, and is not blocked.
Clean up by deleting all cluster policies:
```console
kubectl delete cpol --all
```
As a next step, browse the [sample policies](https://github.com/kyverno/kyverno/blob/master/samples/README.md)
and learn about [writing policies](https://github.com/kyverno/kyverno/blob/master/documentation/writing-policies.md).
You can test policies using the [Kyverno CLI](https://github.com/kyverno/kyverno/blob/master/documentation/kyverno-cli.md).
See [docs](https://github.com/kyverno/kyverno/#documentation) for complete details.
## Documentation
Kyverno guides and reference documents are available at: <a href="https://kyverno.io/">kyverno.io</a>.
- [Getting Started](documentation/installation.md)
- [Writing Policies](documentation/writing-policies.md)
- [Selecting Resources](/documentation/writing-policies-match-exclude.md)
- [Validating Resources](documentation/writing-policies-validate.md)
- [Mutating Resources](documentation/writing-policies-mutate.md)
- [Generating Resources](documentation/writing-policies-generate.md)
- [Variable Substitution](documentation/writing-policies-variables.md)
- [Preconditions](documentation/writing-policies-preconditions.md)
- [Auto-Generation of Pod Controller Policies](documentation/writing-policies-autogen.md)
- [Background Processing](documentation/writing-policies-background.md)
- [Using ConfigMaps for variables](documentation/writing-policies-configmap-reference.md)
- [Testing Policies](documentation/testing-policies.md)
- [Policy Violations](documentation/policy-violations.md)
- [Kyverno CLI](documentation/kyverno-cli.md)
- [Sample Policies](/samples/README.md)
- [API Documentation](https://htmlpreview.github.io/?https://github.com/kyverno/kyverno/blob/master/documentation/index.html)
## License
[Apache License 2.0](https://github.com/kyverno/kyverno/blob/master/LICENSE)

## Contributing
Check out the Kyverno <a href="https://kyverno.io/docs/community">Community</a> page for ways to get involved and details on joining our next community meeting.
## Community
### Community Meetings
To attend our next monthly community meeting join the [Kyverno group](https://groups.google.com/g/kyverno). You will then be sent a meeting invite and get access to the [agenda and meeting notes](https://docs.google.com/document/d/10Hu1qTip1KShi8Lf_v9C5UVQtp7vz_WL3WVxltTvdAc/edit#).
### Getting Help
- For feature requests and bugs, file an [issue](https://github.com/kyverno/kyverno/issues).
- For discussions or questions, join the **#kyverno** channel on the [Kubernetes Slack](https://kubernetes.slack.com/) or the [mailing list](https://groups.google.com/g/kyverno).
### Contributing
Thanks for your interest in contributing!
- Please review and agree to abide with the [Code of Conduct](/CODE_OF_CONDUCT.md) before contributing.
- We welcome all contributions and encourage you to read our [contribution guidelines](./CONTRIBUTING.md).
- See the [Wiki](https://github.com/kyverno/kyverno/wiki) for developer documentation.
- Browse through the [open issues](https://github.com/kyverno/kyverno/issues).
## Presentations and Articles
- [Coffee and Cloud Native Video Review](https://www.youtube.com/watch?v=DW2u6LhNMh0&feature=youtu.be&t=116)
- [CNCF Webinar Video and Slides](https://www.cncf.io/webinars/how-to-keep-your-clusters-safe-and-healthy/)
- [VMware Code Meetup Video](https://www.youtube.com/watch?v=mgEmTvLytb0)
- [Virtual Rejekts Video](https://www.youtube.com/watch?v=caFMtSg4A6I)
- [TGIK Video](https://www.youtube.com/watch?v=ZE4Zu9WQET4&list=PL7bmigfV0EqQzxcNpmcdTJ9eFRPBe-iZa&index=18&t=0s)
- [10 Kubernetes Best Practices - blog post](https://thenewstack.io/10-kubernetes-best-practices-you-can-easily-apply-to-your-clusters/)
- [Introducing Kyverno - blog post](https://nirmata.com/2019/07/11/managing-kubernetes-configuration-with-policies/)
## Alternatives
### Open Policy Agent
[Open Policy Agent (OPA)](https://www.openpolicyagent.org/) is a general-purpose policy engine that can be used as a Kubernetes admission controller. It supports a large set of use cases. Policies are written using [Rego](https://www.openpolicyagent.org/docs/latest/how-do-i-write-policies#what-is-rego), a custom query language.
### k-rail
[k-rail](https://github.com/cruise-automation/k-rail/) provides several ready-to-use policies for security and multi-tenancy. The policies are written in Golang. Several of the [Kyverno sample policies](/samples/README.md) were inspired by k-rail policies.
### Polaris
[Polaris](https://github.com/reactiveops/polaris) validates configurations for best practices. It includes several checks across health, networking, security, etc. Checks can be assigned a severity. A dashboard reports the overall score.
### External configuration management tools
Tools like [Kustomize](https://github.com/kubernetes-sigs/kustomize) can be used to manage variations in configurations outside of clusters. There are several advantages to this approach when used to produce variations of the same base configuration. However, such solutions cannot be used to validate or enforce configurations.
## Roadmap
See [Milestones](https://github.com/kyverno/kyverno/milestones) and [Issues](https://github.com/kyverno/kyverno/issues).


@@ -1,333 +0,0 @@
<small>*[documentation](/README.md#documentation) / Installation*</small>
# Installation
You can install Kyverno using the Helm chart or YAML files in this repository.
## Install Kyverno using Helm
Add the Kyverno Helm repository:
```sh
helm repo add kyverno https://kyverno.github.io/kyverno/
```
Create a namespace, and then install the Kyverno Helm chart.
```sh
# Create a namespace
kubectl create ns <namespace>
# Install the kyverno helm chart
helm install kyverno --namespace <namespace> kyverno/kyverno
```
To install in the `kyverno` namespace:
```sh
kubectl create ns kyverno
helm install kyverno --namespace kyverno kyverno/kyverno
```
## Install Kyverno using YAMLs
The Kyverno policy engine runs as an admission webhook and requires a CA-signed certificate and key to set up secure TLS communication with the kube-apiserver (the CA can be self-signed). There are two ways to configure the secure communications link between Kyverno and the kube-apiserver.
### Option 1: Use kube-controller-manager to generate a CA-signed certificate
Kyverno can request a CA signed certificate-key pair from `kube-controller-manager`. To install Kyverno in a cluster that supports certificate signing, run the following command on a host with kubectl `cluster-admin` access:
```sh
## Install Kyverno
kubectl create -f https://github.com/kyverno/kyverno/raw/master/definitions/install.yaml
```
This method requires that the kube-controller-manager is configured to act as a certificate signer. To verify that this option is enabled for your cluster, check the command-line args for the kube-controller-manager. If `--cluster-signing-cert-file` and `--cluster-signing-key-file` are passed to the controller manager with paths to your CA's key-pair, then you can proceed to install Kyverno using this method.
**Deploying on EKS requires enabling the command-line argument `--fqdn-as-cn` in the 'kyverno' container in the deployment, due to a current limitation with the certificates returned by EKS for CSRs (bug: https://github.com/awslabs/amazon-eks-ami/issues/341).**
Note that the above command installs the last released (stable) version of Kyverno. To install the latest version, edit [install.yaml] and update the image tag.
Also, by default Kyverno is installed in the `kyverno` namespace. To install in a different namespace, edit [install.yaml] and update the namespace.
To check the Kyverno controller status, run the command:
```sh
## Check pod status
kubectl get pods -n <namespace>
```
If the Kyverno controller is not running, you can check its status and logs for errors:
````sh
kubectl describe pod <kyverno-pod-name> -n <namespace>
````
````sh
kubectl logs <kyverno-pod-name> -n <namespace>
````
### Option 2: Use your own CA-signed certificate
You can install your own CA-signed certificate, or generate a self-signed CA and use it to sign a certificate. Once you have a CA and X.509 certificate-key pair, you can install these as Kubernetes secrets in your cluster. If Kyverno finds these secrets, it uses them. Otherwise, it will request the kube-controller-manager to generate a certificate (see Option 1 above).
#### 2.1. Generate a self-signed CA and signed certificate-key pair
**Note: using a separate self-signed root CA is difficult to manage and not recommended for production use.**
If you already have a CA and a signed certificate, you can directly proceed to Step 2.
Here are the commands to create a self-signed root CA, and generate a signed certificate and key using openssl (you can customize the certificate attributes for your deployment):
1. Create a self-signed CA
````bash
openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.crt -subj "/C=US/ST=test/L=test/O=test/OU=PIB/CN=*.kyverno.svc/emailAddress=test@test.com"
````
2. Create a keypair
````bash
openssl genrsa -out webhook.key 4096
openssl req -new -key webhook.key -out webhook.csr -subj "/C=US/ST=test/L=test/O=test/OU=PIB/CN=kyverno-svc.kyverno.svc/emailAddress=test@test.com"
````
3. Create a **webhook.ext file** with the Subject Alternate Names (SAN) to use. This is required with Kubernetes 1.19+ and Go 1.15+.
````
subjectAltName = DNS:kyverno-svc,DNS:kyverno-svc.kyverno,DNS:kyverno-svc.kyverno.svc
````
4. Sign the keypair with the CA passing in the extension
````bash
openssl x509 -req -in webhook.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out webhook.crt -days 1024 -sha256 -extfile webhook.ext
````
5. Verify the contents of the certificate
````bash
openssl x509 -in webhook.crt -text -noout
````
The certificate must contain the SAN information in the X509v3 extensions section:
````
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:kyverno-svc, DNS:kyverno-svc.kyverno, DNS:kyverno-svc.kyverno.svc
````
#### 2.2. Configure secrets for the CA and TLS certificate-key pair
You can now use the following files to create secrets:
- rootCA.crt
- webhook.crt
- webhook.key
To create the required secrets, use the following commands (do not change the secret names):
````bash
kubectl create ns <namespace>
kubectl create secret tls kyverno-svc.kyverno.svc.kyverno-tls-pair --cert=webhook.crt --key=webhook.key -n <namespace>
kubectl annotate secret kyverno-svc.kyverno.svc.kyverno-tls-pair self-signed-cert=true -n <namespace>
kubectl create secret generic kyverno-svc.kyverno.svc.kyverno-tls-ca --from-file=rootCA.crt -n <namespace>
````
**NOTE: The annotation on the TLS pair secret is used by Kyverno to identify the use of self-signed certificates and checks for the required root CA secret**
Secret | Data | Content
------------ | ------------- | -------------
`kyverno-svc.kyverno.svc.kyverno-tls-pair` | tls.key & tls.crt | key and signed certificate
`kyverno-svc.kyverno.svc.kyverno-tls-ca` | rootCA.crt | root CA used to sign the certificate
Kyverno uses the secrets created above to set up TLS communication with the kube-apiserver, and to specify the CA bundle to be used to validate the webhook server's certificate in the admission webhook configurations.
#### 2.3. Install Kyverno
You can now install kyverno by downloading and updating the [install.yaml], or using the command below (assumes that the namespace is **kyverno**):
```sh
kubectl create -f https://github.com/kyverno/kyverno/raw/master/definitions/install.yaml
```
# Configure Kyverno permissions
In `foreground` mode, Kyverno leverages admission webhooks to process incoming API requests; in `background` mode, it applies policies to existing resources. It uses the ServiceAccount `kyverno-service-account`, which is bound to multiple ClusterRoles that define the default resources and operations that are permitted.
ClusterRoles used by Kyverno:
- kyverno:webhook
- kyverno:userinfo
- kyverno:customresources
- kyverno:policycontroller
- kyverno:generatecontroller
A `generate` rule creates new resources. To allow Kyverno to create a resource, its ClusterRole needs permissions to create, update, and delete that resource. This can be done by adding the resource to the ClusterRole `kyverno:generatecontroller` used by Kyverno, or by creating a new ClusterRole and a ClusterRoleBinding to Kyverno's default ServiceAccount.
```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kyverno:generatecontroller
rules:
- apiGroups:
  - "*"
  resources:
  - namespaces
  - networkpolicies
  - secrets
  - configmaps
  - resourcequotas
  - limitranges
  - ResourceA # new resource to be generated
  - ResourceB
  verbs:
  - create # generate new resources
  - get    # check the contents of existing resources
  - update # update existing resources if the configuration defined in the policy is not present
  - delete # clean up when the generate trigger resource is deleted
```
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kyverno-admin-generate
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kyverno:generatecontroller # ClusterRole defined above, to manage generated resources
subjects:
- kind: ServiceAccount
  name: kyverno-service-account # default Kyverno ServiceAccount
  namespace: kyverno
```
# Custom installations
To install a specific version, download [install.yaml] and then change the image tag.
For example, change the image tag from `latest` to the specific tag `v1.0.0`.
```yaml
spec:
  containers:
  - name: kyverno
    # image: nirmata/kyverno:latest
    image: nirmata/kyverno:v1.0.0
```
To install in a specific namespace, replace the namespace `kyverno` with your namespace.
Example:
````yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
````
````yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kyverno
  name: kyverno-svc
  namespace: <namespace>
````
and in all other places (ServiceAccount, ClusterRoles, ClusterRoleBindings, ConfigMaps, Service, Deployment) where the namespace is mentioned.
To run Kyverno:
````sh
kubectl create -f ./install.yaml
````
To check the Kyverno controller status, run the command:
````sh
kubectl get pods -n <namespace>
````
If the Kyverno controller is not running, you can check its status and logs for errors:
````sh
kubectl describe pod <kyverno-pod-name> -n <namespace>
````
````sh
kubectl logs <kyverno-pod-name> -n <namespace>
````
Here is a script that generates a self-signed CA, a TLS certificate-key pair, and the corresponding kubernetes secrets: [helper script](/scripts/generate-self-signed-cert-and-k8secrets.sh)
# Configure Kyverno flags
1. `excludeGroupRole`: a comma-separated list of group roles. Requests made by these group roles are excluded from policy processing. The default is `system:serviceaccounts:kube-system,system:nodes,system:kube-scheduler`.
2. `excludeUsername`: a comma-separated list of Kubernetes usernames. When `Synchronize` is enabled in a generate policy, only Kyverno can update or delete the generated resource; an admin can use this flag to exclude specific usernames, allowing them to update or delete the generated resource.
3. `filterK8Resources`: a list of Kubernetes resources in the format `[kind,namespace,name]` for which policies are not evaluated by the admission webhook. For example: `--filterK8Resources "[Deployment, kyverno, kyverno],[Events, *, *]"`.
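The `[kind,namespace,name]` entries use `*` as a wildcard for any field. As a rough sketch of how one such entry matches a resource (shell glob semantics; this is not Kyverno's actual implementation):

```sh
# Illustrative sketch (not Kyverno's code): match a resource against one
# [Kind,Namespace,Name] filter entry, treating each field as a glob pattern.
matches_filter() {
  f=$(printf '%s' "$4" | tr -d '[] ')         # strip brackets and spaces
  fk=${f%%,*}; rest=${f#*,}
  fns=${rest%%,*}; fname=${rest#*,}
  case "$1" in $fk) ;; *) return 1 ;; esac    # kind
  case "$2" in $fns) ;; *) return 1 ;; esac   # namespace
  case "$3" in $fname) ;; *) return 1 ;; esac # name
  return 0
}
matches_filter Node default node-1 "[Node,*,*]" && echo "Node admissions are skipped"
```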
# Configure access to policy violations
During installation, Kyverno creates a ClusterRole `kyverno:policyviolations` that allows the `list`, `get`, and `watch` operations on the `policyviolations` resource. To grant access to a namespace admin, configure the following YAML file and then apply it to the cluster.
- Replace `metadata.namespace` with namespace of the admin
- Configure `subjects` field to bind admin's role to the ClusterRole `policyviolation`
````yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: policyviolation
  # change the namespace below to create a RoleBinding for the namespace admin
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: policyviolation
subjects:
# configure below to grant access to the namespace admin
- kind: ServiceAccount
  name: default
  namespace: default
# - apiGroup: rbac.authorization.k8s.io
#   kind: User
#   name:
# - apiGroup: rbac.authorization.k8s.io
#   kind: Group
#   name:
````
# Filter resources that Kyverno should not process
The admission webhook checks if a policy is applicable on all admission requests. Kubernetes kinds that should not be processed can be filtered by adding a `ConfigMap` in the `kyverno` namespace and specifying the resources to be filtered under `data.resourceFilters`. The default name of this `ConfigMap` is `init-config`, but it can be changed by modifying the value of the environment variable `INIT_CONFIG` in the kyverno deployment spec. `data.resourceFilters` must be a sequence of one or more `[<Kind>,<Namespace>,<Name>]` entries with `*` as a wildcard. Thus, an entry `[Node,*,*]` means that admissions of `Node` in any namespace and with any name will be ignored.
By default we have specified Nodes, Events, APIService & SubjectAccessReview as the kinds to be skipped in the default configuration [install.yaml].
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: init-config
  namespace: kyverno
data:
  # resource types to be skipped by the kyverno policy engine
  resourceFilters: "[Event,*,*][*,kube-system,*][*,kube-public,*][*,kube-node-lease,*][Node,*,*][APIService,*,*][TokenReview,*,*][SubjectAccessReview,*,*][*,kyverno,*]"
```
To modify the `ConfigMap`, either directly edit the `ConfigMap` `init-config` in the default configuration [install.yaml] and redeploy it, or modify the `ConfigMap` using `kubectl`. Changes made to the `ConfigMap` through `kubectl` are automatically picked up at runtime.
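For example (a sketch assuming the default `init-config` name in the `kyverno` namespace; the filter value shown is illustrative, not the recommended set):

```sh
# Edit the ConfigMap interactively
kubectl edit configmap init-config -n kyverno

# Or merge-patch the resourceFilters key directly
kubectl patch configmap init-config -n kyverno --type merge \
  -p '{"data":{"resourceFilters":"[Event,*,*][Node,*,*][APIService,*,*]"}}'
```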
# Installing outside of the cluster (debug mode)
To build Kyverno in a development environment see: https://github.com/kyverno/kyverno/wiki/Building
To run the controller in this mode, you must prepare a TLS key/certificate pair for the debug webhook, then start the controller with a kubeconfig and the server address.
1. Run `sudo scripts/deploy-controller-debug.sh --service=localhost --serverIP=<server_IP>`, where <server_IP> is the IP address of the host where the controller runs. This script will generate a TLS certificate for the debug webhook server and register the webhook in the cluster. It also registers a CustomResource policy.
2. Start the controller using the following command: `sudo KYVERNO_NAMESPACE=<namespace> KYVERNO_SVC=<service_name> go run ./cmd/kyverno/main.go --kubeconfig=~/.kube/config --serverIP=<server_IP>`. If the environment variables `KYVERNO_NAMESPACE` and `KYVERNO_SVC` are not set, Kyverno runs in its default namespace `kyverno` with the default service name `kyverno-svc`.
---
<small>*Read Next >> [Writing Policies](/documentation/writing-policies.md)*</small>
[install.yaml]: https://github.com/kyverno/kyverno/raw/master/definitions/install.yaml


@@ -1,216 +0,0 @@
<small>*[documentation](/README.md#documentation) / kyverno-cli*</small>
# Kyverno CLI
The Kyverno Command Line Interface (CLI) is designed to validate policies and test the behavior of applying policies to resources before adding the policy to a cluster. It can be used as a kubectl plugin and as a standalone CLI.
## Build the CLI
You can build the CLI binary locally, then move the binary into a directory in your PATH.
```bash
git clone https://github.com/kyverno/kyverno.git
cd kyverno
make cli
mv ./cmd/cli/kubectl-kyverno/kyverno /usr/local/bin/kyverno
```
You can also use [Krew](https://github.com/kubernetes-sigs/krew)
```bash
# Install kyverno using krew plugin manager
kubectl krew install kyverno
#example
kubectl kyverno version
```
## Install via AUR (archlinux)
You can install the kyverno cli via your favourite AUR helper (e.g. [yay](https://github.com/Jguer/yay))
```
yay -S kyverno-git
```
## Commands
### Version
Prints the version of kyverno used by the CLI.
Example:
```
kyverno version
```
### Validate
Validates one or more policies. You can pass multiple policy resource description files, or an entire folder containing them. Currently only YAML resource descriptions are supported. Policies can also be passed from stdin.
Example:
```
kyverno validate /path/to/policy1.yaml /path/to/policy2.yaml /path/to/folderFullOfPolicies
```
Passing policy from stdin:
```
kustomize build nginx/overlays/envs/prod/ | kyverno validate -
```
Use the -o <yaml/json> flag to display the mutated policy.
Example:
```
kyverno validate /path/to/policy1.yaml /path/to/policy2.yaml /path/to/folderFullOfPolicies -o yaml
```
Policies can also be validated against CRDs. Use the -c flag to pass a CRD; you can pass multiple CRD files or an entire folder containing CRDs.
Example:
```
kyverno validate /path/to/policy1.yaml -c /path/to/crd.yaml -c /path/to/folderFullOfCRDs
```
### Apply
Applies policies on resources, and supports applying multiple policies on multiple resources in a single command.
Also supports applying the given policies to an entire cluster. The current kubectl context will be used to access the cluster.
Displays mutation results to stdout by default. Use the -o <path> flag to save mutated resources to a file or directory.
Apply to a resource:
```
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml
```
Apply to all matching resources in a cluster:
```
kyverno apply /path/to/policy.yaml --cluster > policy-results.txt
```
The resources can also be passed from stdin:
```
kustomize build nginx/overlays/envs/prod/ | kyverno apply /path/to/policy.yaml --resource -
```
Apply multiple policies to multiple resources:
```
kyverno apply /path/to/policy1.yaml /path/to/folderFullOfPolicies --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml --cluster
```
Saving the mutated resource in a file/directory:
```
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml -o <file path/directory path>
```
Apply policy with variables:
Use the --set flag to pass values for variables in a policy while applying it to a resource.
```
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --set <variable1>=<value1>,<variable2>=<value2>
```
Use the --values_file flag (-f) to pass a file containing variables and their values when applying multiple policies to multiple resources.
```
kyverno apply /path/to/policy1.yaml /path/to/policy2.yaml --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml -f /path/to/value.yaml
```
Format of value.yaml :
```
policies:
  - name: <policy1 name>
    resources:
      - name: <resource1 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
      - name: <resource2 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
  - name: <policy2 name>
    resources:
      - name: <resource1 name>
        values:
          <variable1 in policy2>: <value>
          <variable2 in policy2>: <value>
      - name: <resource2 name>
        values:
          <variable1 in policy2>: <value>
          <variable2 in policy2>: <value>
```
Example:
Policy file (add_network_policy.yaml):
```
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-networkpolicy
  annotations:
    policies.kyverno.io/category: Workload Management
    policies.kyverno.io/description: By default, Kubernetes allows communications across
      all pods within a cluster. Network policies, and a CNI that supports network policies,
      must be used to restrict communications. A default NetworkPolicy should be configured
      for each namespace to deny all ingress traffic to the pods in the namespace by default.
      Application teams can then configure additional NetworkPolicy resources to allow
      desired traffic to application pods from select sources.
spec:
  rules:
  - name: default-deny-ingress
    match:
      resources:
        kinds:
        - Namespace
        name: "*"
    generate:
      kind: NetworkPolicy
      name: default-deny-ingress
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      data:
        spec:
          # select all pods in the namespace
          podSelector: {}
          policyTypes:
          - Ingress
```
Resource file (required_default_network_policy.yaml):
```
kind: Namespace
apiVersion: v1
metadata:
  name: "devtest"
```
Applying a policy to a resource using the --set/-s flag:
```
kyverno apply /path/to/add_network_policy.yaml --resource /path/to/required_default_network_policy.yaml -s request.object.metadata.name=devtest
```
Applying a policy to a resource using the --values_file/-f flag:
YAML file with variables (value.yaml):
```
policies:
  - name: default-deny-ingress
    resources:
      - name: devtest
        values:
          request.namespace: devtest
```
```
kyverno apply /path/to/add_network_policy.yaml --resource /path/to/required_default_network_policy.yaml -f /path/to/value.yaml
```
<small>*Read Next >> [Sample Policies](/samples/README.md)*</small>


@@ -1,31 +0,0 @@
<small>*[documentation](/README.md#documentation) / Policy Violations*</small>
# Policy Violations
Policy Violations are created to:
1. Report resources that do not comply with validation rules with `validationFailureAction` set to `audit`.
2. Report existing resources (i.e. resources created before the policy was created) that do not comply with validation or mutation rules.
Policy Violation objects are created in the resource namespace. Policy Violation resources are automatically removed when the resource is updated to comply with the policy rule, or when the policy rule is deleted.
You can view all existing policy violations as shown below:
````
λ kubectl get polv --all-namespaces
NAMESPACE NAME POLICY RESOURCEKIND RESOURCENAME AGE
default disallow-root-user-56j4t disallow-root-user Deployment nginx-deployment 5m7s
default validation-example2-7snmh validation-example2 Deployment nginx-deployment 5m7s
docker disallow-root-user-2kl4m disallow-root-user Pod compose-api-dbbf7c5db-kpnvk 43m
docker disallow-root-user-hfxzn disallow-root-user Pod compose-7b7c5cbbcc-xj8f6 43m
docker disallow-root-user-s5rjp disallow-root-user Deployment compose 43m
docker disallow-root-user-w58kp disallow-root-user Deployment compose-api 43m
docker validation-example2-dgj9j validation-example2 Deployment compose 5m28s
docker validation-example2-gzfdf validation-example2 Deployment compose-api 5m27s
````
# Cluster Policy Violations
Cluster Policy Violations are like Policy Violations but created for cluster-wide resources.
<small>*Read Next >> [Kyverno CLI](/documentation/kyverno-cli.md)*</small>


@@ -1,30 +0,0 @@
<small>*[documentation](/README.md#documentation) / Testing Policies*</small>
# Testing Policies
The resources definitions for testing are located in the [test](/test) directory. Each test contains a pair of files: one is the resource definition, and the second is the Kyverno policy for this definition.
## Test using kubectl
To do this, first [install Kyverno in the cluster](installation.md).
For example, to test the simplest Kyverno policy for `ConfigMap`, create the policy and then the resource itself via `kubectl`:
````bash
cd test
kubectl create -f policy/policy-CM.yaml
kubectl create -f resources/CM.yaml
````
Then compare the original resource definition in `CM.yaml` with the actual one:
````bash
kubectl get -f resources/CM.yaml -o yaml
````
## Test using Kyverno CLI
The Kyverno CLI allows testing policies before they are applied to a cluster. It is documented at [Kyverno CLI](kyverno-cli.md)
<small>*Read Next >> [Policy Violations](/documentation/policy-violations.md)*</small>


@@ -1,24 +0,0 @@
<small>*[documentation](/README.md#documentation) / [Writing Policies](/documentation/writing-policies.md) / Auto-Generation for Pod Controllers*</small>
# Auto Generating Rules for Pod Controllers
**Note: The auto-gen feature is only supported for validation rules with patterns and mutation rules with overlay. Validate - Deny rules and Generate rules are not supported.**
Writing policies on pods helps address all pod creation flows.
However, when pod controllers are used, pod-level policies result in errors not being reported when the pod controller object is created.
Kyverno solves this issue by supporting automatic generation of policy rules for pod controllers from a rule written for a pod.
This auto-generation behavior is controlled by the `pod-policies.kyverno.io/autogen-controllers` annotation.
By default, Kyverno inserts the annotation `pod-policies.kyverno.io/autogen-controllers=DaemonSet,Deployment,Job,StatefulSet,CronJob` to generate additional rules that are applied to these pod controllers.
You can change the annotation `pod-policies.kyverno.io/autogen-controllers` to customize the target pod controllers for the auto-generated rules. For example, Kyverno generates a rule for a `Deployment` if the annotation of policy is defined as `pod-policies.kyverno.io/autogen-controllers=Deployment`.
When a `name` or `labelSelector` is specified in the match / exclude block, Kyverno skips generating pod controller rules, as these filters may not be applicable to pod controllers.
To disable auto-generating rules for pod controllers set `pod-policies.kyverno.io/autogen-controllers` to the value `none`.
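For example, the following policy limits auto-generated rules to `Deployment` resources only (a sketch; the policy name and rule content are illustrative):

````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
  annotations:
    # generate an additional rule only for Deployments
    pod-policies.kyverno.io/autogen-controllers: Deployment
spec:
  rules:
  - name: check-app-label
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "The label app is required"
      pattern:
        metadata:
          labels:
            app: "?*"
````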
<small>*Read Next >> [Background Processing](/documentation/writing-policies-background.md)*</small>

View file

@ -1,20 +0,0 @@
<small>*[documentation](/README.md#documentation) / [Writing Policies](/documentation/writing-policies.md) / Background Processing*</small>
# Background processing
Kyverno applies policies during admission control and to existing resources in the cluster that may have been created before a policy was created. The application of policies to existing resources is referred to as `background` processing.
Note that Kyverno does not mutate existing resources, and will only report policy violations for existing resources that do not match mutation, validation, or generation rules.
A policy is always enabled for processing during admission control. However, policy rules that rely on request information (e.g. `{{request.userInfo}}`) cannot be applied to existing resources in `background` mode, as the user information is not available outside of the admission controller. Hence, these rules must set the boolean flag `spec.background` to `false` to disable background processing.
```
spec:
background: true
rules:
- name: default-deny-ingress
```
The default value of `background` is `true`. When a policy is created or modified, the policy validation logic will report an error if a rule uses `userInfo` and does not set `background` to `false`.
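For example, a rule that references `{{request.userInfo}}` data must set `background` to `false` (a sketch; the policy name and rule content are illustrative):

````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-modifications
spec:
  validationFailureAction: enforce
  # request.userInfo is unavailable outside admission control
  background: false
  rules:
  - name: check-username
    match:
      resources:
        kinds:
        - ConfigMap
    validate:
      message: "only the admin user may modify this resource"
      deny:
        conditions:
        - key: "{{request.userInfo.username}}"
          operator: NotEquals
          value: "admin"
````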
<small>*Read Next >> [Configmap Lookup](/documentation/writing-policies-configmap-reference.md)*</small>

View file

@ -1,93 +0,0 @@
<small>*[documentation](/README.md#documentation) / [Writing Policies](/documentation/writing-policies.md) / Configmap Lookup*</small>
# Using ConfigMaps for Variables
There are many cases where the values that are passed into Kyverno policies are dynamic or need to vary based on the execution environment.
Kyverno supports using Kubernetes [ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/) to manage variable values outside of a policy definition.
## Defining ConfigMaps in a Rule Context
To refer to values from a ConfigMap inside any Rule, define a context inside the rule with one or more ConfigMap declarations.
````yaml
rules:
- name: example-configmap-lookup
# added context to define the configmap information which will be referred
context:
# unique name to identify configmap
- name: dictionary
configMap:
# configmap name - name of the configmap which will be referred
name: mycmap
# configmap namespace - namespace of the configmap which will be referred
namespace: test
````
Sample ConfigMap Definition
````yaml
apiVersion: v1
data:
env: production
kind: ConfigMap
metadata:
name: mycmap
````
## Looking up values
A ConfigMap that is defined in a rule context can be referred to using its unique name within the context. ConfigMap values can be referenced using a JMESPath-style expression `{{<name>.<data>.<key>}}`.
For the example above, we can refer to a ConfigMap value using `{{dictionary.data.env}}`. The variable will be substituted with the value `production` during policy execution.
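Putting this together, a rule can use the `dictionary` context to require that a label matches the ConfigMap value (a sketch; the rule and label names are illustrative):

````yaml
rules:
- name: check-env-label
  context:
  - name: dictionary
    configMap:
      name: mycmap
      namespace: test
  match:
    resources:
      kinds:
      - Pod
  validate:
    message: "the env label must be set to {{dictionary.data.env}}"
    pattern:
      metadata:
        labels:
          # resolves to "production" for the sample ConfigMap above
          env: "{{dictionary.data.env}}"
````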
## Handling Array Values
The ConfigMap value can be an array of string values in JSON format. Kyverno will parse the JSON string to a list of strings, so set operations like In and NotIn can then be applied.
For example, a list of allowed roles can be stored in a ConfigMap, and the Kyverno policy can refer to this list to deny the requests where the role does not match one of the values in the list.
Here are the allowed roles in the ConfigMap:
````yaml
apiVersion: v1
data:
allowed-roles: "[\"cluster-admin\", \"cluster-operator\", \"tenant-admin\"]"
kind: ConfigMap
metadata:
name: roles-dictionary
namespace: test
````
Here is a rule to block a Deployment if the value of annotation `role` is not in the allowed list:
````yaml
spec:
validationFailureAction: enforce
rules:
- name: validate-role-annotation
context:
- name: roles-dictionary
configMap:
name: roles-dictionary
namespace: test
match:
resources:
kinds:
- Deployment
preconditions:
- key: "{{ request.object.metadata.annotations.role }}"
operator: NotEquals
value: ""
validate:
message: "role {{ request.object.metadata.annotations.role }} is not in the allowed list {{ \"roles-dictionary\".data.\"allowed-roles\" }}"
deny:
conditions:
- key: "{{ request.object.metadata.annotations.role }}"
operator: NotIn
value: "{{ \"roles-dictionary\".data.\"allowed-roles\" }}"
````
<small>*Read Next >> [Testing Policies](/documentation/testing-policies.md)*</small>

View file

@ -1,135 +0,0 @@
<small>*[documentation](/README.md#documentation) / [Writing Policies](/documentation/writing-policies.md) / Generate Resources*</small>
# Generating Resources
The ```generate``` rule can be used to create additional resources when a new resource is created. This is useful for creating supporting resources, such as new role bindings for a new namespace.
The `generate` rule supports `match` and `exclude` blocks, like other rules. Hence, the trigger for applying this rule can be the creation of any resource, and it is possible to match or exclude API requests based on subjects, roles, etc.
The generate rule is triggered during an API CREATE operation. To keep resources synchronized across changes, you can use the `synchronize` property. When `synchronize` is set to `true`, the generated resource is kept in-sync with the source resource (which can be defined as part of the policy or may be an existing resource), and generated resources cannot be modified by users. If `synchronize` is set to `false`, then users can update or delete the generated resource directly.
This policy sets the Zookeeper and Kafka connection strings for all namespaces.
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: "zk-kafka-address"
spec:
rules:
- name: "zk-kafka-address"
match:
resources:
kinds:
- Namespace
exclude:
namespaces:
- "kube-system"
- "default"
- "kube-public"
- "kyverno"
generate:
synchronize: true
kind: ConfigMap
name: zk-kafka-address
# generate the resource in the new namespace
namespace: "{{request.object.metadata.name}}"
data:
kind: ConfigMap
data:
ZK_ADDRESS: "192.168.10.10:2181,192.168.10.11:2181,192.168.10.12:2181"
KAFKA_ADDRESS: "192.168.10.13:9092,192.168.10.14:9092,192.168.10.15:9092"
```
## Example 1
````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: basic-policy
spec:
rules:
- name: "Generate ConfigMap"
match:
resources:
kinds:
- Namespace
exclude:
namespaces:
- "kube-system"
- "default"
- "kube-public"
- "kyverno"
generate:
kind: ConfigMap # Kind of resource
name: default-config # Name of the new Resource
namespace: "{{request.object.metadata.name}}" # namespace that triggers this rule
synchronize : true
clone:
namespace: default
name: config-template
- name: "Generate Secret (insecure)"
match:
resources:
kinds:
- Namespace
generate:
kind: Secret
name: mongo-creds
namespace: "{{request.object.metadata.name}}" # namespace that triggers this rule
data:
data:
DB_USER: YWJyYWthZGFicmE=
DB_PASSWORD: YXBwc3dvcmQ=
metadata:
labels:
purpose: mongo
````
In this example, each new namespace will receive two new resources:
* A `ConfigMap` cloned from `default/config-template`.
* A `Secret` with values `DB_USER` and `DB_PASSWORD`, and label `purpose: mongo`.
## Example 2
````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: "default"
spec:
rules:
- name: "deny-all-traffic"
match:
resources:
kinds:
- Namespace
name: "*"
exclude:
namespaces:
- "kube-system"
- "default"
- "kube-public"
- "kyverno"
generate:
kind: NetworkPolicy
name: deny-all-traffic
namespace: "{{request.object.metadata.name}}" # namespace that triggers this rule
data:
spec:
# select all pods in the namespace
podSelector: {}
policyTypes:
- Ingress
metadata:
labels:
policyname: "default"
````
In this example new namespaces will receive a `NetworkPolicy` that by default denies all inbound and outbound traffic.
---
<small>*Read Next >> [Variables](/documentation/writing-policies-variables.md)*</small>

View file

@ -1,142 +0,0 @@
<small>*[documentation](/README.md#documentation) / [Writing Policies](/documentation/writing-policies.md) / Match & Exclude*</small>
# Match & Exclude
The `match` and `exclude` filters control which resources policies are applied to.
The match / exclude clauses have the same structure, and can each contain the following elements:
* resources: select resources by name, namespaces, kinds, label selectors and annotations.
* subjects: select users, user groups, and service accounts
* roles: select namespaced roles
* clusterroles: select cluster wide roles
At least one element must be specified in a `match` block. The `kind` attribute is optional, but if it is not specified, the policy rule will only be applicable to metadata that is common across all resource kinds.
When Kyverno receives an admission controller request, i.e. a validation or mutation webhook, it first checks to see if the resource and user information matches or should be excluded from processing. If both checks pass, then the rule logic to mutate, validate, or generate resources is applied.
The following YAML provides an example for a match clause.
````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: policy
spec:
# 'enforce' to block resource request if any rules fail
# 'audit' to allow resource request on failure of rules, but create policy violations to report them
validationFailureAction: enforce
# Each policy has a list of rules applied in declaration order
rules:
# Rules must have a unique name
- name: "check-pod-controller-labels"
# Each rule matches specific resource described by "match" field.
match:
resources:
kinds: # Required, list of kinds
- Deployment
- StatefulSet
name: "mongo*" # Optional, a resource name is optional. Name supports wildcards (* and ?)
namespaces: # Optional, list of namespaces. Supports wildcards (* and ?)
- "dev*"
- test
selector: # Optional, a resource selector is optional. Values support wildcards (* and ?)
matchLabels:
app: mongodb
matchExpressions:
- {key: tier, operator: In, values: [database]}
# Optional, subjects to be matched
subjects:
- kind: User
name: mary@somecorp.com
# Optional, roles to be matched
roles:
# Optional, clusterroles to be matched
clusterRoles:
- cluster-admin
...
````
All `match` and `exclude` elements must be satisfied for the resource to be selected as a candidate for the policy rule. In other words, the match and exclude conditions are evaluated using a logical AND operation.
Here is an example of a rule that matches all pods, excluding pods created by using the `cluster-admin` cluster role.
````yaml
spec:
rules:
- name: "match-pods-except-admin"
match:
resources:
kinds:
- Pod
exclude:
clusterRoles:
- cluster-admin
````
This rule matches all pods, excluding pods in the `kube-system` namespace.
````yaml
spec:
rules:
- name: "match-pods-except-admin"
match:
resources:
kinds:
- Pod
exclude:
namespaces:
- "kube-system"
````
Condition checks inside the `resources` block follow the logic "**AND across types but an OR inside list types**". For example, if a rule match contains a list of kinds and a list of namespaces, the rule will be evaluated if the request contains any one (OR) of the kinds AND any one (OR) of the namespaces. Conditions inside `clusterRoles`, `roles` and `subjects` are always evaluated using a logical OR operation, as each request can only have a single instance of these values.
This is an example that selects a Deployment **OR** a StatefulSet with the label `app=critical`.
````yaml
spec:
rules:
- name: match-critical-app
match:
resources: # AND across types but an OR inside types that take a list
kinds:
- Deployment
- StatefulSet
selector:
matchLabels:
app: critical
````
The following example matches all resources with the label `app=critical`, excluding resources created by the cluster role `cluster-admin` **OR** by the user `John`.
````yaml
spec:
rules:
- name: match-criticals-except-given-rbac
match:
resources:
selector:
matchLabels:
app: critical
exclude:
clusterRoles:
- cluster-admin
subjects:
- kind: User
name: John
````
Here is an example of a rule that matches all pods that have the annotation `imageregistry: "https://hub.docker.com/"`:
````yaml
spec:
rules:
- name: match-pod-annotations
match:
resources:
annotations:
imageregistry: "https://hub.docker.com/"
kinds:
- Pod
name: "*"
````
---
<small>*Read Next >> [Validate Resources](/documentation/writing-policies-validate.md)*</small>

View file

@ -1,326 +0,0 @@
<small>*[documentation](/README.md#documentation) / [Writing Policies](/documentation/writing-policies.md) / Mutate Resources*</small>
# Mutating Resources
The ```mutate``` rule can be used to add, replace, or delete elements in matching resources. A mutate rule can be written as a JSON Patch or as an overlay.
By using a ```patch``` in the [JSONPatch - RFC 6902](http://jsonpatch.com/) format, you can make precise changes to the resource being created. Using an ```overlay``` is convenient for describing the desired state of the resource.
Resource mutation occurs before validation, so the validation rules should not contradict the changes performed by the mutation section.
This policy sets the imagePullPolicy to Always if the image tag is latest:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: set-image-pull-policy
spec:
rules:
- name: set-image-pull-policy
match:
resources:
kinds:
- Pod
mutate:
overlay:
spec:
containers:
# match images which end with :latest
- (image): "*:latest"
# set the imagePullPolicy to "Always"
imagePullPolicy: "Always"
```
## JSONPatch - RFC 6902
A JSON Patch rule provides an alternate way to mutate resources.
[JSONPatch](http://jsonpatch.com/) supports the following operations (in the 'op' field):
* **add**
* **replace**
* **remove**
With Kyverno, `add` and `replace` have the same behavior, i.e. both operations will add or replace the target element.
This patch policy adds, or replaces, entries in a `ConfigMap` with the name `config-game` in any namespace.
````yaml
apiVersion : kyverno.io/v1
kind : ClusterPolicy
metadata :
name : policy-generate-cm
spec :
rules:
- name: pCM1
match:
resources:
name: "config-game"
kinds :
- ConfigMap
mutate:
patchesJson6902: |-
- path: "/data/ship.properties"
op: add
value: |
type=starship
owner=utany.corp
- path : "/data/newKey1"
op : add
value : newValue1
````
If your ConfigMap has empty data, the following policy adds an entry to `config-game`.
````yaml
apiVersion : kyverno.io/v1
kind : ClusterPolicy
metadata :
name : policy-generate-cm
spec :
rules:
- name: pCM1
match:
resources:
name: "config-game"
kinds :
- ConfigMap
mutate:
patchesJson6902: |-
- path: "/data"
op: add
value: {"ship.properties": "{\"type\": \"starship\", \"owner\": \"utany.corp\"}"}
````
Here is an example of a patch that removes a label from a Secret:
````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: policy-remove-label
spec:
rules:
- name: "Remove unwanted label"
match:
resources:
kinds:
- Secret
mutate:
patchesJson6902: |-
- path: "/metadata/labels/purpose"
op: remove
````
This policy adds elements to a list:
````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: insert-container
spec:
rules:
- name: insert-container
match:
resources:
kinds:
- Pod
mutate:
patchesJson6902: |-
- op: add
path: /spec/containers/1
value: {"name":"busyboxx","image":"busybox:latest"}
- op: add
path: /spec/containers/0/command
value:
- ls
````
Note that if a **remove** operation cannot be applied, the operation will be skipped with no error.
## Strategic Merge Patch
A `patchStrategicMerge` patch is a [strategic-merge](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md)-style patch. The `patchStrategicMerge` overlay resolves to a partial resource definition.
This policy sets the imagePullPolicy and adds a command to the container `nginx`:
````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: strategic-merge-patch
spec:
rules:
- name: set-image-pull-policy-add-command
match:
resources:
kinds:
- Pod
mutate:
patchStrategicMerge:
metadata:
labels:
name: "{{request.object.metadata.name}}"
spec:
containers:
- name: "nginx"
image: "nginx:latest"
imagePullPolicy: "Never"
command:
- ls
````
## Mutate Overlay
A mutation overlay describes the desired form of the resource. The existing resource values are replaced with the values specified in the overlay. If a value is specified in the overlay but not present in the target resource, then it will be added to the resource.
The overlay cannot be used to delete values in a resource: use **patches** for this purpose.
The following mutation overlay will add (or replace) the memory request and limit to 10Gi for every Pod with a label `memory: high`:
````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: policy-change-memory-limit
spec:
rules:
- name: "Set hard memory limit to 2Gi"
match:
resources:
kinds:
- Pod
selector:
matchLabels:
memory: high
mutate:
overlay:
spec:
containers:
# the wildcard * will match all containers in the list
- (name): "*"
resources:
requests:
memory: "10Gi"
limits:
memory: "10Gi"
````
### Working with lists
Applying overlays to a list type is fairly straightforward: new items will be added to the list, unless they already exist. For example, the next overlay will add the IP "192.168.42.172" to all addresses in all Endpoints:
````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: policy-endpoints
spec:
rules:
- name: "Add IP to subsets"
match:
resources:
kinds:
- Endpoints
mutate:
overlay:
subsets:
- addresses:
- ip: 192.168.42.172
````
### Conditional logic using anchors
An **anchor** field, marked by parentheses and an optional preceding character, allows conditional processing for mutations.
The mutate overlay rules support two types of anchors:
| Anchor | Tag | Behavior |
|--------------------|----- |----------------------------------------------------- |
| Conditional | () | Use the tag and value as an "if" condition |
| Add if not present | +() | Add the tag value, if the tag is not already present |
The **anchors** values support **wildcards**:
1. `*` - matches zero or more alphanumeric characters
2. `?` - matches a single alphanumeric character
#### Conditional anchor
A `conditional anchor` evaluates to `true` if the anchor tag exists and the value matches the specified value. Processing stops if the tag does not exist or the value does not match. Once processing stops, any child elements and any remaining siblings in a list will not be processed.
For example, this overlay will add or replace the value `6443` for the `port` field, for all ports with a name value that starts with "secure":
````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: policy-set-port
spec:
rules:
- name: "Set port"
match:
resources:
kinds :
- Endpoints
mutate:
overlay:
subsets:
- ports:
- (name): "secure*"
port: 6443
````
If the anchor tag value is an object or array, the entire object or array must match. In other words, the entire object or array becomes part of the "if" clause. Nested `conditional anchor` tags are not supported.
### Add if not present anchor
A variation of an anchor is to add a field value if it is not already defined. This is done by using the `add anchor` (short for `add if not present anchor`) with the notation `+(...)` for the tag.
An `add anchor` is processed as part of applying the mutation. Typically, every non-anchor tag-value is applied as part of the mutation. If the `add anchor` is set on a tag, the tag and value are only applied if they do not exist in the resource.
For example, this policy matches and mutates pods with `emptyDir` volume, to add the `safe-to-evict` annotation if it is not specified.
````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: add-safe-to-evict
annotations:
pod-policies.kyverno.io/autogen-controllers: none
spec:
rules:
- name: "annotate-empty-dir"
match:
resources:
kinds:
- Pod
mutate:
overlay:
metadata:
annotations:
+(cluster-autoscaler.kubernetes.io/safe-to-evict): true
spec:
volumes:
- (emptyDir): {}
````
#### Anchor processing flow
The anchor processing behavior for mutate conditions is as follows:
1. First, all conditional anchors are processed. Processing stops when the first conditional anchor returns `false`. Mutation proceeds only if all conditional anchors return `true`. Note that for `conditional anchor` tags with complex (object or array) values, the entire value (child) object is treated as part of the condition, as explained above.
2. Next, all tag-values without anchors and all `add anchor` tags are processed to apply the mutation.
## Additional Details
Additional details on mutation overlay behaviors are available on the wiki: [Mutation Overlay](https://github.com/kyverno/kyverno/wiki/Mutation-Overlay)
---
<small>*Read Next >> [Generate Resources](/documentation/writing-policies-generate.md)*</small>

View file

@ -1,48 +0,0 @@
<small>*[documentation](/README.md#documentation) / [Writing Policies](/documentation/writing-policies.md) / Preconditions*</small>
# Preconditions
Preconditions allow controlling policy rule execution based on variable values.
While `match` & `exclude` conditions allow filtering requests based on resource and user information, `preconditions` can be used to define custom filters for more granular control.
The following operators are currently supported for precondition evaluation:
- Equal
- Equals
- NotEqual
- NotEquals
- In
- NotIn
## Example
```yaml
- name: generate-owner-role
match:
resources:
kinds:
- Namespace
preconditions:
- key: "{{serviceAccountName}}"
operator: NotEqual
value: ""
```
In the above example, the rule is only applied to requests from service accounts, i.e. when `{{serviceAccountName}}` is not empty.
```yaml
- name: generate-default-build-role
match:
resources:
kinds:
- Namespace
preconditions:
- key: "{{serviceAccountName}}"
operator: In
value: ["build-default", "build-base"]
```
In the above example, the rule is only applied to requests from service accounts named `build-default` or `build-base`.
<small>*Read Next >> [Auto-Generation for Pod Controllers](/documentation/writing-policies-autogen.md)*</small>

View file

@ -1,299 +0,0 @@
<small>*[documentation](/README.md#documentation) / [Writing Policies](/documentation/writing-policies.md) / Validate Resources*</small>
# Validating Resources and Requests
A validation rule can be used to validate resources or to deny API requests based on other information.
To validate resource data, define a [pattern](#patterns) in the validation rule. To deny certain API requests, define a [deny](#deny-rules) element in the validation rule along with a set of conditions that control when to allow or deny the request.
This policy requires that all pods have CPU and memory resource requests and limits:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: check-cpu-memory
spec:
# `enforce` blocks the request. `audit` reports violations
validationFailureAction: enforce
rules:
- name: check-pod-resources
match:
resources:
kinds:
- Pod
validate:
message: "CPU and memory resource requests and limits are required"
pattern:
spec:
containers:
# 'name: *' selects all containers in the pod
- name: "*"
resources:
limits:
# '?' requires 1 alphanumeric character and '*' means that
# there can be 0 or more characters. Using them together
# e.g. '?*' requires at least one character.
memory: "?*"
cpu: "?*"
requests:
memory: "?*"
cpu: "?*"
```
This policy prevents users from changing default network policies:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: deny-netpol-changes
spec:
validationFailureAction: enforce
background: false
rules:
- name: check-netpol-updates
match:
resources:
kinds:
- NetworkPolicy
name:
- "*-default"
exclude:
clusterRoles:
- cluster-admin
validate:
message: "Changing default network policies is not allowed"
deny: {}
```
## Patterns
A validation rule that checks resource data is defined as an overlay pattern that provides the desired configuration. Resource configurations must match fields and expressions defined in the pattern to pass the validation rule. The following rules are followed when processing the overlay pattern:
1. Validation will fail if a field is defined in the pattern and if the field does not exist in the configuration.
2. Undefined fields are treated as wildcards.
3. A validation pattern field with the wildcard value '*' will match zero or more alphanumeric characters. Empty values are matched. Missing fields are not matched.
4. A validation pattern field with the wildcard value '?' will match any single alphanumeric character. Empty or missing fields are not matched.
5. A validation pattern field with the wildcard value '?*' will match any alphanumeric characters and requires the field to be present with non-empty values.
6. A validation pattern field with the value `null` or "" (empty string) requires that the field not be defined or has no value.
7. The validation of siblings is performed only when one of the field values matches the value defined in the pattern. You can use the parenthesis operator to explicitly specify a field value that must be matched. This allows writing rules like 'if fieldA equals X, then fieldB must equal Y'.
8. Validation of child values is only performed if the parent matches the pattern.
### Wildcards
1. `*` - matches zero or more alphanumeric characters
2. `?` - matches a single alphanumeric character
### Operators
| Operator | Meaning |
|------------|---------------------------|
| `>` | greater than |
| `<` | less than |
| `>=` | greater than or equals to |
| `<=` | less than or equals to |
| `!` | not equals |
| \| | logical or |
There is no operator for `equals` as providing a field value in the pattern requires equality to the value.
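For example, operators can be combined with field values in a pattern to enforce numeric bounds (a sketch; the policy name and rule content are illustrative):

````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-replica-count
spec:
  validationFailureAction: enforce
  rules:
  - name: check-min-replicas
    match:
      resources:
        kinds:
        - Deployment
    validate:
      message: "Deployments must run at least 2 replicas"
      pattern:
        spec:
          # '>=' requires the value to be greater than or equal to 2
          replicas: ">=2"
````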
### Anchors
Anchors allow conditional processing (i.e. "if-then-else") and other logical checks in validation patterns. The following types of anchors are supported:
| Anchor | Tag | Behavior |
|------------- |----- |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Conditional | () | If tag with the given value (including child elements) is specified, then peer elements will be processed. <br/>e.g. If image has tag latest then imagePullPolicy cannot be IfNotPresent. <br/>&nbsp;&nbsp;&nbsp;&nbsp;(image): "*:latest" <br>&nbsp;&nbsp;&nbsp;&nbsp;imagePullPolicy: "!IfNotPresent"<br/> |
| Equality | =() | If tag is specified, then processing continues. For tags with scalar values, the value must match. For tags with child elements, the child element is further evaluated as a validation pattern. <br/>e.g. If hostPath is defined then the path cannot be /var/lib<br/>&nbsp;&nbsp;&nbsp;&nbsp;=(hostPath):<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;path: "!/var/lib"<br/> |
| Existence | ^() | Works on the list/array type only. If at least one element in the list satisfies the pattern. In contrast, a conditional anchor would validate that all elements in the list match the pattern. <br/>e.g. At least one container with image nginx:latest must exist. <br/>&nbsp;&nbsp;&nbsp;&nbsp;^(containers):<br/>&nbsp;&nbsp;&nbsp;&nbsp;- image: nginx:latest<br/> |
| Negation | X() | The tag cannot be specified. The value of the tag is not evaluated. <br/>e.g. The hostPath tag cannot be defined.<br/>&nbsp;&nbsp;&nbsp;&nbsp;X(hostPath):<br/> |
### Anchors and child elements
Child elements are handled differently for conditional and equality anchors.
For conditional anchors, the child element is considered to be part of the "if" clause, and all peer elements are considered to be part of the "then" clause. For example, consider the pattern:
````yaml
pattern:
metadata:
labels:
allow-docker: "true"
spec:
(volumes):
- (hostPath):
path: "/var/run/docker.sock"
````
This reads as "If a hostPath volume exists and the path equals /var/run/docker.sock, then a label "allow-docker" must be specified with a value of true."
For equality anchors, a child element is considered to be part of the "then" clause. Consider this pattern:
````yaml
pattern:
spec:
=(volumes):
=(hostPath):
path: "!/var/run/docker.sock"
````
This is read as "If a hostPath volume exists, then the path must not be equal to /var/run/docker.sock".
### Validation Pattern Examples
The following rule prevents the creation of Deployment, StatefulSet, and DaemonSet resources without the label `app` in the selector:
````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: validation-example
spec:
rules:
- name: check-label
match:
resources:
# Kind specifies one or more resource types to match
kinds:
- Deployment
- StatefulSet
- DaemonSet
# Name is optional and can use wildcards
name: "*"
# Selector is optional
selector:
validate:
# Message is optional, used to report custom message if the rule condition fails
message: "The label app is required"
pattern:
spec:
template:
metadata:
labels:
app: "?*"
````
#### Existence anchor: at least one
A variation of an anchor is to check that, in a list of elements, at least one element exists that matches the pattern. This is done by using the `^(...)` notation for the field.
For example, this pattern will check that at least one container has memory requests and limits defined and that the request is less than the limit:
````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: validation-example2
spec:
rules:
- name: check-memory_requests_link_in_yaml_relative
match:
resources:
# Kind specifies one or more resource types to match
kinds:
- Deployment
# Name is optional and can use wildcards
name: "*"
validate:
pattern:
spec:
template:
spec:
^(containers):
- resources:
requests:
memory: "$(<=./../../limits/memory)"
limits:
memory: "2048Mi"
````
#### Logical OR across validation patterns
In some cases content can be defined at a different level. For example, a security context can be defined at the Pod or Container level. The validation rule should pass if either one of the conditions is met.
The `anyPattern` tag can be used to check if any one of the patterns in the list match.
<small>*Note: either one of `pattern` or `anyPattern` is allowed in a rule, they both can't be declared in the same rule.*</small>
````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: check-container-security-context
spec:
rules:
- name: check-root-user
exclude:
resources:
namespaces:
- kube-system
match:
resources:
kinds:
- Pod
validate:
message: "Root user is not allowed. Set runAsNonRoot to true."
anyPattern:
- spec:
securityContext:
runAsNonRoot: true
- spec:
containers:
- name: "*"
securityContext:
runAsNonRoot: true
````
Additional examples are available in [samples](/samples/README.md)
## Validation Failure Action
The `validationFailureAction` attribute controls processing behavior when the resource is not compliant with the policy. If the value is set to `enforce`, resource creation or updates are blocked when the resource does not comply; when the value is set to `audit`, a policy violation is reported but the resource creation or update is allowed.
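As a sketch (the policy name and rule content are illustrative), the snippet below reports violations without blocking requests; changing `audit` to `enforce` would block non-compliant resources instead:

````yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-owner-label
spec:
  # report a policy violation, but allow the request
  validationFailureAction: audit
  rules:
  - name: check-owner-label
    match:
      resources:
        kinds:
        - Namespace
    validate:
      message: "the owner label is required"
      pattern:
        metadata:
          labels:
            owner: "?*"
````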
## Deny Rules
In addition to applying patterns to check resources, a validate rule can `deny` a request based on a set of conditions. This is useful for applying fine grained access controls that cannot be performed using Kubernetes RBAC.
For example, the policy below denies delete requests for objects with the label `app.kubernetes.io/managed-by: kyverno`, for all users who do not have the `cluster-admin` role.
As the example shows, you can use `match` and `exclude` to select when the rule should be applied and then use additional conditions in the `deny` declaration to apply fine-grained controls.
Note that the `validationFailureAction` must be set to `enforce` to block the request.
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: multi-tenancy
spec:
validationFailureAction: enforce
background: false
rules:
- name: block-deletes-for-kyverno-resources
match:
resources:
selector:
matchLabels:
app.kubernetes.io/managed-by: kyverno
exclude:
clusterRoles:
- cluster-admin
validate:
message: "Deleting {{request.oldObject.kind}}/{{request.oldObject.metadata.name}} is not allowed"
deny:
conditions:
- key: "{{request.operation}}"
operator: Equals
value: "DELETE"
```
Learn more about using [variables](/documentation/writing-policies-variables.md) and [conditions](/documentation/writing-policies-preconditions.md) in upcoming sections.
---
<small>*Read Next >> [Mutate Resources](/documentation/writing-policies-mutate.md)*</small>
@@ -1,35 +0,0 @@
<small>*[documentation](/README.md#documentation) / [Writing Policies](/documentation/writing-policies.md) / Variables*</small>
# Variables
Sometimes it is necessary to vary the contents of a mutated or generated resource based on request data. To achieve this, variables can be used to reference attributes that are loaded in the rule processing context using [JMESPath](http://jmespath.org/) notation.
The policy engine will substitute any values with the format `{{<JMESPATH>}}` with the variable value before processing the rule.
The following data is available for use in context:
- Resource: `{{request.object}}`
- UserInfo: `{{request.userInfo}}`
## Pre-defined Variables
Kyverno automatically creates a few useful variables:
- `serviceAccountName` : the "userName" portion of a service account, i.e. without the prefix `system:serviceaccount:<namespace>:`. For example, when processing a request from `system:serviceaccount:nirmata:user1` Kyverno will store the value `user1` in the variable `serviceAccountName`.
- `serviceAccountNamespace` : the "namespace" part of the serviceAccount. For example, when processing a request from `system:serviceaccount:nirmata:user1` Kyverno will store `nirmata` in the variable `serviceAccountNamespace`.
## Examples
1. Reference a resource name (type string)
`{{request.object.metadata.name}}`
2. Build name from multiple variables (type string)
`"ns-owner-{{request.object.metadata.namespace}}-{{request.userInfo.username}}-binding"`
3. Reference the metadata (type object)
`{{request.object.metadata}}`
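To illustrate, the sketch below (the policy and annotation names are illustrative, and it assumes the `overlay` style of mutation used in these docs) uses a variable inside a mutate rule to record the requesting user on matching resources:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-created-by    # illustrative name
spec:
  rules:
  - name: annotate-creator
    match:
      resources:
        kinds:
        - ConfigMap
    mutate:
      overlay:
        metadata:
          annotations:
            # "{{request.userInfo.username}}" is substituted before the rule is processed
            kyverno.io/created-by: "{{request.userInfo.username}}"
```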
<small>*Read Next >> [Preconditions](/documentation/writing-policies-preconditions.md)*</small>
@@ -1,16 +0,0 @@
<small>*[documentation](/README.md#documentation) / Writing Policies*</small>
# Writing Policies
The following picture shows the structure of a Kyverno Policy:
![KyvernoPolicy](images/Kyverno-Policy-Structure.png)
Each Kyverno policy contains one or more rules. Each rule has a `match` clause, an optional `exclude` clause, and exactly one `mutate`, `validate`, or `generate` clause.
These actions are applied to matching resources in the order described: mutation, validation, and then generation.
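The structure can be sketched as a skeleton policy (all names and values below are illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: example-policy
spec:
  rules:
  - name: example-rule
    match:              # selects resources the rule applies to
      resources:
        kinds:
        - Pod
    exclude:            # optional: resources to skip
      resources:
        namespaces:
        - kube-system
    validate:           # exactly one of mutate | validate | generate
      message: "example validation message"
      pattern:
        metadata:
          labels:
            app.kubernetes.io/name: "?*"
```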
---
<small>*Read Next >> [Selecting Resources](/documentation/writing-policies-match-exclude.md)*</small>
```diff
@@ -156,39 +156,48 @@ type Spec struct {
 	Background *bool `json:"background,omitempty" yaml:"background,omitempty"`
 }
 
-// Rule is set of mutation, validation and generation actions
+// Rule contains a mutation, validation, or generation action
 // for the single resource description
 type Rule struct {
-	// Specifies rule name
+	// A unique label for the rule
 	Name string `json:"name,omitempty" yaml:"name,omitempty"`
-	// Specifies resources for which the rule has to be applied.
-	// If it's defined, "kind" inside MatchResources block is required.
+
+	// Defines variables that can be used during rule execution.
+	// +optional
+	Context []ContextEntry `json:"context,omitempty" yaml:"context,omitempty"`
+
+	// Selects resources for which the policy rule should be applied.
+	// If it's defined, "kinds" inside MatchResources block is required.
 	// +optional
 	MatchResources MatchResources `json:"match,omitempty" yaml:"match,omitempty"`
-	// Specifies resources for which rule can be excluded
+
+	// Selects resources for which the policy rule should not be applied.
 	// +optional
 	ExcludeResources ExcludeResources `json:"exclude,omitempty" yaml:"exclude,omitempty"`
-	// Allows controlling policy rule execution
+
+	// Allows condition-based control of the policy rule execution.
 	// +optional
 	Conditions []Condition `json:"preconditions,omitempty" yaml:"preconditions,omitempty"`
-	// Specifies patterns to mutate resources
+
+	// Modifies matching resources.
 	// +optional
 	Mutation Mutation `json:"mutate,omitempty" yaml:"mutate,omitempty"`
-	// Specifies patterns to validate resources
+
+	// Checks matching resources.
 	// +optional
 	Validation Validation `json:"validate,omitempty" yaml:"validate,omitempty"`
-	// Specifies patterns to create additional resources
+
+	// Generates new resources.
 	// +optional
 	Generation Generation `json:"generate,omitempty" yaml:"generate,omitempty"`
-	// Context
-	Context []ContextEntry `json:"context,omitempty" yaml:"context,omitempty"`
 }
 
 type ContextEntry struct {
 	Name string `json:"name,omitempty" yaml:"name,omitempty"`
-	ConfigMap ConfigMapReference `json:"configMap,omitempty" yaml:"configMap,omitempty"`
+	Path string `json:"path,omitempty" yaml:"path,omitempty"`
+	ConfigMap *ConfigMapReference `json:"configMap,omitempty" yaml:"configMap,omitempty"`
 }
 
 type ConfigMapReference struct {
 	Name string `json:"name,omitempty" yaml:"name,omitempty"`
 	Namespace string `json:"namespace,omitempty" yaml:"namespace,omitempty"`
```
```diff
@@ -48,6 +48,7 @@ func filterRule(rule kyverno.Rule, resource unstructured.Unstructured, admission
 			},
 		}
 	}
+
 	// add configmap json data to context
 	if err := AddResourceToContext(log, rule.Context, resCache, jsonContext); err != nil {
 		log.Info("cannot add configmaps to context", "reason", err.Error())
```
```diff
@@ -28,6 +28,9 @@ func Mutate(policyContext PolicyContext) (resp response.EngineResponse) {
 	patchedResource := policyContext.NewResource
 	ctx := policyContext.Context
+
+	result := policyContext.Client.GetDiscoveryCache().RESTClient().Get().Do()
+	result.
 	resCache := policyContext.ResourceCache
 	jsonContext := policyContext.JSONContext
 	logger := log.Log.WithName("EngineMutate").WithValues("policy", policy.Name, "kind", patchedResource.GetKind(),
@@ -62,6 +65,9 @@ func Mutate(policyContext PolicyContext) (resp response.EngineResponse) {
 			logger.V(3).Info("resource not matched", "reason", err.Error())
 			continue
 		}
+
 		// add configmap json data to context
 		if err := AddResourceToContext(logger, rule.Context, resCache, jsonContext); err != nil {
 			logger.V(4).Info("cannot add configmaps to context", "reason", err.Error())
```
```diff
@@ -286,35 +286,40 @@ func SkipPolicyApplication(policy kyverno.ClusterPolicy, resource unstructured.U
 }
 
 // AddResourceToContext - Add the Configmap JSON to Context.
-// it will read configmaps (can be extended to get other type of resource like secrets, namespace etc) from the informer cache
-// and add the configmap data to context
-func AddResourceToContext(logger logr.Logger, contexts []kyverno.ContextEntry, resCache resourcecache.ResourceCacheIface, ctx *context.Context) error {
-	if len(contexts) == 0 {
+// it will read configmaps (can be extended to get other type of resource like secrets, namespace etc)
+// from the informer cache and add the configmap data to context
+func AddResourceToContext(logger logr.Logger, contextEntries []kyverno.ContextEntry, resCache resourcecache.ResourceCacheIface, ctx *context.Context) error {
+	if len(contextEntries) == 0 {
 		return nil
 	}
+
 	// get GVR Cache for "configmaps"
 	// can get cache for other resources if the informers are enabled in resource cache
 	gvrC := resCache.GetGVRCache("configmaps")
 	if gvrC != nil {
 		lister := gvrC.GetLister()
-		for _, context := range contexts {
+		for _, context := range contextEntries {
 			contextData := make(map[string]interface{})
 			name := context.ConfigMap.Name
 			namespace := context.ConfigMap.Namespace
 			if namespace == "" {
 				namespace = "default"
 			}
+
 			key := fmt.Sprintf("%s/%s", namespace, name)
 			obj, err := lister.Get(key)
 			if err != nil {
 				logger.Error(err, fmt.Sprintf("failed to read configmap %s/%s from cache", namespace, name))
 				continue
 			}
+
 			unstructuredObj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(obj)
 			if err != nil {
 				logger.Error(err, "failed to convert context runtime object to unstructured")
 				continue
 			}
+
 			// extract configmap data
 			contextData["data"] = unstructuredObj["data"]
 			contextData["metadata"] = unstructuredObj["metadata"]
@@ -325,6 +330,7 @@ func AddResourceToContext(logger logr.Logger, contexts []kyverno.ContextEntry, r
 				logger.Error(err, "failed to unmarshal context data")
 				continue
 			}
+
 			// add data to context
 			err = ctx.AddJSON(jdata)
 			if err != nil {
```
```diff
@@ -32,11 +32,11 @@ func Validate(policyRaw []byte, client *dclient.Client, mock bool, openAPIContro
 	var p kyverno.ClusterPolicy
 	err = json.Unmarshal(policyRaw, &p)
 	if err != nil {
-		return fmt.Errorf("failed to unmarshal policy admission request err %v", err)
+		return fmt.Errorf("failed to unmarshal policy: %v", err)
 	}
 
 	if common.PolicyHasVariables(p) && common.PolicyHasNonAllowedVariables(p) {
-		return fmt.Errorf("policy contains non allowed variables")
+		return fmt.Errorf("policy contains reserved variables (serviceAccountName, serviceAccountNamespace)")
 	}
 
 	if path, err := validateUniqueRuleName(p); err != nil {
@@ -49,16 +49,25 @@ func Validate(policyRaw []byte, client *dclient.Client, mock bool, openAPIContro
 	}
 
 	for i, rule := range p.Spec.Rules {
 		// validate resource description
 		if path, err := validateResources(rule); err != nil {
 			return fmt.Errorf("path: spec.rules[%d].%s: %v", i, path, err)
 		}
 
 		// validate rule types
 		// only one type of rule is allowed per rule
 		if err := validateRuleType(rule); err != nil {
 			// as there are more than 1 operation in rule, not need to evaluate it further
 			return fmt.Errorf("path: spec.rules[%d]: %v", i, err)
 		}
 
+		if err := validateRuleContext(rule); err != nil {
+			return fmt.Errorf("path: spec.rules[%d]: %v", i, err)
+		}
+
 		// validate Cluster Resources in namespaced cluster policy
 		// For namespaced cluster policy, ClusterResource type field and values are not allowed in match and exclude
 		if !mock && p.ObjectMeta.Namespace != "" {
@@ -86,7 +95,7 @@ func Validate(policyRaw []byte, client *dclient.Client, mock bool, openAPIContro
 		return checkClusterResourceInMatchAndExclude(rule, clusterResources)
 	}
 
-	if doesMatchAndExcludeConflict(rule) {
+	if doMatchAndExcludeConflict(rule) {
 		return fmt.Errorf("path: spec.rules[%v]: rule is matching an empty set", rule.Name)
 	}
 
@@ -147,16 +156,17 @@ func checkInvalidFields(policyRaw []byte) error {
 				break
 			}
 		}
 
 		if !ok {
-			return fmt.Errorf("unknown field \"%s\" in policy admission request", requestField)
+			return fmt.Errorf("unknown field \"%s\" in policy", requestField)
 		}
 	}
 
 	return nil
 }
 
-// doesMatchAndExcludeConflict checks if the resultant
+// doMatchAndExcludeConflict checks if the resultant
 // of match and exclude block is not an empty set
-func doesMatchAndExcludeConflict(rule kyverno.Rule) bool {
+func doMatchAndExcludeConflict(rule kyverno.Rule) bool {
 	if reflect.DeepEqual(rule.ExcludeResources, kyverno.ExcludeResources{}) {
 		return false
@@ -439,6 +449,34 @@ func validateRuleType(r kyverno.Rule) error {
 	return nil
 }
 
+func validateRuleContext(rule kyverno.Rule) (error) {
+	if rule.Context == nil || len(rule.Context) == 0 {
+		return nil
+	}
+
+	for _, entry := range rule.Context {
+		if entry.Name == "" {
+			return fmt.Errorf("a name is required for context entries")
+		}
+
+		if entry.Path == "" && entry.ConfigMap == nil {
+			return fmt.Errorf("path or configMap required for context entries")
+		}
+
+		if entry.ConfigMap != nil {
+			if entry.ConfigMap.Name == "" {
+				return fmt.Errorf("a name is required for configMap context entry")
+			}
+
+			if entry.ConfigMap.Namespace == "" {
+				return fmt.Errorf("a namespace is required for configMap context entry")
+			}
+		}
+	}
+
+	return nil
+}
+
 // validateResourceDescription checks if all necesarry fields are present and have values. Also checks a Selector.
 // field type is checked through openapi
 // Returns error if
```