
34: Updated documentation

Denis Belyshev 2019-05-22 18:14:10 +03:00
parent 16c14b30d1
commit 6251e971cc
6 changed files with 91 additions and 35 deletions

View file

@@ -10,7 +10,7 @@ Kyverno allows cluster adminstrators to manage environment specific configuratio
Kyverno policies are Kubernetes resources that can be written in YAML or JSON. Kyverno policies can validate, mutate, and generate any Kubernetes resources.
Kyverno runs as a [dynamic admission controller](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) in a Kubernetes cluster. Kyverno receives validating and mutating admission webhook HTTP callbacks from the kube-apiserver and applies matching polcies to return results that enforce admission policies or reject requests.
Kyverno runs as a [dynamic admission controller](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) in a Kubernetes cluster. Kyverno receives validating and mutating admission webhook HTTP callbacks from the kube-apiserver and applies matching policies to return results that enforce admission policies or reject requests.
Kyverno policies can match resources using the resource kind, name, and label selectors. Wildcards are supported in names.
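For instance, a rule's resource description can combine these matchers. The sketch below follows the `kinds` and `selector` fields used in the examples in this document and assumes a `name` field for wildcard name matching; the name pattern and label are illustrative:
````yaml
# Sketch of a rule's resource matcher combining kind, a name wildcard,
# and a label selector. The name pattern and label values are illustrative.
rules:
  - name: match-nginx-pods
    resource:
      kinds:
        - Pod
      name: "nginx-*"        # wildcard match on the resource name
      selector:
        matchLabels:
          app: nginx
````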
@@ -25,7 +25,7 @@ Policy enforcement is captured using Kubernetes events. Kyverno also reports pol
This policy requires that all pods have CPU and memory resource requests and limits:
````yaml
apiVersion: policy.nirmata.io/v1alpha1
apiVersion: kyverno.io/v1alpha1
kind: Policy
metadata:
name: check-cpu-memory
@@ -33,7 +33,8 @@ spec:
rules:
- name: check-pod-resources
resource:
kind: Pod
kinds:
- Pod
validate:
message: "CPU and memory resource requests and limits are required"
pattern:
@@ -56,7 +57,7 @@ spec:
This policy sets the imagePullPolicy to Always if the image tag is latest:
````yaml
apiVersion: policy.nirmata.io/v1alpha1
apiVersion: kyverno.io/v1alpha1
kind: Policy
metadata:
name: set-image-pull-policy
@@ -64,13 +65,14 @@ spec:
rules:
- name: set-image-pull-policy
resource:
kind: Pod
kinds:
- Pod
mutate:
overlay:
spec:
containers:
# match images which end with :latest
- image: "(*:latest)"
- (image): "*:latest"
# set the imagePullPolicy to "Always"
imagePullPolicy: "Always"
````
@@ -80,7 +82,7 @@ spec:
This policy sets the Zookeeper and Kafka connection strings for all namespaces with a label key 'kafka'.
````yaml
apiVersion: policy.nirmata.io/v1alpha1
apiVersion: kyverno.io/v1alpha1
kind: Policy
metadata:
name: "zk-kafka-address"
@@ -88,7 +90,8 @@ spec:
rules:
- name: "zk-kafka-address"
resource:
kind : Namespace
kinds:
- Namespace
selector:
matchExpressions:
- {key: kafka, operator: Exists}

View file

@@ -2,15 +2,26 @@
# Testing Policies
The resource definitions used for testing are located in the [/test](/test) directory. Each test contains a pair of files: one is the resource definition, and the other is the Kyverno policy for that definition.
## Test using kubectl
To do this, first [install Kyverno in the cluster](/documentation/installation.md).
For example, to test the simplest Kyverno policy for a ConfigMap, create the policy and then the resource itself via kubectl:
````bash
cd test/ConfigMap
kubectl create -f policy-CM.yaml
kubectl create -f CM.yaml
````
Then compare the original resource definition in CM.yaml with the actual one:
````bash
kubectl get -f CM.yaml -o yaml
````
## Test using the Kyverno CLI
*This feature will be available soon*
## Autotest
---
*This will be available after the Kyverno CLI is implemented*

View file

@@ -2,8 +2,46 @@
# Generate Configurations
The ```generate``` feature can be applied to newly created namespaces to create new resources in them. This feature is useful when every namespace in a cluster must contain some basic required resources. The feature is available for policy rules in which the resource kind is Namespace.
## Example
````yaml
apiVersion : kyverno.io/v1alpha1
kind : Policy
metadata :
name : basic-policy
spec :
rules:
- name: "Basic confog generator for all namespaces"
resource:
kinds:
- Namespace
generate:
# For now, the following kinds are supported:
# ConfigMap
# Secret
- kind: ConfigMap
name: default-config
copyFrom:
namespace: default
name: config-template
data:
DB_ENDPOINT: mongodb://mydomain.ua/db_stage:27017
labels:
purpose: mongo
- kind: Secret
name: mongo-creds
data:
DB_USER: YWJyYWthZGFicmE=
DB_PASSWORD: YXBwc3dvcmQ=
labels:
purpose: mongo
````
In this example, when the policy is applied, any new namespace will receive 2 new resources after its creation:
* A ConfigMap copied from default/config-template, with the value DB_ENDPOINT added.
* A Secret with the values DB_USER and DB_PASSWORD.
Both resources will carry the label ```purpose: mongo```.
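For illustration, the ConfigMap generated in a new namespace would look roughly like the sketch below, assuming the keys from default/config-template are copied as-is and DB_ENDPOINT is merged into its data:
````yaml
# Rough sketch of the ConfigMap Kyverno would generate in a new namespace.
# The copied keys and merge behaviour are assumptions for illustration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: default-config
  labels:
    purpose: mongo
data:
  DB_ENDPOINT: mongodb://mydomain.ua/db_stage:27017
  # ...plus any keys copied from default/config-template
````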
---
<small>*Read Next >> [Testing Policies](/documentation/testing-policies.md)*</small>

View file

@@ -2,7 +2,9 @@
# Mutate Configurations
The ```mutate``` rule contains actions that are applied to the resource before it is created. Mutations can be made using patches or an overlay: ```patches```, written in the JSONPatch format, make targeted changes to the resource, while an ```overlay``` describes a pattern that the resource is modified to match.
Resource mutation occurs before validation, so the validation rules should not contradict the changes made in the mutation section.
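For illustration, a minimal sketch of a mutation rule that uses ```patches``` is shown below. It follows the rule structure from the README examples and assumes the patch entries use the standard JSONPatch fields (`op`, `path`, `value`); the policy name, target kind, path, and value are illustrative:
````yaml
# Sketch of a mutation rule using JSONPatch-style patches.
# Policy name, target kind, path, and value are illustrative.
apiVersion: kyverno.io/v1alpha1
kind: Policy
metadata:
  name: example-mutation
spec:
  rules:
    - name: add-team-label
      resource:
        kinds:
          - Deployment
      mutate:
        patches:
          - path: "/metadata/labels/team"
            op: add
            value: "platform"
````
An ```overlay``` can express the same kind of change declaratively, as in the imagePullPolicy example in the README.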
---

View file

@@ -37,8 +37,7 @@ spec :
...
````
Each rule can validate, mutate, or generate configurations of matching resources. A rule definition can contain only a single **validate**, **mutate**, or **generate** child node.
Each rule can validate, mutate, or generate configurations of matching resources. A rule definition can contain only a single **mutate**, **validate**, or **generate** child node. These actions are applied to the resource in the following order: mutation, validation, and then generation.
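For example, in a policy that contains both a mutation rule and a validation rule, the mutation is applied first, so the validation checks the already-mutated resource. The sketch below reuses the overlay and pattern shapes from the README examples; the policy name, rule names, and pattern values are illustrative:
````yaml
# Schematic policy with one mutate rule and one validate rule.
# Mutation runs before validation, so the validate rule below
# sees the imagePullPolicy set by the overlay.
apiVersion: kyverno.io/v1alpha1
kind: Policy
metadata:
  name: rule-order-example
spec:
  rules:
    - name: set-image-pull-policy      # applied first (mutation)
      resource:
        kinds:
          - Pod
      mutate:
        overlay:
          spec:
            containers:
              # match images which end with :latest
              - (image): "*:latest"
                imagePullPolicy: "Always"
    - name: check-image-pull-policy    # applied second (validation)
      resource:
        kinds:
          - Pod
      validate:
        message: "imagePullPolicy is required for every container"
        pattern:
          spec:
            containers:
              - imagePullPolicy: "*"
````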
---
<small>*Read Next >> [Validate](/documentation/writing-policies-validate.md)*</small>

View file

@@ -1,10 +1,12 @@
# Examples
# Test examples
These are example policies and resources that you can experiment with to see Kyverno in action. There is a definition for each supported resource type and an example policy for the corresponding resource.
## How to play
First of all, **build and install the policy controller**: see README file in the project's root.
For now, testing is possible only via ```kubectl``` with Kyverno installed in the cluster, so [build and install the policy controller](/documentation/installation.md) first.
Each folder contains a pair of files: one is the resource definition, and the other is the policy for that resource. Let's look at an example of endpoints mutation. The endpoints are defined in the file `test/Endpoints/endpoints.yaml`:
```apiVersion: v1
````yaml
apiVersion: v1
kind: Endpoints
metadata:
name: test-endpoint
@@ -17,25 +19,25 @@ subsets:
- name: secure-connection
port: 443
protocol: TCP
```
````
Create this resource:
```
> kubectl create -f examples/Endpoints/endpoints.yaml
````bash
> kubectl create -f test/Endpoints/endpoints.yaml
endpoints/test-endpoint created
> kubectl get -f examples/Endpoints/endpoints.yaml
> kubectl get -f test/Endpoints/endpoints.yaml
NAME ENDPOINTS AGE
test-endpoint 192.168.10.171:443 6s
```
````
We just created an endpoints resource and made sure that it was created without changes. Let's remove it now and try to create it again, but with an active policy for endpoints resources.
```
> kubectl delete -f test/endpoints.yaml
````bash
> kubectl delete -f test/Endpoints/endpoints.yaml
endpoints "test-endpoint" deleted
```
We have this a policy for enpoints (`examples/Endpoints/policy-endpoint.yaml`):
````
We have this policy for endpoints ([policy-endpoint.yaml](/test/Endpoints/policy-endpoint.yaml)):
```
apiVersion : kubepolicy.nirmata.io/v1alpha1
````yaml
apiVersion : kyverno.io/v1alpha1
kind : Policy
metadata :
name : policy-endpoints
@@ -43,7 +45,8 @@ spec :
rules:
- name:
resource:
kind : Endpoints
kinds:
- Endpoints
selector:
matchLabels:
label : test
@@ -61,22 +64,22 @@ spec :
- name: load-balancer-connection
port: 80
protocol: UDP
```
````
This policy applies 2 patches:
- **replaces** the first port of the first connection with 6443
- **adds** a new endpoint with IP 192.168.10.171 and port 80 (UDP)
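In JSONPatch terms, the two changes correspond roughly to the sketch below. The actual patches are in the linked policy file; the paths and values here are illustrative, not copied from it:
````yaml
# Rough JSONPatch sketch of the two changes described above.
# Paths assume the subsets/addresses/ports layout of the endpoints
# resource shown earlier; they are illustrative.
patches:
  - path: "/subsets/0/ports/0/port"
    op: replace
    value: 6443
  - path: "/subsets/-"
    op: add
    value:
      addresses:
        - ip: "192.168.10.171"
      ports:
        - name: load-balancer-connection
          port: 80
          protocol: UDP
````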
Let's apply this policy and create the endpoints again to see the changes:
```
> kubectl create -f examples/Endpoints/policy-endpoints.yaml
````bash
> kubectl create -f test/Endpoints/policy-endpoints.yaml
policy.policy.nirmata.io/policy-endpoints created
> kubectl create -f examples/Endpoints/endpoints.yaml
> kubectl create -f test/Endpoints/endpoints.yaml
endpoints/test-endpoint created
> kubectl get -f examples/Endpoints/endpoints.yaml
> kubectl get -f test/Endpoints/endpoints.yaml
NAME ENDPOINTS AGE
test-endpoint 192.168.10.171:80,192.168.10.171:9663 30s
```
````
As you can see, the endpoints resource was created with changes: a new port 80 was added, and port 443 was changed to 6443.
**Enjoy :)**