
NK-14: Prepared repo to publishing.

Updated README.md, crd dir renamed to definitions, removed some test yamls.
belyshevdenis 2019-03-18 18:59:30 +02:00
parent 0f85e09f7e
commit c126da313c
9 changed files with 151 additions and 102 deletions

README.md

@@ -3,53 +3,167 @@ A Kubernetes native policy engine
## Motivation
## How it works
The solution makes it possible to validate Kubernetes resources against custom policies and to modify them before they are created.
### Components
* **Policy Controller** (`/controller`) allows defining custom resources which can be used in your Kubernetes cluster
* **WebHooks Server** (`/server`) implements the connection between Kubernetes API server and **Mutation WebHook**
* **Mutation WebHook** (`/webhooks`) allows applying Nirmata policies for validation and mutation of the certain types of resources (see the list below)
* **Kube Client** (`/kubeclient`) allows other components to communicate with Kubernetes API server for resource management in a cluster
* **Initialization functions** (`/init.go`, `/utils`) allow running the controller inside the cluster without deep pre-tuning
The program initializes the Kubernetes client API configuration and creates an HTTPS server with a webhook for resource mutation. When a resource is created in the cluster (for whatever reason), the Kubernetes core sends a mutation request for this resource to the webhook. The policy controller manages the policy objects created in the cluster and is always aware of which policies are currently in effect: information about the policies is available to the webhook thanks to the policy controller. The request to create a resource contains its full definition. If the resource matches one or more of the current policies, the resource is mutated in accordance with them.
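For reference, the connection between the Kubernetes API server and the webhook server is declared through a `MutatingWebhookConfiguration` (see the `MutatingWebhookConfiguration` definitions in this repo). The sketch below is only illustrative: the configuration name, webhook name, namespace, path, and rules are hypothetical, and `${CA_BUNDLE}` is the placeholder mentioned in the installation instructions:
```
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: kube-policy-webhook-cfg           # hypothetical name
webhooks:
  - name: webhook.kube-policy.nirmata.io  # hypothetical webhook name
    clientConfig:
      service:
        name: kube-policy-svc             # the service that exposes the WebHooks Server
        namespace: default                # hypothetical namespace
        path: "/mutate"                   # hypothetical path
      caBundle: ${CA_BUNDLE}              # replaced during deployment
    rules:
      - operations: ["CREATE"]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["*"]
```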
### Policy application
**Supported resource types:**
* ConfigMap
* CronJob
* DaemonSet
* Deployment
* Endpoints
* HorizontalPodAutoscaler
* Ingress
* Job
* LimitRange
* Namespace
* NetworkPolicy
* PersistentVolumeClaim
* PodDisruptionBudget
* PodTemplate
* ResourceQuota
* Secret
* Service
* StatefulSet
When a request for a resource creation is received (i.e. a YAML file), it will be checked against the corresponding Nirmata policies.
The policy for a resource is looked up either by the resource name or with the help of a selector.
In case the data in the YAML file does not conform to the policy, the resource will be mutated with the help of the **Mutation WebHook**, which can perform one of the following:
* **add**: either add a lacking key and its value or replace a value of the already existing key;
* **replace**: either replace a value of the already existing key or add a lacking key and its value;
* **remove**: remove an unnecessary key and its value.
**NOTE**: **add** and **replace** behave in the same way, so they can be used interchangeably. The difference only appears when mutating an array: in that case, the **add** operation adds an element to the list, while the **replace** operation replaces the whole list.
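For illustration only, here is a sketch of that difference using the patch syntax shown in the Examples section; the container path and port values are hypothetical:
```
patch:
  # 'add' on an element path inserts a new element into the existing ports list
  - path: "/spec/template/spec/containers/0/ports/0"
    op: add
    value:
      containerPort: 8080
  # 'replace' on the list path swaps out the whole ports list
  - path: "/spec/template/spec/containers/0/ports"
    op: replace
    value:
      - containerPort: 8080
        protocol: TCP
```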
After the resource YAML file is validated and mutated, the required object is created in the Kubernetes cluster.
## Examples
### 1. Mutation of deployment resource
Here is the policy:
```
apiVersion: policy.nirmata.io/v1alpha1
kind: Policy
metadata:
  name: policy-deployment-ghost
spec:
  failurePolicy: stopOnError
  rules:
    - resource:
        kind: Deployment
        name:
        selector:
          matchLabels:
            nirmata.io/deployment.name: "ghost"
      patch:
        - path: /metadata/labels/isMutated
          op: add
          value: "true"
        - path: "/spec/strategy/rollingUpdate/maxSurge"
          op: add
          value: 5
        - path: "/spec/template/spec/containers/0/ports/0"
          op: replace
          value:
            containerPort: 2368
            protocol: TCP
```
In the **name** parameter, you should specify the policy name.
The **failurePolicy** parameter is optional and is set to **stopOnError** by default. The other possible value is **continueOnError**: if it is specified, the resource will be created despite errors that occur in the webhook.
The **rules** section consists of mandatory **resource** sub-section and optional **patch** sub-section.
The **resource** sub-section defines to which kind of the supported resources a Nirmata policy has to be applied:
* In the **kind** parameter, you should specify the resource type. You can find the list of the supported types in the **How it works** section.
* In the **name** parameter, you should specify the name of the resource the policy has to be applied to. This parameter can be omitted if **selector** is specified.
* In the **selector** parameter, you should specify conditions based on which the resources will be chosen for the policy to be applied to. This parameter is optional if **name** is specified.
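For illustration, a **resource** sub-section that selects a single resource by name (the deployment name `my-nginx` is hypothetical) could look like the following sketch; the example policy above uses **selector** instead of **name**:
```
rules:
  - resource:
      kind: Deployment
      name: my-nginx
```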
The **patch** sub-section defines what needs to be changed (i.e. mutated) before resource creation can take place. This section contains multiple entries of the path, operation, and value.
* In the **path** parameter, you should specify the required path.
* In the **op** parameter, you should specify the required operation (**add** | **replace** | **remove**); a **remove** entry is sketched below this list.
* In the **value** parameter, you should specify either a number, a YAML string, or text.
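The example above uses only **add** and **replace**; a **remove** entry, sketched here with a hypothetical label path, simply points at the key to drop:
```
patch:
  - path: "/metadata/labels/temporary"
    op: remove
```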
### 2. Adding secret and config map to namespace
```
apiVersion: policy.nirmata.io/v1alpha1
kind: Policy
metadata:
  name: policy-namespace-default
spec:
  failurePolicy: stopOnError
  rules:
    - resource:
        kind: Namespace
        name:
        selector:
          matchLabels:
            target: "production"
      configMapGenerator:
        name: game-config-env-file
        copyFrom:
          namespace: some-ns
          name: some-other-config-map
        data:
          foo: bar
          app.properties: |
            foo1=bar1
            foo2=bar2
          ui.properties: |
            foo1=bar1
            foo2=bar2
      secretGenerator:
        name: game-secrets
        copyFrom:
          namespace: some-ns
          name: some-other-secrets
        data: # data is optional
```
The **rules** section in this example has the mandatory **resource** sub-section, the additional **secretGenerator** and **configMapGenerator** sub-sections, and no optional **patch** sub-section.
The **configMapGenerator** sub-section defines the contents of the config-map that will be created in the new namespace.
**copyFrom** contains information about the template config-map, and **data** describes the contents of the created config-map. **copyFrom** and **data** are optional, but at least one of these fields must be specified. If both **copyFrom** and **data** are specified, the template from **copyFrom** is used first, and then the specified **data** is added to the config-map.
**secretGenerator** acts exactly like **configMapGenerator**, but creates a secret instead of a config-map.
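As a rough illustration of that behaviour (this is a sketch, not guaranteed output: the template key `existing-key` is hypothetical, and the generated object is assumed to take the generator's **name**), the config-map created in the new namespace would look approximately like this:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config-env-file   # taken from the configMapGenerator name (assumption)
  namespace: <the newly created namespace>
data:
  existing-key: existing-value # copied from the template config-map (hypothetical key)
  foo: bar                     # added from the policy's data section
  app.properties: |
    foo1=bar1
    foo2=bar2
  ui.properties: |
    foo1=bar1
    foo2=bar2
```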
### More examples
See the contents of `/examples`: there are definitions and policies for every supported type of resource.
# Build
## Prerequisites
You need to have the go and dep utils installed on your machine.
Ensure that the GOPATH environment variable is set to the desired location.
Code generation for the CRD controller depends on kubernetes/hack, so before using code generation, execute:
`go get k8s.io/kubernetes/hack`
We are using [dep](https://github.com/golang/dep) for dependency management.
## Cloning
Because the repository is private, you need to add an SSH key to your GitHub account in order to clone the repository using the `go get` command.
Using `go get`, the repository is placed at the correct location under `$GOPATH/src`, which is needed to restore the dependencies.
Configure the SSH key as described in this article: https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/
After the SSH key is configured, you must tell git to use SSH. To do so, run the following command:
`git config --global url.git@github.com:.insteadOf https://github.com/`
After this is done, use the following command to clone the repo:
`go get github.com/nirmata/kube-policy`
## Or `git clone`
If you don't want to use SSH, you can simply clone the repo with git, but make sure that the repo ends up inside the path `$GOPATH/src/`:
`git clone https://github.com/nirmata/kube-policy.git $GOPATH/src/github.com/nirmata/kube-policy`
Make sure that you use exactly the same subdirectory of the `$GOPATH` as shown above.
## Restore dependencies
Navigate to the kube-policy project dir and execute the following command:
`dep ensure`
This will install the necessary dependencies described in Gopkg.toml.
## Compiling
We are using the code generator for the custom resource objects from here: https://github.com/kubernetes/code-generator
Generate the additional controller code before compiling the project:
`scripts/update-codegen.sh`
@@ -59,18 +173,19 @@ Then you can build the controller:
# Installation
There are two possible ways of installing and using the controller: for **development** and for **production**
## For development
_At the time of this writing, only this installation method worked_
1. Open your `~/.kube/config` file and copy the value of `certificate-authority-data` to the clipboard.
2. Open `crd/MutatingWebhookConfiguration_local.yaml` and replace `${CA_BUNDLE}` with the contents of the clipboard.
3. Open `~/.kube/config` again and copy the IP of the `server` value, for example `192.168.10.117`.
4. Run `scripts/deploy-controller.sh --service=localhost --serverIp=<server_IP>`, where `<server_IP>` is the server IP from the clipboard. This script will generate a TLS certificate for the webhook server and register the webhook in the cluster. It also registers the `Policy` CustomResource.
5. Start the controller using the following command: `sudo kube-policy --cert=certs/server.crt --key=certs/server-key.pem --kubeconfig=~/.kube/config`
## For production
_To be implemented_
The scripts for the "development installation method" will be moved to the controller's code. The solution will perform the preparation inside the cluster automatically. You should be able to use `definitions/install.yaml` to install the controller.
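Once that is implemented, installation should presumably come down to a single command such as `kubectl create -f definitions/install.yaml` (the exact command may differ once the manifest is finalized).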


@@ -1,28 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-policy-deployment
  labels:
    app: kube-policy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-policy
    spec:
      containers:
        - name: kube-policy
          image: nirmata/kube-policy:latest
          imagePullPolicy: IfNotPresent
          args:
            - -cert=/etc/kube-policy/certs/server.crt
            - -key=/etc/kube-policy/certs/server-key.pem
          volumeMounts:
            - name: kube-policy-certs
              mountPath: /etc/kube-policy/certs
              readOnly: true
      volumes:
        - name: kube-policy-certs
          secret:
            secretName: kube-policy-secret


@@ -1,7 +0,0 @@
apiVersion: policy.nirmata.io/v1alpha1
kind: Policy
metadata:
  name: hello-policy
spec:
  policySpec: 'hi'
  image: hello-policy-image


@@ -1,21 +0,0 @@
apiVersion: policy.nirmata.io/v1alpha1
kind: Policy
metadata:
  name: selector-policy
spec:
  failurePolicy: continueOnError
  rules:
    - resource:
        kind: ConfigMap
        selector:
          matchLabels:
            label1: test1
          matchExpressions:
            - key: label2
              operator: In
              values:
                - test2
      patches:
        - path: /
          op: add
          value: "20"


@@ -1,12 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: kube-policy-svc
  labels:
    app: kube-policy
spec:
  ports:
    - port: 443
      targetPort: 443
  selector:
    app: kube-policy


@@ -1,3 +1,4 @@
# MutatingWebhookConfiguration document which should be used when placing the controller inside the cluster
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:


@@ -1,3 +1,4 @@
# Example of MutatingWebhookConfiguration which can be used for debugging, when the controller is placed on the master node
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata: