kyverno/kyverno, commit c52f07b615 (parent 70243625f4)

new samples; updates (#1259)

* new samples; updates
* typos
* add policy to restrict LoadBalancer
* correct sample numbering
* fix typos

14 changed files with 293 additions and 15 deletions
@@ -14,5 +14,4 @@ The easiest way to reach us is on the [Kubernetes slack #kyverno channel](https:

The [Kyverno Wiki](https://github.com/kyverno/kyverno/wiki) contains details on code design, building, and testing. Please review all sections.

-Before you contribute, please review and agree to abite with our community [Code of Conduct](/CODE_OF_CONDUCT.md).
+Before you contribute, please review and agree to abide by our community [Code of Conduct](/CODE_OF_CONDUCT.md).
samples/CreatePodAntiAffinity.md (new file, 78 lines)

@@ -0,0 +1,78 @@
# Create Pod Anti-Affinity

In cases where you wish to run applications with multiple replicas, it may be required to ensure those Pods are separated from each other for availability purposes. While a `DaemonSet` resource would accomplish similar goals, your `Deployment` object may need fewer replicas than there are nodes. Pod anti-affinity rules ensure that Pods are separated from each other; inversely, affinity rules ensure they are co-located.

This sample policy configures all Deployments with Pod anti-affinity rules using the `preferredDuringSchedulingIgnoredDuringExecution` option. It requires that the topology key `kubernetes.io/hostname` exists on all nodes and that the label `app` is applied to the Deployment.

In order to test the policy, you can use the sample Deployment manifest below.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: busybox
    distributed: required
  name: busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      app: busybox
      distributed: required
  template:
    metadata:
      labels:
        app: busybox
        distributed: required
    spec:
      containers:
      - image: busybox:1.28
        name: busybox
        command: ["sleep", "9999"]
```

## More Information

* [Inter-pod affinity and anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)

## Policy YAML

[create_pod_antiaffinity.yaml](more/create_pod_antiaffinity.yaml)

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: insert-podantiaffinity
spec:
  rules:
  - name: insert-podantiaffinity
    match:
      resources:
        kinds:
        - Deployment
    preconditions:
    # This precondition ensures that the label `app` is applied to Pods within the Deployment resource.
    - key: "{{request.object.metadata.labels.app}}"
      operator: NotEquals
      value: ""
    mutate:
      patchStrategicMerge:
        spec:
          template:
            spec:
              # Add the `affinity` key and others if not already specified in the Deployment manifest.
              +(affinity):
                +(podAntiAffinity):
                  +(preferredDuringSchedulingIgnoredDuringExecution):
                  - weight: 1
                    podAffinityTerm:
                      topologyKey: "kubernetes.io/hostname"
                      labelSelector:
                        matchExpressions:
                        - key: app
                          operator: In
                          values:
                          - "{{request.object.metadata.labels.app}}"
```
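For illustration, applying this policy to the busybox Deployment above should leave the pod template with an anti-affinity block like the following sketch; the `{{request.object.metadata.labels.app}}` variable resolves to `busybox`, the Deployment's `app` label:

```yaml
# Illustrative result of the mutation: the anti-affinity block the policy
# injects into the pod template, with the policy variable resolved.
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - "busybox"
```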
@@ -2,7 +2,7 @@

Sample policies are designed to be applied to your Kubernetes clusters with minimal changes.

-The policies are mostly validation rules in `audit` mode i.e. your existing workloads will not be impacted, but will be audited for policy complaince.
+The policies are mostly validation rules in `audit` mode i.e. your existing workloads will not be impacted, but will be audited for policy compliance.

## Best Practice Policies
@@ -32,18 +32,21 @@ These policies provide additional best practices and are worthy of close consideration

1. [Restrict image registries](RestrictImageRegistries.md)
1. [Restrict `NodePort` services](RestrictNodePort.md)
+1. [Restrict `LoadBalancer` services](RestrictLoadBalancer.md)
1. [Restrict auto-mount of service account credentials](RestrictAutomountSAToken.md)
1. [Restrict ingress classes](RestrictIngressClasses.md)
1. [Restrict User Group](CheckUserGroup.md)
1. [Require pods are labeled](RequireLabels.md)
1. [Require pods have certain labels](RequireCertainLabels.md)
1. [Require Deployments have multiple replicas](RequireDeploymentsHaveReplicas.md)
+1. [Spread Pods across topology](SpreadPodsAcrossTopology.md)
+1. [Create Pod Anti-Affinity](CreatePodAntiAffinity.md)

## Applying the sample policies

To apply these policies to your cluster, install Kyverno and import the policies as follows:

-### Install Kyverno**
+### Install Kyverno

````sh
kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml
|
|||
|
||||
<small>[(installation docs)](../documentation/installation.md)</small>
|
||||
|
||||
### Apply Kyverno Policies**
|
||||
### Apply Kyverno Policies
|
||||
|
||||
To start applying policies to your cluster, first clone the repo:
|
||||
|
||||
|
@@ -60,7 +63,7 @@ git clone https://github.com/kyverno/kyverno.git
cd kyverno
````

-Import best practices from [here](best_pratices):
+Import best practices from [here](best_practices):

````bash
kubectl create -f samples/best_practices
@@ -1,6 +1,6 @@
# Require certain labels

-In many cases, you may require that at least a certain number of labels are assigned to each Pod from a select list of approved labels. This sample policy demonstrates the [`anyPattern`](https://kyverno.io/docs/writing-policies/validate/#anypattern---logical-or-across-multiple-validation-patterns) option in a policy by requiring any of the two possible labels defined within. A pod must either have the label `app.kubernetes.io/name` or `app.kubernetes.io/component` defined.
+In many cases, you may require that at least a certain number of labels are assigned to each Pod from a select list of approved labels. This sample policy demonstrates the [`anyPattern`](https://kyverno.io/docs/writing-policies/validate/#anypattern---logical-or-across-multiple-validation-patterns) option in a policy by requiring any of the two possible labels defined within. A pod must either have the label `app.kubernetes.io/name` or `app.kubernetes.io/component` defined. If you would rather validate that all Pods have multiple labels in an AND fashion rather than OR, check out the [require_labels](RequireLabels.md) example.

## Policy YAML
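For example, a Pod like this minimal sketch (name and image are illustrative) would satisfy the rule, since it defines `app.kubernetes.io/name`, one of the two accepted labels:

```yaml
# Hypothetical Pod that passes the anyPattern check by defining
# `app.kubernetes.io/name`; `app.kubernetes.io/component` would also work.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod              # illustrative name
  labels:
    app.kubernetes.io/name: example
spec:
  containers:
  - name: app
    image: busybox:1.28          # illustrative image
```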
@@ -2,7 +2,7 @@

Labels are a fundamental and important way to assign descriptive metadata to Kubernetes resources, especially Pods. Labels are especially important as the number of applications grow and are composed in different ways.

-This sample policy requires that the label `app.kubernetes.io/name` be defined on all Pods. If you wish to require that all Pods have multiple labels defined (as opposed to [any labels from an approved list](RequireCertainLabels.md)), this policy can be altered by adding an additional rule block which checks for a second (or third, etc.) label name.
+This sample policy requires that the label `app.kubernetes.io/name` be defined on all Pods. If you wish to require that all Pods have multiple labels defined (as opposed to [any labels from an approved list](RequireCertainLabels.md)), this policy can be altered by adding more labels.

## More Information
@@ -31,4 +31,6 @@ spec:
        metadata:
          labels:
            app.kubernetes.io/name: "?*"
+           # You can add more labels if you wish the policy to validate more than just one is present. Uncomment the below line, or add new ones.
+           #app.kubernetes.io/component: "?*"
```
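Uncommented, the validate pattern would then require both labels on every Pod (AND semantics); a minimal sketch of the resulting block:

```yaml
# Sketch of the pattern with the second label enabled: Pods must now
# define both labels to pass validation.
pattern:
  metadata:
    labels:
      app.kubernetes.io/name: "?*"
      app.kubernetes.io/component: "?*"
```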
@@ -4,11 +4,18 @@ Liveness and readiness probes need to be configured to correctly manage a pod's

For each pod, a periodic `livenessProbe` is performed by the kubelet to determine if the pod's containers are running or need to be restarted. A `readinessProbe` is used by services and deployments to determine if the pod is ready to receive network traffic.

In this sample policy, a validation rule checks to ensure that all Pods have both a liveness and a readiness probe defined by looking at the `periodSeconds` field. By using the annotation `pod-policies.kyverno.io/autogen-controllers`, it modifies the default behavior and ensures that only Pods originating from DaemonSet, Deployment, and StatefulSet objects are validated.

## More Information

* [Kyverno Auto-Gen Rules for Pod Controllers](https://kyverno.io/docs/writing-policies/autogen/)
* [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)

## Policy YAML

[require_probes.yaml](best_practices/require_probes.yaml)

-````yaml
+```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
@@ -32,4 +39,4 @@ spec:
            periodSeconds: ">0"
          readinessProbe:
            periodSeconds: ">0"
-````
+```
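To pass this rule, every container needs both probes with a positive `periodSeconds`; a minimal sketch of a conforming container spec (probe endpoints and values are illustrative):

```yaml
# Illustrative container spec that satisfies the policy: both probes
# are defined and periodSeconds is greater than zero.
spec:
  containers:
  - name: nginx                  # illustrative
    image: nginx:1.19            # illustrative
    livenessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
```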
@@ -1,8 +1,8 @@
# Disallow unknown image registries

-Images from unknown registries may not be scanned and secured. Requiring the use of trusted registries helps reduce threat exposure.
+Images from unknown registries may not be scanned and secured. Requiring the use of trusted registries helps reduce threat exposure and is considered a common Kubernetes best practice.

-You can customize this policy to allow image registries that you trust.
+This sample policy requires that all images come from either `k8s.gcr.io` or `gcr.io`. You can customize this policy to allow other or different image registries that you trust. Alternatively, you can invert the check to allow images from all other registries except one (or a list) by changing the `image` field to `image: "!k8s.gcr.io"`.

## Policy YAML
@@ -22,9 +22,10 @@ spec:
        kinds:
        - Pod
    validate:
-      message: "Unknown image registry"
+      message: "Unknown image registry."
      pattern:
        spec:
          containers:
+          # Allows images from either k8s.gcr.io or gcr.io.
          - image: "k8s.gcr.io/* | gcr.io/*"
````
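As a sketch of the inverted approach mentioned above (assuming the `image: "!k8s.gcr.io"` form given in the prose, with an illustrative message), the validate block could negate the registry instead:

```yaml
# Hypothetical inverted rule: allow images from any registry except
# k8s.gcr.io, per the `image: "!k8s.gcr.io"` form noted in the prose.
validate:
  message: "Images from k8s.gcr.io are not allowed."   # illustrative
  pattern:
    spec:
      containers:
      - image: "!k8s.gcr.io"
```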
samples/RestrictLoadBalancer.md (new file, 29 lines)

@@ -0,0 +1,29 @@
# Restrict use of `LoadBalancer` services

A Kubernetes service of type `LoadBalancer` typically requires the use of a cloud provider to realize the infrastructure on the backend. Doing so has the side effect of increased cost and potentially bypassing existing `Ingress` resource(s), which are the preferred method of routing traffic into a Kubernetes cluster. The use of Services of type `LoadBalancer` should therefore be carefully controlled or restricted across the cluster.

This sample policy checks for any services of type `LoadBalancer`. Change `validationFailureAction` to `enforce` to block their creation.

## Policy YAML

[restrict_loadbalancer.yaml](more/restrict_loadbalancer.yaml)

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: no-loadbalancers
spec:
  validationFailureAction: audit
  rules:
  - name: no-LoadBalancer
    match:
      resources:
        kinds:
        - Service
    validate:
      message: "Service of type LoadBalancer is not allowed."
      pattern:
        spec:
          type: "!LoadBalancer"
```
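To test the policy, a hypothetical Service of type `LoadBalancer` such as the following sketch should be reported (in `audit` mode) or blocked (in `enforce` mode):

```yaml
# Hypothetical Service that violates the policy: its type is
# LoadBalancer, which the pattern `type: "!LoadBalancer"` rejects.
apiVersion: v1
kind: Service
metadata:
  name: test-loadbalancer        # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: busybox                 # illustrative selector
  ports:
  - port: 80
    targetPort: 80
```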
samples/SpreadPodsAcrossTopology.md (new file, 75 lines)

@@ -0,0 +1,75 @@
# Spread pods across topology

When a Kubernetes cluster spans multiple availability zones, it is often desirable to spread your Pods among them in a way that controls where they land. This can be advantageous in ensuring that, should one of those zones fail, your application continues to run in a more predictable way and with less potential loss.

This sample policy configures all Deployments having the label `distributed: required` to be spread amongst hosts which are labeled with the key name of `zone`. It does this only to Deployments which do not already have the field `topologySpreadConstraints` set.

**NOTE:** When deploying this policy to a Kubernetes cluster older than version 1.19, some feature gate flags will need to be enabled. Please see the [More Information](#more-information) section below.

In order to test the policy, you can use the sample Deployment manifest below.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: busybox
    distributed: required
  name: busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      app: busybox
      distributed: required
  template:
    metadata:
      labels:
        app: busybox
        distributed: required
    spec:
      containers:
      - image: busybox:1.28
        name: busybox
        command: ["sleep", "9999"]
```

## More Information

* [Pod Topology Spread Constraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/)

## Policy YAML

[spread_pods_across_topology.yaml](more/spread_pods_across_topology.yaml)

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: spread-pods
spec:
  rules:
  - name: spread-pods-across-nodes
    # Matches any Deployment with the label `distributed=required`
    match:
      resources:
        kinds:
        - Deployment
        selector:
          matchLabels:
            distributed: required
    # Mutates the incoming Deployment.
    mutate:
      patchStrategicMerge:
        spec:
          template:
            spec:
              # Adds the topologySpreadConstraints field if non-existent in the request.
              +(topologySpreadConstraints):
              - maxSkew: 1
                topologyKey: zone
                whenUnsatisfiable: DoNotSchedule
                labelSelector:
                  matchLabels:
                    distributed: required
```
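Note that the `zone` topology key only takes effect if nodes actually carry that label; as an assumed example, a node participating in the spread might carry metadata like this:

```yaml
# Hypothetical node labeling: the `zone` key referenced by topologyKey
# must exist on nodes for spreading to occur (name and value illustrative).
apiVersion: v1
kind: Node
metadata:
  name: worker-1
  labels:
    zone: us-east-1a
```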
@@ -16,3 +16,5 @@ spec:
        metadata:
          labels:
            app.kubernetes.io/name: "?*"
+           # You can add more labels if you wish the policy to validate more than just one is present. Uncomment the below line, or add new ones.
+           #app.kubernetes.io/component: "?*"
samples/more/create_pod_antiaffinity.yaml (new file, 35 lines)

@@ -0,0 +1,35 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: insert-podantiaffinity
spec:
  rules:
  - name: insert-podantiaffinity
    match:
      resources:
        kinds:
        - Deployment
    preconditions:
    # This precondition ensures that the label `app` is applied to Pods within the Deployment resource.
    - key: "{{request.object.metadata.labels.app}}"
      operator: NotEquals
      value: ""
    # Mutates the Deployment resource to add fields.
    mutate:
      patchStrategicMerge:
        spec:
          template:
            spec:
              # Add the `affinity` key and others if not already specified in the Deployment manifest.
              +(affinity):
                +(podAntiAffinity):
                  +(preferredDuringSchedulingIgnoredDuringExecution):
                  - weight: 1
                    podAffinityTerm:
                      topologyKey: "kubernetes.io/hostname"
                      labelSelector:
                        matchExpressions:
                        - key: app
                          operator: In
                          values:
                          - "{{request.object.metadata.labels.app}}"
@@ -7,6 +7,7 @@ metadata:
    policies.kyverno.io/description: Images from unknown registries may not be scanned and secured.
      Requiring use of known registries helps reduce threat exposure.
spec:
  validationFailureAction: audit
  rules:
  - name: validate-registries
    match:
@@ -14,7 +15,7 @@ spec:
        kinds:
        - Pod
    validate:
-      message: "Unknown image registry"
+      message: "Unknown image registry."
      pattern:
        spec:
          containers:
samples/more/restrict_loadbalancer.yaml (new file, 17 lines)

@@ -0,0 +1,17 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: no-loadbalancers
spec:
  validationFailureAction: audit
  rules:
  - name: no-LoadBalancer
    match:
      resources:
        kinds:
        - Service
    validate:
      message: "Service of type LoadBalancer is not allowed."
      pattern:
        spec:
          type: "!LoadBalancer"
samples/more/spread_pods_across_topology.yaml (new file, 29 lines)

@@ -0,0 +1,29 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: spread-pods
spec:
  rules:
  - name: spread-pods-across-nodes
    # Matches any Deployment with the label `distributed=required`
    match:
      resources:
        kinds:
        - Deployment
        selector:
          matchLabels:
            distributed: required
    # Mutates the incoming Deployment.
    mutate:
      patchStrategicMerge:
        spec:
          template:
            spec:
              # Adds the topologySpreadConstraints field if non-existent in the request.
              +(topologySpreadConstraints):
              - maxSkew: 1
                topologyKey: zone
                whenUnsatisfiable: DoNotSchedule
                labelSelector:
                  matchLabels:
                    distributed: required