
# Testing with kuttl

This document explains conformance and end-to-end (e2e) tests using the kuttl tool, when test coverage is required or beneficial, and how contributors may write these tests.

## Overview

Kyverno uses kuttl for performing tests on a live Kubernetes environment with the current code of Kyverno running inside it. The official documentation for this tool is located here. kuttl is a Kubernetes testing tool capable of submitting resources to a cluster and checking the state of those resources. By comparing that state with declarations defined in other files, kuttl can determine whether the observed state is "correct" and pass or fail based on this comparison. It can also run commands or whole scripts.

kuttl tests work by defining a number of YAML files with a numerical prefix and co-locating them in a single directory. Each directory represents a "test case". Files within a test case directory are evaluated/executed in numerical order. If a failure is encountered at any step in the process, the test is halted and a failure is reported. The benefit of kuttl is that test cases may be written easily and quickly, with no knowledge of a programming language required.
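As a sketch (the file names and the ConfigMap are illustrative, not taken from the Kyverno test suite), a minimal test case directory might contain a step file that creates a resource:

```yaml
# 00-create.yaml — step file: kuttl applies this resource to the cluster
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-cm
data:
  foo: bar
```

and a matching assert file describing the state kuttl should wait for:

```yaml
# 00-assert.yaml — assert file: kuttl waits (up to the step timeout) until a
# resource matching this partial state exists, then the step passes
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-cm
data:
  foo: bar
```

Because files are processed in numerical order, later steps (01-, 02-, …) can build on the state established by earlier ones.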

## How Tests Are Conducted

Kyverno uses kuttl tests to check behavior against incoming code in the form of PRs. Upon every PR, the following automated actions occur in GitHub Actions:

  1. A KinD cluster is built.
  2. Kyverno is built from source incorporating the changes in your PR.
  3. Kyverno is installed into the KinD cluster.
  4. kuttl executes all test cases against the live environment.
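Locally, step 4 is driven by a TestSuite configuration like the repository's kuttl-test.yaml. A minimal sketch (the directory path and timeout below are illustrative, not the repository's actual values) looks like this:

```yaml
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
testDirs:
  - ./test/conformance/kuttl/standard
timeout: 120
```

With the kuttl kubectl plugin installed, such a suite is typically run against the current cluster context with `kubectl kuttl test`.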

## When Tests Are Required

Tests are required for any PR which:

  1. Introduces a new capability
  2. Enhances an existing capability
  3. Fixes an issue
  4. Makes a behavioral change

Test cases are required for any of the above which can be tested and verified from an end-user (black box) perspective. Tests are also required at the same time as the PR is proposed. Unless there are special circumstances, tests may not follow in a later PR after one which introduces any of the above items, because it is too easy to forget to write a test and then it never happens. Tests should always be considered a part of a responsible development process, not an afterthought or an "extra".

## Organizing Tests

Organization of tests is critical to ensure we have an accounting of what exists. With the eventuality of hundreds of test cases, they must be organized to be useful. Please look at the existing directory structure to identify a suitable location for your tests. Tests are typically organized with the following structure, though this is subject to change.

```
.
├── generate
│   └── clusterpolicy
│       ├── cornercases
│       │   ├── test_case_01
│       │   │   └── <files>.yaml
│       │   └── test_case_02
│       │       └── <files>.yaml
│       └── standard
│           ├── clone
│           │   ├── nosync
│           │   │   └── test_case_03
```

PRs which address issues will typically go into the cornercases directory, separated by clusterpolicy or policy depending on which the PR addresses. If it addresses both, the test can go under cornercases. PRs which add net-new functionality, such as a new rule type or significant capability, should have basic tests under the standard directory. Standard tests cover generic behavior and NOT an esoteric combination of inputs/events that exposes a problem. For example, a standard test ensures that a ClusterPolicy with a single validate rule can be created successfully. Unless the contents are highly specific, such a test belongs under the standard directory.
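For instance, a standard test of that kind might apply a minimal ClusterPolicy with one validate rule, such as the following sketch (the policy name, rule, and pattern are illustrative), and assert that it is created successfully:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Audit
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label `team` is required on all Pods."
        pattern:
          metadata:
            labels:
              team: "?*"
```

The corresponding assert file would declare the same ClusterPolicy (or a partial state of it), causing kuttl to wait until the cluster reports that state before passing the step.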

## Writing Tests

To make writing test cases even easier, we have provided an example here under the scaffold directory which may be copied-and-pasted to a new test case (directory) based upon the organizational structure outlined above. Additional kuttl test files may be found in either commands or scripts with some common test files for Kyverno.
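Starting a new test case is essentially a copy of the scaffold directory followed by editing its files. The sketch below is illustrative only: it creates a stand-in scaffold so the commands are self-contained, whereas in practice you would copy the repository's actual scaffold directory to a path chosen per the organizational structure above.

```shell
set -e

# Stand-in for the repository's scaffold directory (illustrative content).
mkdir -p scaffold
printf '## Description\n\nTODO\n' > scaffold/README.md

# Copy the scaffold to a new test case directory, then edit its README.md
# and add the numbered step/assert files for your scenario.
cp -r scaffold my-new-test
ls my-new-test
```
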

It is imperative that you modify README.md for each test case and follow the template provided. The template looks like the following:

```markdown
## Description

This is a description of what my test does and why it needs to do it.

## Expected Behavior

This is the expected behavior of my test. Although it's assumed the test, overall, should pass/succeed, be specific about what the internal behavior is which leads to that result.

## Reference Issue(s)

1234
```

For some best practices we have identified, see the best practices document here.