mirror of
https://github.com/kyverno/kyverno.git
synced 2024-12-14 11:57:48 +00:00
feat: add ttl controller (#7821)
* added the ttl controller Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fixed label and vars Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* added logger Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* applied fixes Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* removed comments Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* lint Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* lint Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* lint Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* more lint fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* applied changes Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* minor fixes Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix logger, separate parse logic Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* added tests Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* added kuttl tests, validation utilities Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* commented code Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* renamed tests Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix test Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* created log.go Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix log.go Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* added README.md refactor code Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* lint fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* lint Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* lint fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* added validation webhook Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* label-validation fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* added flag, updated verbs Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* updated verbs Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* updated helm chart Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* test fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* lint Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* linter Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* improved webhook validation Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* linter fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* lint Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* lint fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* fix codegen Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* webhook names and path constants Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* constant label Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* fix label selector Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* kuttl test fix Signed-off-by: Ved Ratan <vedratan8@gmail.com>
* helm docs Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* fix controller logger Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* fix: manager logger Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* fix failure policy Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* kuttl tests Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* move kuttl tests in separate job Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* remove rbac steps Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* remove configmaps from core cluster role Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* fix logger Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* rename flag Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* kuttl Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* fix error Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
* fix linter Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>

---------

Signed-off-by: Ved Ratan <vedratan8@gmail.com>
Signed-off-by: Ved Ratan <82467006+VedRatan@users.noreply.github.com>
Signed-off-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
Co-authored-by: Charles-Edouard Brétéché <charles.edouard@nirmata.com>
This commit is contained in:
parent cd9a13e751
commit 9f2cc6c99c
48 changed files with 969 additions and 38 deletions
60	.github/workflows/conformance.yaml	vendored
@@ -107,6 +107,66 @@ jobs:
         if: failure()
         uses: ./.github/actions/kyverno-logs

+  # runs conformance test suites with configuration:
+  ttl:
+    runs-on: ubuntu-latest
+    permissions:
+      packages: read
+    strategy:
+      fail-fast: false
+      matrix:
+        config:
+          - name: ttl
+            values:
+              - standard
+              - ttl
+        k8s-version:
+          - name: v1.24
+            version: v1.24.15
+          - name: v1.25
+            version: v1.25.11
+          - name: v1.26
+            version: v1.26.6
+          - name: v1.27
+            version: v1.27.3
+        tests:
+          - ttl
+    needs: prepare-images
+    name: ${{ matrix.k8s-version.name }} - ${{ matrix.config.name }} - ${{ matrix.tests }}
+    steps:
+      - name: Checkout
+        uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
+      - name: Setup build env
+        uses: ./.github/actions/setup-build-env
+        with:
+          build-cache-key: run-conformance
+      - name: Create kind cluster
+        run: |
+          export KIND_IMAGE=kindest/node:${{ matrix.k8s-version.version }}
+          make kind-create-cluster
+      - name: Download kyverno images archive
+        uses: actions/download-artifact@9bc31d5ccc31df68ecc42ccf4149144866c47d8a # v3.0.2
+        with:
+          name: kyverno.tar
+      - name: Load kyverno images archive in kind cluster
+        run: make kind-load-image-archive
+      - name: Install kyverno
+        run: |
+          export USE_CONFIG=${{ join(matrix.config.values, ',') }}
+          make kind-install-kyverno
+      - name: Wait for kyverno ready
+        uses: ./.github/actions/kyverno-wait-ready
+      - name: Test with kuttl
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        run: |
+          ./.tools/kubectl-kuttl test ./test/conformance/kuttl/${{ matrix.tests }} \
+            --config ./test/conformance/kuttl/_config/common.yaml
+      - name: Debug failure
+        if: failure()
+        uses: ./.github/actions/kyverno-logs
+
   # runs conformance test suites with configuration:
   force-failure-policy-ignore:
     runs-on: ubuntu-latest
@@ -2,17 +2,18 @@ package kyverno

 const (
 	// Well known labels
-	LabelAppManagedBy            = "app.kubernetes.io/managed-by"
 	LabelAppComponent            = "app.kubernetes.io/component"
+	LabelAppManagedBy            = "app.kubernetes.io/managed-by"
 	LabelCacheEnabled            = "cache.kyverno.io/enabled"
 	LabelCertManagedBy           = "cert.kyverno.io/managed-by"
+	LabelCleanupTtl              = "cleanup.kyverno.io/ttl"
 	LabelWebhookManagedBy        = "webhook.kyverno.io/managed-by"
 	// Well known annotations
 	AnnotationAutogenControllers = "pod-policies.kyverno.io/autogen-controllers"
 	AnnotationImageVerify        = "kyverno.io/verify-images"
 	AnnotationPolicyCategory     = "policies.kyverno.io/category"
-	AnnotationPolicySeverity     = "policies.kyverno.io/severity"
 	AnnotationPolicyScored       = "policies.kyverno.io/scored"
+	AnnotationPolicySeverity     = "policies.kyverno.io/severity"
 	// Well known values
 	ValueKyvernoApp              = "kyverno"
 )
@@ -311,6 +311,7 @@ The chart values are organised per component.
 | features.registryClient.allowInsecure | bool | `false` | Allow insecure registry |
 | features.registryClient.credentialHelpers | list | `["default","google","amazon","azure","github"]` | Enable registry client helpers |
 | features.reports.chunkSize | int | `1000` | Reports chunk size |
+| features.ttlController.reconciliationInterval | string | `"1m"` | Reconciliation interval for the label based cleanup manager |

 ### Admission controller
@@ -65,7 +65,10 @@
 {{- $flags = append $flags (print "--allowInsecureRegistry=" .allowInsecure) -}}
 {{- $flags = append $flags (print "--registryCredentialHelpers=" (join "," .credentialHelpers)) -}}
 {{- end -}}
+{{- with .ttlController -}}
+{{- $flags = append $flags (print "--ttlReconciliationInterval=" .reconciliationInterval) -}}
+{{- end -}}
 {{- with $flags -}}
 {{- toYaml . -}}
 {{- end -}}
 {{- end -}}
 {{- end -}}
@@ -94,9 +94,8 @@ rules:
     resources:
       {{- toYaml .resources | nindent 6 }}
     verbs:
-      - delete
-      - list
+      {{- toYaml .verbs | nindent 6 }}
 {{- end }}
 {{- end }}
 {{- end }}
 {{- end }}
 {{- end }}
@@ -109,6 +109,7 @@ spec:
               "deferredLoading"
               "dumpPayload"
               "logging"
+              "ttlController"
             ) | nindent 12 }}
             {{- range $key, $value := .Values.cleanupController.extraArgs }}
             {{- if $value }}
@@ -158,4 +159,4 @@ spec:
             {{- tpl (toYaml .) $ | nindent 12 }}
-            {{- end }}
+            {{- end -}}
         {{- end -}}
     {{- end -}}
 {{- end -}}
@@ -409,6 +409,9 @@ features:
   reports:
     # -- Reports chunk size
     chunkSize: 1000
+  ttlController:
+    # -- Reconciliation interval for the label based cleanup manager
+    reconciliationInterval: 1m

 # Cleanup cronjobs to prevent internal resources from stacking up in the cluster
 cleanupJobs:
@@ -1154,6 +1157,10 @@ cleanupController:
   #     - ''
   #   resources:
   #     - pods
+  #   verbs:
+  #     - delete
+  #     - list
+  #     - watch

   # -- Create self-signed certificates at deployment time.
   # The certificates won't be automatically renewed if this is set to `true`.
@@ -11,17 +11,17 @@ import (
 	"github.com/kyverno/kyverno/pkg/webhooks/handlers"
 )

-type clenaupHandlers struct {
+type cleanupHandlers struct {
 	client dclient.Interface
 }

-func New(client dclient.Interface) *clenaupHandlers {
-	return &clenaupHandlers{
+func New(client dclient.Interface) *cleanupHandlers {
+	return &cleanupHandlers{
 		client: client,
 	}
 }

-func (h *clenaupHandlers) Validate(ctx context.Context, logger logr.Logger, request handlers.AdmissionRequest, _ time.Time) handlers.AdmissionResponse {
+func (h *cleanupHandlers) Validate(ctx context.Context, logger logr.Logger, request handlers.AdmissionRequest, _ time.Time) handlers.AdmissionResponse {
 	policy, _, err := admissionutils.GetCleanupPolicies(request.AdmissionRequest)
 	if err != nil {
 		logger.Error(err, "failed to unmarshal policies from admission request")
@@ -0,0 +1,37 @@
package resourceadmission

import (
	"context"
	"time"

	"github.com/go-logr/logr"
	admissionutils "github.com/kyverno/kyverno/pkg/utils/admission"
	validation "github.com/kyverno/kyverno/pkg/validation/ttl-label"
	"github.com/kyverno/kyverno/pkg/webhooks/handlers"
	admissionv1 "k8s.io/api/admission/v1"
)

func Validate(_ context.Context, logger logr.Logger, request handlers.AdmissionRequest, _ time.Time) handlers.AdmissionResponse {
	logger.Info("triggered the label validator")
	ttlLabel, err := admissionutils.GetTtlLabel(request.AdmissionRequest.Object.Raw)
	if err != nil {
		logger.Error(err, "failed to get the ttl label")
		return admissionutils.ResponseSuccess(request.UID, err.Error())
	}

	if request.Operation == admissionv1.Update {
		ttlLabel, err = admissionutils.GetTtlLabel(request.AdmissionRequest.Object.Raw)
		if err != nil {
			logger.Error(err, "failed to get the ttl label")
			return admissionutils.ResponseSuccess(request.UID, err.Error())
		}
	}

	if ttlLabel != "" {
		if err := validation.Validate(ttlLabel); err != nil {
			logger.Error(err, "failed to unmarshal the ttl label value")
			return admissionutils.ResponseSuccess(request.UID, err.Error())
		}
	}
	return admissionutils.ResponseSuccess(request.UID)
}
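The body of `validation.Validate` from `pkg/validation/ttl-label` is not shown in this diff. A hedged sketch of what such a check could look like, assuming the label accepts either a Go duration or an absolute RFC 3339 timestamp (the duration case is suggested by the controller's "Try parsing ttlValue as duration" comment; the timestamp case is an assumption). `validateTtl` is a hypothetical stand-in, not the Kyverno function:

```go
package main

import (
	"fmt"
	"time"
)

// validateTtl is a hypothetical stand-in for pkg/validation/ttl-label.Validate:
// accept a Go duration ("2m", "1h") or, by assumption, an RFC 3339 timestamp.
func validateTtl(value string) error {
	if _, err := time.ParseDuration(value); err == nil {
		return nil
	}
	if _, err := time.Parse(time.RFC3339, value); err == nil {
		return nil
	}
	return fmt.Errorf("invalid ttl value %q: expected a duration or an RFC 3339 timestamp", value)
}

func main() {
	fmt.Println(validateTtl("2m"))                   // <nil>
	fmt.Println(validateTtl("2023-10-04T00:00:00Z")) // <nil>
	fmt.Println(validateTtl("soon") != nil)          // true
}
```

Because the webhook uses `Ignore` as its failure policy and returns a success response with a message, an invalid label is reported rather than hard-blocking the request.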
@@ -8,8 +8,10 @@ import (
 	"sync"
 	"time"

+	"github.com/kyverno/kyverno/api/kyverno"
 	admissionhandlers "github.com/kyverno/kyverno/cmd/cleanup-controller/handlers/admission"
 	cleanuphandlers "github.com/kyverno/kyverno/cmd/cleanup-controller/handlers/cleanup"
+	labelhandlers "github.com/kyverno/kyverno/cmd/cleanup-controller/handlers/resource-admission"
 	"github.com/kyverno/kyverno/cmd/internal"
 	kyvernoinformer "github.com/kyverno/kyverno/pkg/client/informers/externalversions"
 	"github.com/kyverno/kyverno/pkg/config"
@@ -17,6 +19,7 @@ import (
 	"github.com/kyverno/kyverno/pkg/controllers/cleanup"
 	genericloggingcontroller "github.com/kyverno/kyverno/pkg/controllers/generic/logging"
 	genericwebhookcontroller "github.com/kyverno/kyverno/pkg/controllers/generic/webhook"
+	ttlcontroller "github.com/kyverno/kyverno/pkg/controllers/ttl-controller"
 	"github.com/kyverno/kyverno/pkg/event"
 	"github.com/kyverno/kyverno/pkg/informers"
 	"github.com/kyverno/kyverno/pkg/leaderelection"
@@ -25,13 +28,15 @@ import (
 	"github.com/kyverno/kyverno/pkg/webhooks"
 	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
 	corev1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	kubeinformers "k8s.io/client-go/informers"
 )

 const (
-	resyncPeriod          = 15 * time.Minute
-	webhookWorkers        = 2
-	webhookControllerName = "webhook-controller"
+	resyncPeriod                = 15 * time.Minute
+	webhookWorkers              = 2
+	policyWebhookControllerName = "policy-webhook-controller"
+	ttlWebhookControllerName    = "ttl-webhook-controller"
 )

 // TODO:
@@ -55,12 +60,14 @@ func main() {
 		serverIP        string
 		servicePort     int
 		maxQueuedEvents int
+		interval        time.Duration
 	)
 	flagset := flag.NewFlagSet("cleanup-controller", flag.ExitOnError)
 	flagset.BoolVar(&dumpPayload, "dumpPayload", false, "Set this flag to activate/deactivate debug mode.")
 	flagset.StringVar(&serverIP, "serverIP", "", "IP address where Kyverno controller runs. Only required if out-of-cluster.")
 	flagset.IntVar(&servicePort, "servicePort", 443, "Port used by the Kyverno Service resource and for webhook configurations.")
 	flagset.IntVar(&maxQueuedEvents, "maxQueuedEvents", 1000, "Maximum events to be queued.")
+	flagset.DurationVar(&interval, "ttlReconciliationInterval", time.Minute, "Set this flag to set the interval after which the resource controller reconciliation should occur")
 	// config
 	appConfig := internal.NewConfiguration(
 		internal.WithProfiling(),
@@ -73,6 +80,7 @@ func main() {
 		internal.WithConfigMapCaching(),
 		internal.WithDeferredLoading(),
 		internal.WithFlagSets(flagset),
+		internal.WithMetadataClient(),
 	)
 	// parse flags
 	internal.ParseFlags(appConfig)
@@ -116,10 +124,10 @@ func main() {
 				),
 				certmanager.Workers,
 			)
-			webhookController := internal.NewController(
-				webhookControllerName,
+			policyValidatingWebhookController := internal.NewController(
+				policyWebhookControllerName,
 				genericwebhookcontroller.NewController(
-					webhookControllerName,
+					policyWebhookControllerName,
 					setup.KubeClient.AdmissionregistrationV1().ValidatingWebhookConfigurations(),
 					kubeInformer.Admissionregistration().V1().ValidatingWebhookConfigurations(),
 					caSecret,
@@ -127,26 +135,67 @@ func main() {
 					config.CleanupValidatingWebhookServicePath,
 					serverIP,
 					int32(servicePort),
-					[]admissionregistrationv1.RuleWithOperations{{
-						Rule: admissionregistrationv1.Rule{
-							APIGroups:   []string{"kyverno.io"},
-							APIVersions: []string{"v2alpha1"},
-							Resources: []string{
-								"cleanuppolicies/*",
-								"clustercleanuppolicies/*",
-							},
-						},
-						Operations: []admissionregistrationv1.OperationType{
-							admissionregistrationv1.Create,
-							admissionregistrationv1.Update,
-						},
-					}},
+					nil,
+					[]admissionregistrationv1.RuleWithOperations{
+						{
+							Rule: admissionregistrationv1.Rule{
+								APIGroups:   []string{"kyverno.io"},
+								APIVersions: []string{"v2alpha1"},
+								Resources: []string{
+									"cleanuppolicies/*",
+									"clustercleanuppolicies/*",
+								},
+							},
+							Operations: []admissionregistrationv1.OperationType{
+								admissionregistrationv1.Create,
+								admissionregistrationv1.Update,
+							},
+						},
+					},
 					genericwebhookcontroller.Fail,
 					genericwebhookcontroller.None,
 					setup.Configuration,
 				),
 				webhookWorkers,
 			)
+			ttlWebhookController := internal.NewController(
+				ttlWebhookControllerName,
+				genericwebhookcontroller.NewController(
+					ttlWebhookControllerName,
+					setup.KubeClient.AdmissionregistrationV1().ValidatingWebhookConfigurations(),
+					kubeInformer.Admissionregistration().V1().ValidatingWebhookConfigurations(),
+					caSecret,
+					config.TtlValidatingWebhookConfigurationName,
+					config.TtlValidatingWebhookServicePath,
+					serverIP,
+					int32(servicePort),
+					&metav1.LabelSelector{
+						MatchExpressions: []metav1.LabelSelectorRequirement{
+							{
+								Key:      kyverno.LabelCleanupTtl,
+								Operator: metav1.LabelSelectorOpExists,
+							},
+						},
+					},
+					[]admissionregistrationv1.RuleWithOperations{
+						{
+							Rule: admissionregistrationv1.Rule{
+								APIGroups:   []string{"*"},
+								APIVersions: []string{"*"},
+								Resources:   []string{"*"},
+							},
+							Operations: []admissionregistrationv1.OperationType{
+								admissionregistrationv1.Create,
+								admissionregistrationv1.Update,
+							},
+						},
+					},
+					genericwebhookcontroller.Ignore,
+					genericwebhookcontroller.None,
+					setup.Configuration,
+				),
+				webhookWorkers,
+			)
 			cleanupController := internal.NewController(
 				cleanup.ControllerName,
 				cleanup.NewController(
@@ -158,6 +207,16 @@ func main() {
 				),
 				cleanup.Workers,
 			)
+			ttlManagerController := internal.NewController(
+				ttlcontroller.ControllerName,
+				ttlcontroller.NewManager(
+					setup.MetadataClient,
+					setup.KubeClient.Discovery(),
+					setup.KubeClient.AuthorizationV1(),
+					interval,
+				),
+				ttlcontroller.Workers,
+			)
 			// start informers and wait for cache sync
 			if !internal.StartInformersAndWaitForCacheSync(ctx, logger, kyvernoInformer, kubeInformer) {
 				logger.Error(errors.New("failed to wait for cache sync"), "failed to wait for cache sync")
@@ -166,9 +225,10 @@ func main() {
 			// start leader controllers
 			var wg sync.WaitGroup
 			certController.Run(ctx, logger, &wg)
-			webhookController.Run(ctx, logger, &wg)
+			policyValidatingWebhookController.Run(ctx, logger, &wg)
+			ttlWebhookController.Run(ctx, logger, &wg)
 			cleanupController.Run(ctx, logger, &wg)
 			// wait all controllers shut down
+			ttlManagerController.Run(ctx, logger, &wg)
 			wg.Wait()
 		},
 		nil,
@@ -234,6 +294,7 @@ func main() {
 			return secret.Data[corev1.TLSCertKey], secret.Data[corev1.TLSPrivateKeyKey], nil
 		},
 		admissionHandlers.Validate,
+		labelhandlers.Validate,
 		cleanupHandlers.Cleanup,
 		setup.MetricsManager,
 		webhooks.DebugModeOptions{
@@ -29,9 +29,10 @@ type server struct {
 }

 type (
-	TlsProvider       = func() ([]byte, []byte, error)
-	ValidationHandler = func(context.Context, logr.Logger, handlers.AdmissionRequest, time.Time) handlers.AdmissionResponse
-	CleanupHandler    = func(context.Context, logr.Logger, string, time.Time, config.Configuration) error
+	TlsProvider            = func() ([]byte, []byte, error)
+	ValidationHandler      = func(context.Context, logr.Logger, handlers.AdmissionRequest, time.Time) handlers.AdmissionResponse
+	LabelValidationHandler = func(context.Context, logr.Logger, handlers.AdmissionRequest, time.Time) handlers.AdmissionResponse
+	CleanupHandler         = func(context.Context, logr.Logger, string, time.Time, config.Configuration) error
 )

 type Probes interface {
@@ -43,6 +44,7 @@ type Probes interface {
 func NewServer(
 	tlsProvider TlsProvider,
 	validationHandler ValidationHandler,
+	labelValidationHandler LabelValidationHandler,
 	cleanupHandler CleanupHandler,
 	metricsConfig metrics.MetricsConfigManager,
 	debugModeOpts webhooks.DebugModeOptions,
@@ -50,6 +52,7 @@ func NewServer(
 	cfg config.Configuration,
 ) Server {
 	policyLogger := logging.WithName("cleanup-policy")
+	labelLogger := logging.WithName("ttl-label")
 	cleanupLogger := logging.WithName("cleanup")
 	cleanupHandlerFunc := func(w http.ResponseWriter, r *http.Request) {
 		policy := r.URL.Query().Get("policy")
@@ -76,6 +79,16 @@ func NewServer(
 			WithAdmission(policyLogger.WithName("validate")).
 			ToHandlerFunc(),
 	)
+	mux.HandlerFunc(
+		"POST",
+		config.TtlValidatingWebhookServicePath,
+		handlers.FromAdmissionFunc("VALIDATE", labelValidationHandler).
+			WithDump(debugModeOpts.DumpPayload).
+			WithSubResourceFilter().
+			WithMetrics(labelLogger, metricsConfig.Config(), metrics.WebhookValidating).
+			WithAdmission(labelLogger.WithName("validate")).
+			ToHandlerFunc(),
+	)
 	mux.HandlerFunc(
 		"GET",
 		cleanup.CleanupServicePath,
@@ -152,6 +152,7 @@ func createrLeaderControllers(
 			config.ExceptionValidatingWebhookServicePath,
 			serverIP,
 			servicePort,
+			nil,
 			[]admissionregistrationv1.RuleWithOperations{{
 				Rule: admissionregistrationv1.Rule{
 					APIGroups:   []string{"kyverno.io"},
@@ -39079,6 +39079,7 @@ spec:
         - --dumpPayload=false
         - --loggingFormat=text
         - --v=2
+        - --ttlReconciliationInterval=1m
         env:
         - name: KYVERNO_DEPLOYMENT
           value: kyverno-cleanup-controller
@@ -31,6 +31,8 @@ const (
 	MutatingWebhookConfigurationName = "kyverno-resource-mutating-webhook-cfg"
 	// VerifyMutatingWebhookConfigurationName default verify mutating webhook configuration name
 	VerifyMutatingWebhookConfigurationName = "kyverno-verify-mutating-webhook-cfg"
+	// TtlValidatingWebhookConfigurationName ttl label validating webhook configuration name
+	TtlValidatingWebhookConfigurationName = "kyverno-ttl-validating-webhook-cfg"
 )

 // webhook names
@@ -57,6 +59,8 @@ const (
 	ExceptionValidatingWebhookServicePath = "/exceptionvalidate"
 	// CleanupValidatingWebhookServicePath is the path for cleanup policy validation webhook(used to validate cleanup policy resource)
 	CleanupValidatingWebhookServicePath = "/validate"
+	// TtlValidatingWebhookServicePath is the path for validation of cleanup.kyverno.io/ttl label value
+	TtlValidatingWebhookServicePath = "/verifyttl"
 	// PolicyMutatingWebhookServicePath is the path for policy mutation webhook(used to default)
 	PolicyMutatingWebhookServicePath = "/policymutate"
 	// MutatingWebhookServicePath is the path for mutation webhook
@@ -28,10 +28,12 @@ const (
 )

 var (
-	none = admissionregistrationv1.SideEffectClassNone
-	fail = admissionregistrationv1.Fail
-	None = &none
-	Fail = &fail
+	none   = admissionregistrationv1.SideEffectClassNone
+	fail   = admissionregistrationv1.Fail
+	ignore = admissionregistrationv1.Ignore
+	None   = &none
+	Fail   = &fail
+	Ignore = &ignore
 )

 type controller struct {
@@ -56,6 +58,7 @@ type controller struct {
 	failurePolicy *admissionregistrationv1.FailurePolicyType
 	sideEffects   *admissionregistrationv1.SideEffectClass
 	configuration config.Configuration
+	labelSelector *metav1.LabelSelector
 }

 func NewController(
@@ -67,6 +70,7 @@ func NewController(
 	path string,
 	server string,
 	servicePort int32,
+	labelSelector *metav1.LabelSelector,
 	rules []admissionregistrationv1.RuleWithOperations,
 	failurePolicy *admissionregistrationv1.FailurePolicyType,
 	sideEffects *admissionregistrationv1.SideEffectClass,
@@ -88,6 +92,7 @@ func NewController(
 		failurePolicy: failurePolicy,
 		sideEffects:   sideEffects,
 		configuration: configuration,
+		labelSelector: labelSelector,
 	}
 	controllerutils.AddDefaultEventHandlers(c.logger, vwcInformer.Informer(), queue)
 	controllerutils.AddEventHandlersT(
@@ -172,6 +177,7 @@ func (c *controller) build(cfg config.Configuration, caBundle []byte) (*admissio
 			FailurePolicy:           c.failurePolicy,
 			SideEffects:             c.sideEffects,
 			AdmissionReviewVersions: []string{"v1"},
+			ObjectSelector:          c.labelSelector,
 		}},
 	},
 	nil
180	pkg/controllers/ttl-controller/controller.go	Normal file
@ -0,0 +1,180 @@
|
|||
package ttlcontroller
|
||||
|
||||
import (
|
||||
"context"
|
||||
"time"
|
||||
|
||||
"github.com/go-logr/logr"
|
||||
"github.com/kyverno/kyverno/api/kyverno"
|
||||
apierrors "k8s.io/apimachinery/pkg/api/errors"
|
||||
"k8s.io/apimachinery/pkg/api/meta"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/util/wait"
|
||||
"k8s.io/client-go/informers"
|
||||
"k8s.io/client-go/metadata"
|
||||
"k8s.io/client-go/tools/cache"
|
||||
"k8s.io/client-go/util/workqueue"
|
||||
)
|
||||
|
||||
type controller struct {
|
||||
client metadata.Getter
|
||||
queue workqueue.RateLimitingInterface
|
||||
lister cache.GenericLister
|
||||
wg wait.Group
|
||||
informer cache.SharedIndexInformer
|
||||
registration cache.ResourceEventHandlerRegistration
|
||||
logger logr.Logger
|
||||
}
|
||||
|
||||
func newController(client metadata.Getter, metainformer informers.GenericInformer, logger logr.Logger) (*controller, error) {
|
||||
c := &controller{
|
||||
client: client,
|
||||
queue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
|
||||
lister: metainformer.Lister(),
|
||||
wg: wait.Group{},
|
||||
informer: metainformer.Informer(),
|
||||
logger: logger,
|
||||
}
|
||||
registration, err := c.informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
|
||||
AddFunc: c.handleAdd,
|
||||
DeleteFunc: c.handleDelete,
|
||||
UpdateFunc: c.handleUpdate,
|
||||
})
|
||||
if err != nil {
|
||||
logger.Error(err, "failed to register event handler")
|
||||
return nil, err
|
||||
}
|
||||
c.registration = registration
|
||||
return c, nil
|
||||
}
|
||||
|
||||
func (c *controller) handleAdd(obj interface{}) {
|
||||
c.enqueue(obj)
|
||||
}
|
||||
|
||||
func (c *controller) handleDelete(obj interface{}) {
|
||||
c.enqueue(obj)
|
||||
}
|
||||
|
||||
func (c *controller) handleUpdate(oldObj, newObj interface{}) {
|
||||
c.enqueue(newObj)
|
||||
}
|
||||
|
||||
func (c *controller) Start(ctx context.Context, workers int) {
|
||||
for i := 0; i < workers; i++ {
|
||||
c.wg.StartWithContext(ctx, func(ctx context.Context) {
|
||||
defer c.logger.Info("worker stopped")
|
||||
c.logger.Info("worker starting ....")
|
||||
wait.UntilWithContext(ctx, c.worker, 1*time.Second)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func (c *controller) Stop() {
|
||||
defer c.logger.Info("queue stopped")
|
||||
defer c.wg.Wait()
|
||||
// Unregister the event handlers
|
||||
c.deregisterEventHandlers()
|
||||
c.logger.Info("queue stopping ....")
|
||||
c.queue.ShutDown()
|
||||
}
|
||||
|
||||
func (c *controller) enqueue(obj interface{}) {
|
||||
key, err := cache.MetaNamespaceKeyFunc(obj)
|
||||
if err != nil {
|
||||
c.logger.Error(err, "failed to extract name")
|
||||
return
|
||||
}
|
||||
c.queue.Add(key)
|
||||
}
|
||||
|
||||
// deregisterEventHandlers deregisters the event handlers from the informer.
|
||||
func (c *controller) deregisterEventHandlers() {
|
||||
err := c.informer.RemoveEventHandler(c.registration)
|
||||
if err != nil {
|
||||
c.logger.Error(err, "failed to deregister event handlers")
|
||||
return
|
||||
}
|
||||
c.logger.Info("deregistered event handlers")
|
||||
}
|
||||
|
||||
func (c *controller) worker(ctx context.Context) {
|
||||
for {
|
||||
if !c.processItem() {
|
||||
// No more items in the queue, exit the loop
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (c *controller) processItem() bool {
|
||||
item, shutdown := c.queue.Get()
|
||||
if shutdown {
|
||||
return false
|
||||
}
|
||||
defer c.queue.Forget(item)
|
||||
	err := c.reconcile(item.(string))
	if err != nil {
		c.logger.Error(err, "reconciliation failed")
		c.queue.AddRateLimited(item)
		return true
	}
	c.queue.Done(item)
	return true
}

func (c *controller) reconcile(itemKey string) error {
	logger := c.logger.WithValues("key", itemKey)
	namespace, name, err := cache.SplitMetaNamespaceKey(itemKey)
	if err != nil {
		return err
	}
	obj, err := c.lister.ByNamespace(namespace).Get(name)
	if err != nil {
		if apierrors.IsNotFound(err) {
			// resource doesn't exist anymore, nothing much to do at this point
			return nil
		}
		// there was an error, return it to requeue the key
		return err
	}

	metaObj, err := meta.Accessor(obj)
	if err != nil {
		logger.Info("object is not of type metav1.Object")
		return err
	}

	labels := metaObj.GetLabels()
	ttlValue, ok := labels[kyverno.LabelCleanupTtl]
	if !ok {
		// no 'ttl' label present, no further action needed
		return nil
	}

	var deletionTime time.Time
	// try parsing ttlValue as a duration, timestamp, or date
	err = parseDeletionTime(metaObj, &deletionTime, ttlValue)
	if err != nil {
		logger.Error(err, "failed to parse label", "value", ttlValue)
		return nil
	}

	if time.Now().After(deletionTime) {
		err = c.client.Namespace(namespace).Delete(context.Background(), metaObj.GetName(), metav1.DeleteOptions{})
		if err != nil {
			logger.Error(err, "failed to delete resource")
			return err
		}
		logger.Info("resource has been deleted")
	} else {
		// calculate the remaining time until deletion
		timeRemaining := time.Until(deletionTime)
		// add the item back to the queue after the remaining time
		c.queue.AddAfter(itemKey, timeRemaining)
	}
	return nil
}

pkg/controllers/ttl-controller/manager.go (new file, 195 lines)
@@ -0,0 +1,195 @@
package ttlcontroller

import (
	"context"
	"fmt"
	"time"

	"github.com/go-logr/logr"
	"github.com/kyverno/kyverno/api/kyverno"
	"github.com/kyverno/kyverno/pkg/auth/checker"
	"github.com/kyverno/kyverno/pkg/controllers"
	"github.com/kyverno/kyverno/pkg/logging"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/sets"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/discovery"
	authorizationv1client "k8s.io/client-go/kubernetes/typed/authorization/v1"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/metadata/metadatainformer"
	"k8s.io/client-go/tools/cache"
)

type stopFunc = context.CancelFunc

const (
	Workers        = 3
	ControllerName = "ttl-controller-manager"
)

type manager struct {
	metadataClient  metadata.Interface
	discoveryClient discovery.DiscoveryInterface
	checker         checker.AuthChecker
	resController   map[schema.GroupVersionResource]stopFunc
	logger          logr.Logger
	interval        time.Duration
}

func NewManager(
	metadataInterface metadata.Interface,
	discoveryInterface discovery.DiscoveryInterface,
	authorizationInterface authorizationv1client.AuthorizationV1Interface,
	timeInterval time.Duration,
) controllers.Controller {
	logger := logging.WithName(ControllerName)
	selfChecker := checker.NewSelfChecker(authorizationInterface.SelfSubjectAccessReviews())
	resController := map[schema.GroupVersionResource]stopFunc{}
	return &manager{
		metadataClient:  metadataInterface,
		discoveryClient: discoveryInterface,
		checker:         selfChecker,
		resController:   resController,
		logger:          logger,
		interval:        timeInterval,
	}
}

func (m *manager) Run(ctx context.Context, worker int) {
	defer func() {
		// stop all informers and wait for them to finish
		for gvr := range m.resController {
			logger := m.logger.WithValues("gvr", gvr)
			if err := m.stop(ctx, gvr); err != nil {
				logger.Error(err, "failed to stop informer")
			}
		}
	}()
	ticker := time.NewTicker(m.interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := m.reconcile(ctx, worker); err != nil {
				m.logger.Error(err, "reconciliation failed")
				return
			}
		}
	}
}

func (m *manager) getDesiredState() (sets.Set[schema.GroupVersionResource], error) {
	// get the list of resources currently present in the cluster
	newresources, err := discoverResources(m.logger, m.discoveryClient)
	if err != nil {
		return nil, err
	}
	validResources := m.filterPermissionsResource(newresources)
	return sets.New(validResources...), nil
}

func (m *manager) getObservedState() (sets.Set[schema.GroupVersionResource], error) {
	observedState := sets.New[schema.GroupVersionResource]()
	for resource := range m.resController {
		observedState.Insert(resource)
	}
	return observedState, nil
}

func (m *manager) stop(ctx context.Context, gvr schema.GroupVersionResource) error {
	logger := m.logger.WithValues("gvr", gvr)
	if stopFunc, ok := m.resController[gvr]; ok {
		delete(m.resController, gvr)
		func() {
			defer logger.Info("controller stopped")
			logger.Info("stopping controller...")
			stopFunc()
		}()
	}
	return nil
}

func (m *manager) start(ctx context.Context, gvr schema.GroupVersionResource, workers int) error {
	logger := m.logger.WithValues("gvr", gvr)
	indexers := cache.Indexers{
		cache.NamespaceIndex: cache.MetaNamespaceIndexFunc,
	}
	options := func(options *metav1.ListOptions) {
		options.LabelSelector = kyverno.LabelCleanupTtl
	}
	informer := metadatainformer.NewFilteredMetadataInformer(m.metadataClient,
		gvr,
		metav1.NamespaceAll,
		10*time.Minute,
		indexers,
		options,
	)
	controller, err := newController(m.metadataClient.Resource(gvr), informer, logger)
	if err != nil {
		return err
	}

	cont, cancel := context.WithCancel(ctx)
	var wg wait.Group

	stopFunc := func() {
		// send the stop signal to the informer's goroutine
		cancel()
		// wait for the group to terminate
		wg.Wait()
		controller.Stop()
	}

	wg.StartWithContext(cont, func(ctx context.Context) {
		logger.Info("informer starting...")
		informer.Informer().Run(cont.Done())
	})

	if !cache.WaitForCacheSync(ctx.Done(), informer.Informer().HasSynced) {
		cancel()
		return fmt.Errorf("failed to wait for cache sync: %s", gvr.Resource)
	}

	logger.Info("controller starting...")
	controller.Start(cont, workers)
	m.resController[gvr] = stopFunc // store the stop function
	return nil
}

func (m *manager) filterPermissionsResource(resources []schema.GroupVersionResource) []schema.GroupVersionResource {
	validResources := []schema.GroupVersionResource{}
	for _, resource := range resources {
		// check if the service account has the necessary permissions
		if hasResourcePermissions(m.logger, resource, m.checker) {
			validResources = append(validResources, resource)
		}
	}
	return validResources
}

func (m *manager) reconcile(ctx context.Context, workers int) error {
	defer m.logger.Info("manager reconciliation done")
	m.logger.Info("start manager reconciliation")
	desiredState, err := m.getDesiredState()
	if err != nil {
		return err
	}
	observedState, err := m.getObservedState()
	if err != nil {
		return err
	}
	for gvr := range observedState.Difference(desiredState) {
		if err := m.stop(ctx, gvr); err != nil {
			return err
		}
	}
	for gvr := range desiredState.Difference(observedState) {
		if err := m.start(ctx, gvr, workers); err != nil {
			return err
		}
	}
	return nil
}
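The manager's reconcile loop is a classic desired-vs-observed set diff: stop controllers for GVRs that are observed but no longer desired, start controllers for GVRs that are desired but not yet observed. A sketch of the same diff with plain maps instead of `k8s.io/apimachinery` sets (the `difference` helper and resource names are illustrative):

```go
package main

import "fmt"

// difference returns the keys present in a but not in b,
// mirroring sets.Set.Difference in the manager.
func difference(a, b map[string]bool) []string {
	var out []string
	for k := range a {
		if !b[k] {
			out = append(out, k)
		}
	}
	return out
}

func main() {
	desired := map[string]bool{"pods": true, "configmaps": true}
	observed := map[string]bool{"pods": true, "secrets": true}
	// controllers to stop: observed but no longer desired
	fmt.Println(difference(observed, desired)) // [secrets]
	// controllers to start: desired but not yet observed
	fmt.Println(difference(desired, observed)) // [configmaps]
}
```

Running the diff on every tick keeps the set of per-GVR controllers converged as resources (or RBAC permissions) appear and disappear in the cluster.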

pkg/controllers/ttl-controller/utils.go (new file, 70 lines)
@@ -0,0 +1,70 @@
package ttlcontroller

import (
	"context"
	"time"

	"github.com/go-logr/logr"
	checker "github.com/kyverno/kyverno/pkg/auth/checker"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/sets"
	"k8s.io/client-go/discovery"
)

func discoverResources(logger logr.Logger, discoveryClient discovery.DiscoveryInterface) ([]schema.GroupVersionResource, error) {
	var resources []schema.GroupVersionResource
	apiResourceList, err := discoveryClient.ServerPreferredResources()
	if err != nil {
		if !discovery.IsGroupDiscoveryFailedError(err) {
			return nil, err
		}
		// the error should be recoverable, log the missing groups and process the partial results we received
		err := err.(*discovery.ErrGroupDiscoveryFailed)
		for gv, err := range err.Groups {
			// handle the specific group error
			logger.Error(err, "error in discovering group", "gv", gv)
		}
	}
	for _, apiResourceList := range apiResourceList {
		for _, apiResource := range apiResourceList.APIResources {
			if sets.NewString(apiResource.Verbs...).HasAll("list", "watch", "delete") {
				groupVersion, err := schema.ParseGroupVersion(apiResourceList.GroupVersion)
				if err != nil {
					return resources, err
				}
				resources = append(resources, groupVersion.WithResource(apiResource.Name))
			}
		}
	}
	return resources, nil
}

func hasResourcePermissions(logger logr.Logger, resource schema.GroupVersionResource, s checker.AuthChecker) bool {
	can, err := checker.Check(context.TODO(), s, resource.Group, resource.Version, resource.Resource, "", "", "watch", "list", "delete")
	if err != nil {
		logger.Error(err, "failed to check permissions")
		return false
	}
	return can
}

func parseDeletionTime(metaObj metav1.Object, deletionTime *time.Time, ttlValue string) error {
	// try parsing ttlValue as a duration relative to the creation timestamp
	ttlDuration, err := time.ParseDuration(ttlValue)
	if err == nil {
		creationTime := metaObj.GetCreationTimestamp().Time
		*deletionTime = creationTime.Add(ttlDuration)
	} else {
		// try parsing ttlValue as an absolute timestamp (a compact RFC 3339-like layout)
		layoutRFCC := "2006-01-02T150405Z"
		*deletionTime, err = time.Parse(layoutRFCC, ttlValue)
		if err != nil {
			// fall back to a date-only layout
			layoutCustom := "2006-01-02"
			*deletionTime, err = time.Parse(layoutCustom, ttlValue)
			if err != nil {
				return err
			}
		}
	}
	return nil
}

pkg/controllers/ttl-controller/utils_test.go (new file, 73 lines)
@@ -0,0 +1,73 @@
package ttlcontroller

import (
	"testing"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type mockMetaObj struct {
	metav1.ObjectMeta
}

func TestParseDeletionTime(t *testing.T) {
	tests := []struct {
		creationTime         time.Time
		ttlValue             string
		expectedDeletionTime time.Time
		expectError          bool
	}{
		// test case 1: ttlValue is a valid duration
		{
			creationTime:         time.Date(2023, 7, 18, 12, 0, 0, 0, time.UTC),
			ttlValue:             "2h30m",
			expectedDeletionTime: time.Date(2023, 7, 18, 14, 30, 0, 0, time.UTC),
			expectError:          false,
		},
		// test case 2: ttlValue is an absolute timestamp
		{
			creationTime:         time.Date(2023, 7, 18, 12, 0, 0, 0, time.UTC),
			ttlValue:             "2023-07-19T120000Z",
			expectedDeletionTime: time.Date(2023, 7, 19, 12, 0, 0, 0, time.UTC),
			expectError:          false,
		},
		// test case 3: ttlValue is a date-only value
		{
			creationTime:         time.Date(2023, 7, 18, 12, 0, 0, 0, time.UTC),
			ttlValue:             "2023-07-19",
			expectedDeletionTime: time.Date(2023, 7, 19, 0, 0, 0, 0, time.UTC),
			expectError:          false,
		},
		// test case 4: invalid ttlValue
		{
			creationTime: time.Date(2023, 7, 18, 12, 0, 0, 0, time.UTC),
			ttlValue:     "invalid-value",
			expectError:  true,
		},
	}

	for _, test := range tests {
		var deletionTime time.Time
		metaObj := &mockMetaObj{
			ObjectMeta: metav1.ObjectMeta{
				CreationTimestamp: metav1.NewTime(test.creationTime),
			},
		}
		err := parseDeletionTime(metaObj, &deletionTime, test.ttlValue)
		if test.expectError {
			if err == nil {
				t.Errorf("expected an error but got nil for ttlValue: %s", test.ttlValue)
			}
		} else {
			if err != nil {
				t.Errorf("expected no error but got: %v for ttlValue: %s", err, test.ttlValue)
			}
			if !deletionTime.Equal(test.expectedDeletionTime) {
				t.Errorf("expected deletion time: %v but got: %v for ttlValue: %s",
					test.expectedDeletionTime, deletionTime, test.ttlValue)
			}
		}
	}
}

@@ -4,6 +4,7 @@ import (
	"encoding/json"
	"fmt"

	"github.com/kyverno/kyverno/api/kyverno"
	kyvernov2alpha1 "github.com/kyverno/kyverno/api/kyverno/v2alpha1"
	admissionv1 "k8s.io/api/admission/v1"
)

@@ -37,3 +38,28 @@ func GetCleanupPolicies(request admissionv1.AdmissionRequest) (kyvernov2alpha1.C
	}
	return policy, emptypolicy, nil
}

// GetTtlLabel extracts the cleanup.kyverno.io/ttl label value from the raw admission request.
func GetTtlLabel(raw []byte) (string, error) {
	var resourceObj map[string]interface{}
	if err := json.Unmarshal(raw, &resourceObj); err != nil {
		return "", err
	}

	metadata, found := resourceObj["metadata"].(map[string]interface{})
	if !found {
		return "", fmt.Errorf("resource has no metadata field")
	}

	labels, found := metadata["labels"].(map[string]interface{})
	if !found {
		return "", fmt.Errorf("resource has no labels field")
	}

	ttlValue, found := labels[kyverno.LabelCleanupTtl].(string)
	if !found {
		return "", fmt.Errorf("resource has no %s label", kyverno.LabelCleanupTtl)
	}

	return ttlValue, nil
}
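`GetTtlLabel` walks the untyped JSON of the admission request down to `metadata.labels`. A self-contained re-creation of that lookup, hardcoding the `cleanup.kyverno.io/ttl` key (the real code reads the key from the `kyverno` package constants):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ttlLabel extracts the TTL label from a raw JSON object the same way
// the admission handler does: unmarshal into a generic map, then
// type-assert each level of nesting.
func ttlLabel(raw []byte) (string, error) {
	var obj map[string]interface{}
	if err := json.Unmarshal(raw, &obj); err != nil {
		return "", err
	}
	metadata, ok := obj["metadata"].(map[string]interface{})
	if !ok {
		return "", fmt.Errorf("resource has no metadata field")
	}
	labels, ok := metadata["labels"].(map[string]interface{})
	if !ok {
		return "", fmt.Errorf("resource has no labels field")
	}
	ttl, ok := labels["cleanup.kyverno.io/ttl"].(string)
	if !ok {
		return "", fmt.Errorf("resource has no cleanup.kyverno.io/ttl label")
	}
	return ttl, nil
}

func main() {
	raw := []byte(`{"metadata":{"labels":{"cleanup.kyverno.io/ttl":"10s"}}}`)
	ttl, err := ttlLabel(raw)
	fmt.Println(ttl, err) // 10s <nil>
}
```

The comma-ok type assertions make each missing level an explicit error rather than a panic, which matters because arbitrary admission payloads may omit `labels` entirely.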

pkg/validation/ttl-label/validate.go (new file, 20 lines)
@@ -0,0 +1,20 @@
package ttllabel

import "time"

func Validate(ttlValue string) error {
	_, err := time.ParseDuration(ttlValue)
	if err != nil {
		// try parsing ttlValue as an absolute timestamp (a compact RFC 3339-like layout)
		layoutRFCC := "2006-01-02T150405Z"
		_, err := time.Parse(layoutRFCC, ttlValue)
		if err != nil {
			// fall back to a date-only layout
			layoutCustom := "2006-01-02"
			_, err = time.Parse(layoutCustom, ttlValue)
			if err != nil {
				return err
			}
		}
	}
	return nil
}
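The webhook accepts a TTL value if it parses under any of three layouts, tried in order: a Go duration, the compact timestamp layout, or a plain date. A sketch of the same acceptance rule (the `validate` helper is illustrative, mirroring `Validate` above):

```go
package main

import (
	"fmt"
	"time"
)

// validate returns nil if ttlValue parses as a duration ("10s", "2h30m"),
// the compact timestamp layout ("2023-07-19T120000Z"), or a plain date
// ("2023-07-19") -- the same three formats the controller understands.
func validate(ttlValue string) error {
	if _, err := time.ParseDuration(ttlValue); err == nil {
		return nil
	}
	if _, err := time.Parse("2006-01-02T150405Z", ttlValue); err == nil {
		return nil
	}
	_, err := time.Parse("2006-01-02", ttlValue)
	return err
}

func main() {
	for _, v := range []string{"10s", "2h30m", "2023-07-19T120000Z", "2023-07-19", "10ay"} {
		fmt.Println(v, validate(v) == nil)
	}
}
```

Note that a duration is interpreted relative to the resource's creation timestamp by the controller, while the timestamp and date layouts are absolute deadlines; the invalid-label kuttl test below exercises the rejection path with `10ay`.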

@@ -29,6 +29,9 @@ cleanupController:
        - ''
        resources:
        - pods
        verbs:
        - list
        - delete

serviceMonitor:
  enabled: true

@@ -36,3 +36,6 @@ cleanupController:
        - ''
        resources:
        - pods
        verbs:
        - list
        - delete

scripts/config/ttl/kyverno.yaml (new file, 12 lines)
@@ -0,0 +1,12 @@
cleanupController:
  rbac:
    clusterRole:
      extraResources:
        - apiGroups:
            - ''
          resources:
            - pods
          verbs:
            - list
            - delete
            - watch

test/conformance/kuttl/ttl/invalid-label/01-pod.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
apiVersion: kuttl.dev/v1beta1
kind: TestStep
apply:
- pod.yaml
assert:
- pod-assert.yaml

test/conformance/kuttl/ttl/invalid-label/02-wait.yaml (new file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
- command: sleep 15

test/conformance/kuttl/ttl/invalid-label/03-check.yaml (new file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: kuttl.dev/v1beta1
kind: TestStep
assert:
- pod-assert.yaml

test/conformance/kuttl/ttl/invalid-label/README.md (new file, 9 lines)
@@ -0,0 +1,9 @@
## Description

This test must not clean up the pod, because the label value is invalid and will not be recognized by the controller; here the label is `cleanup.kyverno.io/ttl: 10ay`.

## Expected Behavior

The pod `test-pod` is not cleaned up after 10s.

## Reference Issue(s)

test/conformance/kuttl/ttl/invalid-label/pod-assert.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    cleanup.kyverno.io/ttl: 10ay

test/conformance/kuttl/ttl/invalid-label/pod.yaml (new file, 10 lines)
@@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    cleanup.kyverno.io/ttl: 10ay
spec:
  containers:
  - image: nginx:latest
    name: nginx

test/conformance/kuttl/ttl/past-timestamp/01-pod.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
apiVersion: kuttl.dev/v1beta1
kind: TestStep
apply:
- pod.yaml
assert:
- pod-assert.yaml

test/conformance/kuttl/ttl/past-timestamp/02-wait.yaml (new file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
- command: sleep 5

test/conformance/kuttl/ttl/past-timestamp/03-check.yaml (new file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: kuttl.dev/v1beta1
kind: TestStep
error:
- pod-assert.yaml

test/conformance/kuttl/ttl/past-timestamp/README.md (new file, 9 lines)
@@ -0,0 +1,9 @@
## Description

This test cleans up the pod instantaneously, without any delay, because the label value `cleanup.kyverno.io/ttl: 2023-07-19T120000Z` is a timestamp in the past.

## Expected Behavior

The pod `test-pod` is cleaned up instantaneously.

## Reference Issue(s)

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    cleanup.kyverno.io/ttl: 2023-07-19T120000Z

test/conformance/kuttl/ttl/past-timestamp/pod.yaml (new file, 10 lines)
@@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    cleanup.kyverno.io/ttl: 2023-07-19T120000Z
spec:
  containers:
  - image: nginx:latest
    name: nginx

@@ -0,0 +1,6 @@
apiVersion: kuttl.dev/v1beta1
kind: TestStep
apply:
- resource.yaml
assert:
- resource-assert.yaml

test/conformance/kuttl/ttl/permission-lack/02-wait.yaml (new file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
- command: sleep 15

test/conformance/kuttl/ttl/permission-lack/03-check.yaml (new file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: kuttl.dev/v1beta1
kind: TestStep
assert:
- resource-assert.yaml

test/conformance/kuttl/ttl/permission-lack/README.md (new file, 9 lines)
@@ -0,0 +1,9 @@
## Description

This test must not clean up the config map, because the mounted service account lacks the permission required to clean up config maps via the `cleanup.kyverno.io/ttl: 10s` label assignment.

## Expected Behavior

The config map `test-cm` is not cleaned up after 10s.

## Reference Issue(s)

@@ -0,0 +1,6 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-cm
  labels:
    cleanup.kyverno.io/ttl: 10s

test/conformance/kuttl/ttl/permission-lack/resource.yaml (new file, 8 lines)
@@ -0,0 +1,8 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-cm
  labels:
    cleanup.kyverno.io/ttl: 10s
data:
  foo: bar

test/conformance/kuttl/ttl/valid-label/01-pod.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
apiVersion: kuttl.dev/v1beta1
kind: TestStep
apply:
- pod.yaml
assert:
- pod-assert.yaml

test/conformance/kuttl/ttl/valid-label/02-wait.yaml (new file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
- command: sleep 15

test/conformance/kuttl/ttl/valid-label/03-check.yaml (new file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: kuttl.dev/v1beta1
kind: TestStep
error:
- pod-assert.yaml

test/conformance/kuttl/ttl/valid-label/README.md (new file, 9 lines)
@@ -0,0 +1,9 @@
## Description

This test cleans up the pod via the `cleanup.kyverno.io/ttl: 10s` label assignment.

## Expected Behavior

The pod `test-pod` is cleaned up after 10s.

## Reference Issue(s)

test/conformance/kuttl/ttl/valid-label/pod-assert.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    cleanup.kyverno.io/ttl: 10s

test/conformance/kuttl/ttl/valid-label/pod.yaml (new file, 10 lines)
@@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    cleanup.kyverno.io/ttl: 10s
spec:
  containers:
  - image: nginx:latest
    name: nginx