Internal Rewrite (#91)

* Internal Rewrite

Signed-off-by: Frank Jogeleit <frank.jogeleit@web.de>

Parent: ad8fa022fd
Commit: 0de8e8bead

112 changed files with 4894 additions and 3269 deletions
@@ -1,4 +1,8 @@
 .deploy
 config.yaml
+build
 README.md
 docs
+**/test.db
+sqlite-database.db
+values.yaml
.gitignore (vendored, 4 changed lines)

@@ -2,3 +2,7 @@
 /config.yaml
 build
 /test.yaml
+**/test.db
+sqlite-database.db
+values.yaml
+coverage.out
CHANGELOG.md (37 changed lines)

@@ -1,5 +1,42 @@
 # Changelog

+# 2.0.0
+
+## Chart
+* Removed deprecated values `crdVersion`, `cleanupDebounceTime`
+* Simplify `policyPriorities`, `policyPriorities.enabled` was removed along with the watch feature
+* Priority determined mainly over severity
+* Add `sources` filter to target configurations
+* Improved `NetworkPolicy` configuration for all components
+* Metrics now an optional feature
+* Each component expose a single Port `8080`
+
+See [Migration Docs](http://localhost:3000/guide/05-migration) for details
+
+## Policy Reporter
+* modular functions for separate activation/deactivation
+  * REST API
+  * Metrics API
+  * Target pushes
+* PolicyReports are now stored in an internal SQLite
+* extended REST API based on the new SQLite DB for filters and grouping of data
+* metrics API is now optional
+* metrics and REST API using the same HTTP Server (were separated before)
+* improved CRD watch logic with Kubernetes client informer
+* `Yandex` changed to a general `S3` target.
+
+## Policy Reporter UI
+* Rewrite with NuxtJS
+* Simplified Proxy
+* Improved SPA file handling
+
+## Policy Reporter Kyverno Plugin
+* modular functions for separate activation/deactivation
+  * REST API
+  * Metrics API
+* metrics and REST API using the same HTTP Server (were separated before)
+* improved CRD watch logic with Kubernetes client informer
+
 # 1.12.6
 * Update Go Base Image for all Components
 * Policy Reporter [[#90](https://github.com/kyverno/policy-reporter-ui/pull/90) by [fjogeleit](https://github.com/fjogeleit)]
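The `sources` filter called out in the changelog is configured per target in the chart values (this commit adds a `sources: []` key to every target). A minimal, hedged sketch of how such a filter could be set, assuming only the value keys shown in this commit; the host and the `kyverno` source name are illustrations, not chart defaults:

```yaml
target:
  loki:
    host: "http://loki:3100"   # illustrative host, not a chart default
    minimumPriority: "warning"
    skipExistingOnStartup: true
    sources: ["kyverno"]       # forward only results produced by the kyverno source
```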
@@ -14,7 +14,7 @@ RUN go env
 RUN go get -d -v \
 && go install -v

-RUN CGO_ENABLED=0 go build -ldflags="${LD_FLAGS}" -o /app/build/policyreporter -v
+RUN CGO_ENABLED=1 go build -ldflags="${LD_FLAGS}" -o /app/build/policyreporter -v

 FROM scratch
 LABEL MAINTAINER="Frank Jogeleit <frank.jogeleit@gweb.de>"
Makefile (12 changed lines)

@@ -1,8 +1,8 @@
 GO ?= go
 BUILD ?= build
 REPO ?= ghcr.io/kyverno/policy-reporter
-IMAGE_TAG ?= 1.10.1
-LD_FLAGS="-s -w"
+IMAGE_TAG ?= 1.11.0
+LD_FLAGS='-s -w -linkmode external -extldflags "-static"'

 all: build

@@ -16,15 +16,15 @@ prepare:

 .PHONY: test
 test:
-	go test -v ./... -timeout=120s
+	go test -v ./... -timeout=10s

 .PHONY: coverage
 coverage:
-	go test -v ./... -covermode=count -coverprofile=coverage.out -timeout=120s
+	go test -v ./... -covermode=count -coverprofile=coverage.out -timeout=30s

 .PHONY: build
 build: prepare
-	CGO_ENABLED=0 $(GO) build -v -ldflags="-s -w" $(GOFLAGS) -o $(BUILD)/policyreporter .
+	CGO_ENABLED=1 $(GO) build -v -ldflags="-s -w" $(GOFLAGS) -o $(BUILD)/policyreporter .

.PHONY: docker-build
docker-build:

@@ -37,4 +37,4 @@ docker-push:

 .PHONY: docker-push-dev
 docker-push-dev:
-	@docker buildx build --progress plane --platform linux/amd64 --tag $(REPO):dev . --build-arg LD_FLAGS=$(LD_FLAGS) --push
+	@docker buildx build --progress plane --platform linux/arm64,linux/amd64 --tag $(REPO):dev . --build-arg LD_FLAGS=$(LD_FLAGS) --push
README.md (27 changed lines)

@@ -5,13 +5,13 @@

 Kyverno ships with two types of validation. You can either enforce a rule or audit it. If you don't want to block developers or if you want to try out a new rule, you can use the audit functionality. The audit configuration creates [PolicyReports](https://kyverno.io/docs/policy-reports/) which you can access with `kubectl`. Because I can't find a simple solution to get a general overview of this PolicyReports and PolicyReportResults, I created this tool to send information about PolicyReports to different targets like [Grafana Loki](https://grafana.com/oss/loki/), [Elasticsearch](https://www.elastic.co/de/elasticsearch/) or [Slack](https://slack.com/).

-Policy Reporter provides also a Prometheus Metrics API as well as an standalone mode along with the [Policy Reporter UI](https://github.com/kyverno/policy-reporter/wiki/policy-reporter-ui).
+Policy Reporter provides also a Prometheus Metrics API as well as an standalone mode along with the [Policy Reporter UI](https://kyverno.github.io/policy-reporter/guide/02-getting-started#core--policy-reporter-ui).

 This project is in an early stage. Please let me know if anything did not work as expected or if you want to send your audits to unsupported targets.

 ## Documentation

-You can find detailed Information and Screens about Features and Configurations in the [Documentation](https://github.com/kyverno/policy-reporter/wiki).
+You can find detailed Information and Screens about Features and Configurations in the [Documentation](https://kyverno.github.io/policy-reporter).

 ## Getting Started

@@ -27,10 +27,10 @@ helm repo update

 ### Basic Installation

-The basic installation provides an Prometheus Metrics Endpoint and different REST APIs, for more details have a look at the [Documentation](https://github.com/kyverno/policy-reporter/wiki/getting-started).
+The basic installation provides optional Prometheus Metrics and/or optional REST APIs, for more details have a look at the [Documentation](https://kyverno.github.io/policy-reporter/guide/02-getting-started).

 ```bash
-helm install policy-reporter policy-reporter/policy-reporter -n policy-reporter --create-namespace
+helm install policy-reporter policy-reporter/policy-reporter -n policy-reporter --set metrics.enabled=true --set rest.enabled=true --create-namespace
 ```

 ### Installation without Helm or Kustomize

@@ -48,24 +48,25 @@ kubectl port-forward service/policy-reporter-ui 8082:8080 -n policy-reporter
 ```
 Open `http://localhost:8082/` in your browser.

-Check the [Documentation](https://github.com/kyverno/policy-reporter/wiki/policy-reporter-ui) for Screens and additional Information
+Check the [Documentation](https://kyverno.github.io/policy-reporter/guide/02-getting-started#core--policy-reporter-ui) for Screens and additional Information

 ## Targets

-Policy Reporter supports the following [Targets](https://github.com/kyverno/policy-reporter/wiki/targets) to send new (Cluster)PolicyReport Results too:
-* [Grafana Loki](https://github.com/kyverno/policy-reporter/wiki/grafana-loki)
-* [Elasticsearch](https://github.com/kyverno/policy-reporter/wiki/elasticsearch)
-* [Slack](https://github.com/kyverno/policy-reporter/wiki/slack)
-* [Discord](https://github.com/kyverno/policy-reporter/wiki/discord)
-* [MS Teams](https://github.com/kyverno/policy-reporter/wiki/ms-teams)
-* [Policy Reporter UI](https://github.com/kyverno/policy-reporter/wiki/policy-reporter-ui-log)
+Policy Reporter supports the following [Targets](https://kyverno.github.io/policy-reporter/core/06-targets) to send new (Cluster)PolicyReport Results too:
+* [Grafana Loki](https://kyverno.github.io/policy-reporter/core/06-targets#grafana-loki)
+* [Elasticsearch](https://kyverno.github.io/policy-reporter/core/06-targets#elasticsearch)
+* [Slack](https://kyverno.github.io/policy-reporter/core/06-targets#slack)
+* [Discord](https://kyverno.github.io/policy-reporter/core/06-targets#discord)
+* [MS Teams](https://kyverno.github.io/policy-reporter/core/06-targets#microsoft-teams)
+* [Policy Reporter UI](https://kyverno.github.io/policy-reporter/core/06-targets#policy-reporter-ui)
+* [S3](https://kyverno.github.io/policy-reporter/core/06-targets#s3)


 ## Monitoring

 The Helm Chart includes optional SubChart for [Prometheus Operator](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) Integration. The provided Dashboards working without Loki and are only based on the Prometheus Metrics.

-Have a look into the [Documentation](https://github.com/kyverno/policy-reporter/wiki/prometheus-operator-integration) for details.
+Have a look into the [Documentation](https://kyverno.github.io/policy-reporter/guide/04-helm-chart-core/#configure-the-servicemonitor) for details.

 ### Grafana Dashboard Import

@@ -1,12 +1,12 @@
 dependencies:
 - name: monitoring
   repository: ""
-  version: 1.5.0
+  version: 2.0.0
 - name: ui
   repository: ""
-  version: 1.10.3
+  version: 2.0.0
 - name: kyvernoPlugin
   repository: ""
-  version: 0.7.1
-digest: sha256:ee1646e3f1a6dd7c329a7a4e6acb4d629aa1a4f750c82de737ae55c10e3136c0
-generated: "2021-11-11T09:48:37.183013+01:00"
+  version: 1.0.0
+digest: sha256:7346779f27b9446f94271cb4b7233bac1b2549cf1205219b055bef926d2ea110
+generated: "2021-12-13T15:40:00.73344+01:00"
@@ -5,11 +5,11 @@ description: |
   It creates Prometheus Metrics and can send rule validation events to different targets like Loki, Elasticsearch, Slack or Discord

 type: application
-version: 1.12.6
-appVersion: 1.10.3
+version: 2.0.0
+appVersion: 2.0.0

 icon: https://github.com/kyverno/kyverno/raw/main/img/logo.png
-home: https://github.com/kyverno/policy-reporter/wiki
+home: https://kyverno.github.io/policy-reporter
 sources:
 - https://github.com/kyverno/policy-reporter
 maintainers:

@@ -18,10 +18,10 @@ maintainers:
 dependencies:
 - name: monitoring
   condition: monitoring.enabled
-  version: "1.5.0"
+  version: "2.0.0"
 - name: ui
   condition: ui.enabled
-  version: "1.10.3"
+  version: "2.0.0"
 - name: kyvernoPlugin
   condition: kyvernoPlugin.enabled
-  version: "0.7.1"
+  version: "1.0.0"
@@ -4,7 +4,7 @@ Kyverno ships with two types of validation. You can either enforce a rule or aud

 ## Documentation

-You can find detailed Information and Screens about Features and Configurations in the [Documentation](https://github.com/kyverno/policy-reporter/wiki).
+You can find detailed Information and Screens about Features and Configurations in the [Documentation](https://kyverno.github.io/policy-reporter/guide/02-getting-started#core--policy-reporter-ui).

 ## Getting Started

@@ -20,7 +20,7 @@ helm repo update

 ### Basic Installation

-The basic installation provides an Prometheus Metrics Endpoint and different REST APIs, for more details have a look at the [Documentation](https://github.com/kyverno/policy-reporter/wiki/getting-started).
+The basic installation provides an Prometheus Metrics Endpoint and different REST APIs, for more details have a look at the [Documentation](https://kyverno.github.io/policy-reporter/guide/02-getting-started).

 ```bash
 helm install policy-reporter policy-reporter/policy-reporter -n policy-reporter --create-namespace

@@ -37,7 +37,7 @@ kubectl port-forward service/policy-reporter-ui 8082:8080 -n policy-reporter
 ```
 Open `http://localhost:8082/` in your browser.

-Check the [Documentation](https://github.com/kyverno/policy-reporter/wiki/policy-reporter-ui) for Screens and additional Information
+Check the [Documentation](https://kyverno.github.io/policy-reporter/guide/02-getting-started#core--policy-reporter-ui) for Screens and additional Information

 ## Resources

@@ -3,5 +3,5 @@ name: kyvernoPlugin
 description: Policy Reporter Kyverno Plugin

 type: application
-version: 0.7.1
-appVersion: 0.3.3
+version: 1.0.0
+appVersion: 1.0.0
@@ -58,3 +58,11 @@ Create the name of the service account to use
 {{- default "default" .Values.serviceAccount.name }}
 {{- end }}
 {{- end }}
+
+{{/*
+Selector labels
+*/}}
+{{- define "ui.selectorLabels" -}}
+app.kubernetes.io/name: ui
+app.kubernetes.io/instance: {{ .Release.Name }}
+{{- end }}
@@ -45,7 +45,9 @@ spec:
   {{- toYaml .Values.securityContext | nindent 12 }}
   {{- end }}
   args:
-  - --apiPort=8080
+  - --port=8080
+  - --metrics-enabled={{ .Values.metrics.enabled }}
+  - --rest-enabled={{ .Values.rest.enabled }}
   ports:
   - name: http
     containerPort: 2113
@@ -12,19 +12,17 @@ spec:
 - Egress
 ingress:
+- from:
+  - podSelector:
+      matchLabels:
+        {{- include "ui.selectorLabels" . | nindent 10 }}
+  ports:
+  - protocol: TCP
+    port: 8080
-- from:
-  ports:
-  - protocol: TCP
-    port: 2113
-egress:
-- to:
-  ports:
-  - protocol: TCP
-    port: {{ .Values.networkPolicy.kubernetesApiPort }}
 {{- with .Values.networkPolicy.ingress }}
 {{- toYaml . | nindent 2 }}
 {{- end }}
 {{- with .Values.networkPolicy.egress }}
 egress:
 {{- toYaml . | nindent 2 }}
 {{- end }}
 {{- end }}
@@ -2,7 +2,7 @@ image:
   registry: ghcr.io
   repository: kyverno/policy-reporter-kyverno-plugin
   pullPolicy: IfNotPresent
-  tag: 0.3.3
+  tag: 1.0.0

 imagePullSecrets: []

@@ -81,9 +81,22 @@ tolerations: []
 # Anti-affinity to disallow deploying client and master nodes on the same worker node
 affinity: {}

+# REST API
+rest:
+  enabled: true
+
+# Prometheus Metrics API
+metrics:
+  enabled: true
+
 # Enable a NetworkPolicy for this chart. Useful on clusters where Network Policies are
 # used and configured in a default-deny fashion.
 networkPolicy:
   enabled: false
-  kubernetesApiPort: 6443
-  egress: []
+  # Kubernetes API Server
+  egress:
+  - to:
+    ports:
+    - protocol: TCP
+      port: 6443
+  ingress: []
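For orientation, a sketch of how the egress list above could be extended through a values override. The API-server rule mirrors the chart default shown in this diff, while the DNS rule (UDP 53) is purely an illustrative assumption:

```yaml
networkPolicy:
  enabled: true
  egress:
    - to:
      ports:
        - protocol: TCP
          port: 6443   # Kubernetes API server, as in the chart default
    - to:
      ports:
        - protocol: UDP
          port: 53     # hypothetical extra rule: allow DNS lookups
```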
@@ -3,5 +3,5 @@ name: monitoring
 description: Policy Reporter Monitoring with predefined ServiceMonitor and Grafana Dashboards

 type: application
-version: 1.5.0
+version: 2.0.0
 appVersion: 0.0.0
@@ -44,8 +44,6 @@ app.kubernetes.io/instance: {{ .Release.Name }}
 {{- define "monitoring.namespace" -}}
 {{- if .Values.grafana.namespace -}}
 {{- .Values.grafana.namespace -}}
-{{- else if .Values.namespace -}}
-{{- .Values.namespace -}}
 {{- else -}}
 {{- .Release.Namespace -}}
 {{- end }}
@@ -1,6 +1,3 @@
-# monitoring namespace for Dashboard Configurations
-namespace: cattle-dashboards
-
 plugins:
   kyverno: false

@@ -3,5 +3,5 @@ name: ui
 description: Policy Reporter UI

 type: application
-version: 1.10.3
-appVersion: 0.15.1
+version: 2.0.0
+appVersion: 1.0.0
@@ -51,6 +51,22 @@ app.kubernetes.io/name: {{ include "ui.name" . }}
 app.kubernetes.io/instance: {{ .Release.Name }}
 {{- end }}
+
+{{/*
+Policy Reporter Selector labels
+*/}}
+{{- define "policyreporter.selectorLabels" -}}
+app.kubernetes.io/name: policy-reporter
+app.kubernetes.io/instance: {{ .Release.Name }}
+{{- end }}
+
+{{/*
+Kyverno Plugin Selector labels
+*/}}
+{{- define "kyvernoplugin.selectorLabels" -}}
+app.kubernetes.io/name: kyverno-plugin
+app.kubernetes.io/instance: {{ .Release.Name }}
+{{- end }}

 {{/*
 Create the name of the service account to use
 */}}
charts/policy-reporter/charts/ui/templates/config.yaml (new file, 10 lines)

@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: {{ include "ui.fullname" . }}-config
+  labels:
+    {{- include "ui.labels" . | nindent 4 }}
+data:
+  config.yaml: |-
+    logSize: {{ .Values.log.size }}
+    displayMode: {{ .Values.displayMode | quote }}
@@ -45,8 +45,8 @@ spec:
   {{- toYaml .Values.securityContext | nindent 12 }}
   {{- end }}
   args:
-  - -backend=http://{{ include "ui.policyReportServiceName" . }}:{{ .Values.global.port }}
-  - -log-size={{ .Values.log.size }}
+  - -config=/app/config/config.yaml
+  - -policy-reporter=http://{{ include "ui.policyReportServiceName" . }}:{{ .Values.global.port }}
   {{- if or .Values.plugins.kyverno .Values.global.plugins.kyverno }}
   - -kyverno-plugin=http://{{ include "ui.kyvernoPluginServiceName" . }}:8080
   {{- end }}
@@ -62,8 +62,16 @@ spec:
   httpGet:
     path: /
     port: http
+  volumeMounts:
+  - name: config-file
+    mountPath: /app/config
+    subPath: config.yaml
   resources:
     {{- toYaml .Values.resources | nindent 12 }}
+  volumes:
+  - name: config-file
+    configMap:
+      name: {{ include "ui.fullname" . }}-config
 {{- with .Values.nodeSelector }}
 nodeSelector:
   {{- toYaml . | nindent 8 }}
@@ -2,8 +2,9 @@
 apiVersion: networking.k8s.io/v1
 kind: NetworkPolicy
 metadata:
-  labels: {{ include "policyreporter.labels" . | nindent 4 }}
-  name: {{ include "policyreporter.fullname" . }}
+  name: {{ include "ui.fullname" . }}
+  labels:
+    {{- include "ui.labels" . | nindent 4 }}
 spec:
   podSelector:
     matchLabels: {{- include "ui.selectorLabels" . | nindent 6 }}

@@ -17,9 +18,21 @@ spec:
       port: {{ .Values.service.port }}
   egress:
   - to:
+    - podSelector:
+        matchLabels:
+          {{- include "policyreporter.selectorLabels" . | nindent 10 }}
     ports:
     - protocol: TCP
-      port: {{ .Values.global.port }}
+      port: 8080
+  {{- if or .Values.plugins.kyverno .Values.global.plugins.kyverno }}
+  - to:
+    - podSelector:
+        matchLabels:
+          {{- include "kyvernoplugin.selectorLabels" . | nindent 10 }}
+    ports:
+    - protocol: TCP
+      port: 8080
+  {{- end }}
   {{- with .Values.networkPolicy.egress }}
   {{- toYaml . | nindent 2 }}
   {{- end }}
@@ -1,5 +1,8 @@
 enabled: false

+# possible default displayModes: light/dark
+displayMode: ""
+
 log:
   # holds the latest 200 validation results in the UI Log
   size: 200

@@ -11,7 +14,7 @@ image:
   registry: ghcr.io
   repository: kyverno/policy-reporter-ui
   pullPolicy: IfNotPresent
-  tag: 0.15.1
+  tag: 1.0.0

 imagePullSecrets: []

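A short, hedged example of how the UI values introduced above might be combined in a parent-chart override; the key names come from this diff, the concrete values are only illustrations:

```yaml
ui:
  enabled: true
  displayMode: "dark"   # one of the documented displayModes: light/dark
  log:
    size: 500           # illustrative; the chart default shown above is 200
```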
@@ -2,6 +2,10 @@ loki:
   host: {{ .Values.target.loki.host | quote }}
   minimumPriority: {{ .Values.target.loki.minimumPriority | quote }}
   skipExistingOnStartup: {{ .Values.target.loki.skipExistingOnStartup }}
+  {{- with .Values.target.loki.sources }}
+  sources:
+  {{- toYaml . | nindent 4 }}
+  {{- end }}

 elasticsearch:
   host: {{ .Values.target.elasticsearch.host | quote }}

@@ -9,33 +13,62 @@ elasticsearch:
   rotation: {{ .Values.target.elasticsearch.rotation | default "dayli" | quote }}
   minimumPriority: {{ .Values.target.elasticsearch.minimumPriority | quote }}
   skipExistingOnStartup: {{ .Values.target.elasticsearch.skipExistingOnStartup }}
+  {{- with .Values.target.elasticsearch.sources }}
+  sources:
+  {{- toYaml . | nindent 4 }}
+  {{- end }}

 slack:
   webhook: {{ .Values.target.slack.webhook | quote }}
   minimumPriority: {{ .Values.target.slack.minimumPriority | quote }}
   skipExistingOnStartup: {{ .Values.target.slack.skipExistingOnStartup }}
+  {{- with .Values.target.slack.sources }}
+  sources:
+  {{- toYaml . | nindent 4 }}
+  {{- end }}

 discord:
   webhook: {{ .Values.target.discord.webhook | quote }}
   minimumPriority: {{ .Values.target.discord.minimumPriority | quote }}
   skipExistingOnStartup: {{ .Values.target.discord.skipExistingOnStartup }}
+  {{- with .Values.target.discord.sources }}
+  sources:
+  {{- toYaml . | nindent 4 }}
+  {{- end }}

 teams:
   webhook: {{ .Values.target.teams.webhook | quote }}
   minimumPriority: {{ .Values.target.teams.minimumPriority | quote }}
   skipExistingOnStartup: {{ .Values.target.teams.skipExistingOnStartup }}
+  {{- with .Values.target.teams.sources }}
+  sources:
+  {{- toYaml . | nindent 4 }}
+  {{- end }}

 ui:
   host: {{ include "policyreporter.uihost" . }}
   minimumPriority: {{ .Values.target.ui.minimumPriority | quote }}
   skipExistingOnStartup: {{ .Values.target.ui.skipExistingOnStartup }}
+  {{- with .Values.target.ui.sources }}
+  sources:
+  {{- toYaml . | nindent 4 }}
+  {{- end }}

-yandex:
-  accessKeyID: {{ .Values.target.yandex.accessKeyID }}
-  secretAccessKey: {{ .Values.target.yandex.secretAccessKey }}
-  region: {{ .Values.target.yandex.region }}
-  endpoint: {{ .Values.target.yandex.endpoint }}
-  bucket: {{ .Values.target.yandex.bucket }}
-  prefix: {{ .Values.target.yandex.prefix }}
-  minimumPriority: {{ .Values.target.yandex.minimumPriority | quote }}
-  skipExistingOnStartup: {{ .Values.target.yandex.skipExistingOnStartup }}
+s3:
+  accessKeyID: {{ .Values.target.s3.accessKeyID }}
+  secretAccessKey: {{ .Values.target.s3.secretAccessKey }}
+  region: {{ .Values.target.s3.region }}
+  endpoint: {{ .Values.target.s3.endpoint }}
+  bucket: {{ .Values.target.s3.bucket }}
+  prefix: {{ .Values.target.s3.prefix }}
+  minimumPriority: {{ .Values.target.s3.minimumPriority | quote }}
+  skipExistingOnStartup: {{ .Values.target.s3.skipExistingOnStartup }}
+  {{- with .Values.target.s3.sources }}
+  sources:
+  {{- toYaml . | nindent 4 }}
+  {{- end }}
+
+{{- with .Values.policyPriorities }}
+priorityMap:
+  {{- toYaml . | nindent 2 }}
+{{- end }}
@@ -2,7 +2,7 @@
 apiVersion: v1
 kind: Secret
 metadata:
-  name: {{ include "policyreporter.fullname" . }}-targets
+  name: {{ include "policyreporter.fullname" . }}-config
   labels:
     {{- include "policyreporter.labels" . | nindent 4 }}
 type: Opaque
@@ -28,8 +28,7 @@ spec:
       {{- toYaml . | nindent 8 }}
      {{- end }}
      annotations:
-       checksum/secret: {{ include (print .Template.BasePath "/targetssecret.yaml") . | sha256sum | quote }}
-       policy-priorities/enabled: {{ .Values.policyPriorities.enabled | quote }}
+       checksum/secret: {{ include (print .Template.BasePath "/config-secret.yaml") . | sha256sum | quote }}
      {{- with .Values.podAnnotations }}
       {{- toYaml . | nindent 8 }}
      {{- end }}
@@ -40,6 +39,10 @@ spec:
     {{- end }}
     serviceAccountName: {{ include "policyreporter.serviceAccountName" . }}
+    automountServiceAccountToken: true
+    {{- if .Values.podSecurityContext }}
     securityContext:
       {{- toYaml .Values.podSecurityContext | nindent 8 }}
+    {{- end }}
     containers:
     - name: {{ .Chart.Name }}
       image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
@@ -50,11 +53,11 @@ spec:
       {{- end }}
       args:
       - --config=/app/config.yaml
+      - --dbfile=/sqlite/database.db
+      - --metrics-enabled={{ or .Values.metrics.enabled .Values.monitoring.enabled }}
+      - --rest-enabled={{ or .Values.rest.enabled .Values.ui.enabled }}
       ports:
       - name: http
-        containerPort: 2112
-        protocol: TCP
-      - name: rest
         containerPort: 8080
         protocol: TCP
       livenessProbe:
@@ -64,6 +67,8 @@ spec:
       resources:
         {{- toYaml .Values.resources | nindent 12 }}
       volumeMounts:
+      - name: sqlite
+        mountPath: /sqlite
       - name: config-file
         mountPath: /app/config.yaml
       {{- if and .Values.existingTargetConfig.enabled .Values.existingTargetConfig.subPath }}
@@ -75,12 +80,14 @@ spec:
       - name: NAMESPACE
         value: {{ .Release.Namespace }}
     volumes:
+    - name: sqlite
+      emptyDir: {}
     - name: config-file
       secret:
         {{- if and .Values.existingTargetConfig.enabled .Values.existingTargetConfig.name }}
         secretName: {{ .Values.existingTargetConfig.name }}
         {{- else }}
-        secretName: {{ include "policyreporter.fullname" . }}-targets
+        secretName: {{ include "policyreporter.fullname" . }}-config
         {{- end }}
         optional: true
     {{- with .Values.nodeSelector }}
@@ -12,24 +12,23 @@ spec:
- Egress
ingress:
- from:
- podSelector:
matchLabels: {{- include "ui.selectorLabels" . | nindent 10 }}
ports:
- protocol: TCP
port: {{ .Values.global.port }}
- from:
ports:
- protocol: TCP
port: {{ .Values.service.port }}
port: 8080
{{- with .Values.networkPolicy.ingress }}
{{- toYaml . | nindent 2 }}
{{- end }}
egress:
{{- if .Values.ui.enabled }}
- to:
- podSelector:
matchLabels: {{- include "ui.selectorLabels" . | nindent 10 }}
ports:
- protocol: TCP
port: {{ .Values.ui.service.port }}
- to:
ports:
- protocol: TCP
port: {{ .Values.networkPolicy.kubernetesApiPort }}
{{- end }}
{{- with .Values.networkPolicy.egress }}
{{- toYaml . | nindent 2 }}
{{- end }}
@@ -1,12 +0,0 @@
-{{- if and .Values.policyPriorities.enabled .Values.policyPriorities.mapping -}}
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: policy-reporter-priorities
-  labels:
-    {{- include "policyreporter.labels" . | nindent 4 }}
-data:
-  {{- with .Values.policyPriorities.mapping }}
-  {{- toYaml . | nindent 4 }}
-  {{- end }}
-{{- end }}
@@ -1,17 +0,0 @@
-{{- if .Values.policyPriorities.enabled -}}
-apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  name: {{ include "policyreporter.fullname" . }}
-  labels:
-    {{- include "policyreporter.labels" . | nindent 4 }}
-rules:
-- apiGroups:
-  - ''
-  resources:
-  - configmaps
-  verbs:
-  - get
-  - list
-  - watch
-{{- end }}
@@ -1,16 +0,0 @@
-{{- if .Values.policyPriorities.enabled -}}
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: {{ include "policyreporter.fullname" . }}
-  labels:
-    {{- include "policyreporter.labels" . | nindent 4 }}
-roleRef:
-  kind: Role
-  name: {{ include "policyreporter.fullname" . }}
-  apiGroup: rbac.authorization.k8s.io
-subjects:
-- kind: "ServiceAccount"
-  name: {{ include "policyreporter.serviceAccountName" . }}
-  namespace: {{ .Release.Namespace }}
-{{- end }}
@@ -1,5 +1,3 @@
-{{- $apiEnabled := .Values.api.enabled -}}
-{{- $uiEnabled := .Values.ui.enabled -}}
 {{- if .Values.service.enabled -}}
 apiVersion: v1
 kind: Service

@@ -21,12 +19,6 @@ spec:
     targetPort: http
     protocol: TCP
     name: http
-  {{- if or $apiEnabled $uiEnabled }}
-  - port: {{ .Values.global.port }}
-    targetPort: rest
-    protocol: TCP
-    name: rest
-  {{- end }}
  selector:
    {{- include "policyreporter.selectorLabels" . | nindent 4 }}
 {{- end }}
@@ -2,7 +2,7 @@ image:
   registry: ghcr.io
   repository: kyverno/policy-reporter
   pullPolicy: IfNotPresent
-  tag: 1.10.3
+  tag: 2.0.0

 imagePullSecrets: []

@@ -42,7 +42,10 @@ service:
   labels: {}
   type: ClusterIP
   # integer number. This is port for service
-  port: 2112
+  port: 8080

 podSecurityContext:
   fsGroup: 1234

 securityContext:
   runAsUser: 1234
@@ -66,18 +69,31 @@ resources: {}
 # resources, such as Minikube. If you do want to specify resources, uncomment the following
 # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
 # limits:
-#   memory: 30Mi
+#   memory: 100Mi
 #   cpu: 10m
 # requests:
-#   memory: 20Mi
+#   memory: 75Mi
 #   cpu: 5m

 # Enable a NetworkPolicy for this chart. Useful on clusters where Network Policies are
 # used and configured in a default-deny fashion.
 networkPolicy:
   enabled: false
-  egress: []
   kubernetesApiPort: 6443
+  # Kubernetes API Server
+  egress:
+  - to:
+    ports:
+    - protocol: TCP
+      port: 6443
+  ingress: []
+
+# REST API
+rest:
+  enabled: false
+
+# Prometheus Metrics API
+metrics:
+  enabled: false

 # enable policy-report-ui
 ui:
@@ -107,26 +123,12 @@ global:
   # additional labels added on each resource
   labels: {}

-# DEPRECTED - Can be removed
-# Policy Reporter watches now for both existing versions by default
-crdVersion: v1alpha1
-
-# DEPRECTED - Can be removed
-# Policy Reporter uses a new internal cache instead
-cleanupDebounceTime: 20
-
-api:
-  enabled: false
-
-# Policy Priorities
-policyPriorities:
-  enabled: false
-  # configure mappings from policy to priority
-  # you can use default to configure a default priority not passing results
-  # example mapping
-  # default: warning
-  # require-ns-labels: error
-  mapping: {}
+# configure mappings from policy to priority
+# you can use default to configure a default priority for fail results
+# example mapping
+# default: warning
+# require-ns-labels: error
+policyPriorities: {}

 # Reference a configuration which already exists instead of creating one
 existingTargetConfig:
@@ -143,6 +145,8 @@ target:
     host: ""
     # minimum priority "" < info < warning < critical < error
     minimumPriority: ""
+    # list of sources which should send to loki
+    sources: []
     # Skip already existing PolicyReportResults on startup
     skipExistingOnStartup: true

@@ -156,6 +160,8 @@ target:
     rotation: ""
     # minimum priority "" < info < warning < critical < error
     minimumPriority: ""
+    # list of sources which should send to elasticsearch
+    sources: []
     # Skip already existing PolicyReportResults on startup
     skipExistingOnStartup: true

@@ -164,6 +170,8 @@ target:
     webhook: ""
     # minimum priority "" < info < warning < critical < error
     minimumPriority: ""
+    # list of sources which should send to slack
+    sources: []
     # Skip already existing PolicyReportResults on startup
     skipExistingOnStartup: true

@@ -172,6 +180,8 @@ target:
     webhook: ""
     # minimum priority "" < info < warning < critical < error
     minimumPriority: ""
+    # list of sources which should send to discord
+    sources: []
     # Skip already existing PolicyReportResults on startup
     skipExistingOnStartup: true

@@ -180,6 +190,8 @@ target:
     webhook: ""
     # minimum priority "" < info < warning < critical < error
     minimumPriority: ""
+    # list of sources which should send to teams
+    sources: []
     # Skip already existing PolicyReportResults on startup
     skipExistingOnStartup: true

@@ -188,18 +200,30 @@ target:
     host: ""
     # minimum priority "" < info < warning < critical < error
     minimumPriority: "warning"
+    # list of sources which should send to the UI Log
+    sources: []
     # Skip already existing PolicyReportResults on startup
     skipExistingOnStartup: true

-  yandex:
-    accessKeyID: "" # yandex access key
-    secretAccessKey: "" # yandex secret access key
-    region: "" # yandex storage region (default: ru-central-1)
-    endpoint: "" # yandex storage endpoint (default: https://storage.yandexcloud.net)
-    bucket: "" # Yandex storage, bucket name
-    prefix: "" # name of prefix, keys will have format: s3://<bucket>/<prefix>/YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json
-    minimumPriority: "" # minimum priority "" < info < warning < critical < error
-    skipExistingOnStartup: true # Skip already existing PolicyReportResults on startup
+  s3:
+    # S3 access key
+    accessKeyID: ""
+    # S3 secret access key
+    secretAccessKey: ""
+    # S3 storage region
+    region: ""
+    # S3 storage endpoint
+    endpoint: ""
+    # S3 storage, bucket name
+    bucket: ""
+    # name of prefix, keys will have format: s3://<bucket>/<prefix>/YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json
+    prefix: ""
+    # minimum priority "" < info < warning < critical < error
+    minimumPriority: ""
+    # list of sources which should send to S3
+    sources: []
+    # Skip already existing PolicyReportResults on startup
+    skipExistingOnStartup: true

 # Node labels for pod assignment
 # ref: https://kubernetes.io/docs/user-guide/node-selection/
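A hedged example of a filled-in S3 target, using the keys introduced above; endpoint, bucket, credentials and the source name are placeholders rather than chart defaults:

```yaml
target:
  s3:
    accessKeyID: "AKIAEXAMPLE"                          # placeholder credential
    secretAccessKey: "changeme"                         # placeholder credential
    region: "eu-central-1"
    endpoint: "https://s3.eu-central-1.amazonaws.com"   # assumed S3-compatible endpoint
    bucket: "policy-reports"
    prefix: "cluster-1"
    minimumPriority: "warning"
    sources: ["kyverno"]                                # illustrative source filter
    skipExistingOnStartup: true
```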
@@ -216,10 +240,10 @@ affinity: {}
 livenessProbe:
   httpGet:
     path: /ready
-    port: rest
+    port: http

 # readinessProbe for policy-reporter
 readinessProbe:
   httpGet:
     path: /healthz
-    port: rest
+    port: http
cmd/root.go (40 changed lines)

@@ -25,8 +25,6 @@ func NewCLI() *cobra.Command {
 func loadConfig(cmd *cobra.Command) (*config.Config, error) {
 	v := viper.New()

-	v.SetDefault("namespace", "policy-reporter")
-
 	cfgFile := ""

 	configFlag := cmd.Flags().Lookup("config")

@@ -44,38 +42,36 @@ func loadConfig(cmd *cobra.Command) (*config.Config, error) {
 	v.AutomaticEnv()

 	if err := v.ReadInConfig(); err != nil {
-		log.Println("[INFO] No target configuration file found")
-	}
-
-	if flag := cmd.Flags().Lookup("loki"); flag != nil {
-		v.BindPFlag("loki.host", flag)
-	}
-	if flag := cmd.Flags().Lookup("loki-minimum-priority"); flag != nil {
-		v.BindPFlag("loki.minimumPriority", flag)
-	}
-	if flag := cmd.Flags().Lookup("loki-skip-existing-on-startup"); flag != nil {
-		v.BindPFlag("loki.skipExistingOnStartup", flag)
+		log.Println("[INFO] No configuration file found")
 	}

 	if flag := cmd.Flags().Lookup("kubeconfig"); flag != nil {
 		v.BindPFlag("kubeconfig", flag)
 	}

-	if flag := cmd.Flags().Lookup("crd-version"); flag != nil {
-		v.BindPFlag("crdVersion", flag)
-	}
-
-	if flag := cmd.Flags().Lookup("cleanup-debounce-time"); flag != nil {
-		v.BindPFlag("cleanupDebounceTime", flag)
-	}
-
-	if flag := cmd.Flags().Lookup("apiPort"); flag != nil {
+	if flag := cmd.Flags().Lookup("port"); flag != nil {
 		v.BindPFlag("api.port", flag)
 	}

+	if flag := cmd.Flags().Lookup("rest-enabled"); flag != nil {
+		v.BindPFlag("rest.enabled", flag)
+	}
+
+	if flag := cmd.Flags().Lookup("metrics-enabled"); flag != nil {
+		v.BindPFlag("metrics.enabled", flag)
+	}
+
+	if flag := cmd.Flags().Lookup("dbfile"); flag != nil {
+		v.BindPFlag("dbfile", flag)
+	}
+
 	c := &config.Config{}

 	err := v.Unmarshal(c)

+	if c.DBFile == "" {
+		c.DBFile = "sqlite-database.db"
+	}
+
 	return c, err
 }
cmd/run.go (67 changed lines)

@@ -2,15 +2,12 @@ package cmd

 import (
 	"context"
+	"errors"
 	"flag"

 	"golang.org/x/sync/errgroup"
-	"net/http"

 	"github.com/kyverno/policy-reporter/pkg/config"
-	"github.com/kyverno/policy-reporter/pkg/metrics"
-	"github.com/kyverno/policy-reporter/pkg/report"
-	"github.com/kyverno/policy-reporter/pkg/target"
-	"github.com/prometheus/client_golang/prometheus/promhttp"
 	"github.com/spf13/cobra"
 	"k8s.io/client-go/rest"
 	"k8s.io/client-go/tools/clientcmd"

@@ -40,43 +37,46 @@ func newRunCMD() *cobra.Command {

 			resolver := config.NewResolver(c, k8sConfig)

-			client, err := resolver.PolicyReportClient(ctx)
+			client, err := resolver.PolicyReportClient()
 			if err != nil {
 				return err
 			}

-			client.RegisterCallback(metrics.CreateMetricsCallback())
-
-			targets := resolver.TargetClients()
-
-			if len(targets) > 0 {
-				client.RegisterPolicyResultCallback(func(r report.Result, e bool) {
-					for _, t := range targets {
-						go func(target target.Client, result report.Result, preExisted bool) {
-							if preExisted && target.SkipExistingOnStartup() {
-								return
-							}
-
-							target.Send(result)
-						}(t, r, e)
-					}
-				})
-
-				client.RegisterPolicyResultWatcher(resolver.SkipExistingOnStartup())
-			}
+			resolver.RegisterSendResultListener()

 			g := &errgroup.Group{}

-			g.Go(func() error {
-				return client.StartWatching(ctx)
-			})
+			server := resolver.APIServer(client.GetFoundResources())

-			g.Go(resolver.APIServer().Start)
+			if c.REST.Enabled {
+				db, err := resolver.Database()
+				if err != nil {
+					return err
+				}
+				defer db.Close()
+
+				store, err := resolver.PolicyReportStore(db)
+				if err != nil {
+					return err
+				}
+
+				resolver.RegisterStoreListener(store)
+				server.RegisterV1Handler(store)
+			}
+
+			if c.Metrics.Enabled {
+				resolver.RegisterMetricsListener()
+				server.RegisterMetricsHandler()
+			}
+
+			g.Go(server.Start)

 			g.Go(func() error {
-				http.Handle("/metrics", promhttp.Handler())
+				eventChan := client.WatchPolicyReports(ctx)

-				return http.ListenAndServe(":2112", nil)
+				resolver.EventPublisher().Publish(eventChan)
+
+				return errors.New("event publisher stoped")
 			})

 			return g.Wait()

@@ -86,7 +86,10 @@ func newRunCMD() *cobra.Command {
 	// For local usage
 	cmd.PersistentFlags().StringP("kubeconfig", "k", "", "absolute path to the kubeconfig file")
 	cmd.PersistentFlags().StringP("config", "c", "", "target configuration file")
-	cmd.PersistentFlags().IntP("apiPort", "a", 8080, "http port for the optional rest api")
+	cmd.PersistentFlags().IntP("port", "p", 8080, "http port for the optional rest api")
+	cmd.PersistentFlags().StringP("dbfile", "d", "sqlite-database.db", "path to the SQLite DB File")
+	cmd.PersistentFlags().BoolP("metrics-enabled", "m", false, "Enable Policy Reporter's Metrics API")
+	cmd.PersistentFlags().BoolP("rest-enabled", "r", false, "Enable Policy Reporter's REST API")

 	flag.Parse()

go.mod (67 changed lines)

@@ -1,30 +1,67 @@
module github.com/kyverno/policy-reporter

go 1.15
go 1.17

require (
	github.com/aws/aws-sdk-go v1.41.9
	github.com/cespare/xxhash/v2 v2.1.2 // indirect
	github.com/google/gofuzz v1.2.0 // indirect
	github.com/imdario/mergo v0.3.12 // indirect
	github.com/json-iterator/go v1.1.12 // indirect
	github.com/aws/aws-sdk-go v1.42.8
	github.com/mattn/go-sqlite3 v1.14.9
	github.com/patrickmn/go-cache v2.1.0+incompatible
	github.com/prometheus/client_golang v1.11.0
	github.com/prometheus/client_model v0.2.0
	github.com/prometheus/common v0.32.1 // indirect
	github.com/prometheus/procfs v0.7.3 // indirect
	github.com/spf13/cobra v1.2.1
	github.com/spf13/viper v1.9.0
	golang.org/x/net v0.0.0-20211020060615-d418f374d309 // indirect
	golang.org/x/oauth2 v0.0.0-20211005180243-6b3c2da341f1 // indirect
	golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
	golang.org/x/sys v0.0.0-20211022215931-8e5104632af7 // indirect
	k8s.io/api v0.22.4
	k8s.io/apimachinery v0.22.4
	k8s.io/client-go v0.22.4
)

require (
	github.com/beorn7/perks v1.0.1 // indirect
	github.com/cespare/xxhash/v2 v2.1.2 // indirect
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/evanphx/json-patch v4.11.0+incompatible // indirect
	github.com/fsnotify/fsnotify v1.5.1 // indirect
	github.com/go-logr/logr v1.2.0 // indirect
	github.com/gogo/protobuf v1.3.2 // indirect
	github.com/golang/protobuf v1.5.2 // indirect
	github.com/google/go-cmp v0.5.6 // indirect
	github.com/google/gofuzz v1.2.0 // indirect
	github.com/googleapis/gnostic v0.5.5 // indirect
	github.com/hashicorp/hcl v1.0.0 // indirect
	github.com/imdario/mergo v0.3.12 // indirect
	github.com/inconshreveable/mousetrap v1.0.0 // indirect
	github.com/jmespath/go-jmespath v0.4.0 // indirect
	github.com/json-iterator/go v1.1.12 // indirect
	github.com/magiconair/properties v1.8.5 // indirect
	github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
	github.com/mitchellh/mapstructure v1.4.2 // indirect
	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
	github.com/modern-go/reflect2 v1.0.2 // indirect
	github.com/pelletier/go-toml v1.9.4 // indirect
	github.com/pkg/errors v0.9.1 // indirect
	github.com/prometheus/common v0.32.1 // indirect
	github.com/prometheus/procfs v0.7.3 // indirect
	github.com/spf13/afero v1.6.0 // indirect
	github.com/spf13/cast v1.4.1 // indirect
	github.com/spf13/jwalterweatherman v1.1.0 // indirect
	github.com/spf13/pflag v1.0.5 // indirect
	github.com/subosito/gotenv v1.2.0 // indirect
	golang.org/x/net v0.0.0-20211118161319-6a13c67c3ce4 // indirect
	golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 // indirect
	golang.org/x/sys v0.0.0-20211117180635-dee7805ff2e1 // indirect
	golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
	golang.org/x/text v0.3.7 // indirect
	k8s.io/api v0.22.2
	k8s.io/apimachinery v0.22.2
	k8s.io/client-go v0.22.2
	golang.org/x/time v0.0.0-20211116232009-f0f3c7e86c11 // indirect
	google.golang.org/appengine v1.6.7 // indirect
	google.golang.org/protobuf v1.27.1 // indirect
	gopkg.in/inf.v0 v0.9.1 // indirect
	gopkg.in/ini.v1 v1.64.0 // indirect
	gopkg.in/yaml.v2 v2.4.0 // indirect
	gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
	k8s.io/klog/v2 v2.30.0 // indirect
	k8s.io/utils v0.0.0-20210930125809-cb0fa318a74b // indirect
	k8s.io/kube-openapi v0.0.0-20211109043538-20434351676c // indirect
	k8s.io/utils v0.0.0-20211116205334-6203023598ed // indirect
	sigs.k8s.io/structured-merge-diff/v4 v4.2.0 // indirect
	sigs.k8s.io/yaml v1.3.0 // indirect
)
48
go.sum
48
go.sum
|
@ -67,8 +67,8 @@ github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmV
|
|||
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
|
||||
github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
|
||||
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
|
||||
github.com/aws/aws-sdk-go v1.41.9 h1:Xb4gWjA90ju0u6Fr2lMAsMOGuhw1g4sTFOqh9SUHgN0=
|
||||
github.com/aws/aws-sdk-go v1.41.9/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q=
|
||||
github.com/aws/aws-sdk-go v1.42.8 h1:Tj2RP4Fas1mYchwbmw0qWLJIEATAseyp5iTa1D+LWYQ=
|
||||
github.com/aws/aws-sdk-go v1.42.8/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q=
|
||||
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
|
||||
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
|
||||
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
|
||||
|
@ -212,6 +212,7 @@ github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLe
|
|||
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
|
||||
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
|
||||
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/google/uuid v1.1.2 h1:EVhdT+1Kseyi1/pUmXKaFxYsDNy9RQYkMWRH68J/W7Y=
|
||||
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
|
||||
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
|
||||
|
@ -299,6 +300,8 @@ github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hd
|
|||
github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
|
||||
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
|
||||
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
|
||||
github.com/mattn/go-sqlite3 v1.14.9 h1:10HX2Td0ocZpYEjhilsuo6WWtUqttj2Kb0KtD86/KYA=
|
||||
github.com/mattn/go-sqlite3 v1.14.9/go.mod h1:NyWgC/yNuGj7Q9rpYnZvas74GogHl5/Z4A/KQRfk6bU=
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
|
||||
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
|
||||
|
@ -530,8 +533,8 @@ golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qx
|
|||
golang.org/x/net v0.0.0-20210520170846-37e1c6afe023/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20211020060615-d418f374d309 h1:A0lJIi+hcTR6aajJH4YqKWwohY4aW9RO7oRMcdv+HKI=
|
||||
golang.org/x/net v0.0.0-20211020060615-d418f374d309/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20211118161319-6a13c67c3ce4 h1:DZshvxDdVoeKIbudAdFEKi+f70l51luSy/7b76ibTY0=
|
||||
golang.org/x/net v0.0.0-20211118161319-6a13c67c3ce4/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
|
@ -548,8 +551,8 @@ golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ
|
|||
golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/oauth2 v0.0.0-20211005180243-6b3c2da341f1 h1:B333XXssMuKQeBwiNODx4TupZy7bf4sxFZnN2ZOcvUE=
|
||||
golang.org/x/oauth2 v0.0.0-20211005180243-6b3c2da341f1/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 h1:RerP+noqYHUQ8CMRcPlC2nvTa4dcBIjegkuWdcUDuqg=
|
||||
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
|
@ -630,8 +633,8 @@ golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBc
|
|||
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20211022215931-8e5104632af7 h1:e2q1CMOFXDvurT2sa2yhJAkuA2n8Rd9tMDd7Tcfvs6M=
|
||||
golang.org/x/sys v0.0.0-20211022215931-8e5104632af7/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20211117180635-dee7805ff2e1 h1:kwrAHlwJ0DUBZwQ238v+Uod/3eZ8B2K5rYsUHBQvzmI=
|
||||
golang.org/x/sys v0.0.0-20211117180635-dee7805ff2e1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
|
||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||
|
@ -650,8 +653,9 @@ golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
|
|||
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac h1:7zkz7BUtwNFFqcowJ+RIgu2MaV/MapERkDIy+mwPyjs=
|
||||
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20211116232009-f0f3c7e86c11 h1:GZokNIeuVkl3aZHJchRrr13WCsols02MLUcz1U9is6M=
|
||||
golang.org/x/time v0.0.0-20211116232009-f0f3c7e86c11/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
|
||||
|
@ -853,8 +857,9 @@ gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMy
|
|||
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
|
||||
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
|
||||
gopkg.in/ini.v1 v1.62.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
|
||||
gopkg.in/ini.v1 v1.63.2 h1:tGK/CyBg7SMzb60vP1M03vNZ3VDu3wGQJwn7Sxi9r3c=
|
||||
gopkg.in/ini.v1 v1.63.2/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
|
||||
gopkg.in/ini.v1 v1.64.0 h1:Mj2zXEXcNb5joEiSA0zc3HZpTst/iyjNiR4CN8tDzOg=
|
||||
gopkg.in/ini.v1 v1.64.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
|
||||
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@ -877,28 +882,29 @@ honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWh
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.22.2 h1:M8ZzAD0V6725Fjg53fKeTJxGsJvRbk4TEm/fexHMtfw=
k8s.io/api v0.22.2/go.mod h1:y3ydYpLJAaDI+BbSe2xmGcqxiWHmWjkEeIbiwHvnPR8=
k8s.io/apimachinery v0.22.2 h1:ejz6y/zNma8clPVfNDLnPbleBo6MpoFy/HBiBqCouVk=
k8s.io/apimachinery v0.22.2/go.mod h1:O3oNtNadZdeOMxHFVxOreoznohCpy0z6mocxbZr7oJ0=
k8s.io/client-go v0.22.2 h1:DaSQgs02aCC1QcwUdkKZWOeaVsQjYvWv8ZazcZ6JcHc=
k8s.io/client-go v0.22.2/go.mod h1:sAlhrkVDf50ZHx6z4K0S40wISNTarf1r800F+RlCF6U=
k8s.io/api v0.22.4 h1:UvyHW0ezB2oIgHAxlYoo6UJQObYXU7awuNarwoHEOjw=
k8s.io/api v0.22.4/go.mod h1:Rgs+9gIGYC5laXQSZZ9JqT5NevNgoGiOdVWi1BAB3qk=
k8s.io/apimachinery v0.22.4 h1:9uwcvPpukBw/Ri0EUmWz+49cnFtaoiyEhQTK+xOe7Ck=
k8s.io/apimachinery v0.22.4/go.mod h1:yU6oA6Gnax9RrxGzVvPFFJ+mpnW6PBSqp0sx0I0HHW0=
k8s.io/client-go v0.22.4 h1:aAQ1Wk+I3bjCNk35YWUqbaueqrIonkfDPJSPDDe8Kfg=
k8s.io/client-go v0.22.4/go.mod h1:Yzw4e5e7h1LNHA4uqnMVrpEpUs1hJOiuBsJKIlRCHDA=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.9.0/go.mod h1:hy9LJ/NvuK+iVyP4Ehqva4HxZG/oXyIS3n3Jmire4Ec=
k8s.io/klog/v2 v2.30.0 h1:bUO6drIvCIsvZ/XFgfxoGFQU/a4Qkh0iAlvUR7vlHJw=
k8s.io/klog/v2 v2.30.0/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e h1:KLHHjkdQFomZy8+06csTWZ0m1343QqxZhR2LJ1OxCYM=
k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e/go.mod h1:vHXdDvt9+2spS2Rx9ql3I8tycm3H9FDfdUoIuKCefvw=
k8s.io/kube-openapi v0.0.0-20211109043538-20434351676c h1:jvamsI1tn9V0S8jicyX82qaFC0H/NKxv2e5mbqsgR80=
k8s.io/kube-openapi v0.0.0-20211109043538-20434351676c/go.mod h1:vHXdDvt9+2spS2Rx9ql3I8tycm3H9FDfdUoIuKCefvw=
k8s.io/utils v0.0.0-20210819203725-bdf08cb9a70a/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20210930125809-cb0fa318a74b h1:wxEMGetGMur3J1xuGLQY7GEQYg9bZxKn3tKo5k/eYcs=
k8s.io/utils v0.0.0-20210930125809-cb0fa318a74b/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20211116205334-6203023598ed h1:ck1fRPWPJWsMd8ZRFsWc6mh/zHp5fZ/shhbrgPUxDAE=
k8s.io/utils v0.0.0-20211116205334-6203023598ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.1.2 h1:Hr/htKFmJEbtMgS/UD0N+gtgctAqz81t3nu+sPzynno=
sigs.k8s.io/structured-merge-diff/v4 v4.1.2/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
sigs.k8s.io/structured-merge-diff/v4 v4.2.0 h1:kDvPBbnPk+qYmkHmSo8vKGp438IASWofnbbUKDE/bv0=
sigs.k8s.io/structured-merge-diff/v4 v4.2.0/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=

@ -6,7 +6,7 @@ The installation requires a `policy-reporter` namespace. Because the installatio

## Policy Reporter

The `policy-reporter` folder is the basic installation for Policy Reporter without the UI. Includes a basic Configuration Secret `policy-reporter-targets`, empty by default and the `http://policy-reporter:2112/metrics` Endpoint.
The `policy-reporter` folder is the basic installation for Policy Reporter without the UI. Includes a basic Configuration Secret `policy-reporter-targets`, empty by default and the `http://policy-reporter:8080/metrics` Endpoint.

### Installation

@ -58,6 +58,8 @@ kubectl apply -f https://raw.githubusercontent.com/kyverno/policy-reporter/main/

To configure your notification targets for Policy Reporter, create a secret called `policy-reporter-targets` in the `policy-reporter` namespace with a key `config.yaml` and the following structure as its value:

```yaml
priorityMap: {}

loki:
  host: ""
  minimumPriority: ""

@ -90,6 +92,15 @@ ui:
  minimumPriority: ""
  skipExistingOnStartup: true

s3:
  endpoint: ""
  region: ""
  bucket: ""
  secretAccessKey: ""
  accessKeyID: ""
  minimumPriority: "warning"
  skipExistingOnStartup: true
  sources: []
```

The `kyverno-policy-reporter-ui` and `default-policy-reporter-ui` installations have an optional, preconfigured `target-security.yaml` to apply. This secret configures the Policy Reporter UI as a target for Policy Reporter.

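A quick way to catch indentation or key mistakes in this file is to parse it locally before wrapping it into the secret. The sketch below is only an illustration: it uses `gopkg.in/yaml.v2` (already part of the module) together with a hand-written struct that covers just the `loki` and `s3` sections shown above, not Policy Reporter's actual configuration types.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"gopkg.in/yaml.v2"
)

// illustrative subset of the target configuration above, not the project's own config types
type targetConfig struct {
	Loki struct {
		Host            string `yaml:"host"`
		MinimumPriority string `yaml:"minimumPriority"`
	} `yaml:"loki"`
	S3 struct {
		Endpoint              string   `yaml:"endpoint"`
		Region                string   `yaml:"region"`
		Bucket                string   `yaml:"bucket"`
		MinimumPriority       string   `yaml:"minimumPriority"`
		SkipExistingOnStartup bool     `yaml:"skipExistingOnStartup"`
		Sources               []string `yaml:"sources"`
	} `yaml:"s3"`
}

func main() {
	raw, err := ioutil.ReadFile("config.yaml")
	if err != nil {
		log.Fatal(err)
	}

	var cfg targetConfig
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		log.Fatalf("config.yaml is not valid YAML: %v", err)
	}

	fmt.Printf("loki host: %q, s3 bucket: %q\n", cfg.Loki.Host, cfg.S3.Bucket)
}
```
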
@ -2,7 +2,7 @@
apiVersion: v1
kind: Secret
metadata:
  name: policy-reporter-targets
  name: policy-reporter-config
  namespace: policy-reporter
  labels:
    app.kubernetes.io/name: policy-reporter

@ -66,14 +66,10 @@ metadata:
spec:
  type: ClusterIP
  ports:
    - port: 2112
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
    - port: 8080
      targetPort: rest
      protocol: TCP
      name: rest
  selector:
    app.kubernetes.io/name: policy-reporter
---

@ -97,7 +93,7 @@ spec:
      automountServiceAccountToken: false
      containers:
        - name: ui
          image: "ghcr.io/kyverno/policy-reporter-ui:0.15.0"
          image: "ghcr.io/kyverno/policy-reporter-ui:1.0.0"
          imagePullPolicy: IfNotPresent
          securityContext:
            allowPrivilegeEscalation: false

@ -109,8 +105,7 @@ spec:
            runAsNonRoot: true
            runAsUser: 1234
          args:
            - -backend=http://policy-reporter:8080
            - -log-size=200
            - -policy-reporter=http://policy-reporter:8080
          ports:
            - name: http
              containerPort: 8080

@ -146,9 +141,11 @@ spec:
    spec:
      serviceAccountName: policy-reporter
      automountServiceAccountToken: true
      securityContext:
        fsGroup: 1234
      containers:
        - name: policy-reporter
          image: "ghcr.io/kyverno/policy-reporter:1.8.7"
          image: "ghcr.io/kyverno/policy-reporter:2.0.0"
          imagePullPolicy: IfNotPresent
          securityContext:
            allowPrivilegeEscalation: false

@ -161,32 +158,32 @@ spec:
            runAsUser: 1234
          args:
            - --config=/app/config.yaml
            - --dbfile=/sqlite/database.db
            - --rest-enabled
          ports:
            - name: http
              containerPort: 2112
              protocol: TCP
            - name: rest
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: rest
              port: http
          readinessProbe:
            httpGet:
              path: /ready
              port: rest
              port: http
          resources:
            {}
          volumeMounts:
            - name: sqlite
              mountPath: /sqlite
            - name: config-file
              mountPath: /app/config.yaml
              subPath: config.yaml
          env:
            - name: NAMESPACE
              value: policy-reporter
      volumes:
        - name: sqlite
          emptyDir: {}
        - name: config-file
          secret:
            secretName: policy-reporter-targets
            secretName: policy-reporter-config
            optional: true

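Both probes above target the single `http` port (8080): liveness on `/healthz`, readiness on `/ready`. To check them from outside the cluster, port-forward the service (for example `kubectl port-forward svc/policy-reporter 8080:8080 -n policy-reporter`) and hit the endpoints; the small Go sketch below does that check and assumes such a port-forward is running.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// assumes a local port-forward to the policy-reporter service on port 8080
	for _, path := range []string{"/healthz", "/ready"} {
		res, err := http.Get("http://localhost:8080" + path)
		if err != nil {
			log.Fatal(err)
		}
		res.Body.Close()
		fmt.Printf("%s -> %d\n", path, res.StatusCode)
	}
}
```
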
@ -2,7 +2,7 @@
|
|||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: policy-reporter-targets
|
||||
name: policy-reporter-config
|
||||
namespace: policy-reporter
|
||||
labels:
|
||||
app.kubernetes.io/name: policy-reporter
|
|
@ -91,14 +91,10 @@ metadata:
|
|||
spec:
|
||||
type: ClusterIP
|
||||
ports:
|
||||
- port: 2113
|
||||
- port: 8080
|
||||
targetPort: http
|
||||
protocol: TCP
|
||||
name: http
|
||||
- port: 8080
|
||||
targetPort: rest
|
||||
protocol: TCP
|
||||
name: rest
|
||||
selector:
|
||||
app.kubernetes.io/name: kyverno-plugin
|
||||
app.kubernetes.io/instance: policy-reporter
|
||||
|
@ -130,14 +126,10 @@ metadata:
|
|||
spec:
|
||||
type: ClusterIP
|
||||
ports:
|
||||
- port: 2112
|
||||
- port: 8080
|
||||
targetPort: http
|
||||
protocol: TCP
|
||||
name: http
|
||||
- port: 8080
|
||||
targetPort: rest
|
||||
protocol: TCP
|
||||
name: rest
|
||||
selector:
|
||||
app.kubernetes.io/name: policy-reporter
|
||||
---
|
||||
|
@ -165,7 +157,7 @@ spec:
|
|||
automountServiceAccountToken: true
|
||||
containers:
|
||||
- name: "kyverno-plugin"
|
||||
image: "ghcr.io/kyverno/policy-reporter-kyverno-plugin:0.3.2"
|
||||
image: "ghcr.io/kyverno/policy-reporter-kyverno-plugin:1.0.0"
|
||||
imagePullPolicy: IfNotPresent
|
||||
securityContext:
|
||||
allowPrivilegeEscalation: false
|
||||
|
@ -177,22 +169,19 @@ spec:
|
|||
runAsNonRoot: true
|
||||
runAsUser: 1234
|
||||
args:
|
||||
- --apiPort=8080
|
||||
- --rest-enabled
|
||||
ports:
|
||||
- name: http
|
||||
containerPort: 2113
|
||||
protocol: TCP
|
||||
- name: rest
|
||||
containerPort: 8080
|
||||
protocol: TCP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /policies
|
||||
port: rest
|
||||
port: http
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /policies
|
||||
port: rest
|
||||
port: http
|
||||
resources:
|
||||
{}
|
||||
---
|
||||
|
@ -215,7 +204,7 @@ spec:
|
|||
spec:
|
||||
containers:
|
||||
- name: ui
|
||||
image: "fjogeleit/policy-reporter-ui:0.14.0"
|
||||
image: "fjogeleit/policy-reporter-ui:1.0.0"
|
||||
imagePullPolicy: IfNotPresent
|
||||
securityContext:
|
||||
allowPrivilegeEscalation: false
|
||||
|
@ -227,8 +216,7 @@ spec:
|
|||
runAsNonRoot: true
|
||||
runAsUser: 1234
|
||||
args:
|
||||
- -backend=http://policy-reporter:8080
|
||||
- -log-size=200
|
||||
- -policy-reporter=http://policy-reporter:8080
|
||||
- -kyverno-plugin=http://policy-reporter-kyverno-plugin:8080
|
||||
ports:
|
||||
- name: http
|
||||
|
@ -264,9 +252,11 @@ spec:
|
|||
spec:
|
||||
serviceAccountName: policy-reporter
|
||||
automountServiceAccountToken: true
|
||||
securityContext:
|
||||
fsGroup: 1234
|
||||
containers:
|
||||
- name: policy-reporter
|
||||
image: "ghcr.io/kyverno/policy-reporter:1.8.7"
|
||||
image: "ghcr.io/kyverno/policy-reporter:2.0.0"
|
||||
imagePullPolicy: IfNotPresent
|
||||
securityContext:
|
||||
allowPrivilegeEscalation: false
|
||||
|
@ -279,32 +269,32 @@ spec:
|
|||
runAsUser: 1234
|
||||
args:
|
||||
- --config=/app/config.yaml
|
||||
- --dbfile=/sqlite/database.db
|
||||
- --rest-enabled
|
||||
ports:
|
||||
- name: http
|
||||
containerPort: 2112
|
||||
protocol: TCP
|
||||
- name: rest
|
||||
containerPort: 8080
|
||||
protocol: TCP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /ready
|
||||
port: rest
|
||||
port: http
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: rest
|
||||
port: http
|
||||
resources:
|
||||
{}
|
||||
volumeMounts:
|
||||
- name: sqlite
|
||||
mountPath: /sqlite
|
||||
- name: config-file
|
||||
mountPath: /app/config.yaml
|
||||
subPath: config.yaml
|
||||
env:
|
||||
- name: NAMESPACE
|
||||
value: policy-reporter
|
||||
volumes:
|
||||
- name: sqlite
|
||||
emptyDir: {}
|
||||
- name: config-file
|
||||
secret:
|
||||
secretName: policy-reporter-targets
|
||||
secretName: policy-reporter-config
|
||||
optional: true
|
||||
|
|
|
@ -56,7 +56,7 @@ metadata:
|
|||
spec:
|
||||
type: ClusterIP
|
||||
ports:
|
||||
- port: 2112
|
||||
- port: 8080
|
||||
targetPort: http
|
||||
protocol: TCP
|
||||
name: http
|
||||
|
@ -84,7 +84,7 @@ spec:
|
|||
automountServiceAccountToken: true
|
||||
containers:
|
||||
- name: policy-reporter
|
||||
image: "ghcr.io/kyverno/policy-reporter:1.8.7"
|
||||
image: "ghcr.io/kyverno/policy-reporter:2.0.0"
|
||||
imagePullPolicy: IfNotPresent
|
||||
securityContext:
|
||||
allowPrivilegeEscalation: false
|
||||
|
@ -97,21 +97,19 @@ spec:
|
|||
runAsUser: 1234
|
||||
args:
|
||||
- --config=/app/config.yaml
|
||||
- --metrics-enabled=true
|
||||
ports:
|
||||
- name: http
|
||||
containerPort: 2112
|
||||
protocol: TCP
|
||||
- name: rest
|
||||
containerPort: 8080
|
||||
protocol: TCP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: rest
|
||||
port: http
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /ready
|
||||
port: rest
|
||||
port: http
|
||||
resources:
|
||||
{}
|
||||
volumeMounts:
|
||||
|
|
|
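This variant only enables `--metrics-enabled=true`, so the container serves Prometheus metrics under `/metrics` on the same port 8080. The sketch below dumps the Policy Reporter metric families over a local port-forward; the `policy_report_` name prefix used for filtering is an assumption, so adjust it to the metric names you actually see.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// assumes a local port-forward to the policy-reporter service on port 8080
	res, err := http.Get("http://localhost:8080/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()

	scanner := bufio.NewScanner(res.Body)
	for scanner.Scan() {
		line := scanner.Text()
		// keep only lines that look like Policy Reporter metrics (the prefix is an assumption)
		if strings.HasPrefix(line, "policy_report_") || strings.HasPrefix(line, "# HELP policy_report_") {
			fmt.Println(line)
		}
	}
}
```
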
@ -5,9 +5,12 @@ import (
|
|||
"compress/gzip"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/api"
|
||||
v1 "github.com/kyverno/policy-reporter/pkg/api/v1"
|
||||
"github.com/kyverno/policy-reporter/pkg/target"
|
||||
)
|
||||
|
||||
func Test_GzipCompression(t *testing.T) {
|
||||
|
@ -19,7 +22,7 @@ func Test_GzipCompression(t *testing.T) {
|
|||
req.Header.Add("Accept-Encoding", "gzip")
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := api.Gzip(api.TargetsHandler(make([]api.Target, 0)))
|
||||
handler := api.Gzip(v1.TargetsHandler(make([]target.Client, 0)))
|
||||
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
|
@ -33,8 +36,8 @@ func Test_GzipCompression(t *testing.T) {
|
|||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `[]`
|
||||
if buf.String() != expected {
|
||||
expected := "[]"
|
||||
if !strings.Contains(buf.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", buf.String(), expected)
|
||||
}
|
||||
})
|
||||
|
@ -45,7 +48,7 @@ func Test_GzipCompression(t *testing.T) {
|
|||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := api.Gzip(api.TargetsHandler(make([]api.Target, 0)))
|
||||
handler := api.Gzip(v1.TargetsHandler(make([]target.Client, 0)))
|
||||
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
|
@ -53,9 +56,28 @@ func Test_GzipCompression(t *testing.T) {
|
|||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `[]`
|
||||
if rr.Body.String() != expected {
|
||||
expected := "[]"
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("Uncompressed Respose", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/targets", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
req.Header.Add("Accept-Encoding", "gzip")
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := api.Gzip(func(w http.ResponseWriter, req *http.Request) {
|
||||
w.WriteHeader(204)
|
||||
})
|
||||
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusNoContent {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
})
|
||||
}
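The tests above pin down the middleware contract: `api.Gzip` wraps any `http.HandlerFunc`, compresses the body only when the client sends `Accept-Encoding: gzip`, and does not break handlers that only write a status code (the 204 case above). Wiring it up looks roughly like this minimal sketch:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/kyverno/policy-reporter/pkg/api"
)

func main() {
	mux := http.NewServeMux()

	// Gzip compresses only when the request carries "Accept-Encoding: gzip";
	// plain clients receive the body unchanged.
	mux.HandleFunc("/hello", api.Gzip(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json; charset=UTF-8")
		fmt.Fprint(w, `{"message":"hello"}`)
	}))

	log.Fatal(http.ListenAndServe(":8082", mux))
}
```
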
|
||||
|
|
|
@ -1,12 +1,9 @@
|
|||
package api
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"log"
|
||||
"net/http"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
)
|
||||
|
||||
// HealthzHandler for the Healthz REST API
|
||||
|
@ -38,72 +35,3 @@ func ReadyHandler() http.HandlerFunc {
|
|||
fmt.Fprint(w, "{}")
|
||||
}
|
||||
}
|
||||
|
||||
// PolicyReportHandler for the PolicyReport REST API
|
||||
func PolicyReportHandler(s *report.PolicyReportStore) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json; charset=UTF-8")
|
||||
w.WriteHeader(http.StatusOK)
|
||||
|
||||
reports := s.List("PolicyReport")
|
||||
if len(reports) == 0 {
|
||||
fmt.Fprint(w, "[]")
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
apiReports := make([]PolicyReport, 0, len(reports))
|
||||
for _, r := range reports {
|
||||
apiReports = append(apiReports, mapPolicyReport(r))
|
||||
}
|
||||
|
||||
if err := json.NewEncoder(w).Encode(apiReports); err != nil {
|
||||
w.WriteHeader(http.StatusInternalServerError)
|
||||
fmt.Fprintf(w, `{ "message": "%s" }`, err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ClusterPolicyReportHandler for the ClusterPolicyReport REST API
|
||||
func ClusterPolicyReportHandler(s *report.PolicyReportStore) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json; charset=UTF-8")
|
||||
w.WriteHeader(http.StatusOK)
|
||||
|
||||
reports := s.List(report.ClusterPolicyReportType)
|
||||
if len(reports) == 0 {
|
||||
fmt.Fprint(w, "[]")
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
apiReports := make([]PolicyReport, 0, len(reports))
|
||||
for _, r := range reports {
|
||||
apiReports = append(apiReports, mapPolicyReport(r))
|
||||
}
|
||||
|
||||
if err := json.NewEncoder(w).Encode(apiReports); err != nil {
|
||||
w.WriteHeader(http.StatusInternalServerError)
|
||||
fmt.Fprintf(w, `{ "message": "%s" }`, err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// TargetsHandler for the Targets REST API
|
||||
func TargetsHandler(targets []Target) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json; charset=UTF-8")
|
||||
w.WriteHeader(http.StatusOK)
|
||||
|
||||
if len(targets) == 0 {
|
||||
fmt.Fprint(w, "[]")
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
if err := json.NewEncoder(w).Encode(targets); err != nil {
|
||||
w.WriteHeader(http.StatusInternalServerError)
|
||||
fmt.Fprintf(w, `{ "message": "%s" }`, err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -3,199 +3,11 @@ package api_test
|
|||
import (
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/api"
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
)
|
||||
|
||||
func Test_TargetsAPI(t *testing.T) {
|
||||
t.Run("Empty Respose", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/targets", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := api.TargetsHandler(make([]api.Target, 0))
|
||||
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `[]`
|
||||
if rr.Body.String() != expected {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
t.Run("Respose", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/targets", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := api.TargetsHandler([]api.Target{
|
||||
{Name: "Loki", MinimumPriority: "debug", SkipExistingOnStartup: true},
|
||||
})
|
||||
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `[{"name":"Loki","minimumPriority":"debug","skipExistingOnStartup":true}]`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func Test_PolicyReportAPI(t *testing.T) {
|
||||
t.Run("Empty Respose", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/policy-reports", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := api.PolicyReportHandler(report.NewPolicyReportStore())
|
||||
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `[]`
|
||||
if rr.Body.String() != expected {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
t.Run("Respose", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/policy-reports", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
result := report.Result{
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "require-requests-and-limits-required",
|
||||
Rule: "autogen-check-for-requests-and-limits",
|
||||
Priority: report.ErrorPriority,
|
||||
Status: report.Fail,
|
||||
Category: "resources",
|
||||
Scored: true,
|
||||
Resource: report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Deployment",
|
||||
Name: "nginx",
|
||||
Namespace: "test",
|
||||
UID: "536ab69f-1b3c-4bd9-9ba4-274a56188409",
|
||||
},
|
||||
}
|
||||
|
||||
preport := report.PolicyReport{
|
||||
Name: "polr-test",
|
||||
Namespace: "test",
|
||||
Results: map[string]report.Result{"": result},
|
||||
Summary: report.Summary{},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
||||
|
||||
store := report.NewPolicyReportStore()
|
||||
store.Add(preport)
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := api.PolicyReportHandler(store)
|
||||
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `[{"name":"polr-test","namespace":"test","results":[{"message":"validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/","policy":"require-requests-and-limits-required","rule":"autogen-check-for-requests-and-limits","priority":"error","status":"fail","category":"resources","scored":true,"resource":{"apiVersion":"v1","kind":"Deployment","name":"nginx","namespace":"test","uid":"536ab69f-1b3c-4bd9-9ba4-274a56188409"}}],"summary":{"pass":0,"skip":0,"warn":0,"error":0,"fail":0}`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func Test_ClusterPolicyReportAPI(t *testing.T) {
|
||||
t.Run("Empty Respose", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/cluster-policy-reports", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := api.ClusterPolicyReportHandler(report.NewPolicyReportStore())
|
||||
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `[]`
|
||||
if rr.Body.String() != expected {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
t.Run("Respose", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/cluster-policy-reports", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
result := report.Result{
|
||||
Message: "validation error: Namespace label missing",
|
||||
Policy: "ns-label-env-required",
|
||||
Rule: "ns-label-required",
|
||||
Priority: report.ErrorPriority,
|
||||
Status: report.Fail,
|
||||
Category: "resources",
|
||||
Scored: true,
|
||||
Resource: report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Namespace",
|
||||
Name: "dev",
|
||||
UID: "536ab69f-1b3c-4bd9-9ba4-274a56188409",
|
||||
},
|
||||
}
|
||||
|
||||
creport := report.PolicyReport{
|
||||
Name: "cpolr-test",
|
||||
Summary: report.Summary{},
|
||||
CreationTimestamp: time.Now(),
|
||||
Results: map[string]report.Result{"": result},
|
||||
}
|
||||
|
||||
store := report.NewPolicyReportStore()
|
||||
store.Add(creport)
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := api.ClusterPolicyReportHandler(store)
|
||||
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `[{"name":"cpolr-test","results":[{"message":"validation error: Namespace label missing","policy":"ns-label-env-required","rule":"ns-label-required","priority":"error","status":"fail","category":"resources","scored":true,"resource":{"apiVersion":"v1","kind":"Namespace","name":"dev","uid":"536ab69f-1b3c-4bd9-9ba4-274a56188409"}}],"summary":{"pass":0,"skip":0,"warn":0,"error":0,"fail":0}`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func Test_HealthzAPI(t *testing.T) {
|
||||
t.Run("Respose", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/healthz", nil)
|
||||
|
|
115 pkg/api/model.go
|
@ -1,115 +0,0 @@
|
|||
package api
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/kyverno/policy-reporter/pkg/target"
|
||||
)
|
||||
|
||||
// Resource API Model
|
||||
type Resource struct {
|
||||
APIVersion string `json:"apiVersion"`
|
||||
Kind string `json:"kind"`
|
||||
Name string `json:"name"`
|
||||
Namespace string `json:"namespace,omitempty"`
|
||||
UID string `json:"uid"`
|
||||
}
|
||||
|
||||
// Result API Model
|
||||
type Result struct {
|
||||
Message string `json:"message"`
|
||||
Policy string `json:"policy"`
|
||||
Rule string `json:"rule"`
|
||||
Priority string `json:"priority"`
|
||||
Status string `json:"status"`
|
||||
Severity string `json:"severity,omitempty"`
|
||||
Category string `json:"category,omitempty"`
|
||||
Scored bool `json:"scored"`
|
||||
Properties map[string]string `json:"properties,omitempty"`
|
||||
Source string `json:"source,omitempty"`
|
||||
Resource *Resource `json:"resource,omitempty"`
|
||||
}
|
||||
|
||||
// Summary API Model
|
||||
type Summary struct {
|
||||
Pass int `json:"pass"`
|
||||
Skip int `json:"skip"`
|
||||
Warn int `json:"warn"`
|
||||
Error int `json:"error"`
|
||||
Fail int `json:"fail"`
|
||||
}
|
||||
|
||||
// PolicyReport API Model
|
||||
type PolicyReport struct {
|
||||
Name string `json:"name"`
|
||||
Namespace string `json:"namespace,omitempty"`
|
||||
Results []Result `json:"results"`
|
||||
Summary Summary `json:"summary"`
|
||||
CreationTimestamp time.Time `json:"creationTimestamp"`
|
||||
}
|
||||
|
||||
func mapPolicyReport(p report.PolicyReport) PolicyReport {
|
||||
results := make([]Result, 0, len(p.Results))
|
||||
|
||||
for _, r := range p.Results {
|
||||
result := Result{
|
||||
Message: r.Message,
|
||||
Policy: r.Policy,
|
||||
Rule: r.Rule,
|
||||
Priority: r.Priority.String(),
|
||||
Status: r.Status,
|
||||
Severity: r.Severity,
|
||||
Category: r.Category,
|
||||
Scored: r.Scored,
|
||||
Properties: r.Properties,
|
||||
Source: r.Source,
|
||||
}
|
||||
|
||||
if r.HasResource() {
|
||||
result.Resource = &Resource{
|
||||
Namespace: r.Resource.Namespace,
|
||||
APIVersion: r.Resource.APIVersion,
|
||||
Kind: r.Resource.Kind,
|
||||
Name: r.Resource.Name,
|
||||
UID: r.Resource.UID,
|
||||
}
|
||||
}
|
||||
|
||||
results = append(results, result)
|
||||
}
|
||||
|
||||
return PolicyReport{
|
||||
Name: p.Name,
|
||||
Namespace: p.Namespace,
|
||||
CreationTimestamp: p.CreationTimestamp,
|
||||
Summary: Summary{
|
||||
Skip: p.Summary.Skip,
|
||||
Pass: p.Summary.Pass,
|
||||
Warn: p.Summary.Warn,
|
||||
Fail: p.Summary.Fail,
|
||||
Error: p.Summary.Error,
|
||||
},
|
||||
Results: results,
|
||||
}
|
||||
}
|
||||
|
||||
// Target API Model
|
||||
type Target struct {
|
||||
Name string `json:"name"`
|
||||
MinimumPriority string `json:"minimumPriority"`
|
||||
SkipExistingOnStartup bool `json:"skipExistingOnStartup"`
|
||||
}
|
||||
|
||||
func mapTarget(t target.Client) Target {
|
||||
minPrio := t.MinimumPriority()
|
||||
if minPrio == "" {
|
||||
minPrio = report.Priority(report.DebugPriority).String()
|
||||
}
|
||||
|
||||
return Target{
|
||||
Name: t.Name(),
|
||||
MinimumPriority: minPrio,
|
||||
SkipExistingOnStartup: t.SkipExistingOnStartup(),
|
||||
}
|
||||
}
|
|
@ -1,60 +1,85 @@
|
|||
package api
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"net/http"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
v1 "github.com/kyverno/policy-reporter/pkg/api/v1"
|
||||
"github.com/kyverno/policy-reporter/pkg/target"
|
||||
"github.com/prometheus/client_golang/prometheus/promhttp"
|
||||
)
|
||||
|
||||
// Server for the optional HTTP REST API
|
||||
// Server for the Lifecycle and optional HTTP REST API
|
||||
type Server interface {
|
||||
// Start the HTTP REST API
|
||||
// Start the HTTP Server
|
||||
Start() error
|
||||
// Shutdown the HTTP Sever
|
||||
Shutdown(ctx context.Context) error
|
||||
// RegisterLifecycleHandler adds healthy and readiness APIs
|
||||
RegisterLifecycleHandler()
|
||||
// RegisterMetricsHandler adds the optional metrics endpoint
|
||||
RegisterMetricsHandler()
|
||||
// RegisterV1Handler adds the optional v1 REST APIs
|
||||
RegisterV1Handler(finder v1.PolicyReportFinder)
|
||||
}
|
||||
|
||||
type httpServer struct {
|
||||
port int
|
||||
http http.Server
|
||||
mux *http.ServeMux
|
||||
store *report.PolicyReportStore
|
||||
targets []Target
|
||||
targets []target.Client
|
||||
foundResources map[string]string
|
||||
}
|
||||
|
||||
func (s *httpServer) registerHandler() {
|
||||
s.mux.HandleFunc("/policy-reports", Gzip(PolicyReportHandler(s.store)))
|
||||
s.mux.HandleFunc("/cluster-policy-reports", Gzip(ClusterPolicyReportHandler(s.store)))
|
||||
s.mux.HandleFunc("/targets", Gzip(TargetsHandler(s.targets)))
|
||||
func (s *httpServer) RegisterLifecycleHandler() {
|
||||
s.mux.HandleFunc("/healthz", HealthzHandler(s.foundResources))
|
||||
s.mux.HandleFunc("/ready", ReadyHandler())
|
||||
}
|
||||
|
||||
func (s *httpServer) Start() error {
|
||||
server := http.Server{
|
||||
Addr: fmt.Sprintf(":%d", s.port),
|
||||
Handler: s.mux,
|
||||
}
|
||||
func (s *httpServer) RegisterV1Handler(finder v1.PolicyReportFinder) {
|
||||
s.mux.HandleFunc("/v1/targets", Gzip(v1.TargetsHandler(s.targets)))
|
||||
s.mux.HandleFunc("/v1/categories", Gzip(v1.CategoryListHandler(finder)))
|
||||
s.mux.HandleFunc("/v1/namespaces", Gzip(v1.NamespaceListHandler(finder)))
|
||||
s.mux.HandleFunc("/v1/rule-status-count", Gzip(v1.RuleStatusCountHandler(finder)))
|
||||
|
||||
return server.ListenAndServe()
|
||||
s.mux.HandleFunc("/v1/namespaced-resources/policies", Gzip(v1.NamespacedResourcesPolicyListHandler(finder)))
|
||||
s.mux.HandleFunc("/v1/namespaced-resources/kinds", Gzip(v1.NamespacedResourcesKindListHandler(finder)))
|
||||
s.mux.HandleFunc("/v1/namespaced-resources/sources", Gzip(v1.NamespacedSourceListHandler(finder)))
|
||||
s.mux.HandleFunc("/v1/namespaced-resources/status-counts", Gzip(v1.NamespacedResourcesStatusCountsHandler(finder)))
|
||||
s.mux.HandleFunc("/v1/namespaced-resources/results", Gzip(v1.NamespacedResourcesResultHandler(finder)))
|
||||
|
||||
s.mux.HandleFunc("/v1/cluster-resources/policies", Gzip(v1.ClusterResourcesPolicyListHandler(finder)))
|
||||
s.mux.HandleFunc("/v1/cluster-resources/kinds", Gzip(v1.ClusterResourcesKindListHandler(finder)))
|
||||
s.mux.HandleFunc("/v1/cluster-resources/sources", Gzip(v1.ClusterResourcesSourceListHandler(finder)))
|
||||
s.mux.HandleFunc("/v1/cluster-resources/status-counts", Gzip(v1.ClusterResourcesStatusCountHandler(finder)))
|
||||
s.mux.HandleFunc("/v1/cluster-resources/results", Gzip(v1.ClusterResourcesResultHandler(finder)))
|
||||
}
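All v1 list endpoints accept the same repeated query parameters (`namespaces`, `kinds`, `sources`, `categories`, `severities`, `policies`, `status`), which the handlers translate into a `Filter`. A small client sketch, assuming the API is reachable on `localhost:8080` (for example through a port-forward); the `listResult` struct mirrors only a subset of the returned fields:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/url"
)

// subset of the fields returned by the results endpoints
type listResult struct {
	Namespace string `json:"namespace,omitempty"`
	Kind      string `json:"kind"`
	Name      string `json:"name"`
	Policy    string `json:"policy"`
	Status    string `json:"status"`
}

func main() {
	query := url.Values{}
	query.Add("namespaces", "test")
	query.Add("status", "fail")

	res, err := http.Get("http://localhost:8080/v1/namespaced-resources/results?" + query.Encode())
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()

	var results []listResult
	if err := json.NewDecoder(res.Body).Decode(&results); err != nil {
		log.Fatal(err)
	}

	for _, r := range results {
		fmt.Printf("%s/%s %s: %s (%s)\n", r.Namespace, r.Kind, r.Name, r.Policy, r.Status)
	}
}
```
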
|
||||
|
||||
func (s *httpServer) RegisterMetricsHandler() {
|
||||
s.mux.Handle("/metrics", promhttp.Handler())
|
||||
}
|
||||
|
||||
func (s *httpServer) Start() error {
|
||||
return s.http.ListenAndServe()
|
||||
}
|
||||
|
||||
func (s *httpServer) Shutdown(ctx context.Context) error {
|
||||
return s.http.Shutdown(ctx)
|
||||
}
|
||||
|
||||
// NewServer constructor for a new API Server
|
||||
func NewServer(store *report.PolicyReportStore, targets []target.Client, port int, foundResources map[string]string) Server {
|
||||
apiTargets := make([]Target, 0, len(targets))
|
||||
for _, t := range targets {
|
||||
apiTargets = append(apiTargets, mapTarget(t))
|
||||
}
|
||||
|
||||
func NewServer(targets []target.Client, port int, foundResources map[string]string) Server {
|
||||
s := &httpServer{
|
||||
port: port,
|
||||
targets: apiTargets,
|
||||
store: store,
|
||||
mux: http.NewServeMux(),
|
||||
targets: targets,
|
||||
mux: http.DefaultServeMux,
|
||||
foundResources: foundResources,
|
||||
http: http.Server{
|
||||
Addr: fmt.Sprintf(":%d", port),
|
||||
Handler: http.DefaultServeMux,
|
||||
},
|
||||
}
|
||||
|
||||
s.registerHandler()
|
||||
s.RegisterLifecycleHandler()
|
||||
|
||||
return s
|
||||
}
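Taken together, the constructor only wires up the lifecycle endpoints; metrics and the v1 REST API are opt-in and have to be registered by the caller, which is what makes them switchable features. A minimal wiring sketch (the finder argument would be something like the SQLite-backed store used in the tests):

```go
package main

import (
	"log"
	"net/http"

	"github.com/kyverno/policy-reporter/pkg/api"
	"github.com/kyverno/policy-reporter/pkg/target"
)

func main() {
	// /healthz and /ready are registered by the constructor itself
	server := api.NewServer(
		[]target.Client{},   // configured targets, listed on /v1/targets
		8080,                // single HTTP port for all features
		map[string]string{}, // found report CRDs, checked by /healthz
	)

	server.RegisterMetricsHandler()
	// server.RegisterV1Handler(store) // store implements v1.PolicyReportFinder, e.g. the SQLite store

	if err := server.Start(); err != nil && err != http.ErrServerClosed {
		log.Fatal(err) // a clean Shutdown(ctx) surfaces as http.ErrServerClosed
	}
}
```
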
|
||||
|
|
|
@ -1,26 +1,64 @@
|
|||
package api_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"math/rand"
|
||||
"net/http"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/api"
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/kyverno/policy-reporter/pkg/target"
|
||||
"github.com/kyverno/policy-reporter/pkg/target/discord"
|
||||
"github.com/kyverno/policy-reporter/pkg/target/loki"
|
||||
)
|
||||
|
||||
func Test_NewServer(t *testing.T) {
|
||||
server := api.NewServer(
|
||||
report.NewPolicyReportStore(),
|
||||
[]target.Client{
|
||||
loki.NewClient("http://localhost:3100", "debug", true, &http.Client{}),
|
||||
discord.NewClient("http://webhook:2000", "", false, &http.Client{}),
|
||||
},
|
||||
8080,
|
||||
make(map[string]string),
|
||||
)
|
||||
rnd := rand.New(rand.NewSource(time.Now().Unix())).Float64()
|
||||
if rnd < 0.3 {
|
||||
rnd += 0.4
|
||||
}
|
||||
|
||||
go server.Start()
|
||||
port := int(rnd * 10000)
|
||||
|
||||
server := api.NewServer(make([]target.Client, 0), port, make(map[string]string))
|
||||
|
||||
server.RegisterMetricsHandler()
|
||||
server.RegisterV1Handler(nil)
|
||||
|
||||
serviceRunning := make(chan struct{})
|
||||
serviceDone := make(chan struct{})
|
||||
|
||||
go func() {
|
||||
close(serviceRunning)
|
||||
err := server.Start()
|
||||
if err != nil {
|
||||
fmt.Println(err)
|
||||
}
|
||||
defer close(serviceDone)
|
||||
}()
|
||||
|
||||
<-serviceRunning
|
||||
|
||||
client := http.Client{}
|
||||
|
||||
req, err := http.NewRequest("GET", fmt.Sprintf("http://localhost:%d/ready", port), nil)
|
||||
if err != nil {
|
||||
t.Errorf("Unexpected Error: %s", err)
|
||||
return
|
||||
}
|
||||
|
||||
res, err := client.Do(req)
|
||||
|
||||
server.Shutdown(context.Background())
|
||||
|
||||
if err != nil {
|
||||
t.Errorf("Unexpected Error: %s", err)
|
||||
return
|
||||
}
|
||||
|
||||
if res.StatusCode != http.StatusOK {
|
||||
t.Errorf("Unexpected Error Code: %d", res.StatusCode)
|
||||
}
|
||||
|
||||
<-serviceDone
|
||||
}
|
||||
|
|
40 pkg/api/v1/finder.go Normal file
|
@ -0,0 +1,40 @@
package v1

type Filter struct {
	Kinds      []string
	Categories []string
	Namespaces []string
	Sources    []string
	Policies   []string
	Severities []string
	Status     []string
}

type PolicyReportFinder interface {
	// FetchClusterPolicies from current PolicyReportResults
	FetchClusterPolicies(source string) ([]string, error)
	// FetchNamespacedPolicies from current PolicyReportResults with a Namespace
	FetchNamespacedPolicies(source string) ([]string, error)
	// FetchCategories from current PolicyReportResults
	FetchCategories(source string) ([]string, error)
	// FetchClusterSources from current PolicyReportResults
	FetchClusterSources() ([]string, error)
	// FetchNamespacedSources from current PolicyReportResults with a Namespace
	FetchNamespacedSources() ([]string, error)
	// FetchNamespacedKinds from current PolicyReportResults with a Namespace
	FetchNamespacedKinds(source string) ([]string, error)
	// FetchClusterKinds from current PolicyReportResults
	FetchClusterKinds(source string) ([]string, error)
	// FetchNamespaces from current PolicyReports
	FetchNamespaces(source string) ([]string, error)
	// FetchNamespacedStatusCounts from current PolicyReportResults with a Namespace
	FetchNamespacedStatusCounts(Filter) ([]NamespacedStatusCount, error)
	// FetchStatusCounts from current PolicyReportResults
	FetchStatusCounts(Filter) ([]StatusCount, error)
	// FetchNamespacedResults from current PolicyReportResults with a Namespace
	FetchNamespacedResults(filter Filter) ([]*ListResult, error)
	// FetchClusterResults from current PolicyReportResults
	FetchClusterResults(filter Filter) ([]*ListResult, error)
	// FetchRuleStatusCounts from current PolicyReportResults
	FetchRuleStatusCounts(policy, rule string) ([]StatusCount, error)
}
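The interface keeps the HTTP layer independent of the storage layer: handlers only depend on `PolicyReportFinder`, so the SQLite store in this repository and any in-memory fake used in tests answer the same queries. A short usage sketch:

```go
package example

import (
	v1 "github.com/kyverno/policy-reporter/pkg/api/v1"
)

// FailedResults returns all failing results of one namespace from any
// PolicyReportFinder implementation (SQLite store, in-memory fake, ...).
func FailedResults(finder v1.PolicyReportFinder, namespace string) ([]*v1.ListResult, error) {
	return finder.FetchNamespacedResults(v1.Filter{
		Namespaces: []string{namespace},
		Status:     []string{"fail"},
	})
}
```
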
|
157 pkg/api/v1/handler.go Normal file
|
@ -0,0 +1,157 @@
|
|||
package v1
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/helper"
|
||||
"github.com/kyverno/policy-reporter/pkg/target"
|
||||
)
|
||||
|
||||
// TargetsHandler for the Targets REST API
|
||||
func TargetsHandler(targets []target.Client) http.HandlerFunc {
|
||||
apiTargets := make([]Target, 0, len(targets))
|
||||
for _, t := range targets {
|
||||
apiTargets = append(apiTargets, mapTarget(t))
|
||||
}
|
||||
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
helper.SendJSONResponse(w, apiTargets, nil)
|
||||
}
|
||||
}
|
||||
|
||||
// ClusterResourcesPolicyListHandler REST API
|
||||
func ClusterResourcesPolicyListHandler(finder PolicyReportFinder) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
list, err := finder.FetchClusterPolicies(req.URL.Query().Get("source"))
|
||||
helper.SendJSONResponse(w, list, err)
|
||||
}
|
||||
}
|
||||
|
||||
// NamespacedResourcesPolicyListHandler REST API
|
||||
func NamespacedResourcesPolicyListHandler(finder PolicyReportFinder) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
list, err := finder.FetchNamespacedPolicies(req.URL.Query().Get("source"))
|
||||
helper.SendJSONResponse(w, list, err)
|
||||
}
|
||||
}
|
||||
|
||||
// CategoryListHandler REST API
|
||||
func CategoryListHandler(finder PolicyReportFinder) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
list, err := finder.FetchCategories(req.URL.Query().Get("source"))
|
||||
helper.SendJSONResponse(w, list, err)
|
||||
}
|
||||
}
|
||||
|
||||
// ClusterResourcesKindListHandler REST API
|
||||
func ClusterResourcesKindListHandler(finder PolicyReportFinder) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
list, err := finder.FetchClusterKinds(req.URL.Query().Get("source"))
|
||||
helper.SendJSONResponse(w, list, err)
|
||||
}
|
||||
}
|
||||
|
||||
// NamespacedResourcesKindListHandler REST API
|
||||
func NamespacedResourcesKindListHandler(finder PolicyReportFinder) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
list, err := finder.FetchNamespacedKinds(req.URL.Query().Get("source"))
|
||||
helper.SendJSONResponse(w, list, err)
|
||||
}
|
||||
}
|
||||
|
||||
// ClusterResourcesSourceListHandler REST API
|
||||
func ClusterResourcesSourceListHandler(finder PolicyReportFinder) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
list, err := finder.FetchClusterSources()
|
||||
helper.SendJSONResponse(w, list, err)
|
||||
}
|
||||
}
|
||||
|
||||
// NamespacedSourceListHandler REST API
|
||||
func NamespacedSourceListHandler(finder PolicyReportFinder) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
list, err := finder.FetchNamespacedSources()
|
||||
helper.SendJSONResponse(w, list, err)
|
||||
}
|
||||
}
|
||||
|
||||
// ClusterResourcesStatusCountHandler REST API
|
||||
func ClusterResourcesStatusCountHandler(finder PolicyReportFinder) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
list, err := finder.FetchStatusCounts(Filter{
|
||||
Kinds: req.URL.Query()["kinds"],
|
||||
Sources: req.URL.Query()["sources"],
|
||||
Categories: req.URL.Query()["categories"],
|
||||
Severities: req.URL.Query()["severities"],
|
||||
Policies: req.URL.Query()["policies"],
|
||||
Status: req.URL.Query()["status"],
|
||||
})
|
||||
helper.SendJSONResponse(w, list, err)
|
||||
}
|
||||
}
|
||||
|
||||
// NamespacedResourcesStatusCountsHandler REST API
|
||||
func NamespacedResourcesStatusCountsHandler(finder PolicyReportFinder) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
list, err := finder.FetchNamespacedStatusCounts(Filter{
|
||||
Namespaces: req.URL.Query()["namespaces"],
|
||||
Kinds: req.URL.Query()["kinds"],
|
||||
Sources: req.URL.Query()["sources"],
|
||||
Categories: req.URL.Query()["categories"],
|
||||
Severities: req.URL.Query()["severities"],
|
||||
Policies: req.URL.Query()["policies"],
|
||||
Status: req.URL.Query()["status"],
|
||||
})
|
||||
helper.SendJSONResponse(w, list, err)
|
||||
}
|
||||
}
|
||||
|
||||
// RuleStatusCountHandler REST API
|
||||
func RuleStatusCountHandler(finder PolicyReportFinder) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
list, err := finder.FetchRuleStatusCounts(
|
||||
req.URL.Query().Get("policy"),
|
||||
req.URL.Query().Get("rule"),
|
||||
)
|
||||
helper.SendJSONResponse(w, list, err)
|
||||
}
|
||||
}
|
||||
|
||||
// NamespacedResourcesResultHandler REST API
|
||||
func NamespacedResourcesResultHandler(finder PolicyReportFinder) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
list, err := finder.FetchNamespacedResults(Filter{
|
||||
Namespaces: req.URL.Query()["namespaces"],
|
||||
Kinds: req.URL.Query()["kinds"],
|
||||
Sources: req.URL.Query()["sources"],
|
||||
Categories: req.URL.Query()["categories"],
|
||||
Severities: req.URL.Query()["severities"],
|
||||
Policies: req.URL.Query()["policies"],
|
||||
Status: req.URL.Query()["status"],
|
||||
})
|
||||
helper.SendJSONResponse(w, list, err)
|
||||
}
|
||||
}
|
||||
|
||||
// ClusterResourcesResultHandler REST API
|
||||
func ClusterResourcesResultHandler(finder PolicyReportFinder) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
list, err := finder.FetchClusterResults(Filter{
|
||||
Kinds: req.URL.Query()["kinds"],
|
||||
Sources: req.URL.Query()["sources"],
|
||||
Categories: req.URL.Query()["categories"],
|
||||
Severities: req.URL.Query()["severities"],
|
||||
Policies: req.URL.Query()["policies"],
|
||||
Status: req.URL.Query()["status"],
|
||||
})
|
||||
helper.SendJSONResponse(w, list, err)
|
||||
}
|
||||
}
|
||||
|
||||
// NamespaceListHandler REST API
|
||||
func NamespaceListHandler(finder PolicyReportFinder) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, req *http.Request) {
|
||||
list, err := finder.FetchNamespaces(req.URL.Query().Get("source"))
|
||||
helper.SendJSONResponse(w, list, err)
|
||||
}
|
||||
}
|
450 pkg/api/v1/handler_test.go Normal file
|
@ -0,0 +1,450 @@
|
|||
package v1_test
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
v1 "github.com/kyverno/policy-reporter/pkg/api/v1"
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/kyverno/policy-reporter/pkg/sqlite3"
|
||||
"github.com/kyverno/policy-reporter/pkg/target"
|
||||
"github.com/kyverno/policy-reporter/pkg/target/loki"
|
||||
)
|
||||
|
||||
var result1 = &report.Result{
|
||||
ID: "123",
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "require-requests-and-limits-required",
|
||||
Rule: "autogen-check-for-requests-and-limits",
|
||||
Priority: report.ErrorPriority,
|
||||
Status: report.Fail,
|
||||
Category: "Best Practices",
|
||||
Severity: report.High,
|
||||
Scored: true,
|
||||
Source: "Kyverno",
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Deployment",
|
||||
Name: "nginx",
|
||||
Namespace: "test",
|
||||
UID: "536ab69f-1b3c-4bd9-9ba4-274a56188409",
|
||||
},
|
||||
}
|
||||
|
||||
var result2 = &report.Result{
|
||||
ID: "124",
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "require-requests-and-limits-required",
|
||||
Rule: "autogen-check-for-requests-and-limits",
|
||||
Priority: report.WarningPriority,
|
||||
Status: report.Pass,
|
||||
Category: "Best Practices",
|
||||
Scored: true,
|
||||
Source: "Kyverno",
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Pod",
|
||||
Name: "nginx",
|
||||
Namespace: "test",
|
||||
UID: "536ab69f-1b3c-4bd9-9ba4-274a56188419",
|
||||
},
|
||||
}
|
||||
|
||||
var cresult1 = &report.Result{
|
||||
ID: "125",
|
||||
Message: "validation error: The label `test` is required. Rule check-for-labels-on-namespace",
|
||||
Policy: "require-ns-labels",
|
||||
Rule: "check-for-labels-on-namespace",
|
||||
Priority: report.ErrorPriority,
|
||||
Status: report.Pass,
|
||||
Category: "Convention",
|
||||
Severity: report.Medium,
|
||||
Scored: true,
|
||||
Source: "Kyverno",
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Namespace",
|
||||
Name: "test",
|
||||
UID: "536ab69f-1b3c-4bd9-9ba4-274a56188411",
|
||||
},
|
||||
}
|
||||
|
||||
var cresult2 = &report.Result{
|
||||
ID: "126",
|
||||
Message: "validation error: The label `test` is required. Rule check-for-labels-on-namespace",
|
||||
Policy: "require-ns-labels",
|
||||
Rule: "check-for-labels-on-namespace",
|
||||
Priority: report.WarningPriority,
|
||||
Status: report.Fail,
|
||||
Category: "Convention",
|
||||
Severity: report.High,
|
||||
Scored: true,
|
||||
Source: "Kyverno",
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Namespace",
|
||||
Name: "dev",
|
||||
UID: "536ab69f-1b3c-4bd9-9ba4-274a56188412",
|
||||
},
|
||||
}
|
||||
|
||||
var preport = &report.PolicyReport{
|
||||
ID: report.GeneratePolicyReportID("polr-test", "test"),
|
||||
Name: "polr-test",
|
||||
Namespace: "test",
|
||||
Results: map[string]*report.Result{
|
||||
result1.GetIdentifier(): result1,
|
||||
result2.GetIdentifier(): result2,
|
||||
},
|
||||
Summary: &report.Summary{Fail: 1},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
||||
|
||||
var creport = &report.PolicyReport{
|
||||
ID: report.GeneratePolicyReportID("cpolr", ""),
|
||||
Name: "cpolr",
|
||||
Results: map[string]*report.Result{
|
||||
cresult1.GetIdentifier(): cresult1,
|
||||
cresult2.GetIdentifier(): cresult2,
|
||||
},
|
||||
Summary: &report.Summary{},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
||||
|
||||
func Test_V1_API(t *testing.T) {
|
||||
db, err := sqlite3.NewDatabase("test.db")
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
defer db.Close()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
store, err := sqlite3.NewPolicyReportStore(db)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer store.CleanUp()
|
||||
|
||||
store.Add(preport)
|
||||
store.Add(creport)
|
||||
|
||||
t.Run("ClusterPolicyListHandler", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/v1/cluster-policies", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.ClusterResourcesPolicyListHandler(store)
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `["require-ns-labels"]`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("NamespacedPolicyListHandler", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/v1/namespaced-policies", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.NamespacedResourcesPolicyListHandler(store)
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `["require-requests-and-limits-required"]`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("CategoryListHandler", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/v1/categories", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.CategoryListHandler(store)
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `["Best Practices","Convention"]`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("ClusterKindListHandler", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/v1/cluster-kinds", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.ClusterResourcesKindListHandler(store)
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `["Namespace"]`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("NamespacedKindListHandler", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/v1/namespaced-kinds", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.NamespacedResourcesKindListHandler(store)
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `["Deployment","Pod"]`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("ClusterSourceListHandler", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/v1/cluster-sources", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.ClusterResourcesSourceListHandler(store)
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `["Kyverno"]`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("NamespacedSourceListHandler", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/v1/namspaced-sources", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.NamespacedSourceListHandler(store)
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `["Kyverno"]`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("ClusterStatusCountHandler", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/v1/cluster-status-counts?status=pass", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.ClusterResourcesStatusCountHandler(store)
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `[{"status":"pass","count":1}]`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("NamespacedStatusCountHandler", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/v1/namespaced-status-counts?status=pass", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.NamespacedResourcesStatusCountsHandler(store)
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `[{"status":"pass","items":[{"namespace":"test","count":1}]}]`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("RuleStatusCountHandler", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/v1/rule-status-count?policy=require-requests-and-limits-required&rule=autogen-check-for-requests-and-limits", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.RuleStatusCountHandler(store)
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `{"status":"fail","count":1}`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
|
||||
expected = `{"status":"pass","count":1}`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
|
||||
expected = `{"status":"warn","count":0}`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("NamespacedResultHandler", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/v1/namespaced-results", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.NamespacedResourcesResultHandler(store)
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `[{"id":"123","namespace":"test","kind":"Deployment","name":"nginx","message":"validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/","policy":"require-requests-and-limits-required","rule":"autogen-check-for-requests-and-limits","status":"fail","severity":"high"},{"id":"124","namespace":"test","kind":"Pod","name":"nginx","message":"validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/","policy":"require-requests-and-limits-required","rule":"autogen-check-for-requests-and-limits","status":"pass"}]`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("ClusterResultHandler", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/v1/cluster-results", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.ClusterResourcesResultHandler(store)
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := "{\"id\":\"125\",\"kind\":\"Namespace\",\"name\":\"test\",\"message\":\"validation error: The label `test` is required. Rule check-for-labels-on-namespace\",\"policy\":\"require-ns-labels\",\"rule\":\"check-for-labels-on-namespace\",\"status\":\"pass\",\"severity\":\"medium\"}"
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("NamespaceListHandler", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/v1/namespaces", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.NamespaceListHandler(store)
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `["test"]`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func Test_TargetsAPI(t *testing.T) {
|
||||
t.Run("Empty Respose", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/targets", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.TargetsHandler(make([]target.Client, 0))
|
||||
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := "[]"
|
||||
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
t.Run("Respose", func(t *testing.T) {
|
||||
req, err := http.NewRequest("GET", "/targets", nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
rr := httptest.NewRecorder()
|
||||
handler := v1.TargetsHandler([]target.Client{
|
||||
loki.NewClient("", "", []string{}, true, &http.Client{}),
|
||||
})
|
||||
|
||||
handler.ServeHTTP(rr, req)
|
||||
|
||||
if status := rr.Code; status != http.StatusOK {
|
||||
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
|
||||
}
|
||||
|
||||
expected := `[{"name":"Loki","minimumPriority":"debug","skipExistingOnStartup":true}]`
|
||||
if !strings.Contains(rr.Body.String(), expected) {
|
||||
t.Errorf("handler returned unexpected body: got %v want %v", rr.Body.String(), expected)
|
||||
}
|
||||
})
|
||||
}
56
pkg/api/v1/model.go
Normal file
@ -0,0 +1,56 @@
package v1

import (
	"github.com/kyverno/policy-reporter/pkg/report"
	"github.com/kyverno/policy-reporter/pkg/target"
)

type StatusCount struct {
	Status string `json:"status"`
	Count  int    `json:"count"`
}

type NamespacedStatusCount struct {
	Status string           `json:"status"`
	Items  []NamespaceCount `json:"items"`
}

type NamespaceCount struct {
	Namespace string `json:"namespace"`
	Count     int    `json:"count"`
}

type ListResult struct {
	ID         string            `json:"id"`
	Namespace  string            `json:"namespace,omitempty"`
	Kind       string            `json:"kind"`
	Name       string            `json:"name"`
	Message    string            `json:"message"`
	Policy     string            `json:"policy"`
	Rule       string            `json:"rule"`
	Status     string            `json:"status"`
	Severity   string            `json:"severity,omitempty"`
	Properties map[string]string `json:"properties,omitempty"`
}

// Target API Model
type Target struct {
	Name                  string   `json:"name"`
	MinimumPriority       string   `json:"minimumPriority"`
	Sources               []string `json:"sources,omitempty"`
	SkipExistingOnStartup bool     `json:"skipExistingOnStartup"`
}

func mapTarget(t target.Client) Target {
	minPrio := t.MinimumPriority()
	if minPrio == "" {
		minPrio = report.Priority(report.DebugPriority).String()
	}

	return Target{
		Name:                  t.Name(),
		MinimumPriority:       minPrio,
		Sources:               t.Sources(),
		SkipExistingOnStartup: t.SkipExistingOnStartup(),
	}
}
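The Target model above is what the /targets endpoint returns and what the TargetsHandler test expects. A short, illustrative sketch (not part of the commit) of how such a value serializes:

package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative copy of the API model introduced above.
type Target struct {
	Name                  string   `json:"name"`
	MinimumPriority       string   `json:"minimumPriority"`
	Sources               []string `json:"sources,omitempty"`
	SkipExistingOnStartup bool     `json:"skipExistingOnStartup"`
}

func main() {
	out, _ := json.Marshal([]Target{{Name: "Loki", MinimumPriority: "debug", SkipExistingOnStartup: true}})
	// Prints: [{"name":"Loki","minimumPriority":"debug","skipExistingOnStartup":true}]
	fmt.Println(string(out))
}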
@ -5,6 +5,7 @@ type Loki struct {
	Host            string   `mapstructure:"host"`
	SkipExisting    bool     `mapstructure:"skipExistingOnStartup"`
	MinimumPriority string   `mapstructure:"minimumPriority"`
	Sources         []string `mapstructure:"sources"`
}

// Elasticsearch configuration

@ -14,6 +15,7 @@ type Elasticsearch struct {
	Rotation        string   `mapstructure:"rotation"`
	SkipExisting    bool     `mapstructure:"skipExistingOnStartup"`
	MinimumPriority string   `mapstructure:"minimumPriority"`
	Sources         []string `mapstructure:"sources"`
}

// Slack configuration

@ -21,6 +23,7 @@ type Slack struct {
	Webhook         string   `mapstructure:"webhook"`
	SkipExisting    bool     `mapstructure:"skipExistingOnStartup"`
	MinimumPriority string   `mapstructure:"minimumPriority"`
	Sources         []string `mapstructure:"sources"`
}

// Discord configuration

@ -28,6 +31,7 @@ type Discord struct {
	Webhook         string   `mapstructure:"webhook"`
	SkipExisting    bool     `mapstructure:"skipExistingOnStartup"`
	MinimumPriority string   `mapstructure:"minimumPriority"`
	Sources         []string `mapstructure:"sources"`
}

// Teams configuration

@ -35,6 +39,7 @@ type Teams struct {
	Webhook         string   `mapstructure:"webhook"`
	SkipExisting    bool     `mapstructure:"skipExistingOnStartup"`
	MinimumPriority string   `mapstructure:"minimumPriority"`
	Sources         []string `mapstructure:"sources"`
}

// UI configuration

@ -42,9 +47,10 @@ type UI struct {
	Host            string   `mapstructure:"host"`
	SkipExisting    bool     `mapstructure:"skipExistingOnStartup"`
	MinimumPriority string   `mapstructure:"minimumPriority"`
	Sources         []string `mapstructure:"sources"`
}

type Yandex struct {
type S3 struct {
	AccessKeyID     string `mapstructure:"accessKeyID"`
	SecretAccessKey string `mapstructure:"secretAccessKey"`
	Region          string `mapstructure:"region"`

@ -53,6 +59,7 @@ type Yandex struct {
	Bucket          string   `mapstructure:"bucket"`
	SkipExisting    bool     `mapstructure:"skipExistingOnStartup"`
	MinimumPriority string   `mapstructure:"minimumPriority"`
	Sources         []string `mapstructure:"sources"`
}

// API configuration

@ -60,6 +67,18 @@ type API struct {
	Port int `mapstructure:"port"`
}

// REST configuration
type REST struct {
	Enabled bool `mapstructure:"enabled"`
}

// Metrics configuration
type Metrics struct {
	Enabled bool `mapstructure:"enabled"`
}

type PriorityMap = map[string]string

// Config of the PolicyReporter
type Config struct {
	Loki Loki `mapstructure:"loki"`

@ -67,9 +86,12 @@ type Config struct {
	Slack       Slack       `mapstructure:"slack"`
	Discord     Discord     `mapstructure:"discord"`
	Teams       Teams       `mapstructure:"teams"`
	Yandex      Yandex      `mapstructure:"yandex"`
	S3          S3          `mapstructure:"s3"`
	UI          UI          `mapstructure:"ui"`
	API         API         `mapstructure:"api"`
	Kubeconfig  string      `mapstructure:"kubeconfig"`
	Namespace   string      `mapstructure:"namespace"`
	DBFile      string      `mapstructure:"dbfile"`
	Metrics     Metrics     `mapstructure:"metrics"`
	REST        REST        `mapstructure:"rest"`
	PriorityMap PriorityMap `mapstructure:"priorityMap"`
}
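The former Yandex target becomes a generic S3 target and every target gains a Sources filter. A hedged decoding sketch (assuming github.com/mitchellh/mapstructure, which the struct tags imply but this diff does not show; the endpoint and prefix tag names are likewise assumptions):

package main

import (
	"fmt"

	"github.com/mitchellh/mapstructure"
)

// Subset of the S3 target configuration; tag names for Endpoint and Prefix are assumed.
type S3 struct {
	Endpoint        string   `mapstructure:"endpoint"`
	Region          string   `mapstructure:"region"`
	Bucket          string   `mapstructure:"bucket"`
	Prefix          string   `mapstructure:"prefix"`
	MinimumPriority string   `mapstructure:"minimumPriority"`
	Sources         []string `mapstructure:"sources"`
}

func main() {
	raw := map[string]interface{}{
		"endpoint": "https://s3.example.com",
		"region":   "eu-central-1",
		"bucket":   "policy-reports",
		"sources":  []string{"kyverno"},
	}

	var cfg S3
	if err := mapstructure.Decode(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}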
@ -1,28 +1,30 @@
package config

import (
	"context"
	"database/sql"
	"log"
	"net/http"
	"time"

	"github.com/kyverno/policy-reporter/pkg/api"
	"github.com/kyverno/policy-reporter/pkg/helper"
	"github.com/kyverno/policy-reporter/pkg/kubernetes"
	"github.com/kyverno/policy-reporter/pkg/listener"
	"github.com/kyverno/policy-reporter/pkg/report"
	"github.com/kyverno/policy-reporter/pkg/sqlite3"
	"github.com/kyverno/policy-reporter/pkg/target"
	"github.com/kyverno/policy-reporter/pkg/target/discord"
	"github.com/kyverno/policy-reporter/pkg/target/elasticsearch"
	"github.com/kyverno/policy-reporter/pkg/target/helper"
	"github.com/kyverno/policy-reporter/pkg/target/loki"
	"github.com/kyverno/policy-reporter/pkg/target/s3"
	"github.com/kyverno/policy-reporter/pkg/target/slack"
	"github.com/kyverno/policy-reporter/pkg/target/teams"
	"github.com/kyverno/policy-reporter/pkg/target/ui"
	"github.com/kyverno/policy-reporter/pkg/target/yandex"

	"github.com/patrickmn/go-cache"
	"k8s.io/client-go/dynamic"
	v1 "k8s.io/client-go/kubernetes/typed/core/v1"

	_ "github.com/mattn/go-sqlite3"
	"k8s.io/client-go/rest"
)

@ -31,88 +33,89 @@ type Resolver struct {
	config              *Config
	k8sConfig           *rest.Config
	mapper              kubernetes.Mapper
	policyAdapter       kubernetes.PolicyReportAdapter
	policyStore         *report.PolicyReportStore
	policyClient        report.PolicyResultClient
	publisher           report.EventPublisher
	policyStore         sqlite3.PolicyReportStore
	policyReportClient  report.PolicyReportClient
	lokiClient          target.Client
	elasticsearchClient target.Client
	slackClient         target.Client
	discordClient       target.Client
	teamsClient         target.Client
	uiClient            target.Client
	yandexClient        target.Client
	s3Client            target.Client
	resultCache         *cache.Cache
}

// APIServer resolver method
func (r *Resolver) APIServer() api.Server {
	foundResources := make(map[string]string)

	client := r.policyClient
	if client != nil {
		foundResources = client.GetFoundResources()
	}

func (r *Resolver) APIServer(foundResources map[string]string) api.Server {
	return api.NewServer(
		r.PolicyReportStore(),
		r.TargetClients(),
		r.config.API.Port,
		foundResources,
	)
}

// PolicyReportStore resolver method
func (r *Resolver) PolicyReportStore() *report.PolicyReportStore {
	if r.policyStore != nil {
		return r.policyStore
	}

	r.policyStore = report.NewPolicyReportStore()

	return r.policyStore
// Database resolver method
func (r *Resolver) Database() (*sql.DB, error) {
	return sqlite3.NewDatabase(r.config.DBFile)
}

// PolicyReportClient resolver method
func (r *Resolver) PolicyReportClient(ctx context.Context) (report.PolicyResultClient, error) {
	if r.policyClient != nil {
		return r.policyClient, nil
// PolicyReportStore resolver method
func (r *Resolver) PolicyReportStore(db *sql.DB) (sqlite3.PolicyReportStore, error) {
	if r.policyStore != nil {
		return r.policyStore, nil
	}

	policyAPI, err := r.policyReportAPI(ctx)
	if err != nil {
		return nil, err
	s, err := sqlite3.NewPolicyReportStore(db)
	r.policyStore = s

	return r.policyStore, err
}

// EventPublisher resolver method
func (r *Resolver) EventPublisher() report.EventPublisher {
	if r.publisher != nil {
		return r.publisher
	}

	client := kubernetes.NewPolicyReportClient(
		policyAPI,
		r.PolicyReportStore(),
		time.Now(),
		r.ResultCache(),
	)
	s := report.NewEventPublisher()
	r.publisher = s

	r.policyClient = client
	return r.publisher
}

	return client, nil
// RegisterSendResultListener resolver method
func (r *Resolver) RegisterSendResultListener() {
	targets := r.TargetClients()
	if len(targets) > 0 {
		newResultListener := listener.NewResultListener(r.SkipExistingOnStartup(), r.ResultCache(), time.Now())
		newResultListener.RegisterListener(listener.NewSendResultListener(targets))

		r.EventPublisher().RegisterListener(newResultListener.Listen)
	}
}

// RegisterSendResultListener resolver method
func (r *Resolver) RegisterStoreListener(store report.PolicyReportStore) {
	r.EventPublisher().RegisterListener(listener.NewStoreListener(store))
}

// RegisterMetricsListener resolver method
func (r *Resolver) RegisterMetricsListener() {
	r.EventPublisher().RegisterListener(listener.NewMetricsListener())
}

// Mapper resolver method
func (r *Resolver) Mapper(ctx context.Context) (kubernetes.Mapper, error) {
func (r *Resolver) Mapper() kubernetes.Mapper {
	if r.mapper != nil {
		return r.mapper, nil
		return r.mapper
	}

	cmAPI, err := r.configMapAPI()
	if err != nil {
		return nil, err
	}

	mapper := kubernetes.NewMapper(make(map[string]string), cmAPI)
	mapper.FetchPriorities(ctx)
	go mapper.SyncPriorities(ctx)
	mapper := kubernetes.NewMapper(r.config.PriorityMap)

	r.mapper = mapper

	return mapper, err
	return mapper
}

// LokiClient resolver method

@ -128,6 +131,7 @@ func (r *Resolver) LokiClient() target.Client {
	r.lokiClient = loki.NewClient(
		r.config.Loki.Host,
		r.config.Loki.MinimumPriority,
		r.config.Loki.Sources,
		r.config.Loki.SkipExisting,
		&http.Client{},
	)

@ -158,6 +162,7 @@ func (r *Resolver) ElasticsearchClient() target.Client {
		r.config.Elasticsearch.Index,
		r.config.Elasticsearch.Rotation,
		r.config.Elasticsearch.MinimumPriority,
		r.config.Elasticsearch.Sources,
		r.config.Elasticsearch.SkipExisting,
		&http.Client{},
	)

@ -180,6 +185,7 @@ func (r *Resolver) SlackClient() target.Client {
	r.slackClient = slack.NewClient(
		r.config.Slack.Webhook,
		r.config.Slack.MinimumPriority,
		r.config.Slack.Sources,
		r.config.Slack.SkipExisting,
		&http.Client{},
	)

@ -202,6 +208,7 @@ func (r *Resolver) DiscordClient() target.Client {
	r.discordClient = discord.NewClient(
		r.config.Discord.Webhook,
		r.config.Discord.MinimumPriority,
		r.config.Discord.Sources,
		r.config.Discord.SkipExisting,
		&http.Client{},
	)

@ -224,6 +231,7 @@ func (r *Resolver) TeamsClient() target.Client {
	r.teamsClient = teams.NewClient(
		r.config.Teams.Webhook,
		r.config.Teams.MinimumPriority,
		r.config.Teams.Sources,
		r.config.Teams.SkipExisting,
		&http.Client{},
	)

@ -246,6 +254,7 @@ func (r *Resolver) UIClient() target.Client {
	r.uiClient = ui.NewClient(
		r.config.UI.Host,
		r.config.UI.MinimumPriority,
		r.config.UI.Sources,
		r.config.UI.SkipExisting,
		&http.Client{},
	)

@ -255,49 +264,52 @@ func (r *Resolver) UIClient() target.Client {
	return r.uiClient
}

func (r *Resolver) YandexClient() target.Client {
	if r.yandexClient != nil {
		return r.yandexClient
func (r *Resolver) S3Client() target.Client {
	if r.s3Client != nil {
		return r.s3Client
	}
	if r.config.Yandex.AccessKeyID == "" || r.config.Yandex.SecretAccessKey == "" {
	if r.config.S3.Endpoint == "" {
		return nil
	}

	if r.config.Yandex.Region == "" {
		log.Printf("[INFO] Yandex.Region has not been declared using ru-central1")
		r.config.Yandex.Region = "ru-central1"
	}
	if r.config.Yandex.Endpoint == "" {
		log.Printf("[INFO] Yandex.Endpoint has not been declared using https://storage.yandexcloud.net")
		r.config.Yandex.Endpoint = "https://storage.yandexcloud.net"
	}
	if r.config.Yandex.Prefix == "" {
		log.Printf("[INFO] Yandex.Prefix has not been declared using policy-reporter prefix")
		r.config.Yandex.Prefix = "policy-reporter/"
	}
	if r.config.Yandex.Bucket == "" {
		log.Printf("[ERROR] Yandex : Bucket has to be declared")
	if r.config.S3.AccessKeyID == "" {
		log.Printf("[ERROR] S3.AccessKeyID has not been declared")
		return nil
	}
	if r.config.S3.SecretAccessKey == "" {
		log.Printf("[ERROR] S3.SecretAccessKey has not been declared")
		return nil
	}
	if r.config.S3.Region == "" {
		log.Printf("[ERROR] S3.Region has not been declared")
		return nil
	}
	if r.config.S3.Bucket == "" {
		log.Printf("[ERROR] S3.Bucket has to be declared")
		return nil
	}
	if r.config.S3.Prefix == "" {
		r.config.S3.Prefix = "policy-reporter/"
	}

	s3Client := helper.NewClient(
		r.config.Yandex.AccessKeyID,
		r.config.Yandex.SecretAccessKey,
		r.config.Yandex.Region,
		r.config.Yandex.Endpoint,
		r.config.Yandex.Bucket,
		r.config.S3.AccessKeyID,
		r.config.S3.SecretAccessKey,
		r.config.S3.Region,
		r.config.S3.Endpoint,
		r.config.S3.Bucket,
	)

	r.yandexClient = yandex.NewClient(
	r.s3Client = s3.NewClient(
		s3Client,
		r.config.Yandex.Prefix,
		r.config.Yandex.MinimumPriority,
		r.config.Yandex.SkipExisting,
		r.config.S3.Prefix,
		r.config.S3.MinimumPriority,
		r.config.S3.Sources,
		r.config.S3.SkipExisting,
	)

	log.Println("[INFO] Yandex configured")
	log.Println("[INFO] S3 configured")

	return r.yandexClient
	return r.s3Client
}

// TargetClients resolver method

@ -328,8 +340,8 @@ func (r *Resolver) TargetClients() []target.Client {
		clients = append(clients, ui)
	}

	if yandex := r.YandexClient(); yandex != nil {
		clients = append(clients, yandex)
	if s3 := r.S3Client(); s3 != nil {
		clients = append(clients, s3)
	}

	return clients

@ -346,44 +358,19 @@ func (r *Resolver) SkipExistingOnStartup() bool {
	return true
}

// ConfigMapClient resolver method
func (r *Resolver) ConfigMapClient() (v1.ConfigMapInterface, error) {
	var err error

	client, err := v1.NewForConfig(r.k8sConfig)
	if err != nil {
		return nil, err
	}

	return client.ConfigMaps(r.config.Namespace), nil
}

func (r *Resolver) configMapAPI() (kubernetes.ConfigMapAdapter, error) {
	client, err := r.ConfigMapClient()
	if err != nil {
		return nil, err
	}

	return kubernetes.NewConfigMapAdapter(client), nil
}

func (r *Resolver) policyReportAPI(ctx context.Context) (kubernetes.PolicyReportAdapter, error) {
	if r.policyAdapter != nil {
		return r.policyAdapter, nil
func (r *Resolver) PolicyReportClient() (report.PolicyReportClient, error) {
	if r.policyReportClient != nil {
		return r.policyReportClient, nil
	}

	client, err := dynamic.NewForConfig(r.k8sConfig)
	if err != nil {
		return nil, err
	}
	mapper, err := r.Mapper(ctx)
	if err != nil {
		return nil, err
	}

	r.policyAdapter = kubernetes.NewPolicyReportAdapter(client, mapper)
	r.policyReportClient = kubernetes.NewPolicyReportClient(client, r.Mapper(), 5*time.Second)

	return r.policyAdapter, nil
	return r.policyReportClient, nil
}

// ResultCache resolver method
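The resolver now hands persistence, metrics and target pushes to separately registered listeners instead of wiring everything inside the CRD client. A rough sketch of how a caller could assemble the pieces (illustrative only; the run function, its error handling and the Start call are assumptions, not taken from this diff):

// Hypothetical wiring of the new Resolver API.
func run(resolver *config.Resolver, foundResources map[string]string) error {
	db, err := resolver.Database()
	if err != nil {
		return err
	}
	defer db.Close()

	store, err := resolver.PolicyReportStore(db)
	if err != nil {
		return err
	}

	// Persist lifecycle events, expose metrics and push results to configured targets.
	resolver.RegisterStoreListener(store) // assumes the sqlite3 store satisfies report.PolicyReportStore
	resolver.RegisterMetricsListener()
	resolver.RegisterSendResultListener()

	server := resolver.APIServer(foundResources)
	return server.Start() // assumes api.Server exposes Start; not shown in this diff
}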
@ -1,10 +1,10 @@
package config_test

import (
	"context"
	"testing"

	"github.com/kyverno/policy-reporter/pkg/config"
	"github.com/kyverno/policy-reporter/pkg/report"
	"k8s.io/client-go/rest"
)

@ -41,12 +41,14 @@ var testConfig = &config.Config{
		SkipExisting:    true,
		MinimumPriority: "debug",
	},
	Yandex: config.Yandex{
	S3: config.S3{
		AccessKeyID:     "AccessKey",
		SecretAccessKey: "SecretAccessKey",
		Bucket:          "test",
		SkipExisting:    true,
		MinimumPriority: "debug",
		Endpoint:        "https://storage.yandexcloud.net",
		Region:          "ru-central1",
	},
}

@ -108,13 +110,13 @@ func Test_ResolveTarget(t *testing.T) {
			t.Error("Error: Should reuse first instance")
		}
	})
	t.Run("Yandex", func(t *testing.T) {
		client := resolver.YandexClient()
	t.Run("S3", func(t *testing.T) {
		client := resolver.S3Client()
		if client == nil {
			t.Error("Expected Client, got nil")
		}

		client2 := resolver.YandexClient()
		client2 := resolver.S3Client()
		if client != client2 {
			t.Error("Error: Should reuse first instance")
		}

@ -194,10 +196,12 @@ func Test_ResolveTargetWithoutHost(t *testing.T) {
		SkipExisting:    true,
		MinimumPriority: "debug",
	},
	Yandex: config.Yandex{
	S3: config.S3{
		Endpoint:        "",
		Region:          "",
		AccessKeyID:     "",
		SecretAccessKey: "",
		Bucket:          "test",
		Bucket:          "",
		SkipExisting:    true,
		MinimumPriority: "debug",
	},

@ -238,33 +242,94 @@ func Test_ResolveTargetWithoutHost(t *testing.T) {
			t.Error("Expected Client to be nil if no host is configured")
		}
	})
	t.Run("Yandex", func(t *testing.T) {
	t.Run("S3.Endoint", func(t *testing.T) {
		resolver := config.NewResolver(config2, nil)

		if resolver.YandexClient() != nil {
			t.Error("Expected Client to be nil if no host is configured")
		if resolver.S3Client() != nil {
			t.Error("Expected Client to be nil if no endpoint is configured")
		}
	})
	t.Run("S3.AccessKey", func(t *testing.T) {
		config2.S3.Endpoint = "https://storage.yandexcloud.net"

		resolver := config.NewResolver(config2, nil)

		if resolver.S3Client() != nil {
			t.Error("Expected Client to be nil if no accessKey is configured")
		}
	})
	t.Run("S3.AccessKey", func(t *testing.T) {
		config2.S3.Endpoint = "https://storage.yandexcloud.net"

		resolver := config.NewResolver(config2, nil)

		if resolver.S3Client() != nil {
			t.Error("Expected Client to be nil if no accessKey is configured")
		}
	})
	t.Run("S3.SecretAccessKey", func(t *testing.T) {
		config2.S3.AccessKeyID = "access"

		resolver := config.NewResolver(config2, nil)

		if resolver.S3Client() != nil {
			t.Error("Expected Client to be nil if no secretAccessKey is configured")
		}
	})
	t.Run("S3.Region", func(t *testing.T) {
		config2.S3.SecretAccessKey = "secret"

		resolver := config.NewResolver(config2, nil)

		if resolver.S3Client() != nil {
			t.Error("Expected Client to be nil if no region is configured")
		}
	})
	t.Run("S3.Bucket", func(t *testing.T) {
		config2.S3.Region = "ru-central1"

		resolver := config.NewResolver(config2, nil)

		if resolver.S3Client() != nil {
			t.Error("Expected Client to be nil if no bucket is configured")
		}
	})
}

func Test_ResolvePolicyClient(t *testing.T) {
	resolver := config.NewResolver(&config.Config{}, &rest.Config{})
	resolver := config.NewResolver(&config.Config{DBFile: "test.db"}, &rest.Config{})

	client1, err := resolver.PolicyReportClient(context.Background())
	client1, err := resolver.PolicyReportClient()
	if err != nil {
		t.Errorf("Unexpected Error: %s", err)
	}

	client2, _ := resolver.PolicyReportClient(context.Background())
	client2, _ := resolver.PolicyReportClient()
	if client1 != client2 {
		t.Error("A second call resolver.PolicyReportClient() should return the cached first client")
	}
}

func Test_ResolveAPIServer(t *testing.T) {
	resolver := config.NewResolver(testConfig, &rest.Config{})
func Test_ResolvePolicyStore(t *testing.T) {
	resolver := config.NewResolver(&config.Config{DBFile: "test.db"}, &rest.Config{})
	db, _ := resolver.Database()
	defer db.Close()

	server := resolver.APIServer()
	store1, err := resolver.PolicyReportStore(db)
	if err != nil {
		t.Errorf("Unexpected Error: %s", err)
	}

	store2, _ := resolver.PolicyReportStore(db)
	if store1 != store2 {
		t.Error("A second call resolver.PolicyReportClient() should return the cached first client")
	}
}

func Test_ResolveAPIServer(t *testing.T) {
	resolver := config.NewResolver(&config.Config{}, &rest.Config{})

	server := resolver.APIServer(make(map[string]string))
	if server == nil {
		t.Error("Error: Should return API Server")
	}

@ -284,14 +349,70 @@ func Test_ResolveCache(t *testing.T) {
	}
}

func Test_ResolveMapper(t *testing.T) {
	resolver := config.NewResolver(testConfig, &rest.Config{})

	mapper1 := resolver.Mapper()
	if mapper1 == nil {
		t.Error("Error: Should return Mapper")
	}

	mapper2 := resolver.Mapper()
	if mapper1 != mapper2 {
		t.Error("A second call resolver.Mapper() should return the cached first cache")
	}
}

func Test_ResolveClientWithInvalidK8sConfig(t *testing.T) {
	k8sConfig := &rest.Config{}
	k8sConfig.Host = "invalid/url"

	resolver := config.NewResolver(&config.Config{}, k8sConfig)
	resolver := config.NewResolver(testConfig, k8sConfig)

	_, err := resolver.PolicyReportClient(context.Background())
	_, err := resolver.PolicyReportClient()
	if err == nil {
		t.Error("Error: 'host must be a URL or a host:port pair' was expected")
	}
}

func Test_RegisterStoreListener(t *testing.T) {
	t.Run("Register StoreListener", func(t *testing.T) {
		resolver := config.NewResolver(testConfig, &rest.Config{})
		resolver.RegisterStoreListener(report.NewPolicyReportStore())

		if len(resolver.EventPublisher().GetListener()) != 1 {
			t.Error("Expected one Listener to be registered")
		}
	})
}

func Test_RegisterMetricsListener(t *testing.T) {
	t.Run("Register MetricsListener", func(t *testing.T) {
		resolver := config.NewResolver(testConfig, &rest.Config{})
		resolver.RegisterMetricsListener()

		if len(resolver.EventPublisher().GetListener()) != 1 {
			t.Error("Expected one Listener to be registered")
		}
	})
}

func Test_RegisterSendResultListener(t *testing.T) {
	t.Run("Register SendResultListener with Targets", func(t *testing.T) {
		resolver := config.NewResolver(testConfig, &rest.Config{})
		resolver.RegisterSendResultListener()

		if len(resolver.EventPublisher().GetListener()) != 1 {
			t.Error("Expected one Listener to be registered")
		}
	})
	t.Run("Register SendResultListener without Targets", func(t *testing.T) {
		resolver := config.NewResolver(&config.Config{}, &rest.Config{})

		resolver.RegisterSendResultListener()

		if len(resolver.EventPublisher().GetListener()) != 0 {
			t.Error("Expected no Listener to be registered because no target exists")
		}
	})
}
64
pkg/helper/http.go
Normal file
@ -0,0 +1,64 @@
package helper

import (
	"bytes"
	"encoding/json"
	"fmt"
	"html"
	"log"
	"net/http"
)

func CreateJSONRequest(target, method, host string, payload interface{}) (*http.Request, error) {
	body := new(bytes.Buffer)

	if err := json.NewEncoder(body).Encode(payload); err != nil {
		log.Printf("[ERROR] %s : %v\n", target, err.Error())
		return nil, err
	}

	req, err := http.NewRequest(method, host, body)
	if err != nil {
		log.Printf("[ERROR] %s : %v\n", target, err.Error())
		return nil, err
	}

	return req, nil
}

// ProcessHTTPResponse Logs Error or Success messages
func ProcessHTTPResponse(target string, resp *http.Response, err error) {
	defer func() {
		if resp != nil && resp.Body != nil {
			resp.Body.Close()
		}
	}()

	if err != nil {
		log.Printf("[ERROR] %s PUSH failed: %s\n", target, err.Error())
	} else if resp.StatusCode >= 400 {
		fmt.Printf("StatusCode: %d\n", resp.StatusCode)
		buf := new(bytes.Buffer)
		buf.ReadFrom(resp.Body)

		log.Printf("[ERROR] %s PUSH failed [%d]: %s\n", target, resp.StatusCode, buf.String())
	} else {
		log.Printf("[INFO] %s PUSH OK\n", target)
	}
}

func SendJSONResponse(w http.ResponseWriter, list interface{}, err error) {
	w.Header().Set("Content-Type", "application/json; charset=UTF-8")
	if err != nil {
		w.WriteHeader(http.StatusInternalServerError)
		fmt.Fprintf(w, `{ "message": "%s" }`, html.EscapeString(err.Error()))

		return
	}

	if err := json.NewEncoder(w).Encode(list); err != nil {
		log.Println(err)
		w.WriteHeader(http.StatusInternalServerError)
		fmt.Fprintf(w, `{ "message": "%s" }`, html.EscapeString(err.Error()))
	}
}
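The new helper package centralizes the HTTP plumbing shared by the push targets. A small usage sketch (the webhook URL and payload are illustrative, not taken from the commit):

package main

import (
	"net/http"

	"github.com/kyverno/policy-reporter/pkg/helper"
)

func main() {
	// Build a JSON POST request for a hypothetical webhook target.
	req, err := helper.CreateJSONRequest("Example", "POST", "https://example.com/webhook", map[string]string{
		"policy": "require-requests-and-limits-required",
		"status": "fail",
	})
	if err != nil {
		return
	}

	// Send it and let the helper log success or failure uniformly.
	resp, err := (&http.Client{}).Do(req)
	helper.ProcessHTTPResponse("Example", resp, err)
}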
@ -28,7 +28,7 @@ func (s *s3Client) Upload(body *bytes.Buffer, key string) error {
	return err
}

// NewClient creates a new Yandex.client to send Results to S3. It doesnt' work right now
// NewClient creates a new S3.client to send Results to S3. It doesnt' work right now
func NewClient(accessKeyID, secretAccessKey, region, endpoint, bucket string) S3Client {
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String(region),
@ -1,53 +0,0 @@
package kubernetes

import (
	"context"

	apiv1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	v1 "k8s.io/client-go/kubernetes/typed/core/v1"
)

const (
	prioriyConfig = "policy-reporter-priorities"
)

// ConfigMapAdapter provides simplified APIs for ConfigMap Resources
type ConfigMapAdapter interface {
	// GetConfig return a single ConfigMap by name if exist
	GetConfig(ctx context.Context, name string) (*apiv1.ConfigMap, error)
	// WatchConfigs calls its ConfigMapCallback whenever a ConfigMap was added, modified or deleted
	WatchConfigs(ctx context.Context, cb ConfigMapCallback) error
}

// ConfigMapCallback is used by WatchConfigs
type ConfigMapCallback = func(watch.EventType, *apiv1.ConfigMap)

type cmAdapter struct {
	api v1.ConfigMapInterface
}

func (c cmAdapter) GetConfig(ctx context.Context, name string) (*apiv1.ConfigMap, error) {
	return c.api.Get(ctx, name, metav1.GetOptions{})
}

func (c cmAdapter) WatchConfigs(ctx context.Context, cb ConfigMapCallback) error {
	for {
		watch, err := c.api.Watch(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}

		for event := range watch.ResultChan() {
			if cm, ok := event.Object.(*apiv1.ConfigMap); ok {
				cb(event.Type, cm)
			}
		}
	}
}

// NewConfigMapAdapter creates a new ConfigMapClient
func NewConfigMapAdapter(api v1.ConfigMapInterface) ConfigMapAdapter {
	return &cmAdapter{api}
}
@ -1,92 +0,0 @@
package kubernetes_test

import (
	"context"
	"errors"
	"sync"
	"testing"

	"github.com/kyverno/policy-reporter/pkg/kubernetes"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes/fake"
	clientv1 "k8s.io/client-go/kubernetes/typed/core/v1"
	testcore "k8s.io/client-go/testing"
)

var configMap = &v1.ConfigMap{
	TypeMeta: metav1.TypeMeta{
		Kind:       "ConfigMap",
		APIVersion: "v1",
	},
	ObjectMeta: metav1.ObjectMeta{
		Name: "policy-reporter-priorities",
	},
	Data: map[string]string{
		"default": "critical",
	},
}

func Test_GetConfigMap(t *testing.T) {
	_, cmAPI := newFakeAPI()
	cmAPI.Create(context.Background(), configMap, metav1.CreateOptions{})

	cmClient := kubernetes.NewConfigMapAdapter(cmAPI)

	cm, err := cmClient.GetConfig(context.Background(), "policy-reporter-priorities")
	if err != nil {
		t.Fatalf("Unexpected Error: %s", err)
	}

	if cm.Name != "policy-reporter-priorities" {
		t.Error("Unexpted ConfigMapReturned")
	}
	if priority, ok := cm.Data["default"]; !ok || priority != "critical" {
		t.Error("Unexpted default priority")
	}
}

func Test_WatchConfigMap(t *testing.T) {
	client, cmAPI := newFakeAPI()

	watcher := watch.NewFake()
	client.PrependWatchReactor("configmaps", testcore.DefaultWatchReactor(watcher, nil))

	cmClient := kubernetes.NewConfigMapAdapter(cmAPI)

	wg := sync.WaitGroup{}
	wg.Add(1)

	go cmClient.WatchConfigs(context.Background(), func(et watch.EventType, cm *v1.ConfigMap) {
		defer wg.Done()

		if cm.Name != "policy-reporter-priorities" {
			t.Error("Unexpted ConfigMapReturned")
		}
		if priority, ok := cm.Data["default"]; !ok || priority != "critical" {
			t.Error("Unexpted default priority")
		}
	})

	watcher.Add(configMap)

	wg.Wait()
}

func Test_WatchConfigMapError(t *testing.T) {
	client, cmAPI := newFakeAPI()
	client.PrependWatchReactor("configmaps", testcore.DefaultWatchReactor(watch.NewFake(), errors.New("")))

	cmClient := kubernetes.NewConfigMapAdapter(cmAPI)

	err := cmClient.WatchConfigs(context.Background(), func(et watch.EventType, cm *v1.ConfigMap) {})
	if err == nil {
		t.Error("Watch Error should stop execution")
	}
}

func newFakeAPI() (*fake.Clientset, clientv1.ConfigMapInterface) {
	client := fake.NewSimpleClientset()
	return client, client.CoreV1().ConfigMaps("policy-reporter")
}
@ -4,40 +4,47 @@ import (
	"sync"
	"time"

	"k8s.io/apimachinery/pkg/watch"
	"github.com/kyverno/policy-reporter/pkg/report"
)

type Debouncer interface {
	Add(e report.LifecycleEvent)
	ReportChan() <-chan report.LifecycleEvent
	Close()
}

type debouncer struct {
	events map[string]WatchEvent
	channel chan WatchEvent
	waitDuration time.Duration
	events map[string]report.LifecycleEvent
	channel chan report.LifecycleEvent
	mutx *sync.Mutex
}

func (d *debouncer) Add(e WatchEvent) {
	_, ok := d.events[e.Report.GetIdentifier()]
	if e.Type != watch.Modified && ok {
func (d *debouncer) Add(event report.LifecycleEvent) {
	_, ok := d.events[event.NewPolicyReport.GetIdentifier()]
	if event.Type != report.Updated && ok {
		d.mutx.Lock()
		delete(d.events, e.Report.GetIdentifier())
		delete(d.events, event.NewPolicyReport.GetIdentifier())
		d.mutx.Unlock()
	}

	if e.Type != watch.Modified {
		d.channel <- e
	if event.Type != report.Updated {
		d.channel <- event
		return
	}

	if len(e.Report.Results) == 0 && !ok {
	if len(event.NewPolicyReport.Results) == 0 && !ok {
		d.mutx.Lock()
		d.events[e.Report.GetIdentifier()] = e
		d.events[event.NewPolicyReport.GetIdentifier()] = event
		d.mutx.Unlock()

		go func() {
			time.Sleep(1 * time.Minute)
			time.Sleep(d.waitDuration)

			d.mutx.Lock()
			if event, ok := d.events[e.Report.GetIdentifier()]; ok {
			if event, ok := d.events[event.NewPolicyReport.GetIdentifier()]; ok {
				d.channel <- event
				delete(d.events, e.Report.GetIdentifier())
				delete(d.events, event.NewPolicyReport.GetIdentifier())
			}
			d.mutx.Unlock()
		}()

@ -47,23 +54,28 @@ func (d *debouncer) Add(e WatchEvent) {

	if ok {
		d.mutx.Lock()
		d.events[e.Report.GetIdentifier()] = e
		d.events[event.NewPolicyReport.GetIdentifier()] = event
		d.mutx.Unlock()

		return
	}

	d.channel <- e
	d.channel <- event
}

func (d *debouncer) ReportChan() chan WatchEvent {
func (d *debouncer) ReportChan() <-chan report.LifecycleEvent {
	return d.channel
}

func newDebouncer() *debouncer {
func (d *debouncer) Close() {
	close(d.channel)
}

func NewDebouncer(waitDuration time.Duration) Debouncer {
	return &debouncer{
		events: make(map[string]WatchEvent),
		waitDuration: waitDuration,
		events: make(map[string]report.LifecycleEvent),
		mutx: new(sync.Mutex),
		channel: make(chan WatchEvent),
		channel: make(chan report.LifecycleEvent),
	}
}
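The debouncer now works on report.LifecycleEvent values and holds back empty Updated events for the configured waitDuration, so a report that is briefly cleared and then refilled does not trigger spurious pushes. A minimal consumer sketch (the consume function and log output are illustrative, not part of the commit):

// Sketch: newEvent stands in for a report.LifecycleEvent produced by the PolicyReport client.
func consume(newEvent report.LifecycleEvent) {
	debouncer := kubernetes.NewDebouncer(2 * time.Second)

	go func() {
		for event := range debouncer.ReportChan() {
			log.Println(event.Type, event.NewPolicyReport.GetIdentifier())
		}
	}()

	// Events with results, and all non-update events, pass through immediately;
	// empty updates are delayed and may be replaced by a later, filled update.
	debouncer.Add(newEvent)

	// Close once no further events will be added.
	debouncer.Close()
}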
49
pkg/kubernetes/debouncer_test.go
Normal file
@ -0,0 +1,49 @@
package kubernetes_test

import (
	"sync"
	"testing"
	"time"

	"github.com/kyverno/policy-reporter/pkg/kubernetes"
	"github.com/kyverno/policy-reporter/pkg/report"
)

func Test_Debouncer(t *testing.T) {
	t.Run("Skip Empty Update", func(t *testing.T) {
		debouncer := kubernetes.NewDebouncer(200 * time.Millisecond)

		wg := sync.WaitGroup{}
		wg.Add(2)

		go func() {
			for event := range debouncer.ReportChan() {
				wg.Done()
				if len(event.NewPolicyReport.Results) == 0 {
					t.Error("Expected to skip the empty modify event")
				}
			}
		}()

		debouncer.Add(report.LifecycleEvent{
			Type:            report.Added,
			NewPolicyReport: mapper.MapPolicyReport(policyMap),
		})

		debouncer.Add(report.LifecycleEvent{
			Type:            report.Updated,
			NewPolicyReport: mapper.MapPolicyReport(minPolicyMap),
		})

		time.Sleep(10 * time.Millisecond)

		debouncer.Add(report.LifecycleEvent{
			Type:            report.Updated,
			NewPolicyReport: mapper.MapPolicyReport(policyMap),
		})

		wg.Wait()

		debouncer.Close()
	})
}
166
pkg/kubernetes/fixtures_test.go
Normal file
@ -0,0 +1,166 @@
package kubernetes_test

import (
	"sync"

	"github.com/kyverno/policy-reporter/pkg/kubernetes"
	"github.com/kyverno/policy-reporter/pkg/report"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/fake"
)

var policyReportSchema = schema.GroupVersionResource{
	Group:    "wgpolicyk8s.io",
	Version:  "v1alpha2",
	Resource: "policyreports",
}

var clusterPolicyReportSchema = schema.GroupVersionResource{
	Group:    "wgpolicyk8s.io",
	Version:  "v1alpha2",
	Resource: "clusterpolicyreports",
}

var gvrToListKind = map[schema.GroupVersionResource]string{
	policyReportSchema:        "PolicyReportList",
	clusterPolicyReportSchema: "ClusterPolicyReportList",
}

func NewFakeCilent() (dynamic.Interface, dynamic.ResourceInterface) {
	client := fake.NewSimpleDynamicClientWithCustomListKinds(runtime.NewScheme(), gvrToListKind)

	return client, client.Resource(policyReportSchema).Namespace("test")
}

func NewMapper() kubernetes.Mapper {
	return kubernetes.NewMapper(make(map[string]string))
}

type store struct {
	store []report.LifecycleEvent
	rwm   *sync.RWMutex
}

func (s *store) Add(r report.LifecycleEvent) {
	s.rwm.Lock()
	s.store = append(s.store, r)
	s.rwm.Unlock()
}

func (s *store) Get(index int) report.LifecycleEvent {
	return s.store[index]
}

func (s *store) List() []report.LifecycleEvent {
	return s.store
}

func newStore(size int) *store {
	return &store{
		store: make([]report.LifecycleEvent, 0, size),
		rwm:   &sync.RWMutex{},
	}
}

var policyMap = map[string]interface{}{
	"metadata": map[string]interface{}{
		"name":              "policy-report",
		"namespace":         "test",
		"creationTimestamp": "2021-02-23T15:00:00Z",
	},
	"summary": map[string]interface{}{
		"pass":  int64(1),
		"skip":  int64(2),
		"warn":  int64(3),
		"fail":  int64(4),
		"error": int64(5),
	},
	"results": []interface{}{
		map[string]interface{}{
			"message": "message",
			"result":  "fail",
			"scored":  true,
			"policy":  "required-label",
			"rule":    "app-label-required",
			"timestamp": map[string]interface{}{
				"seconds": int64(1614093000),
			},
			"source":   "test",
			"category": "test",
			"severity": "high",
			"resources": []interface{}{
				map[string]interface{}{
					"apiVersion": "v1",
					"kind":       "Deployment",
					"name":       "nginx",
					"namespace":  "test",
					"uid":        "dfd57c50-f30c-4729-b63f-b1954d8988d1",
				},
			},
			"properties": map[string]interface{}{
				"version": "1.2.0",
			},
		},
		map[string]interface{}{
			"message": "message 2",
			"result":  "fail",
			"scored":  true,
			"timestamp": map[string]interface{}{
				"seconds": int64(1614093000),
			},
			"policy":    "priority-test",
			"resources": []interface{}{},
		},
	},
}

var minPolicyMap = map[string]interface{}{
	"metadata": map[string]interface{}{
		"name":      "policy-report",
		"namespace": "test",
	},
	"results": []interface{}{},
}

var clusterPolicyMap = map[string]interface{}{
	"metadata": map[string]interface{}{
		"name":              "clusterpolicy-report",
		"creationTimestamp": "2021-02-23T15:00:00Z",
	},
	"summary": map[string]interface{}{
		"pass":  int64(1),
		"skip":  int64(2),
		"warn":  int64(3),
		"fail":  int64(4),
		"error": int64(5),
	},
	"results": []interface{}{
		map[string]interface{}{
			"message":   "message",
			"result":    "fail",
			"scored":    true,
			"policy":    "cluster-required-label",
			"rule":      "ns-label-required",
			"category":  "test",
			"severity":  "high",
			"timestamp": map[string]interface{}{"seconds": ""},
			"resources": []interface{}{
				map[string]interface{}{
					"apiVersion": "v1",
					"kind":       "Namespace",
					"name":       "policy-reporter",
					"uid":        "dfd57c50-f30c-4729-b63f-b1954d8988d1",
				},
			},
		},
	},
}

var priorityMap = map[string]string{
	"priority-test": "warning",
}

var result1ID string = report.GeneratePolicyReportResultID("dfd57c50-f30c-4729-b63f-b1954d8988d1", "required-label", "app-label-required", "fail", "message")
var result2ID string = report.GeneratePolicyReportResultID("", "priority-test", "", "fail", "message 2")
@ -1,35 +1,24 @@
package kubernetes

import (
	"context"
	"errors"
	"log"
	"time"

	"github.com/kyverno/policy-reporter/pkg/report"
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/watch"
)

// Mapper converts maps into report structs
type Mapper interface {
	// MapPolicyReport maps a map into a PolicyReport
	MapPolicyReport(reportMap map[string]interface{}) report.PolicyReport
	// SetPriorityMap updates the policy/status to priority mapping
	SetPriorityMap(map[string]string)
	// SyncPriorities when ConfigMap has changed
	SyncPriorities(ctx context.Context) error
	// FetchPriorities from ConfigMap
	FetchPriorities(ctx context.Context) error
	MapPolicyReport(reportMap map[string]interface{}) *report.PolicyReport
}

type mapper struct {
	priorityMap map[string]string
	cmAdapter ConfigMapAdapter
}

func (m *mapper) MapPolicyReport(reportMap map[string]interface{}) report.PolicyReport {
	summary := report.Summary{}
func (m *mapper) MapPolicyReport(reportMap map[string]interface{}) *report.PolicyReport {
	summary := &report.Summary{}

	if s, ok := reportMap["summary"].(map[string]interface{}); ok {
		summary.Pass = int(s["pass"].(int64))

@ -39,12 +28,15 @@ func (m *mapper) MapPolicyReport(reportMap map[string]interface{}) report.Policy
		summary.Fail = int(s["fail"].(int64))
	}

	metadata := reportMap["metadata"].(map[string]interface{})
	metadata, ok := reportMap["metadata"].(map[string]interface{})
	if !ok {
		return &report.PolicyReport{}
	}

	r := report.PolicyReport{
	r := &report.PolicyReport{
		Name: metadata["name"].(string),
		Summary: summary,
		Results: make(map[string]report.Result),
		Results: make(map[string]*report.Result),
	}

	if ns, ok := metadata["namespace"]; ok {

@ -60,13 +52,15 @@ func (m *mapper) MapPolicyReport(reportMap map[string]interface{}) report.Policy

	if rs, ok := reportMap["results"].([]interface{}); ok {
		for _, resultItem := range rs {
			resources := m.mapResult(resultItem.(map[string]interface{}))
			for _, resource := range resources {
				r.Results[resource.GetIdentifier()] = resource
			results := m.mapResult(resultItem.(map[string]interface{}))
			for _, result := range results {
				r.Results[result.GetIdentifier()] = result
			}
		}
	}

	r.ID = report.GeneratePolicyReportID(r.Name, r.Namespace)

	return r
}

@ -75,24 +69,21 @@ func (m *mapper) SetPriorityMap(priorityMap map[string]string) {
}

func (m *mapper) mapCreationTime(result map[string]interface{}) (time.Time, error) {
	if metadata, ok := result["metadata"].(map[string]interface{}); ok {
	metadata := result["metadata"].(map[string]interface{})
		if created, ok2 := metadata["creationTimestamp"].(string); ok2 {
			return time.Parse("2006-01-02T15:04:05Z", created)
		}

		return time.Time{}, errors.New("no creationTimestamp provided")
	}

	return time.Time{}, errors.New("no metadata provided")
}

func (m *mapper) mapResult(result map[string]interface{}) []report.Result {
	var resources []report.Resource
func (m *mapper) mapResult(result map[string]interface{}) []*report.Result {
	var resources []*report.Resource

	if ress, ok := result["resources"].([]interface{}); ok {
		for _, res := range ress {
			if resMap, ok := res.(map[string]interface{}); ok {
				r := report.Resource{
				r := &report.Resource{
					APIVersion: resMap["apiVersion"].(string),
					Kind: resMap["kind"].(string),
					Name: resMap["name"].(string),

@ -117,11 +108,10 @@ func (m *mapper) mapResult(result map[string]interface{}) []report.Result {
		status = r.(report.Status)
	}

	var results []report.Result
	var results []*report.Result

	factory := func(res report.Resource) report.Result {
		r := report.Result{
			Message: result["message"].(string),
	factory := func(res *report.Resource) *report.Result {
		r := &report.Result{
			Policy: result["policy"].(string),
			Status: status,
			Priority: report.PriorityFromStatus(status),

@ -129,6 +119,10 @@ func (m *mapper) mapResult(result map[string]interface{}) []report.Result {
			Properties: make(map[string]string, 0),
		}

		if message, ok := result["message"].(string); ok {
			r.Message = message
		}

		if scored, ok := result["scored"]; ok {
			r.Scored = scored.(bool)
		}

@ -137,7 +131,7 @@ func (m *mapper) mapResult(result map[string]interface{}) []report.Result {
			r.Severity = severity.(report.Severity)
		}

		if r.Status == report.Error || r.Status == report.Fail {
		if r.Status == report.Fail {
			r.Priority = m.resolvePriority(r.Policy, r.Severity)
		}

@ -166,6 +160,8 @@ func (m *mapper) mapResult(result map[string]interface{}) []report.Result {
			}
		}

		r.ID = report.GeneratePolicyReportResultID(r.Resource.UID, r.Policy, r.Rule, r.Status, r.Message)

		return r
	}

@ -174,7 +170,7 @@ func (m *mapper) mapResult(result map[string]interface{}) []report.Result {
	}

	if len(results) == 0 {
		results = append(results, factory(report.Resource{}))
		results = append(results, factory(&report.Resource{}))
	}

	return results

@ -214,48 +210,9 @@ func (m *mapper) resolvePriority(policy string, severity report.Severity) report
	return report.Priority(report.WarningPriority)
}

func (m *mapper) FetchPriorities(ctx context.Context) error {
	cm, err := m.cmAdapter.GetConfig(ctx, prioriyConfig)
	if err != nil {
		return err
	}

	if cm != nil {
		m.SetPriorityMap(cm.Data)
		log.Println("[INFO] Priorities loaded")
	}

	return nil
}

func (m *mapper) SyncPriorities(ctx context.Context) error {
	err := m.cmAdapter.WatchConfigs(ctx, func(e watch.EventType, cm *v1.ConfigMap) {
		if cm.Name != prioriyConfig {
			return
		}

		switch e {
		case watch.Added:
			m.SetPriorityMap(cm.Data)
		case watch.Modified:
			m.SetPriorityMap(cm.Data)
		case watch.Deleted:
			m.SetPriorityMap(map[string]string{})
		}

		log.Println("[INFO] Priorities synchronized")
	})

	if err != nil {
		log.Printf("[INFO] Unable to sync Priorities: %s", err.Error())
	}

	return err
}

// NewMapper creates an new Mapper instance
func NewMapper(priorities map[string]string, cmAdapter ConfigMapAdapter) Mapper {
	m := &mapper{cmAdapter: cmAdapter}
func NewMapper(priorities map[string]string) Mapper {
	m := &mapper{}
	m.SetPriorityMap(priorities)

	return m
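With the ConfigMap watcher removed, priorities come solely from the static priorityMap configuration and from a result's severity. A small hedged example of running a minimal report map through the simplified mapper (mirrors the fixtures above; the exampleMapping function itself is illustrative):

func exampleMapping() {
	m := kubernetes.NewMapper(map[string]string{"required-label": "debug", "default": "warning"})

	preport := m.MapPolicyReport(map[string]interface{}{
		"metadata": map[string]interface{}{
			"name":      "policy-report",
			"namespace": "test",
		},
		"results": []interface{}{},
	})

	// The report ID is derived from name and namespace via report.GeneratePolicyReportID.
	fmt.Println(preport.ID, preport.Namespace)
}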
@ -1,117 +1,13 @@
|
|||
package kubernetes_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/kubernetes"
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
v1 "k8s.io/api/core/v1"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
testcore "k8s.io/client-go/testing"
|
||||
)
|
||||
|
||||
var policyMap = map[string]interface{}{
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "policy-report",
|
||||
"namespace": "test",
|
||||
"creationTimestamp": "2021-02-23T15:00:00Z",
|
||||
},
|
||||
"summary": map[string]interface{}{
|
||||
"pass": int64(1),
|
||||
"skip": int64(2),
|
||||
"warn": int64(3),
|
||||
"fail": int64(4),
|
||||
"error": int64(5),
|
||||
},
|
||||
"results": []interface{}{
|
||||
map[string]interface{}{
|
||||
"message": "message",
|
||||
"status": "fail",
|
||||
"scored": true,
|
||||
"policy": "required-label",
|
||||
"rule": "app-label-required",
|
||||
"timestamp": map[string]interface{}{
|
||||
"seconds": 1614093000,
|
||||
},
|
||||
"source": "test",
|
||||
"category": "test",
|
||||
"severity": "high",
|
||||
"resources": []interface{}{
|
||||
map[string]interface{}{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Deployment",
|
||||
"name": "nginx",
|
||||
"namespace": "test",
|
||||
"uid": "dfd57c50-f30c-4729-b63f-b1954d8988d1",
|
||||
},
|
||||
},
|
||||
"properties": map[string]interface{}{
|
||||
"version": "1.2.0",
|
||||
},
|
||||
},
|
||||
map[string]interface{}{
|
||||
"message": "message 2",
|
||||
"status": "fail",
|
||||
"scored": true,
|
||||
"timestamp": map[string]interface{}{
|
||||
"seconds": int64(1614093000),
|
||||
},
|
||||
"policy": "priority-test",
|
||||
"resources": []interface{}{},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
var minPolicyMap = map[string]interface{}{
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "policy-report",
|
||||
"namespace": "test",
|
||||
},
|
||||
"results": []interface{}{},
|
||||
}
|
||||
|
||||
var clusterPolicyMap = map[string]interface{}{
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "clusterpolicy-report",
|
||||
"creationTimestamp": "2021-02-23T15:00:00Z",
|
||||
},
|
||||
"summary": map[string]interface{}{
|
||||
"pass": int64(1),
|
||||
"skip": int64(2),
|
||||
"warn": int64(3),
|
||||
"fail": int64(4),
|
||||
"error": int64(5),
|
||||
},
|
||||
"results": []interface{}{
|
||||
map[string]interface{}{
|
||||
"message": "message",
|
||||
"result": "fail",
|
||||
"scored": true,
|
||||
"policy": "cluster-required-label",
|
||||
"rule": "ns-label-required",
|
||||
"category": "test",
|
||||
"severity": "high",
|
||||
"timestamp": map[string]interface{}{"seconds": ""},
|
||||
"resources": []interface{}{
|
||||
map[string]interface{}{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Namespace",
|
||||
"name": "policy-reporter",
|
||||
"uid": "dfd57c50-f30c-4729-b63f-b1954d8988d1",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
var priorityMap = map[string]string{
|
||||
"priority-test": "warning",
|
||||
}
|
||||
|
||||
var mapper = kubernetes.NewMapper(priorityMap, nil)
|
||||
var mapper = kubernetes.NewMapper(priorityMap)
|
||||
|
||||
func Test_MapPolicyReport(t *testing.T) {
|
||||
preport := mapper.MapPolicyReport(policyMap)
|
||||
|
@ -138,7 +34,7 @@ func Test_MapPolicyReport(t *testing.T) {
|
|||
t.Errorf("Unexpected Summary.Error value %d (expected 5)", preport.Summary.Error)
|
||||
}
|
||||
|
||||
result1, ok := preport.Results["required-label__app-label-required__fail__dfd57c50-f30c-4729-b63f-b1954d8988d1"]
|
||||
result1, ok := preport.Results[result1ID]
|
||||
if !ok {
|
||||
t.Error("Expected result not found")
|
||||
}
|
||||
|
@ -194,7 +90,7 @@ func Test_MapPolicyReport(t *testing.T) {
|
|||
t.Errorf("Expected Resource.Namespace 'dfd57c50-f30c-4729-b63f-b1954d8988d1' (acutal %s)", resource.UID)
|
||||
}
|
||||
|
||||
result2, ok := preport.Results["priority-test____fail"]
|
||||
result2, ok := preport.Results[result2ID]
|
||||
if !ok {
|
||||
t.Error("Expected result not found")
|
||||
}
|
||||
|
@ -253,11 +149,11 @@ func Test_MapMinPolicyReport(t *testing.T) {
|
|||
|
||||
func Test_PriorityMap(t *testing.T) {
|
||||
t.Run("Test exact match, without default", func(t *testing.T) {
|
||||
mapper := kubernetes.NewMapper(map[string]string{"required-label": "debug"}, nil)
|
||||
mapper := kubernetes.NewMapper(map[string]string{"required-label": "debug"})
|
||||
|
||||
preport := mapper.MapPolicyReport(policyMap)
|
||||
|
||||
result := preport.Results["required-label__app-label-required__fail__dfd57c50-f30c-4729-b63f-b1954d8988d1"]
|
||||
result := preport.Results[result1ID]
|
||||
|
||||
if result.Priority != report.DebugPriority {
|
||||
t.Errorf("Expected Policy '%d' (acutal %d)", report.DebugPriority, result.Priority)
|
||||
|
@ -265,11 +161,11 @@ func Test_PriorityMap(t *testing.T) {
|
|||
})
|
||||
|
||||
t.Run("Test exact match handled over default", func(t *testing.T) {
|
||||
mapper := kubernetes.NewMapper(map[string]string{"required-label": "debug", "default": "warning"}, nil)
|
||||
mapper := kubernetes.NewMapper(map[string]string{"required-label": "debug", "default": "warning"})
|
||||
|
||||
preport := mapper.MapPolicyReport(policyMap)
|
||||
|
||||
result := preport.Results["required-label__app-label-required__fail__dfd57c50-f30c-4729-b63f-b1954d8988d1"]
|
||||
result := preport.Results[result1ID]
|
||||
|
||||
if result.Priority != report.DebugPriority {
|
||||
t.Errorf("Expected Policy '%d' (acutal %d)", report.DebugPriority, result.Priority)
|
||||
|
@ -277,11 +173,11 @@ func Test_PriorityMap(t *testing.T) {
|
|||
})
|
||||
|
||||
t.Run("Test default expressions", func(t *testing.T) {
|
||||
mapper := kubernetes.NewMapper(map[string]string{"default": "warning"}, nil)
|
||||
mapper := kubernetes.NewMapper(map[string]string{"default": "warning"})
|
||||
|
||||
preport := mapper.MapPolicyReport(policyMap)
|
||||
|
||||
result := preport.Results["priority-test____fail"]
|
||||
result := preport.Results[result2ID]
|
||||
|
||||
if result.Priority != report.WarningPriority {
|
||||
t.Errorf("Expected Policy '%d' (acutal %d)", report.WarningPriority, result.Priority)
|
||||
|
@ -289,92 +185,67 @@ func Test_PriorityMap(t *testing.T) {
|
|||
})
|
||||
}
|
||||
|
||||
func Test_PriorityFetch(t *testing.T) {
|
||||
_, cmAPI := newFakeAPI()
|
||||
cmAPI.Create(context.Background(), configMap, metav1.CreateOptions{})
|
||||
mapper := kubernetes.NewMapper(make(map[string]string), kubernetes.NewConfigMapAdapter(cmAPI))
|
||||
func Test_MapWithoutMetadata(t *testing.T) {
|
||||
mapper := kubernetes.NewMapper(make(map[string]string))
|
||||
|
||||
preport1 := mapper.MapPolicyReport(policyMap)
|
||||
result1 := preport1.Results["priority-test____fail"]
|
||||
policyReport := map[string]interface{}{}
|
||||
|
||||
if result1.Priority != report.WarningPriority {
|
||||
t.Errorf("Default Priority should be Warning")
|
||||
report := mapper.MapPolicyReport(policyReport)
|
||||
|
||||
if report.Name != "" {
|
||||
t.Errorf("Expected empty PolicyReport")
|
||||
}
|
||||
}
|
||||
func Test_MapWithoutResultTimestamp(t *testing.T) {
|
||||
mapper := kubernetes.NewMapper(make(map[string]string))
|
||||
|
||||
policyReport := map[string]interface{}{
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "policy-report",
|
||||
"namespace": "test",
|
||||
"creationTimestamp": "2021-02-23T15:00:00Z",
|
||||
},
|
||||
"results": []interface{}{map[string]interface{}{
|
||||
"message": "message 2",
|
||||
"status": "fail",
|
||||
"scored": true,
|
||||
"policy": "priority-test",
|
||||
"resources": []interface{}{},
|
||||
}},
|
||||
}
|
||||
|
||||
mapper.FetchPriorities(context.Background())
|
||||
preport2 := mapper.MapPolicyReport(policyMap)
|
||||
result2 := preport2.Results["priority-test____fail"]
|
||||
if result2.Priority != report.CriticalPriority {
|
||||
t.Errorf("Default Priority should be Critical after ConigMap fetch")
|
||||
report := mapper.MapPolicyReport(policyReport)
|
||||
|
||||
if report.Results[result2ID].Timestamp.IsZero() {
|
||||
t.Errorf("Expected valid Timestamp")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_PriorityFetchError(t *testing.T) {
|
||||
_, cmAPI := newFakeAPI()
|
||||
mapper := kubernetes.NewMapper(make(map[string]string), kubernetes.NewConfigMapAdapter(cmAPI))
|
||||
func Test_MapTimestampAsInt(t *testing.T) {
|
||||
mapper := kubernetes.NewMapper(make(map[string]string))
|
||||
|
||||
mapper.FetchPriorities(context.Background())
|
||||
preport := mapper.MapPolicyReport(policyMap)
|
||||
result := preport.Results["priority-test____fail"]
|
||||
if result.Priority != report.WarningPriority {
|
||||
t.Errorf("Fetch Error should not effect the functionality and continue using Warning as default")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_PrioritySync(t *testing.T) {
|
||||
client, cmAPI := newFakeAPI()
|
||||
watcher := watch.NewFake()
|
||||
client.PrependWatchReactor("configmaps", testcore.DefaultWatchReactor(watcher, nil))
|
||||
|
||||
mapper := kubernetes.NewMapper(make(map[string]string), kubernetes.NewConfigMapAdapter(cmAPI))
|
||||
|
||||
preport1 := mapper.MapPolicyReport(policyMap)
|
||||
result1 := preport1.Results["priority-test____fail"]
|
||||
|
||||
if result1.Priority != report.WarningPriority {
|
||||
t.Errorf("Default Priority should be Warning")
|
||||
}
|
||||
|
||||
go mapper.SyncPriorities(context.Background())
|
||||
|
||||
watcher.Add(configMap)
|
||||
|
||||
preport2 := mapper.MapPolicyReport(policyMap)
|
||||
result2 := preport2.Results["priority-test____fail"]
|
||||
if result2.Priority != report.CriticalPriority {
|
||||
t.Errorf("Default Priority should be Critical after ConigMap add sync")
|
||||
}
|
||||
|
||||
configMap2 := &v1.ConfigMap{
|
||||
TypeMeta: metav1.TypeMeta{
|
||||
Kind: "ConfigMap",
|
||||
APIVersion: "v1",
|
||||
},
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "policy-reporter-priorities",
|
||||
},
|
||||
Data: map[string]string{
|
||||
"default": "debug",
|
||||
},
|
||||
}
|
||||
|
||||
watcher.Modify(configMap2)
|
||||
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
|
||||
preport3 := mapper.MapPolicyReport(policyMap)
|
||||
result3 := preport3.Results["priority-test____fail"]
|
||||
if result3.Priority != report.DebugPriority {
|
||||
t.Errorf("Default Priority should be Debug after ConigMap modify sync")
|
||||
}
|
||||
|
||||
watcher.Delete(configMap2)
|
||||
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
|
||||
preport4 := mapper.MapPolicyReport(policyMap)
|
||||
result4 := preport4.Results["priority-test____fail"]
|
||||
if result4.Priority != report.WarningPriority {
|
||||
t.Errorf("Default Priority should be fallback to Warning after ConigMap delete sync")
|
||||
policyReport := map[string]interface{}{
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "policy-report",
|
||||
"namespace": "test",
|
||||
"creationTimestamp": "2021-02-23T15:00:00Z",
|
||||
},
|
||||
"results": []interface{}{map[string]interface{}{
|
||||
"message": "message 2",
|
||||
"status": "fail",
|
||||
"scored": true,
|
||||
"timestamp": map[string]interface{}{
|
||||
"seconds": 1614093000,
|
||||
},
|
||||
"policy": "priority-test",
|
||||
"resources": []interface{}{},
|
||||
}},
|
||||
}
|
||||
|
||||
r := mapper.MapPolicyReport(policyReport)
|
||||
id := report.GeneratePolicyReportResultID("", "priority-test", "", "fail", "message 2")
|
||||
|
||||
if r.Results[id].Timestamp.IsZero() {
|
||||
t.Errorf("Expected valid Timestamp")
|
||||
}
|
||||
}
|
||||
|
|
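The new timestamp tests above expect the mapper to accept a result `timestamp` given as `{"seconds": <int>}` and to still produce a non-zero value when the field is missing. A minimal sketch of that conversion, assuming a hypothetical `mapTimestamp` helper (the project's actual mapper code is not part of this diff):

```go
// Minimal sketch (not the project's mapper): convert the "timestamp" field of
// an unstructured result into a time.Time, falling back to the mapping time
// when the field is missing, as the tests above expect.
package main

import (
	"fmt"
	"time"
)

// mapTimestamp is a hypothetical helper name used only for this sketch.
func mapTimestamp(result map[string]interface{}) time.Time {
	if ts, ok := result["timestamp"].(map[string]interface{}); ok {
		switch seconds := ts["seconds"].(type) {
		case int64:
			return time.Unix(seconds, 0)
		case int:
			return time.Unix(int64(seconds), 0)
		}
	}
	// No usable timestamp provided: fall back to "now" so the mapped result
	// still carries a valid, non-zero timestamp.
	return time.Now()
}

func main() {
	result := map[string]interface{}{
		"timestamp": map[string]interface{}{"seconds": int64(1614093000)},
	}
	fmt.Println(mapTimestamp(result).UTC()) // prints the mapped time in UTC
}
```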
|
@ -2,188 +2,164 @@ package kubernetes
|
|||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"log"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/patrickmn/go-cache"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
corev1 "k8s.io/api/core/v1"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
"k8s.io/client-go/dynamic"
|
||||
"k8s.io/client-go/dynamic/dynamicinformer"
|
||||
"k8s.io/client-go/tools/cache"
|
||||
)
|
||||
|
||||
type policyReportClient struct {
|
||||
policyAPI PolicyReportAdapter
|
||||
store *report.PolicyReportStore
|
||||
callbacks []report.PolicyReportCallback
|
||||
resultCallbacks []report.PolicyResultCallback
|
||||
debouncer *debouncer
|
||||
startUp time.Time
|
||||
skipExisting bool
|
||||
started bool
|
||||
resultCache *cache.Cache
|
||||
var (
|
||||
policyReportAlphaV1 = schema.GroupVersionResource{
|
||||
Group: "wgpolicyk8s.io",
|
||||
Version: "v1alpha1",
|
||||
Resource: "policyreports",
|
||||
}
|
||||
policyReportAlphaV2 = schema.GroupVersionResource{
|
||||
Group: "wgpolicyk8s.io",
|
||||
Version: "v1alpha2",
|
||||
Resource: "policyreports",
|
||||
}
|
||||
|
||||
clusterPolicyReportAlphaV1 = schema.GroupVersionResource{
|
||||
Group: "wgpolicyk8s.io",
|
||||
Version: "v1alpha1",
|
||||
Resource: "clusterpolicyreports",
|
||||
}
|
||||
clusterPolicyReportAlphaV2 = schema.GroupVersionResource{
|
||||
Group: "wgpolicyk8s.io",
|
||||
Version: "v1alpha2",
|
||||
Resource: "clusterpolicyreports",
|
||||
}
|
||||
)
|
||||
|
||||
type k8sPolicyReportClient struct {
|
||||
debouncer Debouncer
|
||||
client dynamic.Interface
|
||||
found map[string]string
|
||||
mapper Mapper
|
||||
mx *sync.Mutex
|
||||
restartWatchOnFailure time.Duration
|
||||
}
|
||||
|
||||
func (c *policyReportClient) RegisterCallback(cb report.PolicyReportCallback) {
|
||||
c.callbacks = append(c.callbacks, cb)
|
||||
func (k *k8sPolicyReportClient) GetFoundResources() map[string]string {
|
||||
return k.found
|
||||
}
|
||||
|
||||
func (c *policyReportClient) RegisterPolicyResultCallback(cb report.PolicyResultCallback) {
|
||||
c.resultCallbacks = append(c.resultCallbacks, cb)
|
||||
func (k *k8sPolicyReportClient) WatchPolicyReports(ctx context.Context) <-chan report.LifecycleEvent {
|
||||
pr := []schema.GroupVersionResource{
|
||||
policyReportAlphaV2,
|
||||
policyReportAlphaV1,
|
||||
}
|
||||
|
||||
cpor := []schema.GroupVersionResource{
|
||||
clusterPolicyReportAlphaV2,
|
||||
clusterPolicyReportAlphaV1,
|
||||
}
|
||||
|
||||
for _, versions := range [][]schema.GroupVersionResource{pr, cpor} {
|
||||
go func(vs []schema.GroupVersionResource) {
|
||||
for {
|
||||
factory := dynamicinformer.NewFilteredDynamicSharedInformerFactory(k.client, 30*time.Minute, corev1.NamespaceAll, nil)
|
||||
for _, resource := range vs {
|
||||
k.watchCRD(ctx, resource, factory)
|
||||
}
|
||||
|
||||
time.Sleep(2 * time.Second)
|
||||
}
|
||||
|
||||
}(versions)
|
||||
}
|
||||
|
||||
for {
|
||||
if len(k.found) == 2 {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return k.debouncer.ReportChan()
|
||||
}
|
||||
|
||||
func (c *policyReportClient) GetFoundResources() map[string]string {
|
||||
return c.policyAPI.GetFoundResources()
|
||||
func (k *k8sPolicyReportClient) watchCRD(ctx context.Context, r schema.GroupVersionResource, factory dynamicinformer.DynamicSharedInformerFactory) {
|
||||
informer := factory.ForResource(r).Informer()
|
||||
|
||||
ctx, cancel := context.WithCancel(ctx)
|
||||
|
||||
informer.SetWatchErrorHandler(func(c *cache.Reflector, err error) {
|
||||
k.mx.Lock()
|
||||
delete(k.found, r.String())
|
||||
k.mx.Unlock()
|
||||
cancel()
|
||||
|
||||
log.Printf("[WARNING] Resource registration failed: %s\n", r.String())
|
||||
})
|
||||
|
||||
go k.handleCRDRegistration(ctx, informer, r)
|
||||
|
||||
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
|
||||
AddFunc: func(obj interface{}) {
|
||||
if item, ok := obj.(*unstructured.Unstructured); ok {
|
||||
preport := k.mapper.MapPolicyReport(item.Object)
|
||||
k.debouncer.Add(report.LifecycleEvent{NewPolicyReport: preport, OldPolicyReport: &report.PolicyReport{}, Type: report.Added})
|
||||
}
|
||||
},
|
||||
DeleteFunc: func(obj interface{}) {
|
||||
if item, ok := obj.(*unstructured.Unstructured); ok {
|
||||
preport := k.mapper.MapPolicyReport(item.Object)
|
||||
k.debouncer.Add(report.LifecycleEvent{NewPolicyReport: preport, OldPolicyReport: &report.PolicyReport{}, Type: report.Deleted})
|
||||
}
|
||||
},
|
||||
UpdateFunc: func(oldObj, newObj interface{}) {
|
||||
if item, ok := newObj.(*unstructured.Unstructured); ok {
|
||||
preport := k.mapper.MapPolicyReport(item.Object)
|
||||
|
||||
var oreport *report.PolicyReport
|
||||
if oldItem, ok := oldObj.(*unstructured.Unstructured); ok {
|
||||
oreport = k.mapper.MapPolicyReport(oldItem.Object)
|
||||
}
|
||||
|
||||
k.debouncer.Add(report.LifecycleEvent{NewPolicyReport: preport, OldPolicyReport: oreport, Type: report.Updated})
|
||||
}
|
||||
},
|
||||
})
|
||||
|
||||
informer.Run(ctx.Done())
|
||||
}
|
||||
|
||||
func (c *policyReportClient) StartWatching(ctx context.Context) error {
|
||||
if c.started {
|
||||
return errors.New("StartWatching was already started")
|
||||
}
|
||||
func (k *k8sPolicyReportClient) handleCRDRegistration(ctx context.Context, informer cache.SharedIndexInformer, r schema.GroupVersionResource) {
|
||||
ticker := time.NewTicker(k.restartWatchOnFailure)
|
||||
defer ticker.Stop()
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return
|
||||
case <-ticker.C:
|
||||
if informer.HasSynced() {
|
||||
k.mx.Lock()
|
||||
k.found[r.String()] = r.String()
|
||||
k.mx.Unlock()
|
||||
|
||||
c.started = true
|
||||
|
||||
events, err := c.policyAPI.WatchPolicyReports(ctx)
|
||||
if err != nil {
|
||||
c.started = false
|
||||
return err
|
||||
}
|
||||
|
||||
go func() {
|
||||
for event := range events {
|
||||
c.debouncer.Add(event)
|
||||
}
|
||||
|
||||
close(c.debouncer.channel)
|
||||
}()
|
||||
|
||||
for event := range c.debouncer.ReportChan() {
|
||||
c.executeReportHandler(event.Type, event.Report)
|
||||
}
|
||||
|
||||
c.started = false
|
||||
|
||||
return errors.New("watching stopped")
|
||||
}
|
||||
|
||||
func (c *policyReportClient) cacheResults(opr report.PolicyReport) {
|
||||
for id := range opr.Results {
|
||||
c.resultCache.SetDefault(id, true)
|
||||
}
|
||||
}
|
||||
|
||||
func (c *policyReportClient) executeReportHandler(e watch.EventType, pr report.PolicyReport) {
|
||||
opr, ok := c.store.Get(pr.GetType(), pr.GetIdentifier())
|
||||
if !ok {
|
||||
opr = report.PolicyReport{}
|
||||
}
|
||||
|
||||
if len(opr.Results) > 0 {
|
||||
c.cacheResults(opr)
|
||||
}
|
||||
|
||||
wg := sync.WaitGroup{}
|
||||
wg.Add(len(c.callbacks))
|
||||
|
||||
for _, cb := range c.callbacks {
|
||||
go func(
|
||||
callback report.PolicyReportCallback,
|
||||
event watch.EventType,
|
||||
creport report.PolicyReport,
|
||||
oreport report.PolicyReport,
|
||||
) {
|
||||
callback(event, creport, oreport)
|
||||
wg.Done()
|
||||
}(cb, e, pr, opr)
|
||||
}
|
||||
|
||||
wg.Wait()
|
||||
|
||||
if e == watch.Deleted {
|
||||
c.store.Remove(pr.GetType(), pr.GetIdentifier())
|
||||
log.Printf("[INFO] Resource registered: %s\n", r.String())
|
||||
return
|
||||
}
|
||||
|
||||
c.store.Add(pr)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (c *policyReportClient) RegisterPolicyResultWatcher(skipExisting bool) {
|
||||
c.skipExisting = skipExisting
|
||||
|
||||
c.RegisterCallback(
|
||||
func(e watch.EventType, pr report.PolicyReport, or report.PolicyReport) {
|
||||
switch e {
|
||||
case watch.Added:
|
||||
if len(pr.Results) == 0 {
|
||||
break
|
||||
}
|
||||
|
||||
preExisted := pr.CreationTimestamp.Before(c.startUp)
|
||||
|
||||
if c.skipExisting && preExisted {
|
||||
break
|
||||
}
|
||||
|
||||
diff := pr.GetNewResults(or)
|
||||
|
||||
wg := sync.WaitGroup{}
|
||||
|
||||
for _, r := range diff {
|
||||
if _, found := c.resultCache.Get(r.GetIdentifier()); found {
|
||||
continue
|
||||
}
|
||||
|
||||
wg.Add(len(c.resultCallbacks))
|
||||
|
||||
for _, cb := range c.resultCallbacks {
|
||||
go func(callback report.PolicyResultCallback, result report.Result) {
|
||||
callback(result, preExisted)
|
||||
wg.Done()
|
||||
}(cb, r)
|
||||
}
|
||||
}
|
||||
|
||||
wg.Wait()
|
||||
case watch.Modified:
|
||||
if len(pr.Results) == 0 {
|
||||
break
|
||||
}
|
||||
|
||||
diff := pr.GetNewResults(or)
|
||||
|
||||
wg := sync.WaitGroup{}
|
||||
|
||||
for _, r := range diff {
|
||||
if _, found := c.resultCache.Get(r.GetIdentifier()); found {
|
||||
continue
|
||||
}
|
||||
|
||||
wg.Add(len(c.resultCallbacks))
|
||||
|
||||
for _, cb := range c.resultCallbacks {
|
||||
go func(callback report.PolicyResultCallback, result report.Result) {
|
||||
callback(result, false)
|
||||
wg.Done()
|
||||
}(cb, r)
|
||||
}
|
||||
}
|
||||
|
||||
wg.Wait()
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// NewPolicyReportClient creates a new PolicyReportClient based on the kubernetes go-client
|
||||
func NewPolicyReportClient(
|
||||
client PolicyReportAdapter,
|
||||
store *report.PolicyReportStore,
|
||||
startUp time.Time,
|
||||
resultCache *cache.Cache,
|
||||
) report.PolicyResultClient {
|
||||
return &policyReportClient{
|
||||
policyAPI: client,
|
||||
store: store,
|
||||
startUp: startUp,
|
||||
resultCache: resultCache,
|
||||
debouncer: newDebouncer(),
|
||||
// NewPolicyReportAdapter new Adapter for Policy Report Kubernetes API
|
||||
func NewPolicyReportClient(dynamic dynamic.Interface, mapper Mapper, restartWatchOnFailure time.Duration) report.PolicyReportClient {
|
||||
return &k8sPolicyReportClient{
|
||||
client: dynamic,
|
||||
mapper: mapper,
|
||||
mx: &sync.Mutex{},
|
||||
found: make(map[string]string),
|
||||
debouncer: NewDebouncer(time.Minute),
|
||||
restartWatchOnFailure: restartWatchOnFailure,
|
||||
}
|
||||
}
|
||||
|
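The rewritten client above replaces raw `Watch` calls with a dynamic shared informer factory and per-resource event handlers. A minimal, self-contained sketch of that pattern using only client-go APIs, assuming an in-cluster kubeconfig; the handler bodies are illustrative and not the project's actual callbacks:

```go
// Sketch of the informer-based watch pattern used above.
package main

import (
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	gvr := schema.GroupVersionResource{Group: "wgpolicyk8s.io", Version: "v1alpha2", Resource: "policyreports"}

	// Shared informer factory with a 30 minute resync, watching all namespaces.
	factory := dynamicinformer.NewFilteredDynamicSharedInformerFactory(client, 30*time.Minute, corev1.NamespaceAll, nil)
	informer := factory.ForResource(gvr).Informer()

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if item, ok := obj.(*unstructured.Unstructured); ok {
				log.Printf("added: %s/%s", item.GetNamespace(), item.GetName())
			}
		},
		DeleteFunc: func(obj interface{}) {
			if item, ok := obj.(*unstructured.Unstructured); ok {
				log.Printf("deleted: %s/%s", item.GetNamespace(), item.GetName())
			}
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	informer.Run(stop) // blocks until the stop channel is closed
}
```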
|
|
@ -2,300 +2,59 @@ package kubernetes_test
|
|||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"sync"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/kubernetes"
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/patrickmn/go-cache"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
)
|
||||
|
||||
func Test_PolicyWatcher(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
_, k8sCMClient := newFakeAPI()
|
||||
k8sCMClient.Create(ctx, configMap, metav1.CreateOptions{})
|
||||
kclient, rclient := NewFakeCilent()
|
||||
client := kubernetes.NewPolicyReportClient(kclient, NewMapper(), 100*time.Millisecond)
|
||||
|
||||
fakeAdapter := NewPolicyReportAdapter(NewMapper(k8sCMClient))
|
||||
client := kubernetes.NewPolicyReportClient(fakeAdapter, report.NewPolicyReportStore(), time.Now(), cache.New(cache.DefaultExpiration, time.Minute*5))
|
||||
eventChan := client.WatchPolicyReports(ctx)
|
||||
|
||||
client.RegisterPolicyResultWatcher(false)
|
||||
|
||||
wg := sync.WaitGroup{}
|
||||
wg.Add(2)
|
||||
|
||||
results := make([]report.Result, 0, 3)
|
||||
|
||||
client.RegisterPolicyResultCallback(func(r report.Result, b bool) {
|
||||
results = append(results, r)
|
||||
wg.Done()
|
||||
})
|
||||
|
||||
go client.StartWatching(ctx)
|
||||
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: policyMap})
|
||||
|
||||
wg.Wait()
|
||||
|
||||
if len(results) != 2 {
|
||||
t.Error("Should receive 2 Results from the Policy")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_PolicyWatcherTwice(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
_, k8sCMClient := newFakeAPI()
|
||||
k8sCMClient.Create(ctx, configMap, metav1.CreateOptions{})
|
||||
|
||||
fakeAdapter := NewPolicyReportAdapter(NewMapper(k8sCMClient))
|
||||
client := kubernetes.NewPolicyReportClient(fakeAdapter, report.NewPolicyReportStore(), time.Now(), cache.New(cache.DefaultExpiration, time.Minute*5))
|
||||
|
||||
go client.StartWatching(ctx)
|
||||
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
|
||||
err := client.StartWatching(ctx)
|
||||
if err == nil {
|
||||
t.Error("Second StartWatching call should return immediately with error")
|
||||
}
|
||||
}
|
||||
|
||||
var notSkippedPolicyMap = map[string]interface{}{
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "policy-report",
|
||||
"namespace": "test",
|
||||
"creationTimestamp": time.Now().Add(10 * time.Minute).Format("2006-01-02T15:04:05Z"),
|
||||
},
|
||||
"summary": map[string]interface{}{
|
||||
"pass": int64(1),
|
||||
"skip": int64(2),
|
||||
"warn": int64(3),
|
||||
"fail": int64(4),
|
||||
"error": int64(5),
|
||||
},
|
||||
"results": []interface{}{
|
||||
map[string]interface{}{
|
||||
"message": "message",
|
||||
"status": "fail",
|
||||
"scored": true,
|
||||
"policy": "not-skiped-policy-result",
|
||||
"rule": "app-label-required",
|
||||
"category": "test",
|
||||
"severity": "low",
|
||||
"resources": []interface{}{
|
||||
map[string]interface{}{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Deployment",
|
||||
"name": "nginx",
|
||||
"namespace": "test",
|
||||
"uid": "dfd57c50-f30c-4729-b63f-b1954d8988d1",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
func Test_PolicySkipExisting(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
_, k8sCMClient := newFakeAPI()
|
||||
k8sCMClient.Create(ctx, configMap, metav1.CreateOptions{})
|
||||
|
||||
fakeAdapter := NewPolicyReportAdapter(NewMapper(k8sCMClient))
|
||||
client := kubernetes.NewPolicyReportClient(fakeAdapter, report.NewPolicyReportStore(), time.Now(), cache.New(cache.DefaultExpiration, time.Minute*5))
|
||||
|
||||
client.RegisterPolicyResultWatcher(true)
|
||||
|
||||
wg := sync.WaitGroup{}
|
||||
wg.Add(1)
|
||||
|
||||
results := make([]report.Result, 0, 1)
|
||||
|
||||
client.RegisterPolicyResultCallback(func(r report.Result, b bool) {
|
||||
results = append(results, r)
|
||||
wg.Done()
|
||||
})
|
||||
|
||||
go client.StartWatching(ctx)
|
||||
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: policyMap})
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: notSkippedPolicyMap})
|
||||
|
||||
wg.Wait()
|
||||
|
||||
if len(results) != 1 {
|
||||
t.Error("Should receive one not skipped Result form notSkippedPolicyMap")
|
||||
}
|
||||
|
||||
if results[0].Policy != "not-skiped-policy-result" {
|
||||
t.Error("Should be 'not-skiped-policy-result'")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_PolicyWatcherError(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
_, k8sCMClient := newFakeAPI()
|
||||
k8sCMClient.Create(ctx, configMap, metav1.CreateOptions{})
|
||||
|
||||
fakeAdapter := NewPolicyReportAdapter(NewMapper(k8sCMClient))
|
||||
fakeAdapter.Error = errors.New("")
|
||||
|
||||
client := kubernetes.NewPolicyReportClient(fakeAdapter, report.NewPolicyReportStore(), time.Now(), cache.New(cache.DefaultExpiration, time.Minute*5))
|
||||
|
||||
client.RegisterPolicyResultWatcher(false)
|
||||
|
||||
err := client.StartWatching(ctx)
|
||||
if err == nil {
|
||||
t.Error("Shoud stop execution when error is returned")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_PolicyWatchDeleteEvent(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
_, k8sCMClient := newFakeAPI()
|
||||
k8sCMClient.Create(ctx, configMap, metav1.CreateOptions{})
|
||||
|
||||
fakeAdapter := NewPolicyReportAdapter(NewMapper(k8sCMClient))
|
||||
client := kubernetes.NewPolicyReportClient(fakeAdapter, report.NewPolicyReportStore(), time.Now(), cache.New(cache.DefaultExpiration, time.Minute*5))
|
||||
|
||||
client.RegisterPolicyResultWatcher(false)
|
||||
|
||||
wg := sync.WaitGroup{}
|
||||
wg.Add(2)
|
||||
|
||||
results := make([]report.Result, 0, 2)
|
||||
|
||||
client.RegisterPolicyResultCallback(func(r report.Result, b bool) {
|
||||
results = append(results, r)
|
||||
wg.Done()
|
||||
})
|
||||
|
||||
go client.StartWatching(ctx)
|
||||
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: policyMap})
|
||||
fakeAdapter.Watcher.Delete(&unstructured.Unstructured{Object: policyMap})
|
||||
|
||||
wg.Wait()
|
||||
|
||||
if len(results) != 2 {
|
||||
t.Error("Should receive initial 2 and no result from deletion")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_PolicyWatchModifiedEvent(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
_, k8sCMClient := newFakeAPI()
|
||||
k8sCMClient.Create(ctx, configMap, metav1.CreateOptions{})
|
||||
|
||||
fakeAdapter := NewPolicyReportAdapter(NewMapper(k8sCMClient))
|
||||
client := kubernetes.NewPolicyReportClient(fakeAdapter, report.NewPolicyReportStore(), time.Now(), cache.New(cache.DefaultExpiration, time.Minute*5))
|
||||
|
||||
client.RegisterPolicyResultWatcher(false)
|
||||
store := newStore(3)
|
||||
|
||||
wg := sync.WaitGroup{}
|
||||
wg.Add(3)
|
||||
|
||||
results := make([]report.Result, 0, 3)
|
||||
client.RegisterPolicyResultCallback(func(r report.Result, b bool) {
|
||||
results = append(results, r)
|
||||
go func() {
|
||||
for event := range eventChan {
|
||||
store.Add(event)
|
||||
wg.Done()
|
||||
})
|
||||
|
||||
go client.StartWatching(ctx)
|
||||
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: policyMap})
|
||||
|
||||
var policyMap2 = map[string]interface{}{
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "policy-report",
|
||||
"namespace": "test",
|
||||
"creationTimestamp": "2021-02-23T15:00:00Z",
|
||||
},
|
||||
"summary": map[string]interface{}{
|
||||
"pass": int64(1),
|
||||
"skip": int64(2),
|
||||
"warn": int64(3),
|
||||
"fail": int64(4),
|
||||
"error": int64(5),
|
||||
},
|
||||
"results": []interface{}{
|
||||
map[string]interface{}{
|
||||
"message": "message",
|
||||
"status": "fail",
|
||||
"scored": true,
|
||||
"policy": "required-label",
|
||||
"rule": "app-label-required",
|
||||
"category": "test",
|
||||
"severity": "medium",
|
||||
"resources": []interface{}{
|
||||
map[string]interface{}{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Deployment",
|
||||
"name": "nginx",
|
||||
"namespace": "test",
|
||||
"uid": "dfd57c50-f30c-4729-b63f-b1954d8988d1",
|
||||
},
|
||||
},
|
||||
},
|
||||
map[string]interface{}{
|
||||
"message": "message 2",
|
||||
"status": "fail",
|
||||
"scored": true,
|
||||
"policy": "priority-test",
|
||||
"resources": []interface{}{},
|
||||
},
|
||||
map[string]interface{}{
|
||||
"message": "message 3",
|
||||
"status": "pass",
|
||||
"scored": true,
|
||||
"policy": "priority-test",
|
||||
"resources": []interface{}{},
|
||||
},
|
||||
},
|
||||
}
|
||||
}()
|
||||
|
||||
fakeAdapter.Watcher.Modify(&unstructured.Unstructured{Object: policyMap2})
|
||||
rclient.Create(ctx, &unstructured.Unstructured{Object: policyMap}, metav1.CreateOptions{})
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
rclient.Update(ctx, &unstructured.Unstructured{Object: policyMap}, metav1.UpdateOptions{})
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
rclient.Delete(ctx, policyMap["metadata"].(map[string]interface{})["name"].(string), metav1.DeleteOptions{})
|
||||
|
||||
wg.Wait()
|
||||
|
||||
if len(results) != 3 {
|
||||
t.Error("Should receive initial 2 and 1 modification")
|
||||
if len(store.List()) != 3 {
|
||||
t.Error("Should receive the Added, Updated and Deleted Event")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_PolicyDelayReset(t *testing.T) {
|
||||
func Test_GetFoundResources(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
_, k8sCMClient := newFakeAPI()
|
||||
k8sCMClient.Create(ctx, configMap, metav1.CreateOptions{})
|
||||
kclient, _ := NewFakeCilent()
|
||||
client := kubernetes.NewPolicyReportClient(kclient, NewMapper(), 100*time.Millisecond)
|
||||
|
||||
fakeAdapter := NewPolicyReportAdapter(NewMapper(k8sCMClient))
|
||||
client := kubernetes.NewPolicyReportClient(fakeAdapter, report.NewPolicyReportStore(), time.Now(), cache.New(cache.DefaultExpiration, time.Minute*5))
|
||||
client.WatchPolicyReports(ctx)
|
||||
|
||||
client.RegisterPolicyResultWatcher(false)
|
||||
time.Sleep(1 * time.Second)
|
||||
|
||||
wg := sync.WaitGroup{}
|
||||
wg.Add(2)
|
||||
|
||||
client.RegisterCallback(func(e watch.EventType, r report.PolicyReport, o report.PolicyReport) {
|
||||
wg.Done()
|
||||
})
|
||||
|
||||
go client.StartWatching(ctx)
|
||||
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: policyMap})
|
||||
fakeAdapter.Watcher.Modify(&unstructured.Unstructured{Object: minPolicyMap})
|
||||
fakeAdapter.Watcher.Modify(&unstructured.Unstructured{Object: policyMap})
|
||||
fakeAdapter.Watcher.Delete(&unstructured.Unstructured{Object: policyMap})
|
||||
|
||||
wg.Wait()
|
||||
if len(client.GetFoundResources()) != 2 {
|
||||
t.Errorf("Should find PolicyReport and ClusterPolicyReport Resource")
|
||||
}
|
||||
}
|
||||
|
|
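The rewritten tests above drive the client through client-go's fake dynamic client instead of a hand-written adapter. A minimal sketch of that setup, assuming a reduced PolicyReport object with only a few metadata fields; the real fixtures in this commit carry full summaries and results:

```go
// Sketch of creating and reading an unstructured PolicyReport through the
// fake dynamic client used by the rewritten tests.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic/fake"
)

func main() {
	client := fake.NewSimpleDynamicClient(runtime.NewScheme())

	gvr := schema.GroupVersionResource{Group: "wgpolicyk8s.io", Version: "v1alpha2", Resource: "policyreports"}

	// An unstructured PolicyReport, reduced to a minimal metadata block.
	polr := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "wgpolicyk8s.io/v1alpha2",
		"kind":       "PolicyReport",
		"metadata": map[string]interface{}{
			"name":      "policy-report",
			"namespace": "test",
		},
	}}

	ctx := context.Background()
	if _, err := client.Resource(gvr).Namespace("test").Create(ctx, polr, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}

	got, err := client.Resource(gvr).Namespace("test").Get(ctx, "policy-report", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(got.GetName()) // policy-report
}
```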
|
@ -1,124 +0,0 @@
|
|||
package kubernetes
|
||||
|
||||
import (
|
||||
"context"
|
||||
"log"
|
||||
"sync"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
"k8s.io/client-go/dynamic"
|
||||
)
|
||||
|
||||
var (
|
||||
policyReportAlphaV1 = schema.GroupVersionResource{
|
||||
Group: "wgpolicyk8s.io",
|
||||
Version: "v1alpha1",
|
||||
Resource: "policyreports",
|
||||
}
|
||||
policyReportAlphaV2 = schema.GroupVersionResource{
|
||||
Group: "wgpolicyk8s.io",
|
||||
Version: "v1alpha2",
|
||||
Resource: "policyreports",
|
||||
}
|
||||
|
||||
clusterPolicyReportAlphaV1 = schema.GroupVersionResource{
|
||||
Group: "wgpolicyk8s.io",
|
||||
Version: "v1alpha1",
|
||||
Resource: "clusterpolicyreports",
|
||||
}
|
||||
clusterPolicyReportAlphaV2 = schema.GroupVersionResource{
|
||||
Group: "wgpolicyk8s.io",
|
||||
Version: "v1alpha2",
|
||||
Resource: "clusterpolicyreports",
|
||||
}
|
||||
)
|
||||
|
||||
// WatchEvent of PolicyReports
|
||||
type WatchEvent struct {
|
||||
Report report.PolicyReport
|
||||
Type watch.EventType
|
||||
}
|
||||
|
||||
// PolicyReportAdapter translates API responses to an internal struct
|
||||
type PolicyReportAdapter interface {
|
||||
WatchPolicyReports(ctx context.Context) (chan WatchEvent, error)
|
||||
GetFoundResources() map[string]string
|
||||
}
|
||||
|
||||
type k8sPolicyReportAdapter struct {
|
||||
client dynamic.Interface
|
||||
found map[string]string
|
||||
mapper Mapper
|
||||
mx *sync.Mutex
|
||||
}
|
||||
|
||||
func (k *k8sPolicyReportAdapter) GetFoundResources() map[string]string {
|
||||
return k.found
|
||||
}
|
||||
|
||||
func (k *k8sPolicyReportAdapter) WatchPolicyReports(ctx context.Context) (chan WatchEvent, error) {
|
||||
events := make(chan WatchEvent)
|
||||
|
||||
pr := []schema.GroupVersionResource{
|
||||
policyReportAlphaV2,
|
||||
policyReportAlphaV1,
|
||||
}
|
||||
|
||||
cpor := []schema.GroupVersionResource{
|
||||
clusterPolicyReportAlphaV2,
|
||||
clusterPolicyReportAlphaV1,
|
||||
}
|
||||
|
||||
for _, versions := range [][]schema.GroupVersionResource{pr, cpor} {
|
||||
go func(vs []schema.GroupVersionResource) {
|
||||
for {
|
||||
for _, resource := range vs {
|
||||
k.WatchCRD(ctx, resource, events)
|
||||
}
|
||||
}
|
||||
|
||||
}(versions)
|
||||
}
|
||||
|
||||
return events, nil
|
||||
}
|
||||
|
||||
func (k *k8sPolicyReportAdapter) WatchCRD(ctx context.Context, r schema.GroupVersionResource, events chan WatchEvent) {
|
||||
for {
|
||||
w, err := k.client.Resource(r).Watch(ctx, metav1.ListOptions{})
|
||||
if err != nil {
|
||||
k.mx.Lock()
|
||||
delete(k.found, r.String())
|
||||
k.mx.Unlock()
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
log.Printf("[INFO] Resource registered: %s\n", r.String())
|
||||
|
||||
k.mx.Lock()
|
||||
k.found[r.String()] = r.String()
|
||||
k.mx.Unlock()
|
||||
|
||||
for result := range w.ResultChan() {
|
||||
if item, ok := result.Object.(*unstructured.Unstructured); ok {
|
||||
preport := k.mapper.MapPolicyReport(item.Object)
|
||||
events <- WatchEvent{preport, result.Type}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// NewPolicyReportAdapter new Adapter for Policy Report Kubernetes API
|
||||
func NewPolicyReportAdapter(dynamic dynamic.Interface, mapper Mapper) PolicyReportAdapter {
|
||||
return &k8sPolicyReportAdapter{
|
||||
client: dynamic,
|
||||
mapper: mapper,
|
||||
mx: &sync.Mutex{},
|
||||
found: make(map[string]string),
|
||||
}
|
||||
}
|
|
@ -1,29 +0,0 @@
|
|||
package kubernetes_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/kubernetes"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
"k8s.io/client-go/dynamic/fake"
|
||||
)
|
||||
|
||||
func NewFakeClient(items ...runtime.Object) *fake.FakeDynamicClient {
|
||||
return fake.NewSimpleDynamicClient(runtime.NewScheme(), items...)
|
||||
}
|
||||
|
||||
func Test_WatchPolicyReports(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
dynamic := NewFakeClient()
|
||||
_, k8sCMClient := newFakeAPI()
|
||||
k8sCMClient.Create(ctx, configMap, metav1.CreateOptions{})
|
||||
|
||||
client := kubernetes.NewPolicyReportAdapter(dynamic, NewMapper(k8sCMClient))
|
||||
|
||||
_, err := client.WatchPolicyReports(ctx)
|
||||
if err != nil {
|
||||
t.Error("Unexpected WatchError")
|
||||
}
|
||||
}
|
|
@ -1,323 +0,0 @@
|
|||
package kubernetes_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"sync"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/kubernetes"
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/patrickmn/go-cache"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
v1 "k8s.io/client-go/kubernetes/typed/core/v1"
|
||||
)
|
||||
|
||||
type fakeClient struct {
|
||||
List []report.PolicyReport
|
||||
Watcher *watch.FakeWatcher
|
||||
Error error
|
||||
mapper kubernetes.Mapper
|
||||
}
|
||||
|
||||
func (f *fakeClient) GetFoundResources() map[string]string {
|
||||
return make(map[string]string)
|
||||
}
|
||||
|
||||
func (f *fakeClient) ListPolicyReports() ([]report.PolicyReport, error) {
|
||||
return f.List, f.Error
|
||||
}
|
||||
|
||||
func (f *fakeClient) ListClusterPolicyReports() ([]report.PolicyReport, error) {
|
||||
return f.List, f.Error
|
||||
}
|
||||
|
||||
func (f *fakeClient) WatchPolicyReports(_ context.Context) (chan kubernetes.WatchEvent, error) {
|
||||
channel := make(chan kubernetes.WatchEvent)
|
||||
|
||||
go func() {
|
||||
for result := range f.Watcher.ResultChan() {
|
||||
if item, ok := result.Object.(*unstructured.Unstructured); ok {
|
||||
channel <- kubernetes.WatchEvent{
|
||||
Report: f.mapper.MapPolicyReport(item.Object),
|
||||
Type: result.Type,
|
||||
}
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
return channel, f.Error
|
||||
}
|
||||
|
||||
func NewPolicyReportAdapter(mapper kubernetes.Mapper) *fakeClient {
|
||||
return &fakeClient{
|
||||
List: make([]report.PolicyReport, 0),
|
||||
Watcher: watch.NewFake(),
|
||||
mapper: mapper,
|
||||
}
|
||||
}
|
||||
|
||||
func NewMapper(k8sCMClient v1.ConfigMapInterface) kubernetes.Mapper {
|
||||
return kubernetes.NewMapper(make(map[string]string), kubernetes.NewConfigMapAdapter(k8sCMClient))
|
||||
}
|
||||
|
||||
func Test_ResultClient_RegisterPolicyResultWatcher(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
_, k8sCMClient := newFakeAPI()
|
||||
k8sCMClient.Create(ctx, configMap, metav1.CreateOptions{})
|
||||
|
||||
fakeAdapter := NewPolicyReportAdapter(NewMapper(k8sCMClient))
|
||||
client := kubernetes.NewPolicyReportClient(fakeAdapter, report.NewPolicyReportStore(), time.Now(), cache.New(cache.DefaultExpiration, time.Minute*5))
|
||||
|
||||
client.RegisterPolicyResultWatcher(false)
|
||||
|
||||
wg := sync.WaitGroup{}
|
||||
wg.Add(3)
|
||||
|
||||
results := make([]report.Result, 0, 3)
|
||||
|
||||
client.RegisterPolicyResultCallback(func(r report.Result, b bool) {
|
||||
results = append(results, r)
|
||||
wg.Done()
|
||||
})
|
||||
|
||||
go client.StartWatching(ctx)
|
||||
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: clusterPolicyMap})
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: policyMap})
|
||||
|
||||
wg.Wait()
|
||||
|
||||
if len(results) != 3 {
|
||||
t.Error("Should receive 3 Result from all PolicyReports")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_ResultClient_SkipCachedResults(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
_, k8sCMClient := newFakeAPI()
|
||||
k8sCMClient.Create(ctx, configMap, metav1.CreateOptions{})
|
||||
|
||||
fakeAdapter := NewPolicyReportAdapter(NewMapper(k8sCMClient))
|
||||
client := kubernetes.NewPolicyReportClient(fakeAdapter, report.NewPolicyReportStore(), time.Now(), cache.New(cache.DefaultExpiration, time.Minute*5))
|
||||
|
||||
client.RegisterPolicyResultWatcher(false)
|
||||
|
||||
wg := sync.WaitGroup{}
|
||||
wg.Add(3)
|
||||
|
||||
results := make([]report.Result, 0, 3)
|
||||
|
||||
client.RegisterPolicyResultCallback(func(r report.Result, b bool) {
|
||||
results = append(results, r)
|
||||
wg.Done()
|
||||
})
|
||||
|
||||
go client.StartWatching(ctx)
|
||||
|
||||
var policyMap1 = map[string]interface{}{
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "policy-report",
|
||||
"namespace": "test",
|
||||
"creationTimestamp": "2021-02-23T15:00:00Z",
|
||||
},
|
||||
"summary": map[string]interface{}{
|
||||
"pass": int64(1),
|
||||
"skip": int64(2),
|
||||
"warn": int64(3),
|
||||
"fail": int64(4),
|
||||
"error": int64(5),
|
||||
},
|
||||
"results": []interface{}{
|
||||
map[string]interface{}{
|
||||
"message": "message",
|
||||
"status": "fail",
|
||||
"scored": true,
|
||||
"policy": "required-label",
|
||||
"rule": "app-label-required",
|
||||
"timestamp": map[string]interface{}{
|
||||
"seconds": 1614093000,
|
||||
},
|
||||
"category": "test",
|
||||
"severity": "high",
|
||||
"resources": []interface{}{
|
||||
map[string]interface{}{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Deployment",
|
||||
"name": "nginx",
|
||||
"namespace": "test",
|
||||
"uid": "dfd57c50-f30c-4729-b63f-b1954d8988d1",
|
||||
},
|
||||
},
|
||||
"properties": map[string]interface{}{
|
||||
"version": "1.2.0",
|
||||
},
|
||||
},
|
||||
map[string]interface{}{
|
||||
"message": "message 2",
|
||||
"status": "fail",
|
||||
"scored": true,
|
||||
"timestamp": map[string]interface{}{
|
||||
"seconds": int64(1614093000),
|
||||
},
|
||||
"policy": "priority-test",
|
||||
"resources": []interface{}{},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
var policyMap2 = map[string]interface{}{
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "policy-report",
|
||||
"namespace": "test",
|
||||
"creationTimestamp": "2021-02-23T15:00:00Z",
|
||||
},
|
||||
"summary": map[string]interface{}{
|
||||
"pass": int64(1),
|
||||
"skip": int64(2),
|
||||
"warn": int64(3),
|
||||
"fail": int64(4),
|
||||
"error": int64(5),
|
||||
},
|
||||
"results": []interface{}{},
|
||||
}
|
||||
|
||||
var clusterPolicyMap2 = map[string]interface{}{
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "clusterpolicy-report",
|
||||
"creationTimestamp": "2021-02-23T15:00:00Z",
|
||||
},
|
||||
"summary": map[string]interface{}{
|
||||
"pass": int64(0),
|
||||
"skip": int64(0),
|
||||
"warn": int64(0),
|
||||
"fail": int64(0),
|
||||
"error": int64(0),
|
||||
},
|
||||
"results": []interface{}{},
|
||||
}
|
||||
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: clusterPolicyMap})
|
||||
fakeAdapter.Watcher.Modify(&unstructured.Unstructured{Object: clusterPolicyMap2})
|
||||
fakeAdapter.Watcher.Modify(&unstructured.Unstructured{Object: clusterPolicyMap})
|
||||
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: policyMap})
|
||||
fakeAdapter.Watcher.Modify(&unstructured.Unstructured{Object: policyMap2})
|
||||
fakeAdapter.Watcher.Modify(&unstructured.Unstructured{Object: policyMap1})
|
||||
fakeAdapter.Watcher.Modify(&unstructured.Unstructured{Object: policyMap})
|
||||
|
||||
wg.Wait()
|
||||
|
||||
if len(results) != 3 {
|
||||
t.Error("Should receive 3 Result from none empty PolicyReport and ClusterPolicyReport Modify")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_ResultClient_SkipReportsCleanUpEvents(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
_, k8sCMClient := newFakeAPI()
|
||||
k8sCMClient.Create(ctx, configMap, metav1.CreateOptions{})
|
||||
|
||||
fakeAdapter := NewPolicyReportAdapter(NewMapper(k8sCMClient))
|
||||
client := kubernetes.NewPolicyReportClient(fakeAdapter, report.NewPolicyReportStore(), time.Now(), cache.New(cache.DefaultExpiration, time.Minute*5))
|
||||
|
||||
client.RegisterPolicyResultWatcher(false)
|
||||
|
||||
wg := sync.WaitGroup{}
|
||||
wg.Add(3)
|
||||
|
||||
results := make([]report.Result, 0, 3)
|
||||
|
||||
client.RegisterPolicyResultCallback(func(r report.Result, b bool) {
|
||||
results = append(results, r)
|
||||
wg.Done()
|
||||
})
|
||||
|
||||
go client.StartWatching(ctx)
|
||||
|
||||
var policyMap2 = map[string]interface{}{
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "policy-report",
|
||||
"namespace": "test",
|
||||
"creationTimestamp": "2021-02-23T15:00:00Z",
|
||||
},
|
||||
"summary": map[string]interface{}{
|
||||
"pass": int64(0),
|
||||
"skip": int64(0),
|
||||
"warn": int64(0),
|
||||
"fail": int64(0),
|
||||
"error": int64(0),
|
||||
},
|
||||
"results": []interface{}{},
|
||||
}
|
||||
|
||||
var clusterPolicyMap2 = map[string]interface{}{
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "clusterpolicy-report",
|
||||
"creationTimestamp": "2021-02-23T15:00:00Z",
|
||||
},
|
||||
"summary": map[string]interface{}{
|
||||
"pass": int64(0),
|
||||
"skip": int64(0),
|
||||
"warn": int64(0),
|
||||
"fail": int64(0),
|
||||
"error": int64(0),
|
||||
},
|
||||
"results": []interface{}{},
|
||||
}
|
||||
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: clusterPolicyMap})
|
||||
fakeAdapter.Watcher.Modify(&unstructured.Unstructured{Object: clusterPolicyMap2})
|
||||
fakeAdapter.Watcher.Modify(&unstructured.Unstructured{Object: clusterPolicyMap})
|
||||
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: policyMap})
|
||||
fakeAdapter.Watcher.Modify(&unstructured.Unstructured{Object: policyMap2})
|
||||
fakeAdapter.Watcher.Modify(&unstructured.Unstructured{Object: policyMap})
|
||||
|
||||
wg.Wait()
|
||||
|
||||
if len(results) != 3 {
|
||||
t.Error("Should receive 3 Results from the initial add events, not from the cleanup modify events")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_ResultClient_SkipReportsReconnectEvents(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
_, k8sCMClient := newFakeAPI()
|
||||
k8sCMClient.Create(ctx, configMap, metav1.CreateOptions{})
|
||||
|
||||
fakeAdapter := NewPolicyReportAdapter(NewMapper(k8sCMClient))
|
||||
client := kubernetes.NewPolicyReportClient(fakeAdapter, report.NewPolicyReportStore(), time.Now(), cache.New(cache.DefaultExpiration, time.Minute*5))
|
||||
|
||||
client.RegisterPolicyResultWatcher(false)
|
||||
|
||||
wg := sync.WaitGroup{}
|
||||
wg.Add(3)
|
||||
|
||||
results := make([]report.Result, 0, 3)
|
||||
|
||||
client.RegisterPolicyResultCallback(func(r report.Result, b bool) {
|
||||
results = append(results, r)
|
||||
wg.Done()
|
||||
})
|
||||
|
||||
go client.StartWatching(ctx)
|
||||
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: clusterPolicyMap})
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: clusterPolicyMap})
|
||||
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: policyMap})
|
||||
fakeAdapter.Watcher.Add(&unstructured.Unstructured{Object: policyMap})
|
||||
|
||||
wg.Wait()
|
||||
|
||||
if len(results) != 3 {
|
||||
t.Error("Should receive 3 Results from the initial add events, not from the restart events")
|
||||
}
|
||||
}
|
pkg/listener/fixture_test.go (new file, 75 lines)
|
@ -0,0 +1,75 @@
|
|||
package listener_test
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
)
|
||||
|
||||
var result1 = &report.Result{
|
||||
ID: "123",
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "require-requests-and-limits-required",
|
||||
Rule: "autogen-check-for-requests-and-limits",
|
||||
Priority: report.ErrorPriority,
|
||||
Status: report.Fail,
|
||||
Category: "Best Practices",
|
||||
Severity: report.High,
|
||||
Scored: true,
|
||||
Source: "Kyverno",
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Deployment",
|
||||
Name: "nginx",
|
||||
Namespace: "test",
|
||||
UID: "536ab69f-1b3c-4bd9-9ba4-274a56188409",
|
||||
},
|
||||
}
|
||||
|
||||
var result2 = &report.Result{
|
||||
ID: "124",
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "require-requests-and-limits-required",
|
||||
Rule: "autogen-check-for-requests-and-limits",
|
||||
Priority: report.WarningPriority,
|
||||
Status: report.Pass,
|
||||
Category: "Best Practices",
|
||||
Scored: true,
|
||||
Source: "Kyverno",
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Pod",
|
||||
Name: "nginx",
|
||||
Namespace: "test",
|
||||
UID: "536ab69f-1b3c-4bd9-9ba4-274a56188419",
|
||||
},
|
||||
}
|
||||
|
||||
var preport1 = &report.PolicyReport{
|
||||
ID: report.GeneratePolicyReportID("polr-test", "test"),
|
||||
Name: "polr-test",
|
||||
Namespace: "test",
|
||||
Results: map[string]*report.Result{
|
||||
result1.GetIdentifier(): result1,
|
||||
},
|
||||
Summary: &report.Summary{Fail: 1},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
||||
|
||||
var preport2 = &report.PolicyReport{
|
||||
ID: report.GeneratePolicyReportID("polr-test", "test"),
|
||||
Name: "polr-test",
|
||||
Namespace: "test",
|
||||
Results: map[string]*report.Result{
|
||||
result1.GetIdentifier(): result1,
|
||||
result2.GetIdentifier(): result2,
|
||||
},
|
||||
Summary: &report.Summary{Fail: 1, Pass: 1},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
||||
|
||||
var creport = &report.PolicyReport{
|
||||
Name: "cpolr-test",
|
||||
Summary: &report.Summary{},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
pkg/listener/metrics.go (new file, 20 lines)
|
@ -0,0 +1,20 @@
|
|||
package listener
|
||||
|
||||
import (
|
||||
"github.com/kyverno/policy-reporter/pkg/listener/metrics"
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
)
|
||||
|
||||
// NewMetricsListener for PolicyReport watch.Events
|
||||
func NewMetricsListener() report.PolicyReportListener {
|
||||
pCallback := metrics.CreatePolicyReportMetricsListener()
|
||||
cCallback := metrics.CreateClusterPolicyReportMetricsListener()
|
||||
|
||||
return func(event report.LifecycleEvent) {
|
||||
if event.NewPolicyReport.Namespace == "" {
|
||||
cCallback(event)
|
||||
} else {
|
||||
pCallback(event)
|
||||
}
|
||||
}
|
||||
}
|
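`NewMetricsListener` above routes events by the report's `Namespace` field. A usage sketch under the package layout introduced in this commit; the fixture values are illustrative only:

```go
// Usage sketch: one listener instance handles namespaced and cluster-wide
// reports, routing on the Namespace field of the new PolicyReport.
package main

import (
	"time"

	"github.com/kyverno/policy-reporter/pkg/listener"
	"github.com/kyverno/policy-reporter/pkg/report"
)

func main() {
	handle := listener.NewMetricsListener()

	// Namespaced report -> policy_report_summary / policy_report_result gauges.
	handle(report.LifecycleEvent{
		Type: report.Added,
		NewPolicyReport: &report.PolicyReport{
			Name:              "polr-test",
			Namespace:         "test",
			Summary:           &report.Summary{Fail: 1},
			Results:           map[string]*report.Result{},
			CreationTimestamp: time.Now(),
		},
		OldPolicyReport: &report.PolicyReport{},
	})

	// Empty Namespace -> treated as a ClusterPolicyReport and routed to the
	// cluster_policy_report_* gauges instead.
	handle(report.LifecycleEvent{
		Type: report.Added,
		NewPolicyReport: &report.PolicyReport{
			Name:              "cpolr-test",
			Summary:           &report.Summary{},
			Results:           map[string]*report.Result{},
			CreationTimestamp: time.Now(),
		},
		OldPolicyReport: &report.PolicyReport{},
	})
}
```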
pkg/listener/metrics/cluster_policy_report.go (new file, 97 lines)
|
@ -0,0 +1,97 @@
|
|||
package metrics
|
||||
|
||||
import (
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/client_golang/prometheus/promauto"
|
||||
)
|
||||
|
||||
var clusterPolicyGauge = promauto.NewGaugeVec(prometheus.GaugeOpts{
|
||||
Name: "cluster_policy_report_summary",
|
||||
Help: "Summary of all ClusterPolicyReports",
|
||||
}, []string{"name", "status"})
|
||||
|
||||
var clusterRuleGauge = promauto.NewGaugeVec(prometheus.GaugeOpts{
|
||||
Name: "cluster_policy_report_result",
|
||||
Help: "List of all ClusterPolicyReport Results",
|
||||
}, []string{"rule", "policy", "report", "kind", "name", "status", "severity", "category"})
|
||||
|
||||
func CreateClusterPolicyReportMetricsListener() report.PolicyReportListener {
|
||||
prometheus.Register(clusterPolicyGauge)
|
||||
prometheus.Register(clusterRuleGauge)
|
||||
|
||||
var newReport *report.PolicyReport
|
||||
var oldReport *report.PolicyReport
|
||||
|
||||
return func(event report.LifecycleEvent) {
|
||||
newReport = event.NewPolicyReport
|
||||
oldReport = event.OldPolicyReport
|
||||
|
||||
switch event.Type {
|
||||
case report.Added:
|
||||
updateClusterPolicyGauge(newReport)
|
||||
|
||||
for _, result := range newReport.Results {
|
||||
clusterRuleGauge.With(generateClusterResultLabels(newReport, result)).Set(1)
|
||||
}
|
||||
case report.Updated:
|
||||
updateClusterPolicyGauge(newReport)
|
||||
|
||||
for _, result := range oldReport.Results {
|
||||
clusterRuleGauge.Delete(generateClusterResultLabels(oldReport, result))
|
||||
}
|
||||
|
||||
for _, result := range newReport.Results {
|
||||
clusterRuleGauge.With(generateClusterResultLabels(newReport, result)).Set(1)
|
||||
}
|
||||
case report.Deleted:
|
||||
clusterPolicyGauge.DeleteLabelValues(newReport.Name, "Pass")
|
||||
clusterPolicyGauge.DeleteLabelValues(newReport.Name, "Fail")
|
||||
clusterPolicyGauge.DeleteLabelValues(newReport.Name, "Warn")
|
||||
clusterPolicyGauge.DeleteLabelValues(newReport.Name, "Error")
|
||||
clusterPolicyGauge.DeleteLabelValues(newReport.Name, "Skip")
|
||||
|
||||
for _, result := range newReport.Results {
|
||||
clusterRuleGauge.Delete(generateClusterResultLabels(newReport, result))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func generateClusterResultLabels(newReport *report.PolicyReport, result *report.Result) prometheus.Labels {
|
||||
labels := prometheus.Labels{
|
||||
"rule": result.Rule,
|
||||
"policy": result.Policy,
|
||||
"report": newReport.Name,
|
||||
"kind": "",
|
||||
"name": "",
|
||||
"status": result.Status,
|
||||
"severity": result.Severity,
|
||||
"category": result.Category,
|
||||
}
|
||||
|
||||
if result.HasResource() {
|
||||
labels["kind"] = result.Resource.Kind
|
||||
labels["name"] = result.Resource.Name
|
||||
}
|
||||
|
||||
return labels
|
||||
}
|
||||
|
||||
func updateClusterPolicyGauge(newReport *report.PolicyReport) {
|
||||
clusterPolicyGauge.
|
||||
WithLabelValues(newReport.Name, "Pass").
|
||||
Set(float64(newReport.Summary.Pass))
|
||||
clusterPolicyGauge.
|
||||
WithLabelValues(newReport.Name, "Fail").
|
||||
Set(float64(newReport.Summary.Fail))
|
||||
clusterPolicyGauge.
|
||||
WithLabelValues(newReport.Name, "Warn").
|
||||
Set(float64(newReport.Summary.Warn))
|
||||
clusterPolicyGauge.
|
||||
WithLabelValues(newReport.Name, "Error").
|
||||
Set(float64(newReport.Summary.Error))
|
||||
clusterPolicyGauge.
|
||||
WithLabelValues(newReport.Name, "Skip").
|
||||
Set(float64(newReport.Summary.Skip))
|
||||
}
|
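The listener registers its gauges with the default Prometheus registry, so tests (like the ones following in this diff) can assert on them through `prometheus.DefaultGatherer`. A minimal sketch of that inspection:

```go
// Sketch of inspecting the exported gauges, mirroring the Gather() calls in
// the metric tests of this commit.
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Gather everything registered with the default registry, including the
	// cluster_policy_report_summary and cluster_policy_report_result gauges
	// once the listener above has been registered and invoked.
	families, err := prometheus.DefaultGatherer.Gather()
	if err != nil {
		log.Fatal(err)
	}

	for _, family := range families {
		if family.GetName() != "cluster_policy_report_summary" {
			continue
		}
		for _, metric := range family.GetMetric() {
			labels := map[string]string{}
			for _, pair := range metric.GetLabel() {
				labels[pair.GetName()] = pair.GetValue()
			}
			fmt.Println(labels, metric.GetGauge().GetValue())
		}
	}
}
```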
|
@ -5,38 +5,43 @@ import (
|
|||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/metrics"
|
||||
"github.com/kyverno/policy-reporter/pkg/listener/metrics"
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
ioprometheusclient "github.com/prometheus/client_model/go"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
)
|
||||
|
||||
var creport = report.PolicyReport{
|
||||
var creport = &report.PolicyReport{
|
||||
Name: "cpolr-test",
|
||||
Results: make(map[string]report.Result, 0),
|
||||
Summary: report.Summary{},
|
||||
Results: make(map[string]*report.Result),
|
||||
Summary: &report.Summary{},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
||||
|
||||
func Test_ClusterPolicyReportMetricGeneration(t *testing.T) {
|
||||
report1 := creport
|
||||
report1.Summary = report.Summary{Pass: 1, Fail: 1}
|
||||
report1.Results = map[string]report.Result{
|
||||
report1 := &report.PolicyReport{
|
||||
Name: "cpolr-test",
|
||||
Summary: &report.Summary{Pass: 1, Fail: 1},
|
||||
CreationTimestamp: time.Now(),
|
||||
Results: map[string]*report.Result{
|
||||
result1.GetIdentifier(): result1,
|
||||
result2.GetIdentifier(): result2,
|
||||
},
|
||||
}
|
||||
|
||||
report2 := creport
|
||||
report2.Summary = report.Summary{Pass: 0, Fail: 1}
|
||||
report2.Results = map[string]report.Result{
|
||||
report2 := &report.PolicyReport{
|
||||
Name: "cpolr-test",
|
||||
Summary: &report.Summary{Pass: 0, Fail: 1},
|
||||
CreationTimestamp: time.Now(),
|
||||
Results: map[string]*report.Result{
|
||||
result1.GetIdentifier(): result1,
|
||||
},
|
||||
}
|
||||
|
||||
handler := metrics.CreateMetricsCallback()
|
||||
handler := metrics.CreateClusterPolicyReportMetricsListener()
|
||||
|
||||
t.Run("Added Metric", func(t *testing.T) {
|
||||
handler(watch.Added, report1, report.PolicyReport{})
|
||||
handler(report.LifecycleEvent{Type: report.Added, NewPolicyReport: report1, OldPolicyReport: &report.PolicyReport{}})
|
||||
|
||||
metricFam, err := prometheus.DefaultGatherer.Gather()
|
||||
if err != nil {
|
||||
|
@ -81,8 +86,8 @@ func Test_ClusterPolicyReportMetricGeneration(t *testing.T) {
|
|||
})
|
||||
|
||||
t.Run("Modified Metric", func(t *testing.T) {
|
||||
handler(watch.Added, report1, report.PolicyReport{})
|
||||
handler(watch.Modified, report2, report1)
|
||||
handler(report.LifecycleEvent{Type: report.Added, NewPolicyReport: report1, OldPolicyReport: &report.PolicyReport{}})
|
||||
handler(report.LifecycleEvent{Type: report.Updated, NewPolicyReport: report2, OldPolicyReport: report1})
|
||||
|
||||
metricFam, err := prometheus.DefaultGatherer.Gather()
|
||||
if err != nil {
|
||||
|
@ -127,9 +132,9 @@ func Test_ClusterPolicyReportMetricGeneration(t *testing.T) {
|
|||
})
|
||||
|
||||
t.Run("Deleted Metric", func(t *testing.T) {
|
||||
handler(watch.Added, report1, report.PolicyReport{})
|
||||
handler(watch.Modified, report2, report1)
|
||||
handler(watch.Deleted, report2, report2)
|
||||
handler(report.LifecycleEvent{Type: report.Added, NewPolicyReport: report1, OldPolicyReport: &report.PolicyReport{}})
|
||||
handler(report.LifecycleEvent{Type: report.Updated, NewPolicyReport: report2, OldPolicyReport: report1})
|
||||
handler(report.LifecycleEvent{Type: report.Deleted, NewPolicyReport: report2, OldPolicyReport: &report.PolicyReport{}})
|
||||
|
||||
metricFam, err := prometheus.DefaultGatherer.Gather()
|
||||
if err != nil {
|
||||
|
@ -151,7 +156,7 @@ func Test_ClusterPolicyReportMetricGeneration(t *testing.T) {
|
|||
|
||||
func testClusterSummaryMetricLabels(
|
||||
metric *ioprometheusclient.Metric,
|
||||
preport report.PolicyReport,
|
||||
preport *report.PolicyReport,
|
||||
status string,
|
||||
gauge float64,
|
||||
) error {
|
||||
|
@ -176,7 +181,7 @@ func testClusterSummaryMetricLabels(
|
|||
return nil
|
||||
}
|
||||
|
||||
func testClusterResultMetricLabels(metric *ioprometheusclient.Metric, result report.Result) error {
|
||||
func testClusterResultMetricLabels(metric *ioprometheusclient.Metric, result *report.Result) error {
|
||||
if name := *metric.Label[0].Name; name != "category" {
|
||||
return fmt.Errorf("unexpected Name Label: %s", name)
|
||||
}
|
pkg/listener/metrics/policy_report.go (new file, 98 lines)
|
@ -0,0 +1,98 @@
|
|||
package metrics
|
||||
|
||||
import (
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/client_golang/prometheus/promauto"
|
||||
)
|
||||
|
||||
var policyGauge = promauto.NewGaugeVec(prometheus.GaugeOpts{
|
||||
Name: "policy_report_summary",
|
||||
Help: "Summary of all PolicyReports",
|
||||
}, []string{"namespace", "name", "status"})
|
||||
|
||||
var ruleGauge = promauto.NewGaugeVec(prometheus.GaugeOpts{
|
||||
Name: "policy_report_result",
|
||||
Help: "List of all PolicyReport Results",
|
||||
}, []string{"namespace", "rule", "policy", "report", "kind", "name", "status", "severity", "category"})
|
||||
|
||||
func CreatePolicyReportMetricsListener() report.PolicyReportListener {
|
||||
prometheus.Register(policyGauge)
|
||||
prometheus.Register(ruleGauge)
|
||||
|
||||
var newReport *report.PolicyReport
|
||||
var oldReport *report.PolicyReport
|
||||
|
||||
return func(event report.LifecycleEvent) {
|
||||
newReport = event.NewPolicyReport
|
||||
oldReport = event.OldPolicyReport
|
||||
|
||||
switch event.Type {
|
||||
case report.Added:
|
||||
updatePolicyGauge(newReport)
|
||||
|
||||
for _, result := range newReport.Results {
|
||||
ruleGauge.With(generateResultLabels(newReport, result)).Set(1)
|
||||
}
|
||||
case report.Updated:
|
||||
updatePolicyGauge(newReport)
|
||||
|
||||
for _, result := range oldReport.Results {
|
||||
ruleGauge.Delete(generateResultLabels(oldReport, result))
|
||||
}
|
||||
|
||||
for _, result := range newReport.Results {
|
||||
ruleGauge.With(generateResultLabels(newReport, result)).Set(1)
|
||||
}
|
||||
case report.Deleted:
|
||||
policyGauge.DeleteLabelValues(newReport.Namespace, newReport.Name, "Pass")
|
||||
policyGauge.DeleteLabelValues(newReport.Namespace, newReport.Name, "Fail")
|
||||
policyGauge.DeleteLabelValues(newReport.Namespace, newReport.Name, "Warn")
|
||||
policyGauge.DeleteLabelValues(newReport.Namespace, newReport.Name, "Error")
|
||||
policyGauge.DeleteLabelValues(newReport.Namespace, newReport.Name, "Skip")
|
||||
|
||||
for _, result := range newReport.Results {
|
||||
ruleGauge.Delete(generateResultLabels(newReport, result))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func generateResultLabels(report *report.PolicyReport, result *report.Result) prometheus.Labels {
|
||||
labels := prometheus.Labels{
|
||||
"namespace": report.Namespace,
|
||||
"rule": result.Rule,
|
||||
"policy": result.Policy,
|
||||
"report": report.Name,
|
||||
"kind": "",
|
||||
"name": "",
|
||||
"status": result.Status,
|
||||
"severity": result.Severity,
|
||||
"category": result.Category,
|
||||
}
|
||||
|
||||
if result.HasResource() {
|
||||
labels["kind"] = result.Resource.Kind
|
||||
labels["name"] = result.Resource.Name
|
||||
}
|
||||
|
||||
return labels
|
||||
}
|
||||
|
||||
func updatePolicyGauge(newReport *report.PolicyReport) {
|
||||
policyGauge.
|
||||
WithLabelValues(newReport.Namespace, newReport.Name, "Pass").
|
||||
Set(float64(newReport.Summary.Pass))
|
||||
policyGauge.
|
||||
WithLabelValues(newReport.Namespace, newReport.Name, "Fail").
|
||||
Set(float64(newReport.Summary.Fail))
|
||||
policyGauge.
|
||||
WithLabelValues(newReport.Namespace, newReport.Name, "Warn").
|
||||
Set(float64(newReport.Summary.Warn))
|
||||
policyGauge.
|
||||
WithLabelValues(newReport.Namespace, newReport.Name, "Error").
|
||||
Set(float64(newReport.Summary.Error))
|
||||
policyGauge.
|
||||
WithLabelValues(newReport.Namespace, newReport.Name, "Skip").
|
||||
Set(float64(newReport.Summary.Skip))
|
||||
}
|
|
@ -5,14 +5,14 @@ import (
|
|||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/metrics"
|
||||
"github.com/kyverno/policy-reporter/pkg/listener/metrics"
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
ioprometheusclient "github.com/prometheus/client_model/go"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
)
|
||||
|
||||
var result1 = report.Result{
|
||||
var result1 = &report.Result{
|
||||
ID: "1",
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "require-requests-and-limits-required",
|
||||
Rule: "autogen-check-for-requests-and-limits",
|
||||
|
@ -21,7 +21,7 @@ var result1 = report.Result{
|
|||
Severity: report.High,
|
||||
Category: "resources",
|
||||
Scored: true,
|
||||
Resource: report.Resource{
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Deployment",
|
||||
Name: "nginx",
|
||||
|
@ -30,7 +30,8 @@ var result1 = report.Result{
|
|||
},
|
||||
}
|
||||
|
||||
var result2 = report.Result{
|
||||
var result2 = &report.Result{
|
||||
ID: "2",
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "check-requests-and-limits-required",
|
||||
Rule: "check-for-requests-and-limits",
|
||||
|
@ -38,7 +39,7 @@ var result2 = report.Result{
|
|||
Status: report.Pass,
|
||||
Category: "resources",
|
||||
Scored: true,
|
||||
Resource: report.Resource{
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Deployment",
|
||||
Name: "nginx",
|
||||
|
@ -47,32 +48,43 @@ var result2 = report.Result{
|
|||
},
|
||||
}
|
||||
|
||||
var preport = report.PolicyReport{
|
||||
var preport = &report.PolicyReport{
|
||||
ID: "1",
|
||||
Name: "polr-test",
|
||||
Namespace: "test",
|
||||
Results: make(map[string]report.Result, 0),
|
||||
Summary: report.Summary{},
|
||||
Results: make(map[string]*report.Result),
|
||||
Summary: &report.Summary{},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
||||
|
||||
func Test_PolicyReportMetricGeneration(t *testing.T) {
|
||||
report1 := preport
|
||||
report1.Summary = report.Summary{Pass: 1, Fail: 1}
|
||||
report1.Results = map[string]report.Result{
|
||||
report1 := &report.PolicyReport{
|
||||
ID: "1",
|
||||
Name: "polr-test",
|
||||
Namespace: "test",
|
||||
Summary: &report.Summary{Pass: 1, Fail: 1},
|
||||
CreationTimestamp: time.Now(),
|
||||
Results: map[string]*report.Result{
|
||||
result1.GetIdentifier(): result1,
|
||||
result2.GetIdentifier(): result2,
|
||||
},
|
||||
}
|
||||
|
||||
report2 := preport
|
||||
report2.Summary = report.Summary{Pass: 0, Fail: 1}
|
||||
report2.Results = map[string]report.Result{
|
||||
report2 := &report.PolicyReport{
|
||||
ID: "1",
|
||||
Name: "polr-test",
|
||||
Namespace: "test",
|
||||
Summary: &report.Summary{Pass: 0, Fail: 1},
|
||||
CreationTimestamp: time.Now(),
|
||||
Results: map[string]*report.Result{
|
||||
result1.GetIdentifier(): result1,
|
||||
},
|
||||
}
|
||||
|
||||
handler := metrics.CreateMetricsCallback()
|
||||
handler := metrics.CreatePolicyReportMetricsListener()
|
||||
|
||||
t.Run("Added Metric", func(t *testing.T) {
|
||||
handler(watch.Added, report1, report.PolicyReport{})
|
||||
handler(report.LifecycleEvent{Type: report.Added, NewPolicyReport: report1, OldPolicyReport: &report.PolicyReport{}})
|
||||
|
||||
metricFam, err := prometheus.DefaultGatherer.Gather()
|
||||
if err != nil {
|
||||
|
@ -117,8 +129,8 @@ func Test_PolicyReportMetricGeneration(t *testing.T) {
|
|||
})
|
||||
|
||||
t.Run("Modified Metric", func(t *testing.T) {
|
||||
handler(watch.Added, report1, report.PolicyReport{})
|
||||
handler(watch.Modified, report2, report1)
|
||||
handler(report.LifecycleEvent{Type: report.Added, NewPolicyReport: report1, OldPolicyReport: &report.PolicyReport{}})
|
||||
handler(report.LifecycleEvent{Type: report.Updated, NewPolicyReport: report2, OldPolicyReport: report1})
|
||||
|
||||
metricFam, err := prometheus.DefaultGatherer.Gather()
|
||||
if err != nil {
|
||||
|
@ -163,9 +175,9 @@ func Test_PolicyReportMetricGeneration(t *testing.T) {
|
|||
})
|
||||
|
||||
t.Run("Deleted Metric", func(t *testing.T) {
|
||||
handler(watch.Added, report1, report.PolicyReport{})
|
||||
handler(watch.Modified, report2, report1)
|
||||
handler(watch.Deleted, report2, report2)
|
||||
handler(report.LifecycleEvent{Type: report.Added, NewPolicyReport: report1, OldPolicyReport: &report.PolicyReport{}})
|
||||
handler(report.LifecycleEvent{Type: report.Updated, NewPolicyReport: report2, OldPolicyReport: report1})
|
||||
handler(report.LifecycleEvent{Type: report.Deleted, NewPolicyReport: report2, OldPolicyReport: &report.PolicyReport{}})
|
||||
|
||||
metricFam, err := prometheus.DefaultGatherer.Gather()
|
||||
if err != nil {
|
||||
|
@ -186,7 +198,7 @@ func Test_PolicyReportMetricGeneration(t *testing.T) {
|
|||
|
||||
func testSummaryMetricLabels(
|
||||
metric *ioprometheusclient.Metric,
|
||||
preport report.PolicyReport,
|
||||
preport *report.PolicyReport,
|
||||
status string,
|
||||
gauge float64,
|
||||
) error {
|
||||
|
@ -218,7 +230,7 @@ func testSummaryMetricLabels(
|
|||
return nil
|
||||
}
|
||||
|
||||
func testResultMetricLabels(metric *ioprometheusclient.Metric, result report.Result) error {
|
||||
func testResultMetricLabels(metric *ioprometheusclient.Metric, result *report.Result) error {
|
||||
if name := *metric.Label[0].Name; name != "category" {
|
||||
return fmt.Errorf("unexpected Name Label: %s", name)
|
||||
}
|
52 pkg/listener/metrics_test.go Normal file
@@ -0,0 +1,52 @@
package listener_test

import (
	"testing"

	"github.com/kyverno/policy-reporter/pkg/listener"
	"github.com/kyverno/policy-reporter/pkg/report"

	"github.com/prometheus/client_golang/prometheus"
	ioprometheusclient "github.com/prometheus/client_model/go"
)

func Test_MetricsListener(t *testing.T) {
	slistener := listener.NewMetricsListener()

	t.Run("Add ClusterPolicyReport Metric", func(t *testing.T) {
		slistener(report.LifecycleEvent{Type: report.Added, NewPolicyReport: creport, OldPolicyReport: &report.PolicyReport{}})

		metricFam, err := prometheus.DefaultGatherer.Gather()
		if err != nil {
			t.Errorf("unexpected Error: %s", err)
		}

		summary := findMetric(metricFam, "cluster_policy_report_summary")
		if summary == nil {
			t.Fatalf("Metric not found: cluster_policy_report_summary")
		}
	})
	t.Run("Add PolicyReport Metric", func(t *testing.T) {
		slistener(report.LifecycleEvent{Type: report.Added, NewPolicyReport: preport1, OldPolicyReport: &report.PolicyReport{}})

		metricFam, err := prometheus.DefaultGatherer.Gather()
		if err != nil {
			t.Errorf("unexpected Error: %s", err)
		}

		summary := findMetric(metricFam, "policy_report_summary")
		if summary == nil {
			t.Fatalf("Metric not found: policy_report_summary")
		}
	})
}

func findMetric(metrics []*ioprometheusclient.MetricFamily, name string) *ioprometheusclient.MetricFamily {
	for _, metric := range metrics {
		if *metric.Name == name {
			return metric
		}
	}

	return nil
}
76 pkg/listener/new_result.go Normal file
@@ -0,0 +1,76 @@
package listener

import (
	"sync"
	"time"

	"github.com/kyverno/policy-reporter/pkg/report"
	"github.com/patrickmn/go-cache"
)

type ResultListener struct {
	skipExisting bool
	listener     []report.PolicyReportResultListener
	cache        *cache.Cache
	startUp      time.Time
}

func (l *ResultListener) RegisterListener(listener report.PolicyReportResultListener) {
	l.listener = append(l.listener, listener)
}

func (l *ResultListener) Listen(event report.LifecycleEvent) {
	if len(event.OldPolicyReport.Results) > 0 {
		for id := range event.OldPolicyReport.Results {
			l.cache.SetDefault(id, true)
		}
	}

	if event.Type != report.Added && event.Type != report.Updated {
		return
	}

	var preExisted bool

	if event.Type == report.Added {
		preExisted = event.NewPolicyReport.CreationTimestamp.Before(l.startUp)

		if l.skipExisting && preExisted {
			return
		}
	}

	if len(event.NewPolicyReport.Results) == 0 {
		return
	}

	diff := event.NewPolicyReport.GetNewResults(event.OldPolicyReport)

	wg := sync.WaitGroup{}

	for _, r := range diff {
		if _, found := l.cache.Get(r.GetIdentifier()); found {
			continue
		}

		wg.Add(len(l.listener))

		for _, cb := range l.listener {
			go func(callback report.PolicyReportResultListener, result *report.Result) {
				callback(result, preExisted)
				wg.Done()
			}(cb, r)
		}
	}

	wg.Wait()
}

func NewResultListener(skipExisting bool, rcache *cache.Cache, startUp time.Time) *ResultListener {
	return &ResultListener{
		skipExisting: skipExisting,
		cache:        rcache,
		startUp:      startUp,
		listener:     make([]report.PolicyReportResultListener, 0),
	}
}
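Note: a minimal usage sketch for the ResultListener above, wiring it to the target push listener; the go-cache expiration values are illustrative and not taken from this commit:

package main

import (
	"time"

	"github.com/kyverno/policy-reporter/pkg/listener"
	"github.com/kyverno/policy-reporter/pkg/report"
	"github.com/kyverno/policy-reporter/pkg/target"
	"github.com/patrickmn/go-cache"
)

// wireResultListener is a sketch only: it connects the ResultListener to the
// given target clients and returns a listener func for the EventPublisher.
func wireResultListener(targets []target.Client) report.PolicyReportListener {
	rcache := cache.New(15*time.Minute, 5*time.Minute)

	resultListener := listener.NewResultListener(true, rcache, time.Now())
	resultListener.RegisterListener(listener.NewSendResultListener(targets))

	return resultListener.Listen
}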
93 pkg/listener/new_result_test.go Normal file
@@ -0,0 +1,93 @@
package listener_test

import (
	"testing"
	"time"

	"github.com/kyverno/policy-reporter/pkg/listener"
	"github.com/kyverno/policy-reporter/pkg/report"
	"github.com/patrickmn/go-cache"
)

func Test_ResultListener(t *testing.T) {
	t.Run("Publish Result", func(t *testing.T) {
		var called *report.Result

		slistener := listener.NewResultListener(true, cache.New(cache.DefaultExpiration, 5*time.Minute), time.Now())
		slistener.RegisterListener(func(r *report.Result, b bool) {
			called = r
		})

		slistener.Listen(report.LifecycleEvent{Type: report.Updated, NewPolicyReport: preport2, OldPolicyReport: preport1})

		if called.GetIdentifier() != result2.GetIdentifier() {
			t.Error("Expected Listener to be called with Result2")
		}
	})

	t.Run("Ignore Delete Event", func(t *testing.T) {
		var called bool

		slistener := listener.NewResultListener(true, cache.New(cache.DefaultExpiration, 5*time.Minute), time.Now())
		slistener.RegisterListener(func(r *report.Result, b bool) {
			called = true
		})

		slistener.Listen(report.LifecycleEvent{Type: report.Deleted, NewPolicyReport: preport2, OldPolicyReport: preport1})

		if called {
			t.Error("Expected Listener not be called on Deleted event")
		}
	})

	t.Run("Ignore Added Results created before startup", func(t *testing.T) {
		var called bool

		slistener := listener.NewResultListener(true, cache.New(cache.DefaultExpiration, 5*time.Minute), time.Now())
		slistener.RegisterListener(func(r *report.Result, b bool) {
			called = true
		})

		slistener.Listen(report.LifecycleEvent{Type: report.Added, NewPolicyReport: preport2, OldPolicyReport: preport1})

		if called {
			t.Error("Expected Listener not be called on Deleted event")
		}
	})

	t.Run("Ignore CacheResults", func(t *testing.T) {
		var called bool

		rcache := cache.New(cache.DefaultExpiration, 5*time.Minute)
		rcache.SetDefault(result2.ID, true)

		slistener := listener.NewResultListener(true, rcache, time.Now())
		slistener.RegisterListener(func(r *report.Result, b bool) {
			called = true
		})

		slistener.Listen(report.LifecycleEvent{Type: report.Updated, NewPolicyReport: preport2, OldPolicyReport: preport1})

		if called {
			t.Error("Expected Listener not be called on cached results")
		}
	})

	t.Run("Early Return if Rsults are empty", func(t *testing.T) {
		var called bool

		rcache := cache.New(cache.DefaultExpiration, 5*time.Minute)
		rcache.SetDefault(result2.ID, true)

		slistener := listener.NewResultListener(true, rcache, time.Now())
		slistener.RegisterListener(func(r *report.Result, b bool) {
			called = true
		})

		slistener.Listen(report.LifecycleEvent{Type: report.Updated, NewPolicyReport: preport2, OldPolicyReport: preport1})

		if called {
			t.Error("Expected Listener not be called with empty results")
		}
	})
}
29 pkg/listener/send_result.go Normal file
@@ -0,0 +1,29 @@
package listener

import (
	"sync"

	"github.com/kyverno/policy-reporter/pkg/report"
	"github.com/kyverno/policy-reporter/pkg/target"
)

func NewSendResultListener(clients []target.Client) report.PolicyReportResultListener {
	return func(r *report.Result, e bool) {
		wg := &sync.WaitGroup{}
		wg.Add(len(clients))

		for _, t := range clients {
			go func(target target.Client, result *report.Result, preExisted bool) {
				defer wg.Done()

				if (preExisted && target.SkipExistingOnStartup()) || !target.Validate(result) {
					return
				}

				target.Send(result)
			}(t, r, e)
		}

		wg.Wait()
	}
}
69 pkg/listener/send_result_test.go Normal file
@@ -0,0 +1,69 @@
package listener_test

import (
	"testing"

	"github.com/kyverno/policy-reporter/pkg/listener"
	"github.com/kyverno/policy-reporter/pkg/report"
	"github.com/kyverno/policy-reporter/pkg/target"
)

type client struct {
	Called                bool
	skipExistingOnStartup bool
	validated             bool
}

func (c *client) Send(result *report.Result) {
	c.Called = true
}

func (c *client) MinimumPriority() string {
	return report.InfoPriority.String()
}

func (c *client) Name() string {
	return "test"
}

func (c *client) Sources() []string {
	return []string{}
}

func (c *client) SkipExistingOnStartup() bool {
	return c.skipExistingOnStartup
}

func (c client) Validate(result *report.Result) bool {
	return c.validated
}

func Test_SendResultListener(t *testing.T) {
	t.Run("Send Result", func(t *testing.T) {
		c := &client{validated: true}
		slistener := listener.NewSendResultListener([]target.Client{c})
		slistener(result1, false)

		if !c.Called {
			t.Error("Expected Send to be called")
		}
	})
	t.Run("Don't Send Result when validation fails", func(t *testing.T) {
		c := &client{validated: false}
		slistener := listener.NewSendResultListener([]target.Client{c})
		slistener(result1, false)

		if c.Called {
			t.Error("Expected Send not to be called")
		}
	})
	t.Run("Don't Send pre existing Result when skipExistingOnStartup is true", func(t *testing.T) {
		c := &client{skipExistingOnStartup: true}
		slistener := listener.NewSendResultListener([]target.Client{c})
		slistener(result1, true)

		if c.Called {
			t.Error("Expected Send not to be called")
		}
	})
}
29 pkg/listener/store.go Normal file
@@ -0,0 +1,29 @@
package listener

import (
	"log"

	"github.com/kyverno/policy-reporter/pkg/report"
)

func NewStoreListener(store report.PolicyReportStore) report.PolicyReportListener {
	return func(event report.LifecycleEvent) {
		if event.Type == report.Deleted {
			logOnError("remove", event.NewPolicyReport.Name, store.Remove(event.NewPolicyReport.GetIdentifier()))
			return
		}

		if event.Type == report.Updated {
			logOnError("update", event.NewPolicyReport.Name, store.Update(event.NewPolicyReport))
			return
		}

		logOnError("add", event.NewPolicyReport.Name, store.Add(event.NewPolicyReport))
	}
}

func logOnError(operation, name string, err error) {
	if err != nil {
		log.Printf("[ERROR] Failed to %s Policy Report %s (%s)\n", operation, name, err.Error())
	}
}
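Note: taken together, the listeners in this package hang off a single LifecycleEvent stream. A hedged wiring sketch using only constructors shown in this commit; the in-memory store stands in for the SQLite-backed one:

package main

import (
	"context"

	"github.com/kyverno/policy-reporter/pkg/listener"
	"github.com/kyverno/policy-reporter/pkg/report"
)

// Sketch only: one publisher fans every LifecycleEvent out to all listeners.
func run(ctx context.Context, client report.PolicyReportClient) {
	store := report.NewPolicyReportStore()

	publisher := report.NewEventPublisher()
	publisher.RegisterListener(listener.NewMetricsListener())
	publisher.RegisterListener(listener.NewStoreListener(store))

	publisher.Publish(client.WatchPolicyReports(ctx))
}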
37 pkg/listener/store_test.go Normal file
@@ -0,0 +1,37 @@
package listener_test

import (
	"testing"

	"github.com/kyverno/policy-reporter/pkg/listener"
	"github.com/kyverno/policy-reporter/pkg/report"
)

func Test_StoreListener(t *testing.T) {
	store := report.NewPolicyReportStore()

	t.Run("Save New Report", func(t *testing.T) {
		slistener := listener.NewStoreListener(store)
		slistener(report.LifecycleEvent{Type: report.Added, NewPolicyReport: preport1, OldPolicyReport: &report.PolicyReport{}})

		if _, ok := store.Get(preport1.ID); !ok {
			t.Error("Expected Report to be stored")
		}
	})
	t.Run("Update Modified Report", func(t *testing.T) {
		slistener := listener.NewStoreListener(store)
		slistener(report.LifecycleEvent{Type: report.Updated, NewPolicyReport: preport2, OldPolicyReport: preport1})

		if preport, ok := store.Get(preport2.ID); !ok && len(preport.Results) == 2 {
			t.Error("Expected Report to be updated")
		}
	})
	t.Run("Remove Deleted Report", func(t *testing.T) {
		slistener := listener.NewStoreListener(store)
		slistener(report.LifecycleEvent{Type: report.Deleted, NewPolicyReport: preport2, OldPolicyReport: &report.PolicyReport{}})

		if _, ok := store.Get(preport2.ID); ok {
			t.Error("Expected Report to be removed")
		}
	})
}
@ -1,92 +0,0 @@
|
|||
package metrics
|
||||
|
||||
import (
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/client_golang/prometheus/promauto"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
)
|
||||
|
||||
func createClusterPolicyReportMetricsCallback() report.PolicyReportCallback {
|
||||
policyGauge := promauto.NewGaugeVec(prometheus.GaugeOpts{
|
||||
Name: "cluster_policy_report_summary",
|
||||
Help: "Summary of all ClusterPolicyReports",
|
||||
}, []string{"name", "status"})
|
||||
|
||||
ruleGauge := promauto.NewGaugeVec(prometheus.GaugeOpts{
|
||||
Name: "cluster_policy_report_result",
|
||||
Help: "List of all ClusterPolicyReport Results",
|
||||
}, []string{"rule", "policy", "report", "kind", "name", "status", "severity", "category"})
|
||||
|
||||
prometheus.Register(policyGauge)
|
||||
prometheus.Register(ruleGauge)
|
||||
|
||||
return func(event watch.EventType, report report.PolicyReport, oldReport report.PolicyReport) {
|
||||
switch event {
|
||||
case watch.Added:
|
||||
updateClusterPolicyGauge(policyGauge, report)
|
||||
|
||||
for _, rule := range report.Results {
|
||||
ruleGauge.With(generateClusterResultLabels(report, rule)).Set(1)
|
||||
}
|
||||
case watch.Modified:
|
||||
updateClusterPolicyGauge(policyGauge, report)
|
||||
|
||||
for _, rule := range oldReport.Results {
|
||||
ruleGauge.Delete(generateClusterResultLabels(oldReport, rule))
|
||||
}
|
||||
|
||||
for _, rule := range report.Results {
|
||||
ruleGauge.With(generateClusterResultLabels(report, rule)).Set(1)
|
||||
}
|
||||
case watch.Deleted:
|
||||
policyGauge.DeleteLabelValues(report.Name, "Pass")
|
||||
policyGauge.DeleteLabelValues(report.Name, "Fail")
|
||||
policyGauge.DeleteLabelValues(report.Name, "Warn")
|
||||
policyGauge.DeleteLabelValues(report.Name, "Error")
|
||||
policyGauge.DeleteLabelValues(report.Name, "Skip")
|
||||
|
||||
for _, rule := range report.Results {
|
||||
ruleGauge.Delete(generateClusterResultLabels(report, rule))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func generateClusterResultLabels(report report.PolicyReport, result report.Result) prometheus.Labels {
|
||||
labels := prometheus.Labels{
|
||||
"rule": result.Rule,
|
||||
"policy": result.Policy,
|
||||
"report": report.Name,
|
||||
"kind": "",
|
||||
"name": "",
|
||||
"status": result.Status,
|
||||
"severity": result.Severity,
|
||||
"category": result.Category,
|
||||
}
|
||||
|
||||
if result.HasResource() {
|
||||
labels["kind"] = result.Resource.Kind
|
||||
labels["name"] = result.Resource.Name
|
||||
}
|
||||
|
||||
return labels
|
||||
}
|
||||
|
||||
func updateClusterPolicyGauge(policyGauge *prometheus.GaugeVec, report report.PolicyReport) {
|
||||
policyGauge.
|
||||
WithLabelValues(report.Name, "Pass").
|
||||
Set(float64(report.Summary.Pass))
|
||||
policyGauge.
|
||||
WithLabelValues(report.Name, "Fail").
|
||||
Set(float64(report.Summary.Fail))
|
||||
policyGauge.
|
||||
WithLabelValues(report.Name, "Warn").
|
||||
Set(float64(report.Summary.Warn))
|
||||
policyGauge.
|
||||
WithLabelValues(report.Name, "Error").
|
||||
Set(float64(report.Summary.Error))
|
||||
policyGauge.
|
||||
WithLabelValues(report.Name, "Skip").
|
||||
Set(float64(report.Summary.Skip))
|
||||
}
|
|
@ -1,29 +0,0 @@
|
|||
package metrics
|
||||
|
||||
import (
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
)
|
||||
|
||||
var (
|
||||
pCallback report.PolicyReportCallback
|
||||
cCallback report.PolicyReportCallback
|
||||
)
|
||||
|
||||
// CreateMetricsCallback for PolicyReport watch.Events
|
||||
func CreateMetricsCallback() report.PolicyReportCallback {
|
||||
if pCallback == nil {
|
||||
pCallback = createPolicyReportMetricsCallback()
|
||||
}
|
||||
if cCallback == nil {
|
||||
cCallback = createClusterPolicyReportMetricsCallback()
|
||||
}
|
||||
|
||||
return func(et watch.EventType, pr, opr report.PolicyReport) {
|
||||
if pr.Namespace == "" {
|
||||
cCallback(et, pr, opr)
|
||||
} else {
|
||||
pCallback(et, pr, opr)
|
||||
}
|
||||
}
|
||||
}
|
|
@ -1,93 +0,0 @@
|
|||
package metrics
|
||||
|
||||
import (
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/client_golang/prometheus/promauto"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
)
|
||||
|
||||
func createPolicyReportMetricsCallback() report.PolicyReportCallback {
|
||||
policyGauge := promauto.NewGaugeVec(prometheus.GaugeOpts{
|
||||
Name: "policy_report_summary",
|
||||
Help: "Summary of all PolicyReports",
|
||||
}, []string{"namespace", "name", "status"})
|
||||
|
||||
ruleGauge := promauto.NewGaugeVec(prometheus.GaugeOpts{
|
||||
Name: "policy_report_result",
|
||||
Help: "List of all PolicyReport Results",
|
||||
}, []string{"namespace", "rule", "policy", "report", "kind", "name", "status", "severity", "category"})
|
||||
|
||||
prometheus.Register(policyGauge)
|
||||
prometheus.Register(ruleGauge)
|
||||
|
||||
return func(event watch.EventType, report report.PolicyReport, oldReport report.PolicyReport) {
|
||||
switch event {
|
||||
case watch.Added:
|
||||
updatePolicyGauge(policyGauge, report)
|
||||
|
||||
for _, rule := range report.Results {
|
||||
ruleGauge.With(generateResultLabels(report, rule)).Set(1)
|
||||
}
|
||||
case watch.Modified:
|
||||
updatePolicyGauge(policyGauge, report)
|
||||
|
||||
for _, rule := range oldReport.Results {
|
||||
ruleGauge.Delete(generateResultLabels(oldReport, rule))
|
||||
}
|
||||
|
||||
for _, rule := range report.Results {
|
||||
ruleGauge.With(generateResultLabels(report, rule)).Set(1)
|
||||
}
|
||||
case watch.Deleted:
|
||||
policyGauge.DeleteLabelValues(report.Namespace, report.Name, "Pass")
|
||||
policyGauge.DeleteLabelValues(report.Namespace, report.Name, "Fail")
|
||||
policyGauge.DeleteLabelValues(report.Namespace, report.Name, "Warn")
|
||||
policyGauge.DeleteLabelValues(report.Namespace, report.Name, "Error")
|
||||
policyGauge.DeleteLabelValues(report.Namespace, report.Name, "Skip")
|
||||
|
||||
for _, rule := range report.Results {
|
||||
ruleGauge.Delete(generateResultLabels(report, rule))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func generateResultLabels(report report.PolicyReport, result report.Result) prometheus.Labels {
|
||||
labels := prometheus.Labels{
|
||||
"namespace": report.Namespace,
|
||||
"rule": result.Rule,
|
||||
"policy": result.Policy,
|
||||
"report": report.Name,
|
||||
"kind": "",
|
||||
"name": "",
|
||||
"status": result.Status,
|
||||
"severity": result.Severity,
|
||||
"category": result.Category,
|
||||
}
|
||||
|
||||
if result.HasResource() {
|
||||
labels["kind"] = result.Resource.Kind
|
||||
labels["name"] = result.Resource.Name
|
||||
}
|
||||
|
||||
return labels
|
||||
}
|
||||
|
||||
func updatePolicyGauge(policyGauge *prometheus.GaugeVec, report report.PolicyReport) {
|
||||
policyGauge.
|
||||
WithLabelValues(report.Namespace, report.Name, "Pass").
|
||||
Set(float64(report.Summary.Pass))
|
||||
policyGauge.
|
||||
WithLabelValues(report.Namespace, report.Name, "Fail").
|
||||
Set(float64(report.Summary.Fail))
|
||||
policyGauge.
|
||||
WithLabelValues(report.Namespace, report.Name, "Warn").
|
||||
Set(float64(report.Summary.Warn))
|
||||
policyGauge.
|
||||
WithLabelValues(report.Namespace, report.Name, "Error").
|
||||
Set(float64(report.Summary.Error))
|
||||
policyGauge.
|
||||
WithLabelValues(report.Namespace, report.Name, "Skip").
|
||||
Set(float64(report.Summary.Skip))
|
||||
}
|
|
@ -2,25 +2,18 @@ package report
|
|||
|
||||
import (
|
||||
"context"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
)
|
||||
|
||||
// PolicyReportCallback is called whenever a new PolicyReport comes in
|
||||
type PolicyReportCallback = func(watch.EventType, PolicyReport, PolicyReport)
|
||||
// PolicyReportListener is called whenever a new PolicyReport comes in
|
||||
type PolicyReportListener = func(LifecycleEvent)
|
||||
|
||||
// PolicyResultCallback is called whenever a new PolicyResult comes in
|
||||
type PolicyResultCallback = func(Result, bool)
|
||||
// PolicyReportResultListener is called whenever a new PolicyResult comes in
|
||||
type PolicyReportResultListener = func(*Result, bool)
|
||||
|
||||
// PolicyResultClient watches for PolicyReport Events and executes registered callback
|
||||
type PolicyResultClient interface {
|
||||
// RegisterCallback register Handlers called on each PolicyReport watch.Event
|
||||
RegisterCallback(PolicyReportCallback)
|
||||
// RegisterPolicyResultCallback register Handlers called on each PolicyReport watch.Event for each changed PolicyResult
|
||||
RegisterPolicyResultCallback(PolicyResultCallback)
|
||||
// RegisterPolicyResultWatcher register a handler for ClusterPolicyReports and PolicyReports who call the registered PolicyResultCallbacks
|
||||
RegisterPolicyResultWatcher(skipExisting bool)
|
||||
// StartWatching calls the WatchAPI, waiting for incoming PolicyReport watch.Events and call the registered Handlers
|
||||
StartWatching(ctx context.Context) error
|
||||
// PolicyReportClient watches for PolicyReport Events and executes registered callback
|
||||
type PolicyReportClient interface {
|
||||
// WatchPolicyReports starts to watch for PolicyReport LifecycleEvent events
|
||||
WatchPolicyReports(ctx context.Context) <-chan LifecycleEvent
|
||||
// GetFoundResources as Map of Names
|
||||
GetFoundResources() map[string]string
|
||||
}
|
||||
|
|
|
@ -3,13 +3,27 @@ package report
|
|||
import (
|
||||
"bytes"
|
||||
"crypto/sha1"
|
||||
"encoding/hex"
|
||||
"fmt"
|
||||
"sort"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Event Enum
|
||||
type Event = int
|
||||
|
||||
// Possible PolicyReport Event Enums
|
||||
const (
|
||||
Added Event = iota
|
||||
Updated
|
||||
Deleted
|
||||
)
|
||||
|
||||
// LifecycleEvent of PolicyReports
|
||||
type LifecycleEvent struct {
|
||||
Type Event
|
||||
NewPolicyReport *PolicyReport
|
||||
OldPolicyReport *PolicyReport
|
||||
}
|
||||
|
||||
// Status Enum defined for PolicyReport
|
||||
type Status = string
|
||||
|
||||
|
@ -36,18 +50,18 @@ const (
|
|||
criticalString = "critical"
|
||||
)
|
||||
|
||||
// Type Enum defined for PolicyReport
|
||||
type Type = string
|
||||
// ResourceType Enum defined for PolicyReport
|
||||
type ResourceType = string
|
||||
|
||||
// ReportType Enum
|
||||
const (
|
||||
PolicyReportType Type = "PolicyReport"
|
||||
ClusterPolicyReportType Type = "ClusterPolicyReport"
|
||||
PolicyReportType ResourceType = "PolicyReport"
|
||||
ClusterPolicyReportType ResourceType = "ClusterPolicyReport"
|
||||
)
|
||||
|
||||
// Internal Priority definitions and weighting
|
||||
const (
|
||||
DefaultPriority = iota
|
||||
DefaultPriority Priority = iota
|
||||
DebugPriority
|
||||
InfoPriority
|
||||
WarningPriority
|
||||
|
@ -142,6 +156,7 @@ type Resource struct {
|
|||
|
||||
// Result from the PolicyReport spec wgpolicyk8s.io/v1alpha1.PolicyReportResult
|
||||
type Result struct {
|
||||
ID string `json:"-"`
|
||||
Message string
|
||||
Policy string
|
||||
Rule string
|
||||
|
@ -149,25 +164,24 @@ type Result struct {
|
|||
Status Status
|
||||
Severity Severity `json:",omitempty"`
|
||||
Category string `json:",omitempty"`
|
||||
Source string `json:"source,omitempty"`
|
||||
Source string `json:",omitempty"`
|
||||
Scored bool
|
||||
Timestamp time.Time
|
||||
Resource Resource
|
||||
Resource *Resource
|
||||
Properties map[string]string
|
||||
}
|
||||
|
||||
// GetIdentifier returns a global unique Result identifier
|
||||
func (r Result) GetIdentifier() string {
|
||||
suffix := ""
|
||||
if r.Resource.UID != "" {
|
||||
suffix = "__" + r.Resource.UID
|
||||
}
|
||||
|
||||
return fmt.Sprintf("%s__%s__%s%s", r.Policy, r.Rule, r.Status, suffix)
|
||||
return r.ID
|
||||
}
|
||||
|
||||
// HasResource checks if the result has an valid Resource
|
||||
func (r Result) HasResource() bool {
|
||||
if r.Resource == nil {
|
||||
return false
|
||||
}
|
||||
|
||||
return r.Resource.UID != ""
|
||||
}
|
||||
|
||||
|
@ -182,36 +196,17 @@ type Summary struct {
|
|||
|
||||
// PolicyReport from the PolicyReport spec wgpolicyk8s.io/v1alpha1.PolicyReport
|
||||
type PolicyReport struct {
|
||||
ID string
|
||||
Name string
|
||||
Namespace string
|
||||
Results map[string]Result
|
||||
Summary Summary
|
||||
Results map[string]*Result
|
||||
Summary *Summary
|
||||
CreationTimestamp time.Time
|
||||
}
|
||||
|
||||
// GetIdentifier returns a global unique PolicyReport identifier
|
||||
func (pr PolicyReport) GetIdentifier() string {
|
||||
if pr.Namespace == "" {
|
||||
return pr.Name
|
||||
}
|
||||
|
||||
return fmt.Sprintf("%s__%s", pr.Namespace, pr.Name)
|
||||
}
|
||||
|
||||
// ResultHash generates a has of the current result set
|
||||
func (pr PolicyReport) ResultHash() string {
|
||||
list := make([]string, 0, len(pr.Results))
|
||||
|
||||
for id := range pr.Results {
|
||||
list = append(list, id)
|
||||
}
|
||||
|
||||
sort.Strings(list)
|
||||
|
||||
h := sha1.New()
|
||||
h.Write([]byte(strings.Join(list, "")))
|
||||
|
||||
return hex.EncodeToString(h.Sum(nil))
|
||||
return pr.ID
|
||||
}
|
||||
|
||||
// HasResult returns if the Report has an Rusult with the given ID
|
||||
|
@ -222,7 +217,7 @@ func (pr PolicyReport) HasResult(id string) bool {
|
|||
}
|
||||
|
||||
// GetType returns the Type of the Report
|
||||
func (pr PolicyReport) GetType() Type {
|
||||
func (pr PolicyReport) GetType() ResourceType {
|
||||
if pr.Namespace == "" {
|
||||
return ClusterPolicyReportType
|
||||
}
|
||||
|
@ -231,8 +226,8 @@ func (pr PolicyReport) GetType() Type {
|
|||
}
|
||||
|
||||
// GetNewResults filters already existing Results from the old PolicyReport and returns only the diff with new Results
|
||||
func (pr PolicyReport) GetNewResults(or PolicyReport) []Result {
|
||||
diff := make([]Result, 0)
|
||||
func (pr PolicyReport) GetNewResults(or *PolicyReport) []*Result {
|
||||
diff := make([]*Result, 0)
|
||||
|
||||
for _, r := range pr.Results {
|
||||
if or.HasResult(r.GetIdentifier()) {
|
||||
|
@ -244,3 +239,30 @@ func (pr PolicyReport) GetNewResults(or PolicyReport) []Result {
|
|||
|
||||
return diff
|
||||
}
|
||||
|
||||
func GeneratePolicyReportID(name, namespace string) string {
|
||||
id := name
|
||||
|
||||
if namespace != "" {
|
||||
id = fmt.Sprintf("%s__%s", namespace, name)
|
||||
}
|
||||
|
||||
h := sha1.New()
|
||||
|
||||
h.Write([]byte(id))
|
||||
|
||||
return fmt.Sprintf("%x", h.Sum(nil))
|
||||
}
|
||||
|
||||
func GeneratePolicyReportResultID(uid, policy, rule, status, suffix string) string {
|
||||
if uid != "" {
|
||||
suffix = "__" + uid
|
||||
}
|
||||
|
||||
id := fmt.Sprintf("%s__%s__%s%s", policy, rule, status, suffix)
|
||||
|
||||
h := sha1.New()
|
||||
h.Write([]byte(id))
|
||||
|
||||
return fmt.Sprintf("%x", h.Sum(nil))
|
||||
}
|
||||
|
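Note: per the helpers above, a report ID is the hex-encoded SHA-1 of "<namespace>__<name>" (just the name for cluster scoped reports). A small sketch reproducing the namespaced fixture ID asserted in the tests of this commit:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// Sketch only: mirrors GeneratePolicyReportID for a namespaced report.
// The printed value matches the preport fixture ID used in the tests below.
func main() {
	sum := sha1.Sum([]byte("test__polr-test")) // "<namespace>__<name>"
	fmt.Println(hex.EncodeToString(sum[:]))    // 24cfa233af033d104cd6ce0ff9a5a875c71a5844
}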
|
|
@ -1,14 +1,14 @@
|
|||
package report_test
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
)
|
||||
|
||||
var result1 = report.Result{
|
||||
var result1 = &report.Result{
|
||||
ID: "e0659854c6ee5c1a4df9242b2eb8b40919967842",
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "require-requests-and-limits-required",
|
||||
Rule: "autogen-check-for-requests-and-limits",
|
||||
|
@ -17,7 +17,7 @@ var result1 = report.Result{
|
|||
Category: "resources",
|
||||
Severity: report.High,
|
||||
Scored: true,
|
||||
Resource: report.Resource{
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Deployment",
|
||||
Name: "nginx",
|
||||
|
@ -26,7 +26,8 @@ var result1 = report.Result{
|
|||
},
|
||||
}
|
||||
|
||||
var result2 = report.Result{
|
||||
var result2 = &report.Result{
|
||||
ID: "2",
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "require-requests-and-limits-required",
|
||||
Rule: "autogen-check-for-requests-and-limits",
|
||||
|
@ -34,7 +35,7 @@ var result2 = report.Result{
|
|||
Status: report.Fail,
|
||||
Category: "resources",
|
||||
Scored: true,
|
||||
Resource: report.Resource{
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Deployment",
|
||||
Name: "nginx",
|
||||
|
@ -43,24 +44,26 @@ var result2 = report.Result{
|
|||
},
|
||||
}
|
||||
|
||||
var preport = report.PolicyReport{
|
||||
var preport = &report.PolicyReport{
|
||||
ID: "24cfa233af033d104cd6ce0ff9a5a875c71a5844",
|
||||
Name: "polr-test",
|
||||
Namespace: "test",
|
||||
Results: make(map[string]report.Result, 0),
|
||||
Summary: report.Summary{},
|
||||
Results: make(map[string]*report.Result),
|
||||
Summary: &report.Summary{},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
||||
|
||||
var creport = report.PolicyReport{
|
||||
var creport = &report.PolicyReport{
|
||||
ID: "57e1551475e17740bacc3640d2412b1a6aad6a93",
|
||||
Name: "cpolr-test",
|
||||
Results: make(map[string]report.Result, 0),
|
||||
Summary: report.Summary{},
|
||||
Results: make(map[string]*report.Result),
|
||||
Summary: &report.Summary{},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
||||
|
||||
func Test_PolicyReport(t *testing.T) {
|
||||
t.Run("Check PolicyReport.GetIdentifier", func(t *testing.T) {
|
||||
expected := fmt.Sprintf("%s__%s", preport.Namespace, preport.Name)
|
||||
expected := report.GeneratePolicyReportID(preport.Name, preport.Namespace)
|
||||
|
||||
if preport.GetIdentifier() != expected {
|
||||
t.Errorf("Expected PolicyReport.GetIdentifier() to be %s (actual: %s)", expected, preport.GetIdentifier())
|
||||
|
@ -68,45 +71,36 @@ func Test_PolicyReport(t *testing.T) {
|
|||
})
|
||||
|
||||
t.Run("Check PolicyReport.GetNewResults", func(t *testing.T) {
|
||||
preport1 := preport
|
||||
preport2 := preport
|
||||
|
||||
preport1.Results = map[string]report.Result{result1.GetIdentifier(): result1}
|
||||
preport2.Results = map[string]report.Result{result1.GetIdentifier(): result1, result2.GetIdentifier(): result2}
|
||||
preport1 := &report.PolicyReport{
|
||||
ID: "24cfa233af033d104cd6ce0ff9a5a875c71a5844",
|
||||
Name: "polr-test",
|
||||
Namespace: "test",
|
||||
Summary: &report.Summary{},
|
||||
CreationTimestamp: time.Now(),
|
||||
Results: map[string]*report.Result{result1.GetIdentifier(): result1},
|
||||
}
|
||||
preport2 := &report.PolicyReport{
|
||||
ID: "24cfa233af033d104cd6ce0ff9a5a875c71a5844",
|
||||
Name: "polr-test",
|
||||
Namespace: "test",
|
||||
Summary: &report.Summary{},
|
||||
CreationTimestamp: time.Now(),
|
||||
Results: map[string]*report.Result{result1.GetIdentifier(): result1, result2.GetIdentifier(): result2},
|
||||
}
|
||||
|
||||
diff := preport2.GetNewResults(preport1)
|
||||
if len(diff) != 1 {
|
||||
t.Error("Expected 1 new result in diff")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("Check PolicyReport.ResultHash", func(t *testing.T) {
|
||||
preport := preport
|
||||
preport.Results = map[string]report.Result{result1.GetIdentifier(): result1, result2.GetIdentifier(): result2}
|
||||
|
||||
hash := preport.ResultHash()
|
||||
if hash != "cd4a0ebefa915f33649db99063c182488403bb4c" {
|
||||
t.Errorf("Expected 'cd4a0ebefa915f33649db99063c182488403bb4c', got %s", hash)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("Check PolicyReport.ResultHash same with different order", func(t *testing.T) {
|
||||
preport1 := preport
|
||||
preport2 := preport
|
||||
|
||||
preport1.Results = map[string]report.Result{result2.GetIdentifier(): result2, result1.GetIdentifier(): result1}
|
||||
preport2.Results = map[string]report.Result{result1.GetIdentifier(): result1, result2.GetIdentifier(): result2}
|
||||
|
||||
if preport2.ResultHash() != preport1.ResultHash() {
|
||||
t.Error("Expected same hash with different order")
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func Test_ClusterPolicyReport(t *testing.T) {
|
||||
t.Run("Check ClusterPolicyReport.GetIdentifier", func(t *testing.T) {
|
||||
if creport.GetIdentifier() != creport.Name {
|
||||
t.Errorf("Expected ClusterPolicyReport.GetIdentifier() to be %s (actual: %s)", creport.Name, creport.GetIdentifier())
|
||||
expected := report.GeneratePolicyReportID(creport.Name, creport.Namespace)
|
||||
|
||||
if creport.GetIdentifier() != expected {
|
||||
t.Errorf("Expected ClusterPolicyReport.GetIdentifier() to be %s (actual: %s)", expected, creport.GetIdentifier())
|
||||
}
|
||||
})
|
||||
t.Run("Check ClusterPolicyReport.GetType", func(t *testing.T) {
|
||||
|
@ -116,54 +110,49 @@ func Test_ClusterPolicyReport(t *testing.T) {
|
|||
})
|
||||
|
||||
t.Run("Check ClusterPolicyReport.GetNewResults", func(t *testing.T) {
|
||||
creport1 := creport
|
||||
creport2 := creport
|
||||
creport1 := &report.PolicyReport{
|
||||
ID: "57e1551475e17740bacc3640d2412b1a6aad6a93",
|
||||
Name: "cpolr-test",
|
||||
Summary: &report.Summary{},
|
||||
CreationTimestamp: time.Now(),
|
||||
Results: map[string]*report.Result{result1.GetIdentifier(): result1},
|
||||
}
|
||||
|
||||
creport1.Results = map[string]report.Result{result1.GetIdentifier(): result1}
|
||||
creport2.Results = map[string]report.Result{result1.GetIdentifier(): result1, result2.GetIdentifier(): result2}
|
||||
creport2 := &report.PolicyReport{
|
||||
ID: "57e1551475e17740bacc3640d2412b1a6aad6a93",
|
||||
Name: "cpolr-test",
|
||||
Summary: &report.Summary{},
|
||||
CreationTimestamp: time.Now(),
|
||||
Results: map[string]*report.Result{result1.GetIdentifier(): result1, result2.GetIdentifier(): result2},
|
||||
}
|
||||
|
||||
diff := creport2.GetNewResults(creport1)
|
||||
if len(diff) != 1 {
|
||||
t.Error("Expected 1 new result in diff")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("Check PolicyReport.ResultHash", func(t *testing.T) {
|
||||
report1 := creport
|
||||
report1.Results = map[string]report.Result{result1.GetIdentifier(): result1, result2.GetIdentifier(): result2}
|
||||
|
||||
hash := report1.ResultHash()
|
||||
if hash != "cd4a0ebefa915f33649db99063c182488403bb4c" {
|
||||
t.Errorf("Expected 'cd4a0ebefa915f33649db99063c182488403bb4c', got %s", hash)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("Check PolicyReport.ResultHash same with different order", func(t *testing.T) {
|
||||
report1 := creport
|
||||
report2 := creport
|
||||
|
||||
report1.Results = map[string]report.Result{result2.GetIdentifier(): result2, result1.GetIdentifier(): result1}
|
||||
report2.Results = map[string]report.Result{result1.GetIdentifier(): result1, result2.GetIdentifier(): result2}
|
||||
|
||||
if report2.ResultHash() != report1.ResultHash() {
|
||||
t.Error("Expected same hash with different order")
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func Test_Result(t *testing.T) {
|
||||
t.Run("Check Result.GetIdentifier", func(t *testing.T) {
|
||||
expected := fmt.Sprintf("%s__%s__%s__%s", result1.Policy, result1.Rule, result1.Status, result1.Resource.UID)
|
||||
expected := report.GeneratePolicyReportResultID(result1.Resource.UID, result1.Policy, result1.Rule, result1.Status, "")
|
||||
|
||||
if result1.GetIdentifier() != expected {
|
||||
t.Errorf("Expected ClusterPolicyReport.GetIdentifier() to be %s (actual: %s)", expected, creport.GetIdentifier())
|
||||
}
|
||||
})
|
||||
t.Run("Check Result.HasResource", func(t *testing.T) {
|
||||
t.Run("Check Result.HasResource with Resource", func(t *testing.T) {
|
||||
if result1.HasResource() == false {
|
||||
t.Errorf("Expected result1.HasResource() to be true (actual: %v)", result1.HasResource())
|
||||
}
|
||||
})
|
||||
t.Run("Check Result.HasResource without Resource", func(t *testing.T) {
|
||||
result := report.Result{}
|
||||
|
||||
if result.HasResource() == true {
|
||||
t.Errorf("Expected result.HasResource() to be false without a Resource (actual: %v)", result1.HasResource())
|
||||
}
|
||||
})
|
||||
|
||||
}
|
||||
|
||||
|
|
44 pkg/report/publisher.go Normal file
@@ -0,0 +1,44 @@
package report

import "sync"

type EventPublisher interface {
	// RegisterListener register Handlers called on each PolicyReport watch.Event
	RegisterListener(PolicyReportListener)
	// GetListener returns a list of all registered Listeners
	GetListener() []PolicyReportListener
	// Publish events to the registered listeners
	Publish(eventChan <-chan LifecycleEvent)
}

type lifecycleEventPublisher struct {
	listeners []PolicyReportListener
}

func (p *lifecycleEventPublisher) RegisterListener(listener PolicyReportListener) {
	p.listeners = append(p.listeners, listener)
}

func (p *lifecycleEventPublisher) GetListener() []PolicyReportListener {
	return p.listeners
}

func (p *lifecycleEventPublisher) Publish(eventChan <-chan LifecycleEvent) {
	for event := range eventChan {
		wg := sync.WaitGroup{}
		wg.Add(len(p.listeners))

		for _, listener := range p.listeners {
			go func(li PolicyReportListener, ev LifecycleEvent) {
				li(event)
				wg.Done()
			}(listener, event)
		}

		wg.Wait()
	}
}

func NewEventPublisher() EventPublisher {
	return &lifecycleEventPublisher{}
}
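Note: any func(report.LifecycleEvent) can be registered next to the built-in listeners; a sketch of a hypothetical debug listener, not part of this commit:

package main

import (
	"log"

	"github.com/kyverno/policy-reporter/pkg/report"
)

// Sketch only: logs every lifecycle event passing through the publisher.
func registerDebugListener(publisher report.EventPublisher) {
	publisher.RegisterListener(func(event report.LifecycleEvent) {
		log.Printf("[DEBUG] event %d for report %s\n", event.Type, event.NewPolicyReport.Name)
	})
}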
46 pkg/report/publisher_test.go Normal file
@@ -0,0 +1,46 @@
package report_test

import (
	"sync"
	"testing"

	"github.com/kyverno/policy-reporter/pkg/report"
)

func Test_PublishLifecycleEvents(t *testing.T) {
	eventChan := make(chan report.LifecycleEvent)

	var event report.LifecycleEvent

	wg := sync.WaitGroup{}
	wg.Add(1)

	publisher := report.NewEventPublisher()
	publisher.RegisterListener(func(le report.LifecycleEvent) {
		event = le
		wg.Done()
	})

	go func() {
		eventChan <- report.LifecycleEvent{Type: report.Updated, NewPolicyReport: &report.PolicyReport{}, OldPolicyReport: &report.PolicyReport{}}

		close(eventChan)
	}()

	publisher.Publish(eventChan)

	wg.Wait()

	if event.Type != report.Updated {
		t.Error("Expected Event to be published to the listener")
	}
}

func Test_GetReisteredListeners(t *testing.T) {
	publisher := report.NewEventPublisher()
	publisher.RegisterListener(func(le report.LifecycleEvent) {})

	if len(publisher.GetListener()) != 1 {
		t.Error("Expected to get one registered listener back")
	}
}
|
@ -2,52 +2,87 @@ package report
|
|||
|
||||
import "sync"
|
||||
|
||||
type PolicyReportStore interface {
|
||||
// CreateSchemas for PolicyReports and PolicyReportResults
|
||||
CreateSchemas() error
|
||||
// Get an PolicyReport by Type and ID
|
||||
Get(id string) (*PolicyReport, bool)
|
||||
// Add a PolicyReport to the Store
|
||||
Add(r *PolicyReport) error
|
||||
// Add a PolicyReport to the Store
|
||||
Update(r *PolicyReport) error
|
||||
// Remove a PolicyReport with the given Type and ID from the Store
|
||||
Remove(id string) error
|
||||
// CleanUp removes all items in the store
|
||||
CleanUp() error
|
||||
}
|
||||
|
||||
// PolicyReportStore caches the latest version of an PolicyReport
|
||||
type PolicyReportStore struct {
|
||||
store map[string]map[string]PolicyReport
|
||||
type policyReportStore struct {
|
||||
store map[string]map[string]*PolicyReport
|
||||
rwm *sync.RWMutex
|
||||
}
|
||||
|
||||
// Get an PolicyReport by Type and ID
|
||||
func (s *PolicyReportStore) Get(rType Type, id string) (PolicyReport, bool) {
|
||||
func (s *policyReportStore) CreateSchemas() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) Get(id string) (*PolicyReport, bool) {
|
||||
s.rwm.RLock()
|
||||
r, ok := s.store[rType][id]
|
||||
r, ok := s.store[PolicyReportType][id]
|
||||
s.rwm.RUnlock()
|
||||
if ok {
|
||||
return r, ok
|
||||
}
|
||||
|
||||
s.rwm.RLock()
|
||||
r, ok = s.store[ClusterPolicyReportType][id]
|
||||
s.rwm.RUnlock()
|
||||
|
||||
return r, ok
|
||||
}
|
||||
|
||||
// List all PolicyReports of the given Type
|
||||
func (s *PolicyReportStore) List(rType Type) []PolicyReport {
|
||||
s.rwm.RLock()
|
||||
list := make([]PolicyReport, 0, len(s.store))
|
||||
|
||||
for _, r := range s.store[rType] {
|
||||
list = append(list, r)
|
||||
}
|
||||
s.rwm.RUnlock()
|
||||
|
||||
return list
|
||||
}
|
||||
|
||||
// Add a PolicyReport to the Store
|
||||
func (s *PolicyReportStore) Add(r PolicyReport) {
|
||||
func (s *policyReportStore) Add(r *PolicyReport) error {
|
||||
s.rwm.Lock()
|
||||
s.store[r.GetType()][r.GetIdentifier()] = r
|
||||
s.rwm.Unlock()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Remove a PolicyReport with the given Type and ID from the Store
|
||||
func (s *PolicyReportStore) Remove(rType Type, id string) {
|
||||
func (s *policyReportStore) Update(r *PolicyReport) error {
|
||||
s.rwm.Lock()
|
||||
delete(s.store[rType], id)
|
||||
s.store[r.GetType()][r.GetIdentifier()] = r
|
||||
s.rwm.Unlock()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) Remove(id string) error {
|
||||
if r, ok := s.Get(id); ok {
|
||||
s.rwm.Lock()
|
||||
delete(s.store[r.GetType()], id)
|
||||
s.rwm.Unlock()
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) CleanUp() error {
|
||||
s.rwm.Lock()
|
||||
s.store = map[ResourceType]map[string]*PolicyReport{
|
||||
PolicyReportType: {},
|
||||
ClusterPolicyReportType: {},
|
||||
}
|
||||
s.rwm.Unlock()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// NewPolicyReportStore construct a PolicyReportStore
|
||||
func NewPolicyReportStore() *PolicyReportStore {
|
||||
return &PolicyReportStore{
|
||||
store: map[Type]map[string]PolicyReport{
|
||||
func NewPolicyReportStore() PolicyReportStore {
|
||||
return &policyReportStore{
|
||||
store: map[ResourceType]map[string]*PolicyReport{
|
||||
PolicyReportType: {},
|
||||
ClusterPolicyReportType: {},
|
||||
},
|
||||
|
|
|
@ -2,43 +2,71 @@ package report_test
|
|||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
)
|
||||
|
||||
func Test_PolicyReportStore(t *testing.T) {
|
||||
store := report.NewPolicyReportStore()
|
||||
store.CreateSchemas()
|
||||
|
||||
t.Run("Add/Get", func(t *testing.T) {
|
||||
_, ok := store.Get(preport.GetType(), preport.GetIdentifier())
|
||||
_, ok := store.Get(preport.GetIdentifier())
|
||||
if ok == true {
|
||||
t.Fatalf("Should not be found in empty Store")
|
||||
}
|
||||
|
||||
store.Add(preport)
|
||||
_, ok = store.Get(preport.GetType(), preport.GetIdentifier())
|
||||
_, ok = store.Get(preport.GetIdentifier())
|
||||
if ok == false {
|
||||
t.Errorf("Should be found in Store after adding report to the store")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("List", func(t *testing.T) {
|
||||
items := store.List(preport.GetType())
|
||||
if len(items) != 1 {
|
||||
t.Errorf("Should return List with the added Report")
|
||||
t.Run("Update/Get", func(t *testing.T) {
|
||||
ureport := &report.PolicyReport{
|
||||
ID: "24cfa233af033d104cd6ce0ff9a5a875c71a5844",
|
||||
Name: "polr-test",
|
||||
Namespace: "test",
|
||||
Results: make(map[string]*report.Result),
|
||||
Summary: &report.Summary{Skip: 1},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
||||
|
||||
store.Add(preport)
|
||||
r, _ := store.Get(preport.GetIdentifier())
|
||||
if r.Summary.Skip != 0 {
|
||||
t.Errorf("Expected Summary.Skip to be 0")
|
||||
}
|
||||
|
||||
store.Update(ureport)
|
||||
r2, _ := store.Get(preport.GetIdentifier())
|
||||
if r2.Summary.Skip != 1 {
|
||||
t.Errorf("Expected Summary.Skip to be 1 after update")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("Delete/Get", func(t *testing.T) {
|
||||
_, ok := store.Get(preport.GetType(), preport.GetIdentifier())
|
||||
_, ok := store.Get(preport.GetIdentifier())
|
||||
if ok == false {
|
||||
t.Errorf("Should be found in Store after adding report to the store")
|
||||
}
|
||||
|
||||
store.Remove(preport.GetType(), preport.GetIdentifier())
|
||||
_, ok = store.Get(preport.GetType(), preport.GetIdentifier())
|
||||
store.Remove(preport.GetIdentifier())
|
||||
_, ok = store.Get(preport.GetIdentifier())
|
||||
if ok == true {
|
||||
t.Fatalf("Should not be found after Remove report from Store")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("CleanUp", func(t *testing.T) {
|
||||
store.Add(preport)
|
||||
|
||||
store.CleanUp()
|
||||
_, ok := store.Get(preport.GetIdentifier())
|
||||
if ok == true {
|
||||
t.Fatalf("Should have no results after CleanUp")
|
||||
}
|
||||
})
|
||||
}
|
||||
|
|
846
pkg/sqlite3/store.go
Normal file
846
pkg/sqlite3/store.go
Normal file
|
@ -0,0 +1,846 @@
|
|||
package sqlite3
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"log"
|
||||
"os"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
api "github.com/kyverno/policy-reporter/pkg/api/v1"
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
_ "github.com/mattn/go-sqlite3"
|
||||
)
|
||||
|
||||
const (
|
||||
reportSQL = `CREATE TABLE policy_report (
|
||||
"id" TEXT NOT NULL PRIMARY KEY,
|
||||
"type" TEXT,
|
||||
"namespace" TEXT,
|
||||
"name" TEXT NOT NULL,
|
||||
"skip" INTEGER DEFAULT 0,
|
||||
"pass" INTEGER DEFAULT 0,
|
||||
"warn" INTEGER DEFAULT 0,
|
||||
"fail" INTEGER DEFAULT 0,
|
||||
"error" INTEGER DEFAULT 0,
|
||||
"created" INTEGER
|
||||
);`
|
||||
|
||||
resultSQL = `CREATE TABLE policy_report_result (
|
||||
"policy_report_id" TEXT NOT NULL,
|
||||
"id" TEXT NOT NULL PRIMARY KEY,
|
||||
"policy" TEXT,
|
||||
"rule" TEXT,
|
||||
"message" TEXT,
|
||||
"scored" INTEGER,
|
||||
"priority" TEXT,
|
||||
"status" TEXT,
|
||||
"severity" TEXT,
|
||||
"category" TEXT,
|
||||
"source" TEXT,
|
||||
"resource_api_version" TEXT,
|
||||
"resource_kind" TEXT,
|
||||
"resource_name" TEXT,
|
||||
"resource_namespace" TEXT,
|
||||
"resource_uid" TEXT,
|
||||
"properties" TEXT,
|
||||
"timestamp" INTEGER,
|
||||
FOREIGN KEY (policy_report_id) REFERENCES policy_report(id) ON DELETE CASCADE
|
||||
);`
|
||||
)
|
||||
|
||||
type PolicyReportStore interface {
|
||||
report.PolicyReportStore
|
||||
api.PolicyReportFinder
|
||||
}
|
||||
|
||||
// policyReportStore caches the latest version of an PolicyReport
|
||||
type policyReportStore struct {
|
||||
db *sql.DB
|
||||
}
|
||||
|
||||
func (s *policyReportStore) CreateSchemas() error {
|
||||
_, err := s.db.Exec("PRAGMA foreign_keys = ON")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
_, err = s.db.Exec(reportSQL)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
_, err = s.db.Exec(resultSQL)
|
||||
|
||||
return err
|
||||
}
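Note: a hedged sketch of bootstrapping the SQLite-backed store; only CreateSchemas and the go-sqlite3 driver import are shown in this file, so the constructor name and database path below are assumptions for illustration:

package main

import (
	"database/sql"
	"log"

	"github.com/kyverno/policy-reporter/pkg/sqlite3"
	_ "github.com/mattn/go-sqlite3"
)

// Sketch only: open the database file and create both tables.
// sqlite3.NewPolicyReportStore is an assumed constructor, not shown in this hunk.
func main() {
	db, err := sql.Open("sqlite3", "policy-reporter.db") // path is an assumption
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	store := sqlite3.NewPolicyReportStore(db)
	if err := store.CreateSchemas(); err != nil {
		log.Fatal(err)
	}
}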
|
||||
|
||||
// Get an PolicyReport by Type and ID
|
||||
func (s *policyReportStore) Get(id string) (*report.PolicyReport, bool) {
|
||||
var created int64
|
||||
r := &report.PolicyReport{Summary: &report.Summary{}}
|
||||
|
||||
row := s.db.QueryRow("SELECT namespace, name, pass, skip, warn, fail, error, created FROM policy_report WHERE id=$1", id)
|
||||
err := row.Scan(&r.Namespace, &r.Name, &r.Summary.Pass, &r.Summary.Skip, &r.Summary.Warn, &r.Summary.Fail, &r.Summary.Error, &created)
|
||||
if err == sql.ErrNoRows {
|
||||
return r, false
|
||||
} else if err != nil {
|
||||
log.Printf("[ERROR] Failed to select PolicyReport: %s", err)
|
||||
return r, false
|
||||
}
|
||||
|
||||
r.CreationTimestamp = time.Unix(created, 0)
|
||||
|
||||
results, err := s.fetchResults(id)
|
||||
if err != nil {
|
||||
log.Printf("Failed to fetch Reports: %s\n", err)
|
||||
return r, false
|
||||
}
|
||||
|
||||
r.Results = results
|
||||
|
||||
return r, true
|
||||
}
|
||||
|
||||
// Add a PolicyReport to the Store
|
||||
func (s *policyReportStore) Add(r *report.PolicyReport) error {
|
||||
stmt, err := s.db.Prepare("INSERT INTO policy_report(id, type, namespace, name, pass, skip, warn, fail, error, created) values(?,?,?,?,?,?,?,?,?,?)")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer stmt.Close()
|
||||
|
||||
_, err = stmt.Exec(r.GetIdentifier(), r.GetType(), r.Namespace, r.Name, r.Summary.Pass, r.Summary.Skip, r.Summary.Warn, r.Summary.Fail, r.Summary.Error, r.CreationTimestamp.Unix())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return s.persistResults(r)
|
||||
}
|
||||
|
||||
func (s *policyReportStore) Update(r *report.PolicyReport) error {
|
||||
stmt, err := s.db.Prepare("UPDATE policy_report SET pass=?, skip=?, warn=?, fail=?, error=?, created=? WHERE id=?")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer stmt.Close()
|
||||
|
||||
_, err = stmt.Exec(r.Summary.Pass, r.Summary.Skip, r.Summary.Warn, r.Summary.Fail, r.Summary.Error, r.CreationTimestamp.Unix(), r.GetIdentifier())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
dstmt, err := s.db.Prepare("DELETE FROM policy_report_result WHERE policy_report_id=?")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer dstmt.Close()
|
||||
|
||||
_, err = dstmt.Exec(r.GetIdentifier())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return s.persistResults(r)
|
||||
}
|
||||
|
||||
// Remove a PolicyReport with the given Type and ID from the Store
|
||||
func (s *policyReportStore) Remove(id string) error {
|
||||
stmt, err := s.db.Prepare("DELETE FROM policy_report WHERE id=?")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer stmt.Close()
|
||||
|
||||
_, err = stmt.Exec(id)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
stmt, err = s.db.Prepare("DELETE FROM policy_report_result WHERE policy_report_id=?")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer stmt.Close()
|
||||
|
||||
_, err = stmt.Exec(id)
|
||||
return err
|
||||
}
|
||||
|
||||
func (s *policyReportStore) CleanUp() error {
|
||||
stmt, err := s.db.Prepare("DELETE FROM policy_report")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer stmt.Close()
|
||||
|
||||
_, err = stmt.Exec()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
dstmt, err := s.db.Prepare("DELETE FROM policy_report_result")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer dstmt.Close()
|
||||
|
||||
_, err = dstmt.Exec()
|
||||
return err
|
||||
}
|
||||
|
||||
func (s *policyReportStore) FetchClusterPolicies(source string) ([]string, error) {
|
||||
list := make([]string, 0)
|
||||
|
||||
where, args := appendSourceWhere(source)
|
||||
if where != "" {
|
||||
where = " AND " + where
|
||||
}
|
||||
|
||||
rows, err := s.db.Query(`SELECT DISTINCT policy FROM policy_report_result WHERE resource_namespace == ""`+where+` ORDER BY policy ASC`, args...)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
var item string
|
||||
err := rows.Scan(&item)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
|
||||
list = append(list, item)
|
||||
}
|
||||
|
||||
return list, nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) FetchNamespacedPolicies(source string) ([]string, error) {
|
||||
list := make([]string, 0)
|
||||
|
||||
where, args := appendSourceWhere(source)
|
||||
if where != "" {
|
||||
where = " AND " + where
|
||||
}
|
||||
|
||||
rows, err := s.db.Query(`SELECT DISTINCT policy FROM policy_report_result WHERE resource_namespace != ""`+where+` ORDER BY policy ASC`, args...)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
var item string
|
||||
err := rows.Scan(&item)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
|
||||
list = append(list, item)
|
||||
}
|
||||
|
||||
return list, nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) FetchCategories(source string) ([]string, error) {
|
||||
list := make([]string, 0)
|
||||
|
||||
where, args := appendSourceWhere(source)
|
||||
if where != "" {
|
||||
where = " AND " + where
|
||||
}
|
||||
|
||||
rows, err := s.db.Query(`SELECT DISTINCT category FROM policy_report_result WHERE category != ""`+where+` ORDER BY category ASC`, args...)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
var item string
|
||||
err := rows.Scan(&item)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
|
||||
list = append(list, item)
|
||||
}
|
||||
|
||||
return list, nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) FetchNamespacedKinds(source string) ([]string, error) {
|
||||
list := make([]string, 0)
|
||||
|
||||
where, args := appendSourceWhere(source)
|
||||
if where != "" {
|
||||
where = " AND " + where
|
||||
}
|
||||
|
||||
rows, err := s.db.Query(`SELECT DISTINCT resource_kind FROM policy_report_result WHERE resource_kind != "" AND resource_namespace != ""`+where+` ORDER BY resource_kind ASC`, args...)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
var item string
|
||||
err := rows.Scan(&item)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
|
||||
list = append(list, item)
|
||||
}
|
||||
|
||||
return list, nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) FetchClusterKinds(source string) ([]string, error) {
|
||||
list := make([]string, 0)
|
||||
|
||||
where, args := appendSourceWhere(source)
|
||||
if where != "" {
|
||||
where = " AND " + where
|
||||
}
|
||||
|
||||
rows, err := s.db.Query(`SELECT DISTINCT resource_kind FROM policy_report_result WHERE resource_kind != "" AND resource_namespace == ""`+where+` ORDER BY resource_kind ASC`, args...)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
var item string
|
||||
err := rows.Scan(&item)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
|
||||
list = append(list, item)
|
||||
}
|
||||
|
||||
return list, nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) FetchClusterSources() ([]string, error) {
|
||||
list := make([]string, 0)
|
||||
rows, err := s.db.Query(`SELECT DISTINCT source FROM policy_report_result WHERE source != "" AND resource_namespace == "" ORDER BY source ASC`)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
var item string
|
||||
err := rows.Scan(&item)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
|
||||
list = append(list, item)
|
||||
}
|
||||
|
||||
return list, nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) FetchNamespacedSources() ([]string, error) {
|
||||
list := make([]string, 0)
|
||||
rows, err := s.db.Query(`SELECT DISTINCT source FROM policy_report_result WHERE source != "" AND resource_namespace != "" ORDER BY source ASC`)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
var item string
|
||||
err := rows.Scan(&item)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
|
||||
list = append(list, item)
|
||||
}
|
||||
|
||||
return list, nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) FetchNamespaces(source string) ([]string, error) {
|
||||
list := make([]string, 0)
|
||||
|
||||
where, args := appendSourceWhere(source)
|
||||
if where != "" {
|
||||
where = " AND " + where
|
||||
}
|
||||
|
||||
rows, err := s.db.Query(`SELECT DISTINCT resource_namespace FROM policy_report_result WHERE resource_namespace != ""`+where+` ORDER BY resource_namespace ASC`, args...)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
var item string
|
||||
err := rows.Scan(&item)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
|
||||
list = append(list, item)
|
||||
}
|
||||
|
||||
return list, nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) FetchNamespacedStatusCounts(filter api.Filter) ([]api.NamespacedStatusCount, error) {
|
||||
var list map[string][]api.NamespaceCount
|
||||
|
||||
if len(filter.Status) == 0 {
|
||||
list = map[string][]api.NamespaceCount{
|
||||
report.Pass: make([]api.NamespaceCount, 0),
|
||||
report.Fail: make([]api.NamespaceCount, 0),
|
||||
report.Warn: make([]api.NamespaceCount, 0),
|
||||
report.Error: make([]api.NamespaceCount, 0),
|
||||
report.Skip: make([]api.NamespaceCount, 0),
|
||||
}
|
||||
} else {
|
||||
list = map[string][]api.NamespaceCount{}
|
||||
|
||||
for _, status := range filter.Status {
|
||||
list[status] = make([]api.NamespaceCount, 0)
|
||||
}
|
||||
}
|
||||
|
||||
statusCounts := make([]api.NamespacedStatusCount, 0, 5)
|
||||
|
||||
where := make([]string, 0)
|
||||
args := make([]interface{}, 0)
|
||||
|
||||
var argCounter int
|
||||
|
||||
argCounter, where, args = appendWhere(filter.Policies, "policy", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Kinds, "resource_kind", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Sources, "source", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Categories, "category", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Severities, "severity", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Status, "status", where, args, argCounter)
|
||||
_, where, args = appendWhere(filter.Namespaces, "resource_namespace", where, args, argCounter)
|
||||
|
||||
whereClause := ""
|
||||
if len(where) > 0 {
|
||||
whereClause = " AND " + strings.Join(where, " AND ")
|
||||
}
|
||||
|
||||
rows, err := s.db.Query(`
|
||||
SELECT COUNT(id) as counter, resource_namespace, status
|
||||
FROM policy_report_result WHERE resource_namespace != ""`+whereClause+`
|
||||
GROUP BY resource_namespace, status
|
||||
ORDER BY resource_namespace ASC`, args...)
|
||||
|
||||
if err != nil {
|
||||
return statusCounts, err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
count := api.NamespaceCount{}
|
||||
var status string
|
||||
err := rows.Scan(&count.Count, &count.Namespace, &status)
|
||||
if err != nil {
|
||||
return statusCounts, err
|
||||
}
|
||||
|
||||
list[status] = append(list[status], count)
|
||||
}
|
||||
|
||||
for status, items := range list {
|
||||
statusCounts = append(statusCounts, api.NamespacedStatusCount{
|
||||
Status: status,
|
||||
Items: items,
|
||||
})
|
||||
}
|
||||
|
||||
return statusCounts, nil
|
||||
}
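// Sketch (assumption, added for illustration; not part of the original change):
// a combined api.Filter narrows the grouped counts to selected sources, statuses
// and namespaces - every unset field is simply left out of the WHERE clause.
func exampleNamespacedStatusCounts(store *policyReportStore) {
	counts, err := store.FetchNamespacedStatusCounts(api.Filter{
		Sources:    []string{"Kyverno"},
		Status:     []string{report.Fail},
		Namespaces: []string{"test", "dev"},
	})
	if err != nil {
		log.Printf("[ERROR] Failed to fetch namespaced status counts: %s", err)
		return
	}

	for _, statusCount := range counts {
		// statusCount.Items holds one NamespaceCount per namespace for this status
		_ = statusCount
	}
}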
|
||||
|
||||
func (s *policyReportStore) FetchRuleStatusCounts(policy, rule string) ([]api.StatusCount, error) {
|
||||
list := map[string]api.StatusCount{
|
||||
report.Pass: {Status: report.Pass},
|
||||
report.Fail: {Status: report.Fail},
|
||||
report.Warn: {Status: report.Warn},
|
||||
report.Error: {Status: report.Error},
|
||||
report.Skip: {Status: report.Skip},
|
||||
}
|
||||
|
||||
statusCounts := make([]api.StatusCount, 0, len(list))
|
||||
|
||||
where := make([]string, 0)
|
||||
args := make([]interface{}, 0)
|
||||
|
||||
var argCounter int
|
||||
|
||||
argCounter, where, args = appendWhere([]string{policy}, "policy", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere([]string{rule}, "rule", where, args, argCounter)
|
||||
|
||||
whereClause := ""
|
||||
if len(where) > 0 {
|
||||
whereClause = " WHERE " + strings.Join(where, " AND ")
|
||||
}
|
||||
|
||||
rows, err := s.db.Query(`
|
||||
SELECT COUNT(id) as counter, status
|
||||
FROM policy_report_result`+whereClause+`
|
||||
GROUP BY status`, args...)
|
||||
|
||||
if err != nil {
|
||||
return statusCounts, err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
count := api.StatusCount{}
|
||||
err := rows.Scan(&count.Count, &count.Status)
|
||||
if err != nil {
|
||||
return statusCounts, err
|
||||
}
|
||||
|
||||
list[count.Status] = count
|
||||
}
|
||||
|
||||
for _, count := range list {
|
||||
statusCounts = append(statusCounts, count)
|
||||
}
|
||||
|
||||
return statusCounts, nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) FetchStatusCounts(filter api.Filter) ([]api.StatusCount, error) {
|
||||
var list map[string]api.StatusCount
|
||||
|
||||
if len(filter.Status) == 0 {
|
||||
list = map[string]api.StatusCount{
|
||||
report.Pass: {Status: report.Pass},
|
||||
report.Fail: {Status: report.Fail},
|
||||
report.Warn: {Status: report.Warn},
|
||||
report.Error: {Status: report.Error},
|
||||
report.Skip: {Status: report.Skip},
|
||||
}
|
||||
} else {
|
||||
list = map[string]api.StatusCount{}
|
||||
|
||||
for _, status := range filter.Status {
|
||||
list[status] = api.StatusCount{Status: status}
|
||||
}
|
||||
}
|
||||
|
||||
statusCounts := make([]api.StatusCount, 0, len(list))
|
||||
|
||||
where := make([]string, 0)
|
||||
args := make([]interface{}, 0)
|
||||
|
||||
var argCounter int
|
||||
|
||||
argCounter, where, args = appendWhere(filter.Policies, "policy", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Kinds, "resource_kind", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Sources, "source", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Categories, "category", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Severities, "severity", where, args, argCounter)
|
||||
_, where, args = appendWhere(filter.Status, "status", where, args, argCounter)
|
||||
|
||||
whereClause := ""
|
||||
if len(where) > 0 {
|
||||
whereClause = " AND " + strings.Join(where, " AND ")
|
||||
}
|
||||
|
||||
rows, err := s.db.Query(`
|
||||
SELECT COUNT(id) as counter, status
|
||||
FROM policy_report_result WHERE resource_namespace = ""`+whereClause+`
|
||||
GROUP BY status`, args...)
|
||||
|
||||
if err != nil {
|
||||
return statusCounts, err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
count := api.StatusCount{}
|
||||
err := rows.Scan(&count.Count, &count.Status)
|
||||
if err != nil {
|
||||
return statusCounts, err
|
||||
}
|
||||
|
||||
list[count.Status] = count
|
||||
}
|
||||
|
||||
for _, count := range list {
|
||||
statusCounts = append(statusCounts, count)
|
||||
}
|
||||
|
||||
return statusCounts, nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) FetchNamespacedResults(filter api.Filter) ([]*api.ListResult, error) {
|
||||
list := []*api.ListResult{}
|
||||
|
||||
where := make([]string, 0)
|
||||
args := make([]interface{}, 0)
|
||||
|
||||
var argCounter int
|
||||
|
||||
argCounter, where, args = appendWhere(filter.Policies, "policy", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Kinds, "resource_kind", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Sources, "source", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Categories, "category", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Severities, "severity", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Status, "status", where, args, argCounter)
|
||||
_, where, args = appendWhere(filter.Namespaces, "resource_namespace", where, args, argCounter)
|
||||
|
||||
whereClause := ""
|
||||
if len(where) > 0 {
|
||||
whereClause = " AND " + strings.Join(where, " AND ")
|
||||
}
|
||||
|
||||
rows, err := s.db.Query(`
|
||||
SELECT id, resource_namespace, resource_kind, resource_name, message, policy, rule, severity, properties, status
|
||||
FROM policy_report_result WHERE resource_namespace != ""`+whereClause+`
|
||||
ORDER BY resource_namespace, resource_name, resource_uid ASC`, args...)
|
||||
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
result := api.ListResult{}
|
||||
var props []byte
|
||||
|
||||
err := rows.Scan(&result.ID, &result.Namespace, &result.Kind, &result.Name, &result.Message, &result.Policy, &result.Rule, &result.Severity, &props, &result.Status)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
|
||||
json.Unmarshal(props, &result.Properties)
|
||||
|
||||
list = append(list, &result)
|
||||
}
|
||||
|
||||
return list, nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) FetchClusterResults(filter api.Filter) ([]*api.ListResult, error) {
|
||||
list := []*api.ListResult{}
|
||||
|
||||
where := make([]string, 0)
|
||||
args := make([]interface{}, 0)
|
||||
|
||||
var argCounter int
|
||||
|
||||
argCounter, where, args = appendWhere(filter.Policies, "policy", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Kinds, "resource_kind", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Sources, "source", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Categories, "category", where, args, argCounter)
|
||||
argCounter, where, args = appendWhere(filter.Severities, "severity", where, args, argCounter)
|
||||
_, where, args = appendWhere(filter.Status, "status", where, args, argCounter)
|
||||
|
||||
whereClause := ""
|
||||
if len(where) > 0 {
|
||||
whereClause = " AND " + strings.Join(where, " AND ")
|
||||
}
|
||||
|
||||
rows, err := s.db.Query(`
|
||||
SELECT id, resource_namespace, resource_kind, resource_name, message, policy, rule, severity, properties, status
|
||||
FROM policy_report_result WHERE resource_namespace = ""`+whereClause+`
|
||||
ORDER BY resource_namespace, resource_name, resource_uid ASC`, args...)
|
||||
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
result := api.ListResult{}
|
||||
var props []byte
|
||||
|
||||
err := rows.Scan(&result.ID, &result.Namespace, &result.Kind, &result.Name, &result.Message, &result.Policy, &result.Rule, &result.Severity, &props, &result.Status)
|
||||
if err != nil {
|
||||
return list, err
|
||||
}
|
||||
|
||||
json.Unmarshal(props, &result.Properties)
|
||||
|
||||
list = append(list, &result)
|
||||
}
|
||||
|
||||
return list, nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) persistResults(report *report.PolicyReport) error {
|
||||
for _, result := range report.Results {
|
||||
rstmt, err := s.db.Prepare("INSERT INTO policy_report_result(policy_report_id, id, policy, rule, message, scored, priority, status, severity, category, source, resource_api_version, resource_kind, resource_name, resource_namespace, resource_uid, properties, timestamp) values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer rstmt.Close()
|
||||
|
||||
var props string
|
||||
|
||||
b, err := json.Marshal(result.Properties)
|
||||
if err == nil {
|
||||
props = string(b)
|
||||
}
|
||||
|
||||
_, err = rstmt.Exec(
|
||||
report.GetIdentifier(),
|
||||
result.GetIdentifier(),
|
||||
result.Policy,
|
||||
result.Rule,
|
||||
result.Message,
|
||||
result.Scored,
|
||||
result.Priority,
|
||||
result.Status,
|
||||
result.Severity,
|
||||
result.Category,
|
||||
result.Source,
|
||||
result.Resource.APIVersion,
|
||||
result.Resource.Kind,
|
||||
result.Resource.Name,
|
||||
result.Resource.Namespace,
|
||||
result.Resource.UID,
|
||||
props,
|
||||
result.Timestamp.Unix(),
|
||||
)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *policyReportStore) fetchResults(reportID string) (map[string]*report.Result, error) {
|
||||
results := make(map[string]*report.Result)
|
||||
|
||||
rows, err := s.db.Query(`
|
||||
SELECT
|
||||
id,
|
||||
policy,
|
||||
rule,
|
||||
message,
|
||||
scored,
|
||||
priority,
|
||||
status,
|
||||
severity,
|
||||
category,
|
||||
source,
|
||||
resource_api_version,
|
||||
resource_kind,
|
||||
resource_name,
|
||||
resource_namespace,
|
||||
resource_uid,
|
||||
properties,
|
||||
timestamp
|
||||
FROM policy_report_result
|
||||
WHERE policy_report_id=$1
|
||||
`, reportID)
|
||||
if err != nil {
|
||||
return results, err
|
||||
}
|
||||
defer rows.Close()
|
||||
|
||||
var props []byte
|
||||
var timestamp int64
|
||||
|
||||
for rows.Next() {
|
||||
result := &report.Result{
|
||||
Resource: &report.Resource{},
|
||||
}
|
||||
|
||||
err = rows.Scan(
|
||||
&result.ID,
|
||||
&result.Policy,
|
||||
&result.Rule,
|
||||
&result.Message,
|
||||
&result.Scored,
|
||||
&result.Priority,
|
||||
&result.Status,
|
||||
&result.Severity,
|
||||
&result.Category,
|
||||
&result.Source,
|
||||
&result.Resource.APIVersion,
|
||||
&result.Resource.Kind,
|
||||
&result.Resource.Name,
|
||||
&result.Resource.Namespace,
|
||||
&result.Resource.UID,
|
||||
&props,
|
||||
×tamp,
|
||||
)
|
||||
if err != nil {
|
||||
return results, err
|
||||
}
|
||||
|
||||
err = json.Unmarshal(props, &result.Properties)
|
||||
if err != nil {
|
||||
result.Properties = make(map[string]string)
|
||||
}
|
||||
|
||||
result.Timestamp = time.Unix(timestamp, 0)
|
||||
|
||||
results[result.GetIdentifier()] = result
|
||||
}
|
||||
|
||||
return results, nil
|
||||
}
|
||||
|
||||
func appendWhere(options []string, field string, where []string, args []interface{}, argCounter int) (int, []string, []interface{}) {
|
||||
length := len(options)
|
||||
|
||||
if length == 0 {
|
||||
return argCounter, where, args
|
||||
}
|
||||
|
||||
if length == 1 {
|
||||
option := options[0]
|
||||
argCounter++
|
||||
|
||||
args = append(args, strings.ToLower(option))
|
||||
|
||||
where = append(where, fmt.Sprintf("LOWER(%s)=$%d", field, argCounter))
|
||||
|
||||
return argCounter, where, args
|
||||
}
|
||||
|
||||
arguments := make([]string, 0, length)
|
||||
|
||||
for _, option := range options {
|
||||
argCounter++
|
||||
|
||||
arguments = append(arguments, fmt.Sprintf("$%d", argCounter))
|
||||
args = append(args, strings.ToLower(option))
|
||||
}
|
||||
|
||||
where = append(where, "LOWER("+field+") IN ("+strings.Join(arguments, ",")+")")
|
||||
|
||||
return argCounter, where, args
|
||||
}
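// Illustration (assumption, added for clarity; not part of the original change):
// successive appendWhere calls collect one SQL fragment per filtered field plus
// the matching lowercased arguments, which callers join into the WHERE clause.
func exampleAppendWhere() {
	where := make([]string, 0)
	args := make([]interface{}, 0)

	argCounter, where, args := appendWhere([]string{"Kyverno"}, "source", where, args, 0)
	_, where, args = appendWhere([]string{report.Pass, report.Fail}, "status", where, args, argCounter)

	// where -> ["LOWER(source)=$1", "LOWER(status) IN ($2,$3)"]
	// args  -> the lowercased option values, in the same order as the placeholders
	_ = strings.Join(where, " AND ")
	_ = args
}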
|
||||
|
||||
func appendSourceWhere(source string) (string, []interface{}) {
|
||||
if source == "" {
|
||||
return "", make([]interface{}, 0)
|
||||
}
|
||||
|
||||
return "LOWER(source)=$1", []interface{}{strings.ToLower(source)}
|
||||
}
|
||||
|
||||
// NewPolicyReportStore constructs a PolicyReportStore
|
||||
func NewPolicyReportStore(db *sql.DB) (PolicyReportStore, error) {
|
||||
var err error
|
||||
|
||||
s := &policyReportStore{db}
|
||||
if db != nil {
|
||||
err = s.CreateSchemas()
|
||||
}
|
||||
|
||||
return s, err
|
||||
}
|
||||
|
||||
func NewDatabase(dbFile string) (*sql.DB, error) {
|
||||
os.Remove(dbFile)
|
||||
file, err := os.Create(dbFile)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
file.Close()
|
||||
|
||||
return sql.Open("sqlite3", dbFile)
|
||||
}
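// Usage sketch (assumption, added for illustration; the file name is an example):
// NewDatabase recreates the SQLite file on startup, NewPolicyReportStore creates
// the schema on top of it, and PolicyReports can then be persisted and queried.
func exampleStoreSetup(policyReport *report.PolicyReport) error {
	db, err := NewDatabase("sqlite-database.db")
	if err != nil {
		return err
	}
	defer db.Close()

	store, err := NewPolicyReportStore(db)
	if err != nil {
		return err
	}

	return store.Add(policyReport)
}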
|
476 pkg/sqlite3/store_test.go (Normal file)
|
@ -0,0 +1,476 @@
|
|||
package sqlite3_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
v1 "github.com/kyverno/policy-reporter/pkg/api/v1"
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/kyverno/policy-reporter/pkg/sqlite3"
|
||||
)
|
||||
|
||||
var result1 = &report.Result{
|
||||
ID: "123",
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "require-requests-and-limits-required",
|
||||
Rule: "autogen-check-for-requests-and-limits",
|
||||
Priority: report.ErrorPriority,
|
||||
Status: report.Fail,
|
||||
Category: "resources",
|
||||
Severity: report.High,
|
||||
Scored: true,
|
||||
Source: "Kyverno",
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Deployment",
|
||||
Name: "nginx",
|
||||
Namespace: "test",
|
||||
UID: "536ab69f-1b3c-4bd9-9ba4-274a56188409",
|
||||
},
|
||||
}
|
||||
|
||||
var result2 = &report.Result{
|
||||
ID: "124",
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "require-requests-and-limits-required",
|
||||
Rule: "autogen-check-for-requests-and-limits",
|
||||
Priority: report.WarningPriority,
|
||||
Status: report.Pass,
|
||||
Category: "Best Practices",
|
||||
Scored: true,
|
||||
Source: "Kyverno",
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Pod",
|
||||
Name: "nginx",
|
||||
Namespace: "test",
|
||||
UID: "536ab69f-1b3c-4bd9-9ba4-274a56188419",
|
||||
},
|
||||
}
|
||||
|
||||
var cresult1 = &report.Result{
|
||||
ID: "125",
|
||||
Message: "validation error: The label `test` is required. Rule check-for-labels-on-namespace",
|
||||
Policy: "require-ns-labels",
|
||||
Rule: "check-for-labels-on-namespace",
|
||||
Priority: report.ErrorPriority,
|
||||
Status: report.Pass,
|
||||
Category: "namespaces",
|
||||
Severity: report.Medium,
|
||||
Scored: true,
|
||||
Source: "Kyverno",
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Namespace",
|
||||
Name: "test",
|
||||
UID: "536ab69f-1b3c-4bd9-9ba4-274a56188411",
|
||||
},
|
||||
}
|
||||
|
||||
var cresult2 = &report.Result{
|
||||
ID: "126",
|
||||
Message: "validation error: The label `test` is required. Rule check-for-labels-on-namespace",
|
||||
Policy: "require-ns-labels",
|
||||
Rule: "check-for-labels-on-namespace",
|
||||
Priority: report.WarningPriority,
|
||||
Status: report.Fail,
|
||||
Category: "namespaces",
|
||||
Severity: report.High,
|
||||
Scored: true,
|
||||
Source: "Kyverno",
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Namespace",
|
||||
Name: "dev",
|
||||
UID: "536ab69f-1b3c-4bd9-9ba4-274a56188412",
|
||||
},
|
||||
}
|
||||
|
||||
var preport = &report.PolicyReport{
|
||||
ID: report.GeneratePolicyReportID("polr-test", "test"),
|
||||
Name: "polr-test",
|
||||
Namespace: "test",
|
||||
Results: map[string]*report.Result{
|
||||
result1.GetIdentifier(): result1,
|
||||
},
|
||||
Summary: &report.Summary{Fail: 1},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
||||
|
||||
var ureport = &report.PolicyReport{
|
||||
ID: report.GeneratePolicyReportID("polr-test", "test"),
|
||||
Name: "polr-test",
|
||||
Namespace: "test",
|
||||
Results: map[string]*report.Result{
|
||||
result1.GetIdentifier(): result1,
|
||||
result2.GetIdentifier(): result2,
|
||||
},
|
||||
Summary: &report.Summary{Fail: 1, Pass: 1},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
||||
|
||||
var creport = &report.PolicyReport{
|
||||
ID: report.GeneratePolicyReportID("cpolr", ""),
|
||||
Name: "cpolr",
|
||||
Results: map[string]*report.Result{
|
||||
cresult1.GetIdentifier(): cresult1,
|
||||
cresult2.GetIdentifier(): cresult2,
|
||||
},
|
||||
Summary: &report.Summary{},
|
||||
CreationTimestamp: time.Now(),
|
||||
}
|
||||
|
||||
func Test_PolicyReportStore(t *testing.T) {
|
||||
db, _ := sqlite3.NewDatabase("test.db")
|
||||
defer db.Close()
|
||||
store, _ := sqlite3.NewPolicyReportStore(db)
|
||||
|
||||
t.Run("Add/Get/Update PolicyReport", func(t *testing.T) {
|
||||
_, ok := store.Get(preport.GetIdentifier())
|
||||
if ok == true {
|
||||
t.Fatalf("Should not be found in empty Store")
|
||||
}
|
||||
|
||||
store.Add(preport)
|
||||
r1, ok := store.Get(preport.GetIdentifier())
|
||||
if ok == false {
|
||||
t.Errorf("Should be found in Store after adding report to the store")
|
||||
}
|
||||
if r1.Summary.Pass != 0 {
|
||||
t.Errorf("Expected 0 Passed Results in Summary")
|
||||
}
|
||||
|
||||
store.Update(ureport)
|
||||
r2, _ := store.Get(preport.GetIdentifier())
|
||||
if r2.Summary.Pass != 1 {
|
||||
t.Errorf("Expected 1 Passed Results in Summary after Update")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("Add/Get ClusterPolicyReport", func(t *testing.T) {
|
||||
_, ok := store.Get(creport.GetIdentifier())
|
||||
if ok == true {
|
||||
t.Fatalf("Should not be found in empty Store")
|
||||
}
|
||||
|
||||
store.Add(creport)
|
||||
_, ok = store.Get(creport.GetIdentifier())
|
||||
if ok == false {
|
||||
t.Errorf("Should be found in Store after adding report to the store")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchNamespacedKinds", func(t *testing.T) {
|
||||
items, err := store.FetchNamespacedKinds("kyverno")
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
if len(items) != 2 {
|
||||
t.Fatalf("Should Find 2 Kinds with Namespace Scope")
|
||||
}
|
||||
if items[0] != "Deployment" {
|
||||
t.Errorf("Should return 'Deployment' as first result")
|
||||
}
|
||||
if items[1] != "Pod" {
|
||||
t.Errorf("Should return 'Pod' as second result")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchClusterKinds", func(t *testing.T) {
|
||||
items, err := store.FetchClusterKinds("kyverno")
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
if len(items) != 1 {
|
||||
t.Fatalf("Should find 1 kind with cluster scope")
|
||||
}
|
||||
if items[0] != "Namespace" {
|
||||
t.Errorf("Should return 'Namespace' as first result")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchNamespacedStatusCounts", func(t *testing.T) {
|
||||
items, err := store.FetchNamespacedStatusCounts(v1.Filter{})
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
if len(items) != 5 {
|
||||
t.Fatalf("Should include 1 item per possible status")
|
||||
}
|
||||
|
||||
var passed v1.NamespacedStatusCount
|
||||
var failed v1.NamespacedStatusCount
|
||||
for _, item := range items {
|
||||
if item.Status == report.Pass {
|
||||
passed = item
|
||||
}
|
||||
if item.Status == report.Fail {
|
||||
failed = item
|
||||
}
|
||||
}
|
||||
|
||||
if passed.Status != report.Pass {
|
||||
t.Errorf("Expected Pass Counts as first item")
|
||||
}
|
||||
if passed.Items[0].Count != 1 {
|
||||
t.Errorf("Expected count to be one for pass")
|
||||
}
|
||||
|
||||
if failed.Status != report.Fail {
|
||||
t.Errorf("Expected Pass Counts as first item")
|
||||
}
|
||||
if failed.Items[0].Count != 1 {
|
||||
t.Errorf("Expected count to be one for fail")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchNamespacedStatusCounts with StatusFilter", func(t *testing.T) {
|
||||
items, err := store.FetchNamespacedStatusCounts(v1.Filter{Status: []string{report.Pass}})
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
if len(items) != 1 {
|
||||
t.Fatalf("Should have only 1 item for pass counts")
|
||||
}
|
||||
if items[0].Status != report.Pass {
|
||||
t.Errorf("Expected Pass Counts")
|
||||
}
|
||||
if items[0].Items[0].Count != 1 {
|
||||
t.Errorf("Expected count to be one for pass")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchRuleStatusCounts", func(t *testing.T) {
|
||||
items, err := store.FetchRuleStatusCounts("require-requests-and-limits-required", "autogen-check-for-requests-and-limits")
|
||||
var passed v1.StatusCount
|
||||
var failed v1.StatusCount
|
||||
for _, item := range items {
|
||||
if item.Status == report.Pass {
|
||||
passed = item
|
||||
}
|
||||
if item.Status == report.Fail {
|
||||
failed = item
|
||||
}
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
if passed.Count != 1 {
|
||||
t.Errorf("Expected count to be one for pass")
|
||||
}
|
||||
|
||||
if failed.Count != 1 {
|
||||
t.Errorf("Expected count to be one for fail")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchStatusCounts", func(t *testing.T) {
|
||||
items, err := store.FetchStatusCounts(v1.Filter{})
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
var passed v1.StatusCount
|
||||
var failed v1.StatusCount
|
||||
for _, item := range items {
|
||||
if item.Status == report.Pass {
|
||||
passed = item
|
||||
}
|
||||
if item.Status == report.Fail {
|
||||
failed = item
|
||||
}
|
||||
}
|
||||
if len(items) != 5 {
|
||||
t.Fatalf("Should include 1 item per possible status")
|
||||
}
|
||||
if passed.Count != 1 {
|
||||
t.Errorf("Expected count to be one for pass")
|
||||
}
|
||||
if failed.Count != 1 {
|
||||
t.Errorf("Expected count to be one for fail")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchStatusCounts with StatusFilter", func(t *testing.T) {
|
||||
items, err := store.FetchStatusCounts(v1.Filter{Status: []string{report.Pass}})
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
if len(items) != 1 {
|
||||
t.Fatalf("Should have only 1 item for pass counts")
|
||||
}
|
||||
if items[0].Status != report.Pass {
|
||||
t.Errorf("Expected Pass Counts")
|
||||
}
|
||||
if items[0].Count != 1 {
|
||||
t.Errorf("Expected count to be one for pass")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchNamespacedResults", func(t *testing.T) {
|
||||
items, err := store.FetchNamespacedResults(v1.Filter{Namespaces: []string{"test"}})
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
|
||||
if len(items) != 2 {
|
||||
t.Fatalf("Should return 2 namespaced results")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchNamespacedResults with SeverityFilter", func(t *testing.T) {
|
||||
items, err := store.FetchNamespacedResults(v1.Filter{Severities: []string{report.High}})
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
|
||||
if len(items) != 1 {
|
||||
t.Fatalf("Should return 1 namespaced result")
|
||||
}
|
||||
if items[0].Severity != report.High {
|
||||
t.Fatalf("result with severity high")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchClusterResults", func(t *testing.T) {
|
||||
items, err := store.FetchClusterResults(v1.Filter{Status: []string{report.Pass, report.Fail}})
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
|
||||
if len(items) != 2 {
|
||||
t.Fatalf("Should return 2 cluster results")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchClusterResults with SeverityFilter", func(t *testing.T) {
|
||||
items, err := store.FetchClusterResults(v1.Filter{Severities: []string{report.High}})
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
|
||||
if len(items) != 1 {
|
||||
t.Fatalf("Should return 1 namespaced result")
|
||||
}
|
||||
if items[0].Severity != report.High {
|
||||
t.Fatalf("result with severity high")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchStatusCounts with StatusFilter", func(t *testing.T) {
|
||||
items, err := store.FetchStatusCounts(v1.Filter{Status: []string{report.Pass}})
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
if len(items) != 1 {
|
||||
t.Fatalf("Should have only 1 item for pass counts")
|
||||
}
|
||||
if items[0].Status != report.Pass {
|
||||
t.Errorf("Expected Pass Counts")
|
||||
}
|
||||
if items[0].Count != 1 {
|
||||
t.Errorf("Expected count to be one for pass")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchNamespaces", func(t *testing.T) {
|
||||
items, err := store.FetchNamespaces("kyverno")
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
if len(items) != 1 {
|
||||
t.Errorf("Should find 1 Namespace")
|
||||
}
|
||||
if items[0] != "test" {
|
||||
t.Errorf("Should return test namespace")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchCategories", func(t *testing.T) {
|
||||
items, err := store.FetchCategories("kyverno")
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
if len(items) != 3 {
|
||||
t.Errorf("Should Find 2 Categories")
|
||||
}
|
||||
if items[0] != "Best Practices" {
|
||||
t.Errorf("Should return 'Best Practices' as first category")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchClusterPolicies", func(t *testing.T) {
|
||||
items, err := store.FetchClusterPolicies("kyverno")
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
if len(items) != 1 {
|
||||
t.Errorf("Should Find 1 cluster scoped Policy")
|
||||
}
|
||||
if items[0] != "require-ns-labels" {
|
||||
t.Errorf("Should return 'require-ns-labels' policy")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchNamespacedPolicies", func(t *testing.T) {
|
||||
items, err := store.FetchNamespacedPolicies("kyverno")
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
if len(items) != 1 {
|
||||
t.Errorf("Should find 1 namespace scoped policy")
|
||||
}
|
||||
if items[0] != "require-requests-and-limits-required" {
|
||||
t.Errorf("Should return 'require-requests-and-limits-required' policy")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchClusterSources", func(t *testing.T) {
|
||||
items, err := store.FetchClusterSources()
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
if len(items) != 1 {
|
||||
t.Errorf("Should find 1 Source")
|
||||
}
|
||||
if items[0] != "Kyverno" {
|
||||
t.Errorf("Should return Kyverno")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("FetchNamespacedSources", func(t *testing.T) {
|
||||
items, err := store.FetchNamespacedSources()
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected Error: %s", err)
|
||||
}
|
||||
if len(items) != 1 {
|
||||
t.Errorf("Should find 1 Source")
|
||||
}
|
||||
if items[0] != "Kyverno" {
|
||||
t.Errorf("Should return Kyverno")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("Delete/Get", func(t *testing.T) {
|
||||
_, ok := store.Get(preport.GetIdentifier())
|
||||
if ok == false {
|
||||
t.Errorf("Should be found in Store after adding report to the store")
|
||||
}
|
||||
|
||||
store.Remove(preport.GetIdentifier())
|
||||
_, ok = store.Get(preport.GetIdentifier())
|
||||
if ok == true {
|
||||
t.Fatalf("Should not be found after Remove report from Store")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("CleanUp", func(t *testing.T) {
|
||||
store.Add(preport)
|
||||
|
||||
store.CleanUp()
|
||||
list, _ := store.FetchNamespacedResults(v1.Filter{})
|
||||
if len(list) == 1 {
|
||||
t.Fatalf("Should have no results after CleanUp")
|
||||
}
|
||||
})
|
||||
}
|
|
@ -1,17 +1,67 @@
|
|||
package target
|
||||
|
||||
import (
|
||||
"strings"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
)
|
||||
|
||||
// Client for a provided Target
|
||||
type Client interface {
|
||||
// Send the given Result to the configured Target
|
||||
Send(result report.Result)
|
||||
Send(result *report.Result)
|
||||
// SkipExistingOnStartup indicates whether already existing PolicyReportResults should be skipped on startup
|
||||
SkipExistingOnStartup() bool
|
||||
// Name is a unique identifier for each Target
|
||||
Name() string
|
||||
// Validate checks if a result should be sent to this target
|
||||
Validate(result *report.Result) bool
|
||||
// MinimumPriority for a triggered Result to send to this target
|
||||
MinimumPriority() string
|
||||
// Sources of the Results which should be sent to this target; an empty list means all sources
|
||||
Sources() []string
|
||||
}
|
||||
|
||||
type BaseClient struct {
|
||||
minimumPriority string
|
||||
sources []string
|
||||
skipExistingOnStartup bool
|
||||
}
|
||||
|
||||
func (c *BaseClient) MinimumPriority() string {
|
||||
return c.minimumPriority
|
||||
}
|
||||
|
||||
func (c *BaseClient) Sources() []string {
|
||||
return c.sources
|
||||
}
|
||||
|
||||
func (c *BaseClient) SkipExistingOnStartup() bool {
|
||||
return c.skipExistingOnStartup
|
||||
}
|
||||
|
||||
func (c *BaseClient) Validate(result *report.Result) bool {
|
||||
if result.Priority < report.NewPriority(c.minimumPriority) {
|
||||
return false
|
||||
}
|
||||
|
||||
if len(c.sources) > 0 && !contains(result.Source, c.sources) {
|
||||
return false
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
func contains(source string, sources []string) bool {
|
||||
for _, s := range sources {
|
||||
if strings.EqualFold(s, source) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
func NewBaseClient(minimumPriority string, sources []string, skipExistingOnStartup bool) BaseClient {
|
||||
return BaseClient{minimumPriority, sources, skipExistingOnStartup}
|
||||
}
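// Sketch (assumption, added for illustration; not part of the original change):
// a custom target embeds BaseClient to reuse MinimumPriority, Sources,
// SkipExistingOnStartup and Validate, and only implements Send and Name itself.
type inMemoryClient struct {
	BaseClient
	results []*report.Result
}

func (c *inMemoryClient) Send(result *report.Result) {
	c.results = append(c.results, result)
}

func (c *inMemoryClient) Name() string {
	return "InMemory"
}

// newInMemoryClient keeps Kyverno results with at least "warning" priority
// (priority name assumed) and skips existing results on startup.
func newInMemoryClient() *inMemoryClient {
	return &inMemoryClient{BaseClient: NewBaseClient("warning", []string{"Kyverno"}, true)}
}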
|
||||
|
|
75 pkg/target/client_test.go (Normal file)
|
@ -0,0 +1,75 @@
|
|||
package target_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/kyverno/policy-reporter/pkg/target"
|
||||
)
|
||||
|
||||
var result = &report.Result{
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "require-requests-and-limits-required",
|
||||
Rule: "autogen-check-for-requests-and-limits",
|
||||
Priority: report.WarningPriority,
|
||||
Status: report.Fail,
|
||||
Severity: report.High,
|
||||
Category: "resources",
|
||||
Scored: true,
|
||||
Source: "Kyverno",
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Deployment",
|
||||
Name: "nginx",
|
||||
Namespace: "default",
|
||||
UID: "536ab69f-1b3c-4bd9-9ba4-274a56188409",
|
||||
},
|
||||
}
|
||||
|
||||
func Test_BaseClient(t *testing.T) {
|
||||
t.Run("Validate Default", func(t *testing.T) {
|
||||
client := target.NewBaseClient("", []string{}, false)
|
||||
|
||||
if !client.Validate(result) {
|
||||
t.Errorf("Unexpected Validation Result")
|
||||
}
|
||||
})
|
||||
t.Run("Validate MinimumPriority", func(t *testing.T) {
|
||||
client := target.NewBaseClient("error", []string{}, false)
|
||||
|
||||
if client.Validate(result) {
|
||||
t.Errorf("Unexpected Validation Result")
|
||||
}
|
||||
})
|
||||
t.Run("Validate Source", func(t *testing.T) {
|
||||
client := target.NewBaseClient("", []string{"jsPolicy"}, false)
|
||||
|
||||
if client.Validate(result) {
|
||||
t.Errorf("Unexpected Validation Result")
|
||||
}
|
||||
})
|
||||
t.Run("SkipExistingOnStartup", func(t *testing.T) {
|
||||
client := target.NewBaseClient("", []string{}, true)
|
||||
|
||||
if !client.SkipExistingOnStartup() {
|
||||
t.Error("Should return configured SkipExistingOnStartup")
|
||||
}
|
||||
})
|
||||
t.Run("MinimumPriority", func(t *testing.T) {
|
||||
client := target.NewBaseClient("error", []string{}, true)
|
||||
|
||||
if client.MinimumPriority() != "error" {
|
||||
t.Error("Should return configured MinimumPriority")
|
||||
}
|
||||
})
|
||||
t.Run("Sources", func(t *testing.T) {
|
||||
client := target.NewBaseClient("", []string{"Kyverno"}, true)
|
||||
|
||||
if len(client.Sources()) != 1 {
|
||||
t.Fatal("Unexpected length of Sources")
|
||||
}
|
||||
if client.Sources()[0] != "Kyverno" {
|
||||
t.Error("Unexptected Source returned")
|
||||
}
|
||||
})
|
||||
}
|
|
@ -1,15 +1,12 @@
|
|||
package discord
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"log"
|
||||
"net/http"
|
||||
"strings"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/helper"
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/kyverno/policy-reporter/pkg/target"
|
||||
"github.com/kyverno/policy-reporter/pkg/target/helper"
|
||||
)
|
||||
|
||||
type payload struct {
|
||||
|
@ -30,20 +27,16 @@ type embedField struct {
|
|||
Inline bool `json:"inline"`
|
||||
}
|
||||
|
||||
func newPayload(result report.Result) payload {
|
||||
var color string
|
||||
switch result.Priority {
|
||||
case report.CriticalPriority:
|
||||
color = "15158332"
|
||||
case report.ErrorPriority:
|
||||
color = "15158332"
|
||||
case report.WarningPriority:
|
||||
color = "15105570"
|
||||
case report.InfoPriority:
|
||||
color = "3066993"
|
||||
case report.DebugPriority:
|
||||
color = "12370112"
|
||||
}
|
||||
var colors = map[report.Priority]string{
|
||||
report.DebugPriority: "12370112",
|
||||
report.InfoPriority: "3066993",
|
||||
report.WarningPriority: "15105570",
|
||||
report.CriticalPriority: "15158332",
|
||||
report.ErrorPriority: "15158332",
|
||||
}
|
||||
|
||||
func newPayload(result *report.Result) payload {
|
||||
color := colors[result.Priority]
|
||||
|
||||
embedFields := make([]embedField, 0)
|
||||
|
||||
|
@ -94,28 +87,14 @@ type httpClient interface {
|
|||
}
|
||||
|
||||
type client struct {
|
||||
target.BaseClient
|
||||
webhook string
|
||||
minimumPriority string
|
||||
skipExistingOnStartup bool
|
||||
client httpClient
|
||||
}
|
||||
|
||||
func (d *client) Send(result report.Result) {
|
||||
if result.Priority < report.NewPriority(d.minimumPriority) {
|
||||
return
|
||||
}
|
||||
|
||||
payload := newPayload(result)
|
||||
body := new(bytes.Buffer)
|
||||
|
||||
if err := json.NewEncoder(body).Encode(payload); err != nil {
|
||||
log.Printf("[ERROR] DISCORD : %v\n", err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
req, err := http.NewRequest("POST", d.webhook, body)
|
||||
func (d *client) Send(result *report.Result) {
|
||||
req, err := helper.CreateJSONRequest(d.Name(), "POST", d.webhook, newPayload(result))
|
||||
if err != nil {
|
||||
log.Printf("[ERROR] DISCORD : %v\n", err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
|
@ -123,27 +102,18 @@ func (d *client) Send(result report.Result) {
|
|||
req.Header.Add("User-Agent", "Policy-Reporter")
|
||||
|
||||
resp, err := d.client.Do(req)
|
||||
helper.HandleHTTPResponse("DISCORD", resp, err)
|
||||
}
|
||||
|
||||
func (d *client) SkipExistingOnStartup() bool {
|
||||
return d.skipExistingOnStartup
|
||||
helper.ProcessHTTPResponse(d.Name(), resp, err)
|
||||
}
|
||||
|
||||
func (d *client) Name() string {
|
||||
return "Discord"
|
||||
}
|
||||
|
||||
func (d *client) MinimumPriority() string {
|
||||
return d.minimumPriority
|
||||
}
|
||||
|
||||
// NewClient creates a new discord.client to send Results to Discord
|
||||
func NewClient(webhook, minimumPriority string, skipExistingOnStartup bool, httpClient httpClient) target.Client {
|
||||
func NewClient(webhook, minimumPriority string, sources []string, skipExistingOnStartup bool, httpClient httpClient) target.Client {
|
||||
return &client{
|
||||
target.NewBaseClient(minimumPriority, sources, skipExistingOnStartup),
|
||||
webhook,
|
||||
minimumPriority,
|
||||
skipExistingOnStartup,
|
||||
httpClient,
|
||||
}
|
||||
}
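// Usage sketch (assumption, added for illustration): the webhook URL is a
// placeholder, "warning" is an assumed minimum priority name, and the standard
// *http.Client satisfies the httpClient interface expected by NewClient.
func exampleDiscordTarget() {
	c := NewClient("https://discord.example/webhook", "warning", []string{"Kyverno"}, false, &http.Client{})

	result := &report.Result{
		Message:  "validation error: requests and limits required",
		Policy:   "require-requests-and-limits-required",
		Priority: report.WarningPriority,
		Status:   report.Fail,
		Source:   "Kyverno",
		Resource: &report.Resource{Kind: "Deployment", Name: "nginx", Namespace: "default"},
	}

	if c.Validate(result) {
		c.Send(result)
	}
}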
|
||||
|
|
|
@ -9,7 +9,7 @@ import (
|
|||
"github.com/kyverno/policy-reporter/pkg/target/discord"
|
||||
)
|
||||
|
||||
var completeResult = report.Result{
|
||||
var completeResult = &report.Result{
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "require-requests-and-limits-required",
|
||||
Rule: "autogen-check-for-requests-and-limits",
|
||||
|
@ -19,7 +19,8 @@ var completeResult = report.Result{
|
|||
Severity: report.High,
|
||||
Category: "resources",
|
||||
Scored: true,
|
||||
Resource: report.Resource{
|
||||
Source: "Kyverno",
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Deployment",
|
||||
Name: "nginx",
|
||||
|
@ -29,10 +30,10 @@ var completeResult = report.Result{
|
|||
Properties: map[string]string{"version": "1.2.0"},
|
||||
}
|
||||
|
||||
var minimalResult = report.Result{
|
||||
var minimalResult = &report.Result{
|
||||
Message: "validation error: label required. Rule app-label-required failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "app-label-requirement",
|
||||
Priority: report.WarningPriority,
|
||||
Priority: report.CriticalPriority,
|
||||
Status: report.Fail,
|
||||
Scored: true,
|
||||
}
|
||||
|
@ -66,7 +67,7 @@ func Test_LokiTarget(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
client := discord.NewClient("http://hook.discord:80", "", false, testClient{callback, 200})
|
||||
client := discord.NewClient("http://hook.discord:80", "", []string{}, false, testClient{callback, 200})
|
||||
client.Send(completeResult)
|
||||
})
|
||||
|
||||
|
@ -85,40 +86,14 @@ func Test_LokiTarget(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
client := discord.NewClient("http://hook.discord:80", "", false, testClient{callback, 200})
|
||||
client := discord.NewClient("http://hook.discord:80", "", []string{}, false, testClient{callback, 200})
|
||||
client.Send(minimalResult)
|
||||
})
|
||||
t.Run("Send with ingored Priority", func(t *testing.T) {
|
||||
callback := func(req *http.Request) {
|
||||
t.Errorf("Unexpected Call")
|
||||
}
|
||||
|
||||
client := discord.NewClient("http://localhost:9200", "error", false, testClient{callback, 200})
|
||||
client.Send(completeResult)
|
||||
})
|
||||
t.Run("SkipExistingOnStartup", func(t *testing.T) {
|
||||
callback := func(req *http.Request) {
|
||||
t.Errorf("Unexpected Call")
|
||||
}
|
||||
|
||||
client := discord.NewClient("http://localhost:9200", "", true, testClient{callback, 200})
|
||||
|
||||
if !client.SkipExistingOnStartup() {
|
||||
t.Error("Should return configured SkipExistingOnStartup")
|
||||
}
|
||||
})
|
||||
t.Run("Name", func(t *testing.T) {
|
||||
client := discord.NewClient("http://localhost:9200", "", true, testClient{})
|
||||
client := discord.NewClient("http://localhost:9200", "", []string{}, true, testClient{})
|
||||
|
||||
if client.Name() != "Discord" {
|
||||
t.Errorf("Unexpected Name %s", client.Name())
|
||||
}
|
||||
})
|
||||
t.Run("MinimumPriority", func(t *testing.T) {
|
||||
client := discord.NewClient("http://localhost:9200", "debug", true, testClient{})
|
||||
|
||||
if client.MinimumPriority() != "debug" {
|
||||
t.Errorf("Unexpected MinimumPriority %s", client.MinimumPriority())
|
||||
}
|
||||
})
|
||||
}
|
||||
|
|
|
@ -1,15 +1,12 @@
|
|||
package elasticsearch
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"log"
|
||||
"net/http"
|
||||
"time"
|
||||
|
||||
"github.com/kyverno/policy-reporter/pkg/helper"
|
||||
"github.com/kyverno/policy-reporter/pkg/report"
|
||||
"github.com/kyverno/policy-reporter/pkg/target"
|
||||
"github.com/kyverno/policy-reporter/pkg/target/helper"
|
||||
)
|
||||
|
||||
// Rotation Enum
|
||||
|
@ -28,26 +25,14 @@ type httpClient interface {
|
|||
}
|
||||
|
||||
type client struct {
|
||||
target.BaseClient
|
||||
host string
|
||||
index string
|
||||
rotation Rotation
|
||||
minimumPriority string
|
||||
skipExistingOnStartup bool
|
||||
client httpClient
|
||||
}
|
||||
|
||||
func (e *client) Send(result report.Result) {
|
||||
if result.Priority < report.NewPriority(e.minimumPriority) {
|
||||
return
|
||||
}
|
||||
|
||||
body := new(bytes.Buffer)
|
||||
|
||||
if err := json.NewEncoder(body).Encode(result); err != nil {
|
||||
log.Printf("[ERROR] ELASTICSEARCH : %v\n", err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
func (e *client) Send(result *report.Result) {
|
||||
var host string
|
||||
switch e.rotation {
|
||||
case None:
|
||||
|
@ -60,9 +45,8 @@ func (e *client) Send(result report.Result) {
|
|||
host = e.host + "/" + e.index + "-" + time.Now().Format("2006.01.02") + "/event"
|
||||
}
|
||||
|
||||
req, err := http.NewRequest("POST", host, body)
|
||||
req, err := helper.CreateJSONRequest(e.Name(), "POST", host, result)
|
||||
if err != nil {
|
||||
log.Printf("[ERROR] ELASTICSEARCH : %v\n", err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
|
@ -70,29 +54,20 @@ func (e *client) Send(result report.Result) {
|
|||
req.Header.Add("User-Agent", "Policy-Reporter")
|
||||
|
||||
resp, err := e.client.Do(req)
|
||||
helper.HandleHTTPResponse("ELASTICSEARCH", resp, err)
|
||||
}
|
||||
|
||||
func (e *client) SkipExistingOnStartup() bool {
|
||||
return e.skipExistingOnStartup
|
||||
helper.ProcessHTTPResponse(e.Name(), resp, err)
|
||||
}
|
||||
|
||||
func (e *client) Name() string {
|
||||
return "Elasticsearch"
|
||||
}
|
||||
|
||||
func (e *client) MinimumPriority() string {
|
||||
return e.minimumPriority
|
||||
}
|
||||
|
||||
// NewClient creates a new elasticsearch.client to send Results to Elasticsearch
|
||||
func NewClient(host, index, rotation, minimumPriority string, skipExistingOnStartup bool, httpClient httpClient) target.Client {
|
||||
func NewClient(host, index, rotation, minimumPriority string, sources []string, skipExistingOnStartup bool, httpClient httpClient) target.Client {
|
||||
return &client{
|
||||
target.NewBaseClient(minimumPriority, sources, skipExistingOnStartup),
|
||||
host,
|
||||
index,
|
||||
rotation,
|
||||
minimumPriority,
|
||||
skipExistingOnStartup,
|
||||
httpClient,
|
||||
}
|
||||
}
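// Usage sketch (assumption, added for illustration; host and index are placeholders):
// with the "daily" rotation each result is sent to "<host>/<index>-<YYYY.MM.DD>/event",
// so one index per day is used.
func exampleElasticsearchTarget(result *report.Result) {
	c := NewClient("http://localhost:9200", "policy-reporter", "daily", "", []string{}, false, &http.Client{})

	if c.Validate(result) {
		c.Send(result)
	}
}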
|
||||
|
|
|
@ -9,7 +9,7 @@ import (
|
|||
"github.com/kyverno/policy-reporter/pkg/target/elasticsearch"
|
||||
)
|
||||
|
||||
var completeResult = report.Result{
|
||||
var completeResult = &report.Result{
|
||||
Message: "validation error: requests and limits required. Rule autogen-check-for-requests-and-limits failed at path /spec/template/spec/containers/0/resources/requests/",
|
||||
Policy: "require-requests-and-limits-required",
|
||||
Rule: "autogen-check-for-requests-and-limits",
|
||||
|
@ -18,8 +18,9 @@ var completeResult = report.Result{
|
|||
Status: report.Fail,
|
||||
Severity: report.High,
|
||||
Category: "resources",
|
||||
Source: "Kyverno",
|
||||
Scored: true,
|
||||
Resource: report.Resource{
|
||||
Resource: &report.Resource{
|
||||
APIVersion: "v1",
|
||||
Kind: "Deployment",
|
||||
Name: "nginx",
|
||||
|
@ -58,7 +59,7 @@ func Test_ElasticsearchTarget(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
client := elasticsearch.NewClient("http://localhost:9200", "policy-reporter", "annually", "", false, testClient{callback, 200})
|
||||
client := elasticsearch.NewClient("http://localhost:9200", "policy-reporter", "annually", "", []string{}, false, testClient{callback, 200})
|
||||
client.Send(completeResult)
|
||||
})
|
||||
t.Run("Send with Monthly Result", func(t *testing.T) {
|
||||
|
@ -68,7 +69,7 @@ func Test_ElasticsearchTarget(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
client := elasticsearch.NewClient("http://localhost:9200", "policy-reporter", "monthly", "", false, testClient{callback, 200})
|
||||
client := elasticsearch.NewClient("http://localhost:9200", "policy-reporter", "monthly", "", []string{}, false, testClient{callback, 200})
|
||||
client.Send(completeResult)
|
||||
})
|
||||
t.Run("Send with Monthly Result", func(t *testing.T) {
|
||||
|
@ -78,7 +79,7 @@ func Test_ElasticsearchTarget(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
client := elasticsearch.NewClient("http://localhost:9200", "policy-reporter", "daily", "", false, testClient{callback, 200})
|
||||
client := elasticsearch.NewClient("http://localhost:9200", "policy-reporter", "daily", "", []string{}, false, testClient{callback, 200})
|
||||
client.Send(completeResult)
|
||||
})
|
||||
t.Run("Send with None Result", func(t *testing.T) {
|
||||
|
@ -88,40 +89,14 @@ func Test_ElasticsearchTarget(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
client := elasticsearch.NewClient("http://localhost:9200", "policy-reporter", "none", "", false, testClient{callback, 200})
|
||||
client := elasticsearch.NewClient("http://localhost:9200", "policy-reporter", "none", "", []string{}, false, testClient{callback, 200})
|
||||
client.Send(completeResult)
|
||||
})
|
||||
t.Run("Send with ignored Priority", func(t *testing.T) {
|
||||
callback := func(req *http.Request) {
|
||||
t.Errorf("Unexpected Call")
|
||||
}
|
||||
|
||||
client := elasticsearch.NewClient("http://localhost:9200", "policy-reporter", "none", "error", false, testClient{callback, 200})
|
||||
client.Send(completeResult)
|
||||
})
|
||||
t.Run("SkipExistingOnStartup", func(t *testing.T) {
|
||||
callback := func(req *http.Request) {
|
||||
t.Errorf("Unexpected Call")
|
||||
}
|
||||
|
||||
client := elasticsearch.NewClient("http://localhost:9200", "policy-reporter", "none", "", true, testClient{callback, 200})
|
||||
|
||||
if !client.SkipExistingOnStartup() {
|
||||
t.Error("Should return configured SkipExistingOnStartup")
|
||||
}
|
||||
})
|
||||
t.Run("Name", func(t *testing.T) {
|
||||
client := elasticsearch.NewClient("http://localhost:9200", "policy-reporter", "none", "", true, testClient{})
|
||||
client := elasticsearch.NewClient("http://localhost:9200", "policy-reporter", "none", "", []string{}, true, testClient{})
|
||||
|
||||
if client.Name() != "Elasticsearch" {
|
||||
t.Errorf("Unexpected Name %s", client.Name())
|
||||
}
|
||||
})
|
||||
t.Run("MinimumPriority", func(t *testing.T) {
|
||||
client := elasticsearch.NewClient("http://localhost:9200", "policy-reporter", "none", "debug", true, testClient{})
|
||||
|
||||
if client.MinimumPriority() != "debug" {
|
||||
t.Errorf("Unexpected MinimumPriority %s", client.MinimumPriority())
|
||||
}
|
||||
})
|
||||
}
|
||||
|
|