Mirror of https://github.com/TwiN/gatus.git, synced 2024-12-14 11:58:04 +00:00

Rename Service to Endpoint (#192)

* Add clarifications in comments
* #191: Rename Service to Endpoint

parent 634123d723
commit 6ed93d4b82
99 changed files with 2136 additions and 2006 deletions
README.md
@@ -11,7 +11,7 @@
Gatus is a health dashboard that gives you the ability to monitor your services using HTTP, ICMP, TCP, and even DNS
queries as well as evaluate the result of said queries by using a list of conditions on values like the status code,
the response time, the certificate expiration, the body and many others. The icing on top is that each of these health
checks can be paired with alerting via Slack, PagerDuty, Discord and even Twilio.
checks can be paired with alerting via Slack, PagerDuty, Discord, Twilio and more.
I personally deploy it in my Kubernetes cluster and let it monitor the status of my
core applications: https://status.twin.sh/
@@ -58,15 +58,15 @@ For more details, see [Usage](#usage)
- [Sending a GraphQL request](#sending-a-graphql-request)
- [Recommended interval](#recommended-interval)
- [Default timeouts](#default-timeouts)
- [Monitoring a TCP service](#monitoring-a-tcp-service)
- [Monitoring a service using ICMP](#monitoring-a-service-using-icmp)
- [Monitoring a service using DNS queries](#monitoring-a-service-using-dns-queries)
- [Monitoring a service using STARTTLS](#monitoring-a-service-using-starttls)
- [Monitoring a service using TLS](#monitoring-a-service-using-tls)
- [Monitoring a TCP endpoint](#monitoring-a-tcp-endpoint)
- [Monitoring an endpoint using ICMP](#monitoring-an-endpoint-using-icmp)
- [Monitoring an endpoint using DNS queries](#monitoring-an-endpoint-using-dns-queries)
- [Monitoring an endpoint using STARTTLS](#monitoring-an-endpoint-using-starttls)
- [Monitoring an endpoint using TLS](#monitoring-an-endpoint-using-tls)
- [Basic authentication](#basic-authentication)
- [disable-monitoring-lock](#disable-monitoring-lock)
- [Reloading configuration on the fly](#reloading-configuration-on-the-fly)
- [Service groups](#service-groups)
- [Endpoint groups](#endpoint-groups)
- [Exposing Gatus on a custom port](#exposing-gatus-on-a-custom-port)
- [Badges](#badges)
- [Uptime](#uptime)
@@ -103,7 +103,7 @@ The main features of Gatus are:
- **Alerting**: While having a pretty visual dashboard is useful to keep track of the state of your application(s), you probably don't want to stare at it all day. Thus, notifications via Slack, Mattermost, Messagebird, PagerDuty, Twilio and Teams are supported out of the box with the ability to configure a custom alerting provider for any needs you might have, whether it be a different provider or a custom application that manages automated rollbacks.
- **Metrics**
- **Low resource consumption**: As with most Go applications, the resource footprint that this application requires is negligibly small.
- **[Badges](#badges)**: ![Uptime 7d](https://status.twin.sh/api/v1/services/core_website-external/uptimes/7d/badge.svg) ![Response time 24h](https://status.twin.sh/api/v1/services/core_website-external/response-times/24h/badge.svg)
- **[Badges](#badges)**: ![Uptime 7d](https://status.twin.sh/api/v1/endpoints/core_website-external/uptimes/7d/badge.svg) ![Response time 24h](https://status.twin.sh/api/v1/endpoints/core_website-external/response-times/24h/badge.svg)

## Usage
@@ -112,11 +112,9 @@ By default, the configuration file is expected to be at `config/config.yaml`.
You can specify a custom path by setting the `GATUS_CONFIG_FILE` environment variable.

Here's a simple example:

```yaml
metrics: true # Whether to expose metrics at /metrics
services:
  - name: website # Name of your service, can be anything
endpoints:
  - name: website # Name of your endpoint, can be anything
    url: "https://twin.sh/health"
    interval: 30s # Duration to wait between every status check (default: 60s)
    conditions:
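The example in the hunk above is cut off at the hunk boundary. Purely for orientation, a complete minimal post-rename configuration along the same lines might look like the sketch below; the condition is one of the placeholders documented later in this README, and everything else is an illustrative assumption rather than part of the commit:

```yaml
# Minimal sketch (illustrative, not part of this diff)
metrics: true                  # Whether to expose metrics at /metrics
endpoints:
  - name: website              # Name of your endpoint, can be anything
    url: "https://twin.sh/health"
    interval: 30s              # Duration to wait between every status check (default: 60s)
    conditions:
      - "[STATUS] == 200"      # Documented condition placeholder
```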
@@ -145,30 +143,30 @@ If you want to test it locally, see [Docker](#docker).
| `debug` | Whether to enable debug logs. | `false` |
| `metrics` | Whether to expose metrics at /metrics. | `false` |
| `storage` | [Storage configuration](#storage) | `{}` |
| `services` | List of services to monitor. | Required `[]` |
| `services[].enabled` | Whether to enable the service. | `true` |
| `services[].name` | Name of the service. Can be anything. | Required `""` |
| `services[].group` | Group name. Used to group multiple services together on the dashboard. <br />See [Service groups](#service-groups). | `""` |
| `services[].url` | URL to send the request to. | Required `""` |
| `services[].method` | Request method. | `GET` |
| `services[].conditions` | Conditions used to determine the health of the service. <br />See [Conditions](#conditions). | `[]` |
| `services[].interval` | Duration to wait between every status check. | `60s` |
| `services[].graphql` | Whether to wrap the body in a query param (`{"query":"$body"}`). | `false` |
| `services[].body` | Request body. | `""` |
| `services[].headers` | Request headers. | `{}` |
| `services[].dns` | Configuration for a service of type DNS. <br />See [Monitoring a service using DNS queries](#monitoring-a-service-using-dns-queries). | `""` |
| `services[].dns.query-type` | Query type for DNS service. | `""` |
| `services[].dns.query-name` | Query name for DNS service. | `""` |
| `services[].alerts[].type` | Type of alert. <br />Valid types: `slack`, `discord`, `pagerduty`, `twilio`, `mattermost`, `messagebird`, `teams`, `custom`. | Required `""` |
| `services[].alerts[].enabled` | Whether to enable the alert. | `false` |
| `services[].alerts[].failure-threshold` | Number of failures in a row needed before triggering the alert. | `3` |
| `services[].alerts[].success-threshold` | Number of successes in a row before an ongoing incident is marked as resolved. | `2` |
| `services[].alerts[].send-on-resolved` | Whether to send a notification once a triggered alert is marked as resolved. | `false` |
| `services[].alerts[].description` | Description of the alert. Will be included in the alert sent. | `""` |
| `services[].client` | [Client configuration](#client-configuration). | `{}` |
| `services[].ui` | UI configuration at the service level. | `{}` |
| `services[].ui.hide-hostname` | Whether to include the hostname in the result. | `false` |
| `services[].ui.dont-resolve-failed-conditions` | Whether to resolve failed conditions for the UI. | `false` |
| `endpoints` | List of endpoints to monitor. | Required `[]` |
| `endpoints[].enabled` | Whether to monitor the endpoint. | `true` |
| `endpoints[].name` | Name of the endpoint. Can be anything. | Required `""` |
| `endpoints[].group` | Group name. Used to group multiple endpoints together on the dashboard. <br />See [Endpoint groups](#endpoint-groups). | `""` |
| `endpoints[].url` | URL to send the request to. | Required `""` |
| `endpoints[].method` | Request method. | `GET` |
| `endpoints[].conditions` | Conditions used to determine the health of the endpoint. <br />See [Conditions](#conditions). | `[]` |
| `endpoints[].interval` | Duration to wait between every status check. | `60s` |
| `endpoints[].graphql` | Whether to wrap the body in a query param (`{"query":"$body"}`). | `false` |
| `endpoints[].body` | Request body. | `""` |
| `endpoints[].headers` | Request headers. | `{}` |
| `endpoints[].dns` | Configuration for an endpoint of type DNS. <br />See [Monitoring an endpoint using DNS queries](#monitoring-an-endpoint-using-dns-queries). | `""` |
| `endpoints[].dns.query-type` | Query type (e.g. MX) | `""` |
| `endpoints[].dns.query-name` | Query name (e.g. example.com) | `""` |
| `endpoints[].alerts[].type` | Type of alert. <br />Valid types: `slack`, `discord`, `pagerduty`, `twilio`, `mattermost`, `messagebird`, `teams`, `custom`. | Required `""` |
| `endpoints[].alerts[].enabled` | Whether to enable the alert. | `false` |
| `endpoints[].alerts[].failure-threshold` | Number of failures in a row needed before triggering the alert. | `3` |
| `endpoints[].alerts[].success-threshold` | Number of successes in a row before an ongoing incident is marked as resolved. | `2` |
| `endpoints[].alerts[].send-on-resolved` | Whether to send a notification once a triggered alert is marked as resolved. | `false` |
| `endpoints[].alerts[].description` | Description of the alert. Will be included in the alert sent. | `""` |
| `endpoints[].client` | [Client configuration](#client-configuration). | `{}` |
| `endpoints[].ui` | UI configuration at the endpoint level. | `{}` |
| `endpoints[].ui.hide-hostname` | Whether to include the hostname in the result. | `false` |
| `endpoints[].ui.dont-resolve-failed-conditions` | Whether to resolve failed conditions for the UI. | `false` |
| `alerting` | [Alerting configuration](#alerting). | `{}` |
| `security` | Security configuration. | `{}` |
| `security.basic` | Basic authentication security configuration. | `{}` |
@@ -260,7 +258,7 @@ See [examples/docker-compose-postgres-storage](examples/docker-compose-postgres-

### Client configuration
In order to support a wide range of environments, each monitored service has a unique configuration for
In order to support a wide range of environments, each monitored endpoint has a unique configuration for
the client used to send the request.

| Parameter | Description | Default |
@@ -269,8 +267,8 @@ the client used to send the request.
| `client.ignore-redirect` | Whether to ignore redirects (true) or follow them (false, default). | `false` |
| `client.timeout` | Duration before timing out. | `10s` |

Note that some of these parameters are ignored based on the type of service. For instance, there's no certificate involved
in ICMP requests (ping), therefore, setting `client.insecure` to `true` for a service of that type will not do anything.
Note that some of these parameters are ignored based on the type of endpoint. For instance, there's no certificate involved
in ICMP requests (ping), therefore, setting `client.insecure` to `true` for an endpoint of that type will not do anything.

This default configuration is as follows:
```yaml
@@ -279,11 +277,11 @@ client:
  ignore-redirect: false
  timeout: 10s
```
Note that this configuration is only available under `services[]`, `alerting.mattermost` and `alerting.custom`.
Note that this configuration is only available under `endpoints[]`, `alerting.mattermost` and `alerting.custom`.

Here's an example with the client configuration under `service[]`:
Here's an example with the client configuration under `endpoints[]`:
```yaml
services:
endpoints:
  - name: website
    url: "https://twin.sh/health"
    client:
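The example above is also truncated by the hunk boundary. Purely as an illustration of how the documented `endpoints[].client` parameters fit together, a complete version might look like this sketch; only the parameter names and defaults come from the tables above, the rest is assumed:

```yaml
endpoints:
  - name: website
    url: "https://twin.sh/health"
    client:
      insecure: false          # client.insecure (default: false)
      ignore-redirect: false   # client.ignore-redirect (default: false)
      timeout: 10s             # client.timeout (default: 10s)
    conditions:
      - "[STATUS] == 200"
```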
@@ -297,7 +295,7 @@ services:

### Alerting
Gatus supports multiple alerting providers, such as Slack and PagerDuty, and supports different alerts for each
individual services with configurable descriptions and thresholds.
individual endpoints with configurable descriptions and thresholds.

Note that if an alerting provider is not properly configured, all alerts configured with the provider's type will be
ignored.
@@ -327,7 +325,7 @@ alerting:
  discord:
    webhook-url: "https://discord.com/api/webhooks/**********/**********"

services:
endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s

@@ -358,7 +356,7 @@ alerting:
    client:
      insecure: true

services:
endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s

@@ -394,7 +392,8 @@ alerting:
    access-key: "..."
    originator: "31619191918"
    recipients: "31619191919,31619191920"
services:

endpoints:
  - name: website
    interval: 30s
    url: "https://twin.sh/health"
@@ -418,17 +417,17 @@ services:
| `alerting.pagerduty.integration-key` | PagerDuty Events API v2 integration key | `""` |
| `alerting.pagerduty.default-alert` | Default alert configuration. <br />See [Setting a default alert](#setting-a-default-alert) | N/A |
| `alerting.pagerduty.overrides` | List of overrides that may be prioritized over the default configuration | `[]` |
| `alerting.pagerduty.overrides[].group` | Service group for which the configuration will be overridden by this configuration | `""` |
| `alerting.pagerduty.overrides[].group` | Endpoint group for which the configuration will be overridden by this configuration | `""` |
| `alerting.pagerduty.overrides[].integration-key` | PagerDuty Events API v2 integration key | `""` |

It is highly recommended to set `services[].alerts[].send-on-resolved` to `true` for alerts
It is highly recommended to set `endpoints[].alerts[].send-on-resolved` to `true` for alerts
of type `pagerduty`, because unlike other alerts, the operation resulting from setting said
parameter to `true` will not create another incident, but mark the incident as resolved on
PagerDuty instead.

Behavior:
- By default, `alerting.pagerduty.integration-key` is used as the integration key
- If the service being evaluated belongs to a group (`services[].group`) matching the value of `alerting.pagerduty.overrides[].group`, the provider will use that override's integration key instead of `alerting.pagerduty.integration-key`'s
- If the endpoint being evaluated belongs to a group (`endpoints[].group`) matching the value of `alerting.pagerduty.overrides[].group`, the provider will use that override's integration key instead of `alerting.pagerduty.integration-key`'s

```yaml
@@ -441,8 +440,7 @@ alerting:
      - group: "core"
        integration-key: "********************************"

services:
endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s

@@ -487,7 +485,7 @@ alerting:
  slack:
    webhook-url: "https://hooks.slack.com/services/**********/**********/**********"

services:
endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s

@@ -524,7 +522,7 @@ alerting:
  teams:
    webhook-url: "https://********.webhook.office.com/webhookb2/************"

services:
endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s

@@ -557,7 +555,7 @@ alerting:
    token: "123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11"
    id: "0123456789"

services:
endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s

@@ -593,7 +591,7 @@ alerting:
    from: "+1-234-567-8901"
    to: "+1-234-567-8901"

services:
endpoints:
  - name: website
    interval: 30s
    url: "https://twin.sh/health"
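Each of the provider snippets above stops at the hunk boundary before any alert is actually attached to an endpoint. As a non-authoritative sketch of how a provider configuration connects to an endpoint-level alert, built only from fields documented in the parameter tables of this README:

```yaml
alerting:
  slack:
    webhook-url: "https://hooks.slack.com/services/**********/**********/**********"
endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s
    alerts:
      - type: slack                # must match a configured provider type
        enabled: true
        failure-threshold: 3       # trigger after 3 consecutive failures
        success-threshold: 2       # resolve after 2 consecutive successes
        send-on-resolved: true
        description: "healthcheck failed"
    conditions:
      - "[STATUS] == 200"
```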
@@ -624,12 +622,12 @@ services:
While they're called alerts, you can use this feature to call anything.

For instance, you could automate rollbacks by having an application that keeps track of new deployments, and by
leveraging Gatus, you could have Gatus call that application endpoint when a service starts failing. Your application
would then check if the service that started failing was recently deployed, and if it was, then automatically
roll it back.
leveraging Gatus, you could have Gatus call that application endpoint when an endpoint starts failing. Your application
would then check if the endpoint that started failing was part of the recently deployed application, and if it was,
then automatically roll it back.

The placeholders `[ALERT_DESCRIPTION]` and `[SERVICE_NAME]` are automatically substituted for the alert description and
the service name. These placeholders can be used in the body (`alerting.custom.body`) and in the url (`alerting.custom.url`).
The placeholders `[ALERT_DESCRIPTION]` and `[ENDPOINT_NAME]` are automatically substituted for the alert description and
the endpoint name. These placeholders can be used in the body (`alerting.custom.body`) and in the url (`alerting.custom.url`).

If you have an alert using the `custom` provider with `send-on-resolved` set to `true`, you can use the
`[ALERT_TRIGGERED_OR_RESOLVED]` placeholder to differentiate the notifications.
@@ -644,9 +642,9 @@ alerting:
    method: "POST"
    body: |
      {
        "text": "[ALERT_TRIGGERED_OR_RESOLVED]: [SERVICE_NAME] - [ALERT_DESCRIPTION]"
        "text": "[ALERT_TRIGGERED_OR_RESOLVED]: [ENDPOINT_NAME] - [ALERT_DESCRIPTION]"
      }
services:
endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s
@@ -685,7 +683,7 @@ As a result, the `[ALERT_TRIGGERED_OR_RESOLVED]` in the body of first example of
| `alerting.*.default-alert.send-on-resolved` | Whether to send a notification once a triggered alert is marked as resolved | N/A |
| `alerting.*.default-alert.description` | Description of the alert. Will be included in the alert sent | N/A |

While you can specify the alert configuration directly in the service definition, it's tedious and may lead to a very
While you can specify the alert configuration directly in the endpoint definition, it's tedious and may lead to a very
long configuration file.

To avoid such a problem, you can use the `default-alert` parameter present in each provider configuration:
@@ -701,9 +699,9 @@ alerting:
      success-threshold: 5
```

As a result, your service configuration looks a lot tidier:
As a result, your Gatus configuration looks a lot tidier:
```yaml
services:
endpoints:
  - name: example
    url: "https://example.org"
    conditions:

@@ -721,7 +719,7 @@ services:

It also allows you to do things like this:
```yaml
services:
endpoints:
  - name: example
    url: "https://example.org"
    conditions:
@@ -749,8 +747,8 @@ alerting:
      enabled: true
      failure-threshold: 5

services:
  - name: service-1
endpoints:
  - name: endpoint-1
    url: "https://example.org"
    conditions:
      - "[STATUS] == 200"

@@ -758,7 +756,7 @@ services:
      - type: slack
      - type: pagerduty

  - name: service-2
  - name: endpoint-2
    url: "https://example.org"
    conditions:
      - "[STATUS] == 200"
@@ -858,11 +856,11 @@ See the [Deployment](#deployment) section.

## FAQ
### Sending a GraphQL request
By setting `services[].graphql` to true, the body will automatically be wrapped by the standard GraphQL `query` parameter.
By setting `endpoints[].graphql` to true, the body will automatically be wrapped by the standard GraphQL `query` parameter.

For instance, the following configuration:
```yaml
services:
endpoints:
  - name: filter-users-by-gender
    url: http://localhost:8080/playground
    method: POST
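The GraphQL example above is cut off at the hunk boundary. A complete configuration of the same shape might look like the sketch below; the query body, header and condition are illustrative assumptions assembled from the parameters documented earlier (`endpoints[].graphql`, `endpoints[].body`, `endpoints[].headers`, `endpoints[].conditions`), not the commit's own example:

```yaml
endpoints:
  - name: filter-users-by-gender
    url: http://localhost:8080/playground
    method: POST
    graphql: true                 # the body below gets wrapped as {"query":"$body"}
    body: |
      {
        users(gender: "female") {
          id
          name
        }
      }
    headers:
      Content-Type: application/json
    conditions:
      - "[STATUS] == 200"
```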
@@ -888,28 +886,27 @@ will send a `POST` request to `http://localhost:8080/playground` with the follow


### Recommended interval
> **NOTE**: This does not _really_ apply if `disable-monitoring-lock` is set to `true`, as the monitoring lock is what
> tells Gatus to only evaluate one service at a time.
> **NOTE**: This does not apply if `disable-monitoring-lock` is set to `true`, as the monitoring lock is what
> tells Gatus to only evaluate one endpoint at a time.

To ensure that Gatus provides reliable and accurate results (i.e. response time), Gatus only evaluates one service at a time.
In other words, even if you have multiple services with the exact same interval, they will not execute at the same time.
To ensure that Gatus provides reliable and accurate results (i.e. response time), Gatus only evaluates one endpoint at a time.
In other words, even if you have multiple endpoints with the exact same interval, they will not execute at the same time.

You can test this yourself by running Gatus with several services configured with a very short, unrealistic interval,
such as 1ms. You'll notice that the response time does not fluctuate - that is because while services are evaluated on
different goroutines, there's a global lock that prevents multiple services from running at the same time.
You can test this yourself by running Gatus with several endpoints configured with a very short, unrealistic interval,
such as 1ms. You'll notice that the response time does not fluctuate - that is because while endpoints are evaluated on
different goroutines, there's a global lock that prevents multiple endpoints from running at the same time.

Unfortunately, there is a drawback. If you have a lot of services, including some that are very slow or prone to time out (the default
timeout is 10s), then it means that for the entire duration of the request, no other services can be evaluated.
Unfortunately, there is a drawback. If you have a lot of endpoints, including some that are very slow or prone to timing out
(the default timeout is 10s), then it means that for the entire duration of the request, no other endpoint can be evaluated.

**This does mean that Gatus will be unable to evaluate the health of other services**.
The interval does not include the duration of the request itself, which means that if a service has an interval of 30s
The interval does not include the duration of the request itself, which means that if an endpoint has an interval of 30s
and the request takes 2s to complete, the time between two evaluations will be 32s, not 30s.

While this does not prevent Gatus from performing health checks on all other services, it may cause Gatus to be unable
While this does not prevent Gatus from performing health checks on all other endpoints, it may cause Gatus to be unable
to respect the configured interval, for instance:
- Service A has an interval of 5s, and times out after 10s to complete
- Service B has an interval of 5s, and takes 1ms to complete
- Service B will be unable to run every 5s, because service A's health evaluation takes longer than its interval
- Endpoint A has an interval of 5s, and times out after 10s to complete
- Endpoint B has an interval of 5s, and takes 1ms to complete
- Endpoint B will be unable to run every 5s, because endpoint A's health evaluation takes longer than its interval

To sum it up, while Gatus can really handle any interval you throw at it, you're better off having slow requests with
a higher interval.
@@ -919,8 +916,8 @@ simple health checks used for alerting (PagerDuty/Twilio) to `30s`.


### Default timeouts
| Protocol | Timeout |
|:-------- |:------- |
| Endpoint type | Timeout |
|:------------- |:------- |
| HTTP | 10s |
| TCP | 10s |
| ICMP | 10s |
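Since these defaults all come from the client configuration, here is a hedged sketch of raising the timeout for a single slow endpoint; the endpoint name, URL and value are illustrative, only `client.timeout` itself is documented above:

```yaml
endpoints:
  - name: slow-backend
    url: "https://example.org/health"
    client:
      timeout: 30s   # overrides the default 10s timeout for this endpoint only
    conditions:
      - "[STATUS] == 200"
```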
@@ -928,11 +925,11 @@ simple health checks used for alerting (PagerDuty/Twilio) to `30s`.
To modify the timeout, see [Client configuration](#client-configuration).


### Monitoring a TCP service
By prefixing `services[].url` with `tcp://`, you can monitor TCP services at a very basic level:
### Monitoring a TCP endpoint
By prefixing `endpoints[].url` with `tcp://`, you can monitor TCP endpoints at a very basic level:

```yaml
services:
endpoints:
  - name: redis
    url: "tcp://127.0.0.1:6379"
    interval: 30s
@@ -940,34 +937,34 @@ services:
      - "[CONNECTED] == true"
```

Placeholders `[STATUS]` and `[BODY]` as well as the fields `services[].body`, `services[].insecure`,
`services[].headers`, `services[].method` and `services[].graphql` are not supported for TCP services.
Placeholders `[STATUS]` and `[BODY]` as well as the fields `endpoints[].body`, `endpoints[].insecure`,
`endpoints[].headers`, `endpoints[].method` and `endpoints[].graphql` are not supported for TCP endpoints.

**NOTE**: `[CONNECTED] == true` does not guarantee that the service itself is healthy - it only guarantees that there's
**NOTE**: `[CONNECTED] == true` does not guarantee that the endpoint itself is healthy - it only guarantees that there's
something at the given address listening to the given port, and that a connection to that address was successfully
established.


### Monitoring a service using ICMP
By prefixing `services[].url` with `icmp://`, you can monitor services at a very basic level using ICMP, or more
### Monitoring an endpoint using ICMP
By prefixing `endpoints[].url` with `icmp://`, you can monitor endpoints at a very basic level using ICMP, or more
commonly known as "ping" or "echo":

```yaml
services:
endpoints:
  - name: ping-example
    url: "icmp://example.com"
    conditions:
      - "[CONNECTED] == true"
```

Only the placeholders `[CONNECTED]`, `[IP]` and `[RESPONSE_TIME]` are supported for services of type ICMP.
Only the placeholders `[CONNECTED]`, `[IP]` and `[RESPONSE_TIME]` are supported for endpoints of type ICMP.
You can specify a domain prefixed by `icmp://`, or an IP address prefixed by `icmp://`.


### Monitoring a service using DNS queries
Defining a `dns` configuration in a service will automatically mark that service as a service of type DNS:
### Monitoring an endpoint using DNS queries
Defining a `dns` configuration in an endpoint will automatically mark said endpoint as an endpoint of type DNS:
```yaml
services:
endpoints:
  - name: example-dns-query
    url: "8.8.8.8" # Address of the DNS server to use
    interval: 30s
@@ -979,17 +976,17 @@ services:
      - "[DNS_RCODE] == NOERROR"
```

There are two placeholders that can be used in the conditions for services of type DNS:
There are two placeholders that can be used in the conditions for endpoints of type DNS:
- The placeholder `[BODY]` resolves to the output of the query. For instance, a query of type `A` would return an IPv4.
- The placeholder `[DNS_RCODE]` resolves to the name associated to the response code returned by the query, such as
  `NOERROR`, `FORMERR`, `SERVFAIL`, `NXDOMAIN`, etc.


### Monitoring a service using STARTTLS
### Monitoring an endpoint using STARTTLS
If you have an email server that you want to ensure there are no problems with, monitoring it through STARTTLS
will serve as a good initial indicator:
```yaml
services:
endpoints:
  - name: starttls-smtp-example
    url: "starttls://smtp.gmail.com:587"
    interval: 30m
@@ -1001,11 +998,10 @@ services:
```


### Monitoring a service using TLS
Monitoring services using SSL/TLS encryption, such as LDAP over TLS, can help
detecting certificate expiration:
### Monitoring an endpoint using TLS
Monitoring endpoints using SSL/TLS encryption, such as LDAP over TLS, can help detect certificate expiration:
```yaml
services:
endpoints:
  - name: tls-ldaps-example
    url: "tls://ldap.example.com:636"
    interval: 30m
@@ -1030,16 +1026,16 @@ The example above will require that you authenticate with the username `john.doe


### disable-monitoring-lock
Setting `disable-monitoring-lock` to `true` means that multiple services could be monitored at the same time.
Setting `disable-monitoring-lock` to `true` means that multiple endpoints could be monitored at the same time.

While this behavior wouldn't generally be harmful, conditions using the `[RESPONSE_TIME]` placeholder could be impacted
by the evaluation of multiple services at the same time, therefore, the default value for this parameter is `false`.
by the evaluation of multiple endpoints at the same time, therefore, the default value for this parameter is `false`.

There are three main reasons why you might want to disable the monitoring lock:
- You're using Gatus for load testing (each service is periodically evaluated on a different goroutine, so
  technically, if you create 100 services with a 1-second interval, Gatus will send 100 requests per second)
- You have a _lot_ of services to monitor
- You want to test multiple services at a very short interval (< 5s)
- You're using Gatus for load testing (each endpoint is periodically evaluated on a different goroutine, so
  technically, if you create 100 endpoints with a 1-second interval, Gatus will send 100 requests per second)
- You have a _lot_ of endpoints to monitor
- You want to test multiple endpoints at a very short interval (< 5s)


### Reloading configuration on the fly
@@ -1067,11 +1063,11 @@ the same as restarting the application.
**NOTE:** Updates may not be detected if the config file is bound instead of the config folder. See [#151](https://github.com/TwiN/gatus/issues/151).
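To make the note above concrete, here is a hedged docker-compose sketch that binds the config folder rather than the file so that updates can be detected; the image name, port and container paths are assumptions and are not stated anywhere in this diff:

```yaml
version: "3.8"
services:
  gatus:
    image: twinproduction/gatus:latest   # assumed image name, not taken from this commit
    ports:
      - "8080:8080"                      # assumed default port
    volumes:
      - ./config:/config                 # bind the folder, not ./config/config.yaml itself
```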
### Service groups
Service groups are used for grouping multiple services together on the dashboard.
### Endpoint groups
Endpoint groups are used for grouping multiple endpoints together on the dashboard.

```yaml
services:
endpoints:
  - name: frontend
    group: core
    url: "https://example.org/"
@@ -1100,7 +1096,7 @@ services:
    conditions:
      - "[STATUS] == 200"

  - name: random service that isn't part of a group
  - name: random endpoint that isn't part of a group
    url: "https://example.org/"
    interval: 5m
    conditions:

@@ -1109,7 +1105,7 @@ services:

The configuration above will result in a dashboard that looks like this:

![Gatus Service Groups](.github/assets/service-groups.png)
![Gatus Endpoint Groups](.github/assets/endpoint-groups.png)


### Exposing Gatus on a custom port
@@ -1128,66 +1124,66 @@ web:

### Badges
### Uptime
![Uptime 1h](https://status.twin.sh/api/v1/services/core_website-external/uptimes/1h/badge.svg)
![Uptime 24h](https://status.twin.sh/api/v1/services/core_website-external/uptimes/24h/badge.svg)
![Uptime 7d](https://status.twin.sh/api/v1/services/core_website-external/uptimes/7d/badge.svg)
![Uptime 1h](https://status.twin.sh/api/v1/endpoints/core_website-external/uptimes/1h/badge.svg)
![Uptime 24h](https://status.twin.sh/api/v1/endpoints/core_website-external/uptimes/24h/badge.svg)
![Uptime 7d](https://status.twin.sh/api/v1/endpoints/core_website-external/uptimes/7d/badge.svg)

Gatus can automatically generate an SVG badge for one of your monitored services.
This allows you to put badges in your individual services' README or even create your own status page, if you
Gatus can automatically generate an SVG badge for one of your monitored endpoints.
This allows you to put badges in your individual applications' README or even create your own status page, if you
desire.

The endpoint to generate a badge is the following:
The path to generate a badge is the following:
```
/api/v1/services/{key}/uptimes/{duration}/badge.svg
/api/v1/endpoints/{key}/uptimes/{duration}/badge.svg
```
Where:
- `{duration}` is `7d`, `24h` or `1h`
- `{key}` has the pattern `<GROUP_NAME>_<SERVICE_NAME>` in which both variables have ` `, `/`, `_`, `,` and `.` replaced by `-`.
- `{key}` has the pattern `<GROUP_NAME>_<ENDPOINT_NAME>` in which both variables have ` `, `/`, `_`, `,` and `.` replaced by `-`.
For instance, if you want the uptime during the last 24 hours from the service `frontend` in the group `core`,
For instance, if you want the uptime during the last 24 hours from the endpoint `frontend` in the group `core`,
the URL would look like this:
```
https://example.com/api/v1/services/core_frontend/uptimes/7d/badge.svg
https://example.com/api/v1/endpoints/core_frontend/uptimes/7d/badge.svg
```
If you want to display a service that is not part of a group, you must leave the group value empty:
If you want to display an endpoint that is not part of a group, you must leave the group value empty:
```
https://example.com/api/v1/services/_frontend/uptimes/7d/badge.svg
https://example.com/api/v1/endpoints/_frontend/uptimes/7d/badge.svg
```
Example:
```
![Uptime 24h](https://status.twin.sh/api/v1/services/core_website-external/uptimes/24h/badge.svg)
![Uptime 24h](https://status.twin.sh/api/v1/endpoints/core_website-external/uptimes/24h/badge.svg)
```
If you'd like to see a visual example of each badge available, you can simply navigate to the service's detail page.
If you'd like to see a visual example of each badge available, you can simply navigate to the endpoint's detail page.
### Response time
![Response time 1h](https://status.twin.sh/api/v1/services/core_website-external/response-times/1h/badge.svg)
![Response time 24h](https://status.twin.sh/api/v1/services/core_website-external/response-times/24h/badge.svg)
![Response time 7d](https://status.twin.sh/api/v1/services/core_website-external/response-times/7d/badge.svg)
![Response time 1h](https://status.twin.sh/api/v1/endpoints/core_website-external/response-times/1h/badge.svg)
![Response time 24h](https://status.twin.sh/api/v1/endpoints/core_website-external/response-times/24h/badge.svg)
![Response time 7d](https://status.twin.sh/api/v1/endpoints/core_website-external/response-times/7d/badge.svg)

The endpoint to generate a badge is the following:
```
/api/v1/services/{key}/response-times/{duration}/badge.svg
/api/v1/endpoints/{key}/response-times/{duration}/badge.svg
```
Where:
- `{duration}` is `7d`, `24h` or `1h`
- `{key}` has the pattern `<GROUP_NAME>_<SERVICE_NAME>` in which both variables have ` `, `/`, `_`, `,` and `.` replaced by `-`.
- `{key}` has the pattern `<GROUP_NAME>_<ENDPOINT_NAME>` in which both variables have ` `, `/`, `_`, `,` and `.` replaced by `-`.
### API
Gatus provides a simple read-only API which can be queried in order to programmatically determine service status and history.
Gatus provides a simple read-only API which can be queried in order to programmatically determine endpoint status and history.

All services are available via a GET request to the following endpoint:
All endpoints are available via a GET request to the following endpoint:
```
/api/v1/services/statuses
/api/v1/endpoints/statuses
```
Example: https://status.twin.sh/api/v1/services/statuses
Example: https://status.twin.sh/api/v1/endpoints/statuses

Specific services can also be queried by using the following pattern:
Specific endpoints can also be queried by using the following pattern:
```
/api/v1/services/{group}_{service}/statuses
/api/v1/endpoints/{group}_{endpoint}/statuses
```
Example: https://status.twin.sh/api/v1/services/core_website-home/statuses
Example: https://status.twin.sh/api/v1/endpoints/core_website-home/statuses

Gzip compression will be used if the `Accept-Encoding` HTTP header contains `gzip`.
@@ -1,11 +1,11 @@
package alert

// Alert is the service's alert configuration
// Alert is a core.Endpoint's alert configuration
type Alert struct {
    // Type of alert (required)
    Type Type `yaml:"type"`

    // Enabled defines whether or not the alert is enabled
    // Enabled defines whether the alert is enabled
    //
    // This is a pointer, because it is populated by YAML and we need to know whether it was explicitly set to a value
    // or not for provider.ParseWithDefaultAlert to work.
@@ -26,7 +26,7 @@ type AlertProvider struct {
    // ClientConfig is the configuration of the client used to communicate with the provider's target
    ClientConfig *client.Config `yaml:"client"`

    // DefaultAlert is the default alert configuration to use for services with an alert of the appropriate type
    // DefaultAlert is the default alert configuration to use for endpoints with an alert of the appropriate type
    DefaultAlert *alert.Alert `yaml:"default-alert"`
}

@@ -39,7 +39,7 @@ func (provider *AlertProvider) IsValid() bool {
}

// ToCustomAlertProvider converts the provider into a custom.AlertProvider
func (provider *AlertProvider) ToCustomAlertProvider(service *core.Service, alert *alert.Alert, result *core.Result, resolved bool) *AlertProvider {
func (provider *AlertProvider) ToCustomAlertProvider(endpoint *core.Endpoint, alert *alert.Alert, result *core.Result, resolved bool) *AlertProvider {
    return provider
}

@@ -57,7 +57,7 @@ func (provider *AlertProvider) GetAlertStatePlaceholderValue(resolved bool) stri
    return status
}

func (provider *AlertProvider) buildHTTPRequest(serviceName, alertDescription string, resolved bool) *http.Request {
func (provider *AlertProvider) buildHTTPRequest(endpointName, alertDescription string, resolved bool) *http.Request {
    body := provider.Body
    providerURL := provider.URL
    method := provider.Method

@@ -65,8 +65,11 @@ func (provider *AlertProvider) buildHTTPRequest(serviceName, alertDescription st
    if strings.Contains(body, "[ALERT_DESCRIPTION]") {
        body = strings.ReplaceAll(body, "[ALERT_DESCRIPTION]", alertDescription)
    }
    if strings.Contains(body, "[SERVICE_NAME]") {
        body = strings.ReplaceAll(body, "[SERVICE_NAME]", serviceName)
    if strings.Contains(body, "[SERVICE_NAME]") { // XXX: Remove this in v4.0.0
        body = strings.ReplaceAll(body, "[SERVICE_NAME]", endpointName)
    }
    if strings.Contains(body, "[ENDPOINT_NAME]") {
        body = strings.ReplaceAll(body, "[ENDPOINT_NAME]", endpointName)
    }
    if strings.Contains(body, "[ALERT_TRIGGERED_OR_RESOLVED]") {
        if resolved {
@@ -78,8 +81,11 @@ func (provider *AlertProvider) buildHTTPRequest(serviceName, alertDescription st
    if strings.Contains(providerURL, "[ALERT_DESCRIPTION]") {
        providerURL = strings.ReplaceAll(providerURL, "[ALERT_DESCRIPTION]", alertDescription)
    }
    if strings.Contains(providerURL, "[SERVICE_NAME]") {
        providerURL = strings.ReplaceAll(providerURL, "[SERVICE_NAME]", serviceName)
    if strings.Contains(providerURL, "[SERVICE_NAME]") { // XXX: Remove this in v4.0.0
        providerURL = strings.ReplaceAll(providerURL, "[SERVICE_NAME]", endpointName)
    }
    if strings.Contains(providerURL, "[ENDPOINT_NAME]") {
        providerURL = strings.ReplaceAll(providerURL, "[ENDPOINT_NAME]", endpointName)
    }
    if strings.Contains(providerURL, "[ALERT_TRIGGERED_OR_RESOLVED]") {
        if resolved {

@@ -100,14 +106,14 @@ func (provider *AlertProvider) buildHTTPRequest(serviceName, alertDescription st
}

// Send a request to the alert provider and return the body
func (provider *AlertProvider) Send(serviceName, alertDescription string, resolved bool) ([]byte, error) {
func (provider *AlertProvider) Send(endpointName, alertDescription string, resolved bool) ([]byte, error) {
    if os.Getenv("MOCK_ALERT_PROVIDER") == "true" {
        if os.Getenv("MOCK_ALERT_PROVIDER_ERROR") == "true" {
            return nil, errors.New("error")
        }
        return []byte("{}"), nil
    }
    request := provider.buildHTTPRequest(serviceName, alertDescription, resolved)
    request := provider.buildHTTPRequest(endpointName, alertDescription, resolved)
    response, err := client.GetHTTPClient(provider.ClientConfig).Do(request)
    if err != nil {
        return nil, err
@@ -13,7 +13,7 @@ func TestAlertProvider_IsValid(t *testing.T) {
    if invalidProvider.IsValid() {
        t.Error("provider shouldn't have been valid")
    }
    validProvider := AlertProvider{URL: "http://example.com"}
    validProvider := AlertProvider{URL: "https://example.com"}
    if !validProvider.IsValid() {
        t.Error("provider should've been valid")
    }

@@ -21,15 +21,15 @@ func TestAlertProvider_IsValid(t *testing.T) {

func TestAlertProvider_buildHTTPRequestWhenResolved(t *testing.T) {
    const (
        ExpectedURL = "http://example.com/service-name?event=RESOLVED&description=alert-description"
        ExpectedBody = "service-name,alert-description,RESOLVED"
        ExpectedURL = "https://example.com/endpoint-name?event=RESOLVED&description=alert-description"
        ExpectedBody = "endpoint-name,alert-description,RESOLVED"
    )
    customAlertProvider := &AlertProvider{
        URL: "http://example.com/[SERVICE_NAME]?event=[ALERT_TRIGGERED_OR_RESOLVED]&description=[ALERT_DESCRIPTION]",
        Body: "[SERVICE_NAME],[ALERT_DESCRIPTION],[ALERT_TRIGGERED_OR_RESOLVED]",
        URL: "https://example.com/[ENDPOINT_NAME]?event=[ALERT_TRIGGERED_OR_RESOLVED]&description=[ALERT_DESCRIPTION]",
        Body: "[ENDPOINT_NAME],[ALERT_DESCRIPTION],[ALERT_TRIGGERED_OR_RESOLVED]",
        Headers: nil,
    }
    request := customAlertProvider.buildHTTPRequest("service-name", "alert-description", true)
    request := customAlertProvider.buildHTTPRequest("endpoint-name", "alert-description", true)
    if request.URL.String() != ExpectedURL {
        t.Error("expected URL to be", ExpectedURL, "was", request.URL.String())
    }

@@ -41,15 +41,15 @@ func TestAlertProvider_buildHTTPRequestWhenResolved(t *testing.T) {

func TestAlertProvider_buildHTTPRequestWhenTriggered(t *testing.T) {
    const (
        ExpectedURL = "http://example.com/service-name?event=TRIGGERED&description=alert-description"
        ExpectedBody = "service-name,alert-description,TRIGGERED"
        ExpectedURL = "https://example.com/endpoint-name?event=TRIGGERED&description=alert-description"
        ExpectedBody = "endpoint-name,alert-description,TRIGGERED"
    )
    customAlertProvider := &AlertProvider{
        URL: "http://example.com/[SERVICE_NAME]?event=[ALERT_TRIGGERED_OR_RESOLVED]&description=[ALERT_DESCRIPTION]",
        Body: "[SERVICE_NAME],[ALERT_DESCRIPTION],[ALERT_TRIGGERED_OR_RESOLVED]",
        URL: "https://example.com/[ENDPOINT_NAME]?event=[ALERT_TRIGGERED_OR_RESOLVED]&description=[ALERT_DESCRIPTION]",
        Body: "[ENDPOINT_NAME],[ALERT_DESCRIPTION],[ALERT_TRIGGERED_OR_RESOLVED]",
        Headers: map[string]string{"Authorization": "Basic hunter2"},
    }
    request := customAlertProvider.buildHTTPRequest("service-name", "alert-description", false)
    request := customAlertProvider.buildHTTPRequest("endpoint-name", "alert-description", false)
    if request.URL.String() != ExpectedURL {
        t.Error("expected URL to be", ExpectedURL, "was", request.URL.String())
    }
@@ -60,24 +60,24 @@ func TestAlertProvider_buildHTTPRequestWhenTriggered(t *testing.T) {
}

func TestAlertProvider_ToCustomAlertProvider(t *testing.T) {
    provider := AlertProvider{URL: "http://example.com"}
    customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{}, &core.Result{}, true)
    provider := AlertProvider{URL: "https://example.com"}
    customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{}, &core.Result{}, true)
    if customAlertProvider == nil {
        t.Fatal("customAlertProvider shouldn't have been nil")
    }
    if customAlertProvider.URL != "http://example.com" {
        t.Error("expected URL to be http://example.com, got", customAlertProvider.URL)
    if customAlertProvider.URL != "https://example.com" {
        t.Error("expected URL to be https://example.com, got", customAlertProvider.URL)
    }
}

func TestAlertProvider_buildHTTPRequestWithCustomPlaceholder(t *testing.T) {
    const (
        ExpectedURL = "http://example.com/service-name?event=test&description=alert-description"
        ExpectedBody = "service-name,alert-description,test"
        ExpectedURL = "https://example.com/endpoint-name?event=test&description=alert-description"
        ExpectedBody = "endpoint-name,alert-description,test"
    )
    customAlertProvider := &AlertProvider{
        URL: "http://example.com/[SERVICE_NAME]?event=[ALERT_TRIGGERED_OR_RESOLVED]&description=[ALERT_DESCRIPTION]",
        Body: "[SERVICE_NAME],[ALERT_DESCRIPTION],[ALERT_TRIGGERED_OR_RESOLVED]",
        URL: "https://example.com/[ENDPOINT_NAME]?event=[ALERT_TRIGGERED_OR_RESOLVED]&description=[ALERT_DESCRIPTION]",
        Body: "[ENDPOINT_NAME],[ALERT_DESCRIPTION],[ALERT_TRIGGERED_OR_RESOLVED]",
        Headers: nil,
        Placeholders: map[string]map[string]string{
            "ALERT_TRIGGERED_OR_RESOLVED": {

@@ -85,7 +85,7 @@ func TestAlertProvider_buildHTTPRequestWithCustomPlaceholder(t *testing.T) {
            },
        },
    }
    request := customAlertProvider.buildHTTPRequest("service-name", "alert-description", true)
    request := customAlertProvider.buildHTTPRequest("endpoint-name", "alert-description", true)
    if request.URL.String() != ExpectedURL {
        t.Error("expected URL to be", ExpectedURL, "was", request.URL.String())
    }

@@ -97,8 +97,8 @@ func TestAlertProvider_buildHTTPRequestWithCustomPlaceholder(t *testing.T) {

func TestAlertProvider_GetAlertStatePlaceholderValueDefaults(t *testing.T) {
    customAlertProvider := &AlertProvider{
        URL: "http://example.com/[SERVICE_NAME]?event=[ALERT_TRIGGERED_OR_RESOLVED]&description=[ALERT_DESCRIPTION]",
        Body: "[SERVICE_NAME],[ALERT_DESCRIPTION],[ALERT_TRIGGERED_OR_RESOLVED]",
        URL: "https://example.com/[ENDPOINT_NAME]?event=[ALERT_TRIGGERED_OR_RESOLVED]&description=[ALERT_DESCRIPTION]",
        Body: "[ENDPOINT_NAME],[ALERT_DESCRIPTION],[ALERT_TRIGGERED_OR_RESOLVED]",
        Headers: nil,
        Placeholders: nil,
    }
@@ -109,3 +109,26 @@ func TestAlertProvider_GetAlertStatePlaceholderValueDefaults(t *testing.T) {
        t.Error("expected TRIGGERED, got", customAlertProvider.GetAlertStatePlaceholderValue(false))
    }
}

// TestAlertProvider_isBackwardCompatibleWithServiceRename checks if the custom alerting provider still supports
// service placeholders after the migration from "service" to "endpoint"
//
// XXX: Remove this in v4.0.0
func TestAlertProvider_isBackwardCompatibleWithServiceRename(t *testing.T) {
    const (
        ExpectedURL = "https://example.com/endpoint-name?event=TRIGGERED&description=alert-description"
        ExpectedBody = "endpoint-name,alert-description,TRIGGERED"
    )
    customAlertProvider := &AlertProvider{
        URL: "https://example.com/[SERVICE_NAME]?event=[ALERT_TRIGGERED_OR_RESOLVED]&description=[ALERT_DESCRIPTION]",
        Body: "[SERVICE_NAME],[ALERT_DESCRIPTION],[ALERT_TRIGGERED_OR_RESOLVED]",
    }
    request := customAlertProvider.buildHTTPRequest("endpoint-name", "alert-description", false)
    if request.URL.String() != ExpectedURL {
        t.Error("expected URL to be", ExpectedURL, "was", request.URL.String())
    }
    body, _ := ioutil.ReadAll(request.Body)
    if string(body) != ExpectedBody {
        t.Error("expected body to be", ExpectedBody, "was", string(body))
    }
}
@@ -13,7 +13,7 @@ import (
type AlertProvider struct {
    WebhookURL string `yaml:"webhook-url"`

    // DefaultAlert is the default alert configuration to use for services with an alert of the appropriate type
    // DefaultAlert is the default alert configuration to use for endpoints with an alert of the appropriate type
    DefaultAlert *alert.Alert `yaml:"default-alert"`
}

@@ -23,14 +23,14 @@ func (provider *AlertProvider) IsValid() bool {
}

// ToCustomAlertProvider converts the provider into a custom.AlertProvider
func (provider *AlertProvider) ToCustomAlertProvider(service *core.Service, alert *alert.Alert, result *core.Result, resolved bool) *custom.AlertProvider {
func (provider *AlertProvider) ToCustomAlertProvider(endpoint *core.Endpoint, alert *alert.Alert, result *core.Result, resolved bool) *custom.AlertProvider {
    var message, results string
    var colorCode int
    if resolved {
        message = fmt.Sprintf("An alert for **%s** has been resolved after passing successfully %d time(s) in a row", service.Name, alert.SuccessThreshold)
        message = fmt.Sprintf("An alert for **%s** has been resolved after passing successfully %d time(s) in a row", endpoint.Name, alert.SuccessThreshold)
        colorCode = 3066993
    } else {
        message = fmt.Sprintf("An alert for **%s** has been triggered due to having failed %d time(s) in a row", service.Name, alert.FailureThreshold)
        message = fmt.Sprintf("An alert for **%s** has been triggered due to having failed %d time(s) in a row", endpoint.Name, alert.FailureThreshold)
        colorCode = 15158332
    }
    for _, conditionResult := range result.ConditionResults {
@@ -24,7 +24,7 @@ func TestAlertProvider_IsValid(t *testing.T) {
func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {
    provider := AlertProvider{WebhookURL: "http://example.com"}
    alertDescription := "test"
    customAlertProvider := provider.ToCustomAlertProvider(&core.Service{Name: "svc"}, &alert.Alert{Description: &alertDescription}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "SUCCESSFUL_CONDITION", Success: true}}}, true)
    customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{Name: "svc"}, &alert.Alert{Description: &alertDescription}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "SUCCESSFUL_CONDITION", Success: true}}}, true)
    if customAlertProvider == nil {
        t.Fatal("customAlertProvider shouldn't have been nil")
    }

@@ -49,7 +49,7 @@ func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {

func TestAlertProvider_ToCustomAlertProviderWithTriggeredAlert(t *testing.T) {
    provider := AlertProvider{WebhookURL: "http://example.com"}
    customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "UNSUCCESSFUL_CONDITION", Success: false}}}, false)
    customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "UNSUCCESSFUL_CONDITION", Success: false}}}, false)
    if customAlertProvider == nil {
        t.Fatal("customAlertProvider shouldn't have been nil")
    }
@@ -17,7 +17,7 @@ type AlertProvider struct {
    // ClientConfig is the configuration of the client used to communicate with the provider's target
    ClientConfig *client.Config `yaml:"client"`

    // DefaultAlert is the default alert configuration to use for services with an alert of the appropriate type
    // DefaultAlert is the default alert configuration to use for endpoints with an alert of the appropriate type
    DefaultAlert *alert.Alert `yaml:"default-alert"`
}

@@ -30,14 +30,14 @@ func (provider *AlertProvider) IsValid() bool {
}

// ToCustomAlertProvider converts the provider into a custom.AlertProvider
func (provider *AlertProvider) ToCustomAlertProvider(service *core.Service, alert *alert.Alert, result *core.Result, resolved bool) *custom.AlertProvider {
func (provider *AlertProvider) ToCustomAlertProvider(endpoint *core.Endpoint, alert *alert.Alert, result *core.Result, resolved bool) *custom.AlertProvider {
    var message string
    var color string
    if resolved {
        message = fmt.Sprintf("An alert for *%s* has been resolved after passing successfully %d time(s) in a row", service.Name, alert.SuccessThreshold)
        message = fmt.Sprintf("An alert for *%s* has been resolved after passing successfully %d time(s) in a row", endpoint.Name, alert.SuccessThreshold)
        color = "#36A64F"
    } else {
        message = fmt.Sprintf("An alert for *%s* has been triggered due to having failed %d time(s) in a row", service.Name, alert.FailureThreshold)
        message = fmt.Sprintf("An alert for *%s* has been triggered due to having failed %d time(s) in a row", endpoint.Name, alert.FailureThreshold)
        color = "#DD0000"
    }
    var results string

@@ -83,7 +83,7 @@ func (provider *AlertProvider) ToCustomAlertProvider(service *core.Service, aler
            ]
        }
    ]
}`, message, message, description, color, service.URL, results),
}`, message, message, description, color, endpoint.URL, results),
        Headers: map[string]string{"Content-Type": "application/json"},
    }
}
@@ -24,7 +24,7 @@ func TestAlertProvider_IsValid(t *testing.T) {
func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {
    provider := AlertProvider{WebhookURL: "http://example.org"}
    alertDescription := "test"
    customAlertProvider := provider.ToCustomAlertProvider(&core.Service{Name: "svc"}, &alert.Alert{Description: &alertDescription}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "SUCCESSFUL_CONDITION", Success: true}}}, true)
    customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{Name: "svc"}, &alert.Alert{Description: &alertDescription}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "SUCCESSFUL_CONDITION", Success: true}}}, true)
    if customAlertProvider == nil {
        t.Fatal("customAlertProvider shouldn't have been nil")
    }

@@ -49,7 +49,7 @@ func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {

func TestAlertProvider_ToCustomAlertProviderWithTriggeredAlert(t *testing.T) {
    provider := AlertProvider{WebhookURL: "http://example.org"}
    customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "UNSUCCESSFUL_CONDITION", Success: false}}}, false)
    customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "UNSUCCESSFUL_CONDITION", Success: false}}}, false)
    if customAlertProvider == nil {
        t.Fatal("customAlertProvider shouldn't have been nil")
    }
@ -19,7 +19,7 @@ type AlertProvider struct {
|
|||
Originator string `yaml:"originator"`
|
||||
Recipients string `yaml:"recipients"`
|
||||
|
||||
// DefaultAlert is the default alert configuration to use for services with an alert of the appropriate type
|
||||
// DefaultAlert is the default alert configuration to use for endpoints with an alert of the appropriate type
|
||||
DefaultAlert *alert.Alert `yaml:"default-alert"`
|
||||
}
|
||||
|
||||
|
@ -30,12 +30,12 @@ func (provider *AlertProvider) IsValid() bool {
|
|||
|
||||
// ToCustomAlertProvider converts the provider into a custom.AlertProvider
|
||||
// Reference doc for messagebird https://developers.messagebird.com/api/sms-messaging/#send-outbound-sms
|
||||
func (provider *AlertProvider) ToCustomAlertProvider(service *core.Service, alert *alert.Alert, _ *core.Result, resolved bool) *custom.AlertProvider {
|
||||
func (provider *AlertProvider) ToCustomAlertProvider(endpoint *core.Endpoint, alert *alert.Alert, _ *core.Result, resolved bool) *custom.AlertProvider {
|
||||
var message string
|
||||
if resolved {
|
||||
message = fmt.Sprintf("RESOLVED: %s - %s", service.Name, alert.GetDescription())
|
||||
message = fmt.Sprintf("RESOLVED: %s - %s", endpoint.Name, alert.GetDescription())
|
||||
} else {
|
||||
message = fmt.Sprintf("TRIGGERED: %s - %s", service.Name, alert.GetDescription())
|
||||
message = fmt.Sprintf("TRIGGERED: %s - %s", endpoint.Name, alert.GetDescription())
|
||||
}
|
||||
return &custom.AlertProvider{
|
||||
URL: restAPIURL,
|
||||
|
|
|
@ -31,7 +31,7 @@ func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {
|
|||
Originator: "1",
|
||||
Recipients: "1",
|
||||
}
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{}, &core.Result{}, true)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{}, &core.Result{}, true)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
@ -57,7 +57,7 @@ func TestAlertProvider_ToCustomAlertProviderWithTriggeredAlert(t *testing.T) {
|
|||
Originator: "1",
|
||||
Recipients: "1",
|
||||
}
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{}, &core.Result{}, false)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{}, &core.Result{}, false)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
|
|
@ -17,7 +17,7 @@ const (
|
|||
type AlertProvider struct {
|
||||
IntegrationKey string `yaml:"integration-key"`
|
||||
|
||||
// DefaultAlert is the default alert configuration to use for services with an alert of the appropriate type
|
||||
// DefaultAlert is the default alert configuration to use for endpoints with an alert of the appropriate type
|
||||
DefaultAlert *alert.Alert `yaml:"default-alert"`
|
||||
|
||||
// Overrides is a list of Override that may be prioritized over the default configuration
|
||||
|
@ -48,14 +48,14 @@ func (provider *AlertProvider) IsValid() bool {
|
|||
// ToCustomAlertProvider converts the provider into a custom.AlertProvider
|
||||
//
|
||||
// relevant: https://developer.pagerduty.com/docs/events-api-v2/trigger-events/
|
||||
func (provider *AlertProvider) ToCustomAlertProvider(service *core.Service, alert *alert.Alert, _ *core.Result, resolved bool) *custom.AlertProvider {
|
||||
func (provider *AlertProvider) ToCustomAlertProvider(endpoint *core.Endpoint, alert *alert.Alert, _ *core.Result, resolved bool) *custom.AlertProvider {
|
||||
var message, eventAction, resolveKey string
|
||||
if resolved {
|
||||
message = fmt.Sprintf("RESOLVED: %s - %s", service.Name, alert.GetDescription())
|
||||
message = fmt.Sprintf("RESOLVED: %s - %s", endpoint.Name, alert.GetDescription())
|
||||
eventAction = "resolve"
|
||||
resolveKey = alert.ResolveKey
|
||||
} else {
|
||||
message = fmt.Sprintf("TRIGGERED: %s - %s", service.Name, alert.GetDescription())
|
||||
message = fmt.Sprintf("TRIGGERED: %s - %s", endpoint.Name, alert.GetDescription())
|
||||
eventAction = "trigger"
|
||||
resolveKey = ""
|
||||
}
|
||||
|
@ -71,7 +71,7 @@ func (provider *AlertProvider) ToCustomAlertProvider(service *core.Service, aler
|
|||
"source": "%s",
|
||||
"severity": "critical"
|
||||
}
|
||||
}`, provider.getPagerDutyIntegrationKeyForGroup(service.Group), resolveKey, eventAction, message, service.Name),
|
||||
}`, provider.getPagerDutyIntegrationKeyForGroup(endpoint.Group), resolveKey, eventAction, message, endpoint.Name),
|
||||
Headers: map[string]string{
|
||||
"Content-Type": "application/json",
|
||||
},
|
||||
|
|
|
@ -59,7 +59,7 @@ func TestAlertProvider_IsValidWithOverride(t *testing.T) {
|
|||
|
||||
func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {
|
||||
provider := AlertProvider{IntegrationKey: "00000000000000000000000000000000"}
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{}, &core.Result{}, true)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{}, &core.Result{}, true)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
@ -89,7 +89,7 @@ func TestAlertProvider_ToCustomAlertProviderWithResolvedAlertAndOverride(t *test
|
|||
},
|
||||
},
|
||||
}
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{}, &core.Result{}, true)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{}, &core.Result{}, true)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
@ -111,7 +111,7 @@ func TestAlertProvider_ToCustomAlertProviderWithResolvedAlertAndOverride(t *test
|
|||
|
||||
func TestAlertProvider_ToCustomAlertProviderWithTriggeredAlert(t *testing.T) {
|
||||
provider := AlertProvider{IntegrationKey: "00000000000000000000000000000000"}
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{}, &core.Result{}, false)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{}, &core.Result{}, false)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
@ -141,7 +141,7 @@ func TestAlertProvider_ToCustomAlertProviderWithTriggeredAlertAndOverride(t *tes
|
|||
},
|
||||
},
|
||||
}
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{}, &core.Result{}, false)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{}, &core.Result{}, false)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
|
|
@ -20,31 +20,31 @@ type AlertProvider interface {
|
|||
IsValid() bool
|
||||
|
||||
// ToCustomAlertProvider converts the provider into a custom.AlertProvider
|
||||
ToCustomAlertProvider(service *core.Service, alert *alert.Alert, result *core.Result, resolved bool) *custom.AlertProvider
|
||||
ToCustomAlertProvider(endpoint *core.Endpoint, alert *alert.Alert, result *core.Result, resolved bool) *custom.AlertProvider
|
||||
|
||||
// GetDefaultAlert returns the provider's default alert configuration
|
||||
GetDefaultAlert() *alert.Alert
|
||||
}
|
||||
|
||||
// ParseWithDefaultAlert parses a service alert by using the provider's default alert as a baseline
|
||||
func ParseWithDefaultAlert(providerDefaultAlert, serviceAlert *alert.Alert) {
|
||||
if providerDefaultAlert == nil || serviceAlert == nil {
|
||||
// ParseWithDefaultAlert parses an Endpoint alert by using the provider's default alert as a baseline
|
||||
func ParseWithDefaultAlert(providerDefaultAlert, endpointAlert *alert.Alert) {
|
||||
if providerDefaultAlert == nil || endpointAlert == nil {
|
||||
return
|
||||
}
|
||||
if serviceAlert.Enabled == nil {
|
||||
serviceAlert.Enabled = providerDefaultAlert.Enabled
|
||||
if endpointAlert.Enabled == nil {
|
||||
endpointAlert.Enabled = providerDefaultAlert.Enabled
|
||||
}
|
||||
if serviceAlert.SendOnResolved == nil {
|
||||
serviceAlert.SendOnResolved = providerDefaultAlert.SendOnResolved
|
||||
if endpointAlert.SendOnResolved == nil {
|
||||
endpointAlert.SendOnResolved = providerDefaultAlert.SendOnResolved
|
||||
}
|
||||
if serviceAlert.Description == nil {
|
||||
serviceAlert.Description = providerDefaultAlert.Description
|
||||
if endpointAlert.Description == nil {
|
||||
endpointAlert.Description = providerDefaultAlert.Description
|
||||
}
|
||||
if serviceAlert.FailureThreshold == 0 {
|
||||
serviceAlert.FailureThreshold = providerDefaultAlert.FailureThreshold
|
||||
if endpointAlert.FailureThreshold == 0 {
|
||||
endpointAlert.FailureThreshold = providerDefaultAlert.FailureThreshold
|
||||
}
|
||||
if serviceAlert.SuccessThreshold == 0 {
|
||||
serviceAlert.SuccessThreshold = providerDefaultAlert.SuccessThreshold
|
||||
if endpointAlert.SuccessThreshold == 0 {
|
||||
endpointAlert.SuccessThreshold = providerDefaultAlert.SuccessThreshold
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -9,7 +9,7 @@ import (
|
|||
func TestParseWithDefaultAlert(t *testing.T) {
|
||||
type Scenario struct {
|
||||
Name string
|
||||
DefaultAlert, ServiceAlert, ExpectedOutputAlert *alert.Alert
|
||||
DefaultAlert, EndpointAlert, ExpectedOutputAlert *alert.Alert
|
||||
}
|
||||
enabled := true
|
||||
disabled := false
|
||||
|
@ -17,7 +17,7 @@ func TestParseWithDefaultAlert(t *testing.T) {
|
|||
secondDescription := "description-2"
|
||||
scenarios := []Scenario{
|
||||
{
|
||||
Name: "service-alert-type-only",
|
||||
Name: "endpoint-alert-type-only",
|
||||
DefaultAlert: &alert.Alert{
|
||||
Enabled: &enabled,
|
||||
SendOnResolved: &enabled,
|
||||
|
@ -25,7 +25,7 @@ func TestParseWithDefaultAlert(t *testing.T) {
|
|||
FailureThreshold: 5,
|
||||
SuccessThreshold: 10,
|
||||
},
|
||||
ServiceAlert: &alert.Alert{
|
||||
EndpointAlert: &alert.Alert{
|
||||
Type: alert.TypeDiscord,
|
||||
},
|
||||
ExpectedOutputAlert: &alert.Alert{
|
||||
|
@ -38,7 +38,7 @@ func TestParseWithDefaultAlert(t *testing.T) {
|
|||
},
|
||||
},
|
||||
{
|
||||
Name: "service-alert-overwrites-default-alert",
|
||||
Name: "endpoint-alert-overwrites-default-alert",
|
||||
DefaultAlert: &alert.Alert{
|
||||
Enabled: &disabled,
|
||||
SendOnResolved: &disabled,
|
||||
|
@ -46,7 +46,7 @@ func TestParseWithDefaultAlert(t *testing.T) {
|
|||
FailureThreshold: 5,
|
||||
SuccessThreshold: 10,
|
||||
},
|
||||
ServiceAlert: &alert.Alert{
|
||||
EndpointAlert: &alert.Alert{
|
||||
Type: alert.TypeTelegram,
|
||||
Enabled: &enabled,
|
||||
SendOnResolved: &enabled,
|
||||
|
@ -64,7 +64,7 @@ func TestParseWithDefaultAlert(t *testing.T) {
|
|||
},
|
||||
},
|
||||
{
|
||||
Name: "service-alert-partially-overwrites-default-alert",
|
||||
Name: "endpoint-alert-partially-overwrites-default-alert",
|
||||
DefaultAlert: &alert.Alert{
|
||||
Enabled: &enabled,
|
||||
SendOnResolved: &enabled,
|
||||
|
@ -72,7 +72,7 @@ func TestParseWithDefaultAlert(t *testing.T) {
|
|||
FailureThreshold: 5,
|
||||
SuccessThreshold: 10,
|
||||
},
|
||||
ServiceAlert: &alert.Alert{
|
||||
EndpointAlert: &alert.Alert{
|
||||
Type: alert.TypeDiscord,
|
||||
Enabled: nil,
|
||||
SendOnResolved: nil,
|
||||
|
@ -98,7 +98,7 @@ func TestParseWithDefaultAlert(t *testing.T) {
|
|||
FailureThreshold: 5,
|
||||
SuccessThreshold: 10,
|
||||
},
|
||||
ServiceAlert: &alert.Alert{
|
||||
EndpointAlert: &alert.Alert{
|
||||
Type: alert.TypeDiscord,
|
||||
},
|
||||
ExpectedOutputAlert: &alert.Alert{
|
||||
|
@ -120,33 +120,33 @@ func TestParseWithDefaultAlert(t *testing.T) {
|
|||
FailureThreshold: 2,
|
||||
SuccessThreshold: 5,
|
||||
},
|
||||
ServiceAlert: nil,
|
||||
EndpointAlert: nil,
|
||||
ExpectedOutputAlert: nil,
|
||||
},
|
||||
}
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
ParseWithDefaultAlert(scenario.DefaultAlert, scenario.ServiceAlert)
|
||||
ParseWithDefaultAlert(scenario.DefaultAlert, scenario.EndpointAlert)
|
||||
if scenario.ExpectedOutputAlert == nil {
|
||||
if scenario.ServiceAlert != nil {
|
||||
if scenario.EndpointAlert != nil {
|
||||
t.Fail()
|
||||
}
|
||||
return
|
||||
}
|
||||
if scenario.ServiceAlert.IsEnabled() != scenario.ExpectedOutputAlert.IsEnabled() {
|
||||
t.Errorf("expected ServiceAlert.IsEnabled() to be %v, got %v", scenario.ExpectedOutputAlert.IsEnabled(), scenario.ServiceAlert.IsEnabled())
|
||||
if scenario.EndpointAlert.IsEnabled() != scenario.ExpectedOutputAlert.IsEnabled() {
|
||||
t.Errorf("expected EndpointAlert.IsEnabled() to be %v, got %v", scenario.ExpectedOutputAlert.IsEnabled(), scenario.EndpointAlert.IsEnabled())
|
||||
}
|
||||
if scenario.ServiceAlert.IsSendingOnResolved() != scenario.ExpectedOutputAlert.IsSendingOnResolved() {
|
||||
t.Errorf("expected ServiceAlert.IsSendingOnResolved() to be %v, got %v", scenario.ExpectedOutputAlert.IsSendingOnResolved(), scenario.ServiceAlert.IsSendingOnResolved())
|
||||
if scenario.EndpointAlert.IsSendingOnResolved() != scenario.ExpectedOutputAlert.IsSendingOnResolved() {
|
||||
t.Errorf("expected EndpointAlert.IsSendingOnResolved() to be %v, got %v", scenario.ExpectedOutputAlert.IsSendingOnResolved(), scenario.EndpointAlert.IsSendingOnResolved())
|
||||
}
|
||||
if scenario.ServiceAlert.GetDescription() != scenario.ExpectedOutputAlert.GetDescription() {
|
||||
t.Errorf("expected ServiceAlert.GetDescription() to be %v, got %v", scenario.ExpectedOutputAlert.GetDescription(), scenario.ServiceAlert.GetDescription())
|
||||
if scenario.EndpointAlert.GetDescription() != scenario.ExpectedOutputAlert.GetDescription() {
|
||||
t.Errorf("expected EndpointAlert.GetDescription() to be %v, got %v", scenario.ExpectedOutputAlert.GetDescription(), scenario.EndpointAlert.GetDescription())
|
||||
}
|
||||
if scenario.ServiceAlert.FailureThreshold != scenario.ExpectedOutputAlert.FailureThreshold {
|
||||
t.Errorf("expected ServiceAlert.FailureThreshold to be %v, got %v", scenario.ExpectedOutputAlert.FailureThreshold, scenario.ServiceAlert.FailureThreshold)
|
||||
if scenario.EndpointAlert.FailureThreshold != scenario.ExpectedOutputAlert.FailureThreshold {
|
||||
t.Errorf("expected EndpointAlert.FailureThreshold to be %v, got %v", scenario.ExpectedOutputAlert.FailureThreshold, scenario.EndpointAlert.FailureThreshold)
|
||||
}
|
||||
if scenario.ServiceAlert.SuccessThreshold != scenario.ExpectedOutputAlert.SuccessThreshold {
|
||||
t.Errorf("expected ServiceAlert.SuccessThreshold to be %v, got %v", scenario.ExpectedOutputAlert.SuccessThreshold, scenario.ServiceAlert.SuccessThreshold)
|
||||
if scenario.EndpointAlert.SuccessThreshold != scenario.ExpectedOutputAlert.SuccessThreshold {
|
||||
t.Errorf("expected EndpointAlert.SuccessThreshold to be %v, got %v", scenario.ExpectedOutputAlert.SuccessThreshold, scenario.EndpointAlert.SuccessThreshold)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
|
|
@ -13,7 +13,7 @@ import (
|
|||
type AlertProvider struct {
|
||||
WebhookURL string `yaml:"webhook-url"` // Slack webhook URL
|
||||
|
||||
// DefaultAlert is the default alert configuration to use for services with an alert of the appropriate type
|
||||
// DefaultAlert is the default alert configuration to use for endpoints with an alert of the appropriate type
|
||||
DefaultAlert *alert.Alert `yaml:"default-alert"`
|
||||
}
|
||||
|
||||
|
@ -23,13 +23,13 @@ func (provider *AlertProvider) IsValid() bool {
|
|||
}
|
||||
|
||||
// ToCustomAlertProvider converts the provider into a custom.AlertProvider
|
||||
func (provider *AlertProvider) ToCustomAlertProvider(service *core.Service, alert *alert.Alert, result *core.Result, resolved bool) *custom.AlertProvider {
|
||||
func (provider *AlertProvider) ToCustomAlertProvider(endpoint *core.Endpoint, alert *alert.Alert, result *core.Result, resolved bool) *custom.AlertProvider {
|
||||
var message, color, results string
|
||||
if resolved {
|
||||
message = fmt.Sprintf("An alert for *%s* has been resolved after passing successfully %d time(s) in a row", service.Name, alert.SuccessThreshold)
|
||||
message = fmt.Sprintf("An alert for *%s* has been resolved after passing successfully %d time(s) in a row", endpoint.Name, alert.SuccessThreshold)
|
||||
color = "#36A64F"
|
||||
} else {
|
||||
message = fmt.Sprintf("An alert for *%s* has been triggered due to having failed %d time(s) in a row", service.Name, alert.FailureThreshold)
|
||||
message = fmt.Sprintf("An alert for *%s* has been triggered due to having failed %d time(s) in a row", endpoint.Name, alert.FailureThreshold)
|
||||
color = "#DD0000"
|
||||
}
|
||||
for _, conditionResult := range result.ConditionResults {
|
||||
|
|
|
@ -24,7 +24,7 @@ func TestAlertProvider_IsValid(t *testing.T) {
|
|||
func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {
|
||||
provider := AlertProvider{WebhookURL: "http://example.com"}
|
||||
alertDescription := "test"
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{Name: "svc"}, &alert.Alert{Description: &alertDescription}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "SUCCESSFUL_CONDITION", Success: true}}}, true)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{Name: "svc"}, &alert.Alert{Description: &alertDescription}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "SUCCESSFUL_CONDITION", Success: true}}}, true)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
@ -49,7 +49,7 @@ func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {
|
|||
|
||||
func TestAlertProvider_ToCustomAlertProviderWithTriggeredAlert(t *testing.T) {
|
||||
provider := AlertProvider{WebhookURL: "http://example.com"}
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "UNSUCCESSFUL_CONDITION", Success: false}}}, false)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "UNSUCCESSFUL_CONDITION", Success: false}}}, false)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
|
|
@ -13,7 +13,7 @@ import (
|
|||
type AlertProvider struct {
|
||||
WebhookURL string `yaml:"webhook-url"`
|
||||
|
||||
// DefaultAlert is the default alert configuration to use for services with an alert of the appropriate type
|
||||
// DefaultAlert is the default alert configuration to use for endpoints with an alert of the appropriate type
|
||||
DefaultAlert *alert.Alert `yaml:"default-alert"`
|
||||
}
|
||||
|
||||
|
@ -23,14 +23,14 @@ func (provider *AlertProvider) IsValid() bool {
|
|||
}
|
||||
|
||||
// ToCustomAlertProvider converts the provider into a custom.AlertProvider
|
||||
func (provider *AlertProvider) ToCustomAlertProvider(service *core.Service, alert *alert.Alert, result *core.Result, resolved bool) *custom.AlertProvider {
|
||||
func (provider *AlertProvider) ToCustomAlertProvider(endpoint *core.Endpoint, alert *alert.Alert, result *core.Result, resolved bool) *custom.AlertProvider {
|
||||
var message string
|
||||
var color string
|
||||
if resolved {
|
||||
message = fmt.Sprintf("An alert for *%s* has been resolved after passing successfully %d time(s) in a row", service.Name, alert.SuccessThreshold)
|
||||
message = fmt.Sprintf("An alert for *%s* has been resolved after passing successfully %d time(s) in a row", endpoint.Name, alert.SuccessThreshold)
|
||||
color = "#36A64F"
|
||||
} else {
|
||||
message = fmt.Sprintf("An alert for *%s* has been triggered due to having failed %d time(s) in a row", service.Name, alert.FailureThreshold)
|
||||
message = fmt.Sprintf("An alert for *%s* has been triggered due to having failed %d time(s) in a row", endpoint.Name, alert.FailureThreshold)
|
||||
color = "#DD0000"
|
||||
}
|
||||
var results string
|
||||
|
@ -66,7 +66,7 @@ func (provider *AlertProvider) ToCustomAlertProvider(service *core.Service, aler
|
|||
"text": "%s"
|
||||
}
|
||||
]
|
||||
}`, color, message, description, service.URL, results),
|
||||
}`, color, message, description, endpoint.URL, results),
|
||||
Headers: map[string]string{"Content-Type": "application/json"},
|
||||
}
|
||||
}
|
||||
|
|
|
@ -24,7 +24,7 @@ func TestAlertProvider_IsValid(t *testing.T) {
|
|||
func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {
|
||||
provider := AlertProvider{WebhookURL: "http://example.org"}
|
||||
alertDescription := "test"
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{Name: "svc"}, &alert.Alert{Description: &alertDescription}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "SUCCESSFUL_CONDITION", Success: true}}}, true)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{Name: "svc"}, &alert.Alert{Description: &alertDescription}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "SUCCESSFUL_CONDITION", Success: true}}}, true)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
@ -49,7 +49,7 @@ func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {
|
|||
|
||||
func TestAlertProvider_ToCustomAlertProviderWithTriggeredAlert(t *testing.T) {
|
||||
provider := AlertProvider{WebhookURL: "http://example.org"}
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "UNSUCCESSFUL_CONDITION", Success: false}}}, false)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "UNSUCCESSFUL_CONDITION", Success: false}}}, false)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
|
|
@ -14,7 +14,7 @@ type AlertProvider struct {
|
|||
Token string `yaml:"token"`
|
||||
ID string `yaml:"id"`
|
||||
|
||||
// DefaultAlert is the default alert configuration to use for services with an alert of the appropriate type
|
||||
// DefaultAlert is the default alert configuration to use for endpoints with an alert of the appropriate type
|
||||
DefaultAlert *alert.Alert `yaml:"default-alert"`
|
||||
}
|
||||
|
||||
|
@ -24,12 +24,12 @@ func (provider *AlertProvider) IsValid() bool {
|
|||
}
|
||||
|
||||
// ToCustomAlertProvider converts the provider into a custom.AlertProvider
|
||||
func (provider *AlertProvider) ToCustomAlertProvider(service *core.Service, alert *alert.Alert, result *core.Result, resolved bool) *custom.AlertProvider {
|
||||
func (provider *AlertProvider) ToCustomAlertProvider(endpoint *core.Endpoint, alert *alert.Alert, result *core.Result, resolved bool) *custom.AlertProvider {
|
||||
var message, results string
|
||||
if resolved {
|
||||
message = fmt.Sprintf("An alert for *%s* has been resolved:\\n—\\n _healthcheck passing successfully %d time(s) in a row_\\n— ", service.Name, alert.FailureThreshold)
|
||||
message = fmt.Sprintf("An alert for *%s* has been resolved:\\n—\\n _healthcheck passing successfully %d time(s) in a row_\\n— ", endpoint.Name, alert.FailureThreshold)
|
||||
} else {
|
||||
message = fmt.Sprintf("An alert for *%s* has been triggered:\\n—\\n _healthcheck failed %d time(s) in a row_\\n— ", service.Name, alert.FailureThreshold)
|
||||
message = fmt.Sprintf("An alert for *%s* has been triggered:\\n—\\n _healthcheck failed %d time(s) in a row_\\n— ", endpoint.Name, alert.FailureThreshold)
|
||||
}
|
||||
for _, conditionResult := range result.ConditionResults {
|
||||
var prefix string
|
||||
|
|
|
@ -24,7 +24,7 @@ func TestAlertProvider_IsValid(t *testing.T) {
|
|||
|
||||
func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {
|
||||
provider := AlertProvider{Token: "123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11", ID: "12345678"}
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "SUCCESSFUL_CONDITION", Success: true}}}, true)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "SUCCESSFUL_CONDITION", Success: true}}}, true)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
@ -48,7 +48,7 @@ func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {
|
|||
func TestAlertProvider_ToCustomAlertProviderWithTriggeredAlert(t *testing.T) {
|
||||
provider := AlertProvider{Token: "123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11", ID: "0123456789"}
|
||||
description := "Healthcheck Successful"
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{Description: &description}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "UNSUCCESSFUL_CONDITION", Success: false}}}, false)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{Description: &description}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "UNSUCCESSFUL_CONDITION", Success: false}}}, false)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
@ -70,7 +70,7 @@ func TestAlertProvider_ToCustomAlertProviderWithTriggeredAlert(t *testing.T) {
|
|||
|
||||
func TestAlertProvider_ToCustomAlertProviderWithDescription(t *testing.T) {
|
||||
provider := AlertProvider{Token: "123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11", ID: "0123456789"}
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{}, &alert.Alert{}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "UNSUCCESSFUL_CONDITION", Success: false}}}, false)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{}, &alert.Alert{}, &core.Result{ConditionResults: []*core.ConditionResult{{Condition: "UNSUCCESSFUL_CONDITION", Success: false}}}, false)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
|
|
@ -18,7 +18,7 @@ type AlertProvider struct {
|
|||
From string `yaml:"from"`
|
||||
To string `yaml:"to"`
|
||||
|
||||
// DefaultAlert is the default alert configuration to use for services with an alert of the appropriate type
|
||||
// DefaultAlert is the default alert configuration to use for endpoints with an alert of the appropriate type
|
||||
DefaultAlert *alert.Alert `yaml:"default-alert"`
|
||||
}
|
||||
|
||||
|
@ -28,12 +28,12 @@ func (provider *AlertProvider) IsValid() bool {
|
|||
}
|
||||
|
||||
// ToCustomAlertProvider converts the provider into a custom.AlertProvider
|
||||
func (provider *AlertProvider) ToCustomAlertProvider(service *core.Service, alert *alert.Alert, _ *core.Result, resolved bool) *custom.AlertProvider {
|
||||
func (provider *AlertProvider) ToCustomAlertProvider(endpoint *core.Endpoint, alert *alert.Alert, _ *core.Result, resolved bool) *custom.AlertProvider {
|
||||
var message string
|
||||
if resolved {
|
||||
message = fmt.Sprintf("RESOLVED: %s - %s", service.Name, alert.GetDescription())
|
||||
message = fmt.Sprintf("RESOLVED: %s - %s", endpoint.Name, alert.GetDescription())
|
||||
} else {
|
||||
message = fmt.Sprintf("TRIGGERED: %s - %s", service.Name, alert.GetDescription())
|
||||
message = fmt.Sprintf("TRIGGERED: %s - %s", endpoint.Name, alert.GetDescription())
|
||||
}
|
||||
return &custom.AlertProvider{
|
||||
URL: fmt.Sprintf("https://api.twilio.com/2010-04-01/Accounts/%s/Messages.json", provider.SID),
|
||||
|
|
|
@ -33,7 +33,7 @@ func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {
|
|||
To: "4",
|
||||
}
|
||||
description := "alert-description"
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{Name: "service-name"}, &alert.Alert{Description: &description}, &core.Result{}, true)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{Name: "endpoint-name"}, &alert.Alert{Description: &description}, &core.Result{}, true)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
@ -46,8 +46,8 @@ func TestAlertProvider_ToCustomAlertProviderWithResolvedAlert(t *testing.T) {
|
|||
if customAlertProvider.Method != http.MethodPost {
|
||||
t.Errorf("expected method to be %s, got %s", http.MethodPost, customAlertProvider.Method)
|
||||
}
|
||||
if customAlertProvider.Body != "Body=RESOLVED%3A+service-name+-+alert-description&From=3&To=4" {
|
||||
t.Errorf("expected body to be %s, got %s", "Body=RESOLVED%3A+service-name+-+alert-description&From=3&To=4", customAlertProvider.Body)
|
||||
if customAlertProvider.Body != "Body=RESOLVED%3A+endpoint-name+-+alert-description&From=3&To=4" {
|
||||
t.Errorf("expected body to be %s, got %s", "Body=RESOLVED%3A+endpoint-name+-+alert-description&From=3&To=4", customAlertProvider.Body)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -59,7 +59,7 @@ func TestAlertProvider_ToCustomAlertProviderWithTriggeredAlert(t *testing.T) {
|
|||
To: "1",
|
||||
}
|
||||
description := "alert-description"
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Service{Name: "service-name"}, &alert.Alert{Description: &description}, &core.Result{}, false)
|
||||
customAlertProvider := provider.ToCustomAlertProvider(&core.Endpoint{Name: "endpoint-name"}, &alert.Alert{Description: &description}, &core.Result{}, false)
|
||||
if customAlertProvider == nil {
|
||||
t.Fatal("customAlertProvider shouldn't have been nil")
|
||||
}
|
||||
|
@ -72,7 +72,7 @@ func TestAlertProvider_ToCustomAlertProviderWithTriggeredAlert(t *testing.T) {
|
|||
if customAlertProvider.Method != http.MethodPost {
|
||||
t.Errorf("expected method to be %s, got %s", http.MethodPost, customAlertProvider.Method)
|
||||
}
|
||||
if customAlertProvider.Body != "Body=TRIGGERED%3A+service-name+-+alert-description&From=2&To=1" {
|
||||
t.Errorf("expected body to be %s, got %s", "Body=TRIGGERED%3A+service-name+-+alert-description&From=2&To=1", customAlertProvider.Body)
|
||||
if customAlertProvider.Body != "Body=TRIGGERED%3A+endpoint-name+-+alert-description&From=2&To=1" {
|
||||
t.Errorf("expected body to be %s, got %s", "Body=TRIGGERED%3A+endpoint-name+-+alert-description&From=2&To=1", customAlertProvider.Body)
|
||||
}
|
||||
}
|
||||
|
|
|
@ -22,7 +22,7 @@ func GetHTTPClient(config *Config) *http.Client {
|
|||
return config.getHTTPClient()
|
||||
}
|
||||
|
||||
// CanCreateTCPConnection checks whether a connection can be established with a TCP service
|
||||
// CanCreateTCPConnection checks whether a connection can be established with a TCP endpoint
|
||||
func CanCreateTCPConnection(address string, config *Config) bool {
|
||||
conn, err := net.DialTimeout("tcp", address, config.Timeout)
|
||||
if err != nil {
|
||||
|
|
|
@ -46,7 +46,7 @@ func (c *Config) ValidateAndSetDefaults() {
|
|||
}
|
||||
}
|
||||
|
||||
// GetHTTPClient return a HTTP client matching the Config's parameters.
|
||||
// GetHTTPClient return an HTTP client matching the Config's parameters.
|
||||
func (c *Config) getHTTPClient() *http.Client {
|
||||
if c.httpClient == nil {
|
||||
c.httpClient = &http.Client{
|
||||
|
|
|
@ -1,8 +1,8 @@
|
|||
services:
|
||||
endpoints:
|
||||
- name: front-end
|
||||
group: core
|
||||
url: "https://twin.sh/health"
|
||||
interval: 1m
|
||||
interval: 5m
|
||||
conditions:
|
||||
- "[STATUS] == 200"
|
||||
- "[BODY].status == UP"
|
||||
|
|
|
@ -30,8 +30,8 @@ const (
|
|||
)
|
||||
|
||||
var (
|
||||
// ErrNoServiceInConfig is an error returned when a configuration file has no services configured
|
||||
ErrNoServiceInConfig = errors.New("configuration file should contain at least 1 service")
|
||||
// ErrNoEndpointInConfig is an error returned when a configuration file has no endpoints configured
|
||||
ErrNoEndpointInConfig = errors.New("configuration file should contain at least 1 endpoint")
|
||||
|
||||
// ErrConfigFileNotFound is an error returned when the configuration file could not be found
|
||||
ErrConfigFileNotFound = errors.New("configuration file not found")
|
||||
|
@ -53,7 +53,7 @@ type Config struct {
|
|||
SkipInvalidConfigUpdate bool `yaml:"skip-invalid-config-update"`
|
||||
|
||||
// DisableMonitoringLock Whether to disable the monitoring lock
|
||||
// The monitoring lock is what prevents multiple services from being processed at the same time.
|
||||
// The monitoring lock is what prevents multiple endpoints from being processed at the same time.
|
||||
// Disabling this may lead to inaccurate response times
|
||||
DisableMonitoringLock bool `yaml:"disable-monitoring-lock"`
|
||||
|
||||
|
@ -63,13 +63,21 @@ type Config struct {
|
|||
// Alerting Configuration for alerting
|
||||
Alerting *alerting.Config `yaml:"alerting"`
|
||||
|
||||
// Services List of services to monitor
|
||||
Services []*core.Service `yaml:"services"`
|
||||
// Endpoints List of endpoints to monitor
|
||||
Endpoints []*core.Endpoint `yaml:"endpoints"`
|
||||
|
||||
// Services List of endpoints to monitor
|
||||
//
|
||||
// XXX: Remove this in v5.0.0
|
||||
// XXX: This is not a typo -- not v4.0.0, but v5.0.0 -- I want to give enough time for people to migrate
|
||||
//
|
||||
// Deprecated in favor of Endpoints
|
||||
Services []*core.Endpoint `yaml:"services"`
|
||||
|
||||
// Storage is the configuration for how the data is stored
|
||||
Storage *storage.Config `yaml:"storage"`
|
||||
|
||||
// Web is the configuration for the web listener
|
||||
// Web is the web configuration for the application
|
||||
Web *web.Config `yaml:"web"`
|
||||
|
||||
// UI is the configuration for the UI
|
||||
|
@ -149,17 +157,23 @@ func parseAndValidateConfigBytes(yamlBytes []byte) (config *Config, err error) {
|
|||
if err != nil {
|
||||
return
|
||||
}
|
||||
// Check if the configuration file at least has services configured
|
||||
if config == nil || config.Services == nil || len(config.Services) == 0 {
|
||||
err = ErrNoServiceInConfig
|
||||
if config != nil && len(config.Services) > 0 { // XXX: Remove this in v5.0.0
|
||||
log.Println("WARNING: Your configuration is using 'services:', which is deprecated in favor of 'endpoints:'.")
|
||||
log.Println("WARNING: See https://github.com/TwiN/gatus/issues/191 for more information")
|
||||
config.Endpoints = append(config.Endpoints, config.Services...)
|
||||
config.Services = nil
|
||||
}
|
||||
// Check if the configuration file at least has endpoints configured
|
||||
if config == nil || config.Endpoints == nil || len(config.Endpoints) == 0 {
|
||||
err = ErrNoEndpointInConfig
|
||||
} else {
|
||||
// Note that the functions below may panic, and this is on purpose to prevent Gatus from starting with
|
||||
// invalid configurations
|
||||
validateAlertingConfig(config.Alerting, config.Services, config.Debug)
|
||||
validateAlertingConfig(config.Alerting, config.Endpoints, config.Debug)
|
||||
if err := validateSecurityConfig(config); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := validateServicesConfig(config); err != nil {
|
||||
if err := validateEndpointsConfig(config); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := validateWebConfig(config); err != nil {
|
||||
|
@ -188,14 +202,14 @@ func validateStorageConfig(config *Config) error {
|
|||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// Remove all ServiceStatus that represent services which no longer exist in the configuration
|
||||
// Remove all EndpointStatus that represent endpoints which no longer exist in the configuration
|
||||
var keys []string
|
||||
for _, service := range config.Services {
|
||||
keys = append(keys, service.Key())
|
||||
for _, endpoint := range config.Endpoints {
|
||||
keys = append(keys, endpoint.Key())
|
||||
}
|
||||
numberOfServiceStatusesDeleted := storage.Get().DeleteAllServiceStatusesNotInKeys(keys)
|
||||
if numberOfServiceStatusesDeleted > 0 {
|
||||
log.Printf("[config][validateStorageConfig] Deleted %d service statuses because their matching services no longer existed", numberOfServiceStatusesDeleted)
|
||||
numberOfEndpointStatusesDeleted := storage.Get().DeleteAllEndpointStatusesNotInKeys(keys)
|
||||
if numberOfEndpointStatusesDeleted > 0 {
|
||||
log.Printf("[config][validateStorageConfig] Deleted %d endpoint statuses because their matching endpoints no longer existed", numberOfEndpointStatusesDeleted)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
@ -231,16 +245,16 @@ func validateWebConfig(config *Config) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
func validateServicesConfig(config *Config) error {
|
||||
for _, service := range config.Services {
|
||||
func validateEndpointsConfig(config *Config) error {
|
||||
for _, endpoint := range config.Endpoints {
|
||||
if config.Debug {
|
||||
log.Printf("[config][validateServicesConfig] Validating service '%s'", service.Name)
|
||||
log.Printf("[config][validateEndpointsConfig] Validating endpoint '%s'", endpoint.Name)
|
||||
}
|
||||
if err := service.ValidateAndSetDefaults(); err != nil {
|
||||
if err := endpoint.ValidateAndSetDefaults(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
log.Printf("[config][validateServicesConfig] Validated %d services", len(config.Services))
|
||||
log.Printf("[config][validateEndpointsConfig] Validated %d endpoints", len(config.Endpoints))
|
||||
return nil
|
||||
}
|
||||
|
||||
|
@ -260,10 +274,10 @@ func validateSecurityConfig(config *Config) error {
|
|||
}
|
||||
|
||||
// validateAlertingConfig validates the alerting configuration
|
||||
// Note that the alerting configuration has to be validated before the service configuration, because the default alert
|
||||
// returned by provider.AlertProvider.GetDefaultAlert() must be parsed before core.Service.ValidateAndSetDefaults()
|
||||
// Note that the alerting configuration has to be validated before the endpoint configuration, because the default alert
|
||||
// returned by provider.AlertProvider.GetDefaultAlert() must be parsed before core.Endpoint.ValidateAndSetDefaults()
|
||||
// sets the default alert values when none are set.
|
||||
func validateAlertingConfig(alertingConfig *alerting.Config, services []*core.Service, debug bool) {
|
||||
func validateAlertingConfig(alertingConfig *alerting.Config, endpoints []*core.Endpoint, debug bool) {
|
||||
if alertingConfig == nil {
|
||||
log.Printf("[config][validateAlertingConfig] Alerting is not configured")
|
||||
return
|
||||
|
@ -286,13 +300,13 @@ func validateAlertingConfig(alertingConfig *alerting.Config, services []*core.Se
|
|||
if alertProvider.IsValid() {
|
||||
// Parse alerts with the provider's default alert
|
||||
if alertProvider.GetDefaultAlert() != nil {
|
||||
for _, service := range services {
|
||||
for alertIndex, serviceAlert := range service.Alerts {
|
||||
if alertType == serviceAlert.Type {
|
||||
for _, endpoint := range endpoints {
|
||||
for alertIndex, endpointAlert := range endpoint.Alerts {
|
||||
if alertType == endpointAlert.Type {
|
||||
if debug {
|
||||
log.Printf("[config][validateAlertingConfig] Parsing alert %d with provider's default alert for provider=%s in service=%s", alertIndex, alertType, service.Name)
|
||||
log.Printf("[config][validateAlertingConfig] Parsing alert %d with provider's default alert for provider=%s in endpoint=%s", alertIndex, alertType, endpoint.Name)
|
||||
}
|
||||
provider.ParseWithDefaultAlert(alertProvider.GetDefaultAlert(), serviceAlert)
|
||||
provider.ParseWithDefaultAlert(alertProvider.GetDefaultAlert(), endpointAlert)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -52,7 +52,7 @@ maintenance:
|
|||
every: [Monday, Thursday]
|
||||
ui:
|
||||
title: Test
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
interval: 15s
|
||||
|
@ -89,80 +89,80 @@ services:
|
|||
if mc := config.Maintenance; mc == nil || mc.Start != "00:00" || !mc.IsEnabled() || mc.Duration != 4*time.Hour || len(mc.Every) != 2 {
|
||||
t.Error("Expected Config.Maintenance to be configured properly")
|
||||
}
|
||||
if len(config.Services) != 3 {
|
||||
t.Error("Should have returned two services")
|
||||
if len(config.Endpoints) != 3 {
|
||||
t.Error("Should have returned two endpoints")
|
||||
}
|
||||
|
||||
if config.Services[0].URL != "https://twin.sh/health" {
|
||||
if config.Endpoints[0].URL != "https://twin.sh/health" {
|
||||
t.Errorf("URL should have been %s", "https://twin.sh/health")
|
||||
}
|
||||
if config.Services[0].Method != "GET" {
|
||||
if config.Endpoints[0].Method != "GET" {
|
||||
t.Errorf("Method should have been %s (default)", "GET")
|
||||
}
|
||||
if config.Services[0].Interval != 15*time.Second {
|
||||
if config.Endpoints[0].Interval != 15*time.Second {
|
||||
t.Errorf("Interval should have been %s", 15*time.Second)
|
||||
}
|
||||
if config.Services[0].ClientConfig.Insecure != client.GetDefaultConfig().Insecure {
|
||||
t.Errorf("ClientConfig.Insecure should have been %v, got %v", true, config.Services[0].ClientConfig.Insecure)
|
||||
if config.Endpoints[0].ClientConfig.Insecure != client.GetDefaultConfig().Insecure {
|
||||
t.Errorf("ClientConfig.Insecure should have been %v, got %v", true, config.Endpoints[0].ClientConfig.Insecure)
|
||||
}
|
||||
if config.Services[0].ClientConfig.IgnoreRedirect != client.GetDefaultConfig().IgnoreRedirect {
|
||||
t.Errorf("ClientConfig.IgnoreRedirect should have been %v, got %v", true, config.Services[0].ClientConfig.IgnoreRedirect)
|
||||
if config.Endpoints[0].ClientConfig.IgnoreRedirect != client.GetDefaultConfig().IgnoreRedirect {
|
||||
t.Errorf("ClientConfig.IgnoreRedirect should have been %v, got %v", true, config.Endpoints[0].ClientConfig.IgnoreRedirect)
|
||||
}
|
||||
if config.Services[0].ClientConfig.Timeout != client.GetDefaultConfig().Timeout {
|
||||
t.Errorf("ClientConfig.Timeout should have been %v, got %v", client.GetDefaultConfig().Timeout, config.Services[0].ClientConfig.Timeout)
|
||||
if config.Endpoints[0].ClientConfig.Timeout != client.GetDefaultConfig().Timeout {
|
||||
t.Errorf("ClientConfig.Timeout should have been %v, got %v", client.GetDefaultConfig().Timeout, config.Endpoints[0].ClientConfig.Timeout)
|
||||
}
|
||||
if len(config.Services[0].Conditions) != 1 {
|
||||
if len(config.Endpoints[0].Conditions) != 1 {
|
||||
t.Errorf("There should have been %d conditions", 1)
|
||||
}
|
||||
|
||||
if config.Services[1].URL != "https://api.github.com/healthz" {
|
||||
if config.Endpoints[1].URL != "https://api.github.com/healthz" {
|
||||
t.Errorf("URL should have been %s", "https://api.github.com/healthz")
|
||||
}
|
||||
if config.Services[1].Method != "GET" {
|
||||
if config.Endpoints[1].Method != "GET" {
|
||||
t.Errorf("Method should have been %s (default)", "GET")
|
||||
}
|
||||
if config.Services[1].Interval != 60*time.Second {
|
||||
if config.Endpoints[1].Interval != 60*time.Second {
|
||||
t.Errorf("Interval should have been %s, because it is the default value", 60*time.Second)
|
||||
}
|
||||
if !config.Services[1].ClientConfig.Insecure {
|
||||
t.Errorf("ClientConfig.Insecure should have been %v, got %v", true, config.Services[1].ClientConfig.Insecure)
|
||||
if !config.Endpoints[1].ClientConfig.Insecure {
|
||||
t.Errorf("ClientConfig.Insecure should have been %v, got %v", true, config.Endpoints[1].ClientConfig.Insecure)
|
||||
}
|
||||
if !config.Services[1].ClientConfig.IgnoreRedirect {
|
||||
t.Errorf("ClientConfig.IgnoreRedirect should have been %v, got %v", true, config.Services[1].ClientConfig.IgnoreRedirect)
|
||||
if !config.Endpoints[1].ClientConfig.IgnoreRedirect {
|
||||
t.Errorf("ClientConfig.IgnoreRedirect should have been %v, got %v", true, config.Endpoints[1].ClientConfig.IgnoreRedirect)
|
||||
}
|
||||
if config.Services[1].ClientConfig.Timeout != 5*time.Second {
|
||||
t.Errorf("ClientConfig.Timeout should have been %v, got %v", 5*time.Second, config.Services[1].ClientConfig.Timeout)
|
||||
if config.Endpoints[1].ClientConfig.Timeout != 5*time.Second {
|
||||
t.Errorf("ClientConfig.Timeout should have been %v, got %v", 5*time.Second, config.Endpoints[1].ClientConfig.Timeout)
|
||||
}
|
||||
if len(config.Services[1].Conditions) != 2 {
|
||||
if len(config.Endpoints[1].Conditions) != 2 {
|
||||
t.Errorf("There should have been %d conditions", 2)
|
||||
}
|
||||
|
||||
if config.Services[2].URL != "https://example.com/" {
|
||||
if config.Endpoints[2].URL != "https://example.com/" {
|
||||
t.Errorf("URL should have been %s", "https://example.com/")
|
||||
}
|
||||
if config.Services[2].Method != "GET" {
|
||||
if config.Endpoints[2].Method != "GET" {
|
||||
t.Errorf("Method should have been %s (default)", "GET")
|
||||
}
|
||||
if config.Services[2].Interval != 30*time.Minute {
|
||||
if config.Endpoints[2].Interval != 30*time.Minute {
|
||||
t.Errorf("Interval should have been %s, because it is the default value", 30*time.Minute)
|
||||
}
|
||||
if !config.Services[2].ClientConfig.Insecure {
|
||||
t.Errorf("ClientConfig.Insecure should have been %v, got %v", true, config.Services[2].ClientConfig.Insecure)
|
||||
if !config.Endpoints[2].ClientConfig.Insecure {
|
||||
t.Errorf("ClientConfig.Insecure should have been %v, got %v", true, config.Endpoints[2].ClientConfig.Insecure)
|
||||
}
|
||||
if config.Services[2].ClientConfig.IgnoreRedirect {
|
||||
t.Errorf("ClientConfig.IgnoreRedirect should have been %v by default, got %v", false, config.Services[2].ClientConfig.IgnoreRedirect)
|
||||
if config.Endpoints[2].ClientConfig.IgnoreRedirect {
|
||||
t.Errorf("ClientConfig.IgnoreRedirect should have been %v by default, got %v", false, config.Endpoints[2].ClientConfig.IgnoreRedirect)
|
||||
}
|
||||
if config.Services[2].ClientConfig.Timeout != 10*time.Second {
|
||||
t.Errorf("ClientConfig.Timeout should have been %v by default, got %v", 10*time.Second, config.Services[2].ClientConfig.Timeout)
|
||||
if config.Endpoints[2].ClientConfig.Timeout != 10*time.Second {
|
||||
t.Errorf("ClientConfig.Timeout should have been %v by default, got %v", 10*time.Second, config.Endpoints[2].ClientConfig.Timeout)
|
||||
}
|
||||
if len(config.Services[2].Conditions) != 1 {
|
||||
if len(config.Endpoints[2].Conditions) != 1 {
|
||||
t.Errorf("There should have been %d conditions", 1)
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseAndValidateConfigBytesDefault(t *testing.T) {
|
||||
config, err := parseAndValidateConfigBytes([]byte(`
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
conditions:
|
||||
|
@ -183,20 +183,20 @@ services:
|
|||
if config.Web.Port != web.DefaultPort {
|
||||
t.Errorf("Port should have been %d, because it is the default value", web.DefaultPort)
|
||||
}
|
||||
if config.Services[0].URL != "https://twin.sh/health" {
|
||||
if config.Endpoints[0].URL != "https://twin.sh/health" {
|
||||
t.Errorf("URL should have been %s", "https://twin.sh/health")
|
||||
}
|
||||
if config.Services[0].Interval != 60*time.Second {
|
||||
if config.Endpoints[0].Interval != 60*time.Second {
|
||||
t.Errorf("Interval should have been %s, because it is the default value", 60*time.Second)
|
||||
}
|
||||
if config.Services[0].ClientConfig.Insecure != client.GetDefaultConfig().Insecure {
|
||||
t.Errorf("ClientConfig.Insecure should have been %v by default, got %v", true, config.Services[0].ClientConfig.Insecure)
|
||||
if config.Endpoints[0].ClientConfig.Insecure != client.GetDefaultConfig().Insecure {
|
||||
t.Errorf("ClientConfig.Insecure should have been %v by default, got %v", true, config.Endpoints[0].ClientConfig.Insecure)
|
||||
}
|
||||
if config.Services[0].ClientConfig.IgnoreRedirect != client.GetDefaultConfig().IgnoreRedirect {
|
||||
t.Errorf("ClientConfig.IgnoreRedirect should have been %v by default, got %v", true, config.Services[0].ClientConfig.IgnoreRedirect)
|
||||
if config.Endpoints[0].ClientConfig.IgnoreRedirect != client.GetDefaultConfig().IgnoreRedirect {
|
||||
t.Errorf("ClientConfig.IgnoreRedirect should have been %v by default, got %v", true, config.Endpoints[0].ClientConfig.IgnoreRedirect)
|
||||
}
|
||||
if config.Services[0].ClientConfig.Timeout != client.GetDefaultConfig().Timeout {
|
||||
t.Errorf("ClientConfig.Timeout should have been %v by default, got %v", client.GetDefaultConfig().Timeout, config.Services[0].ClientConfig.Timeout)
|
||||
if config.Endpoints[0].ClientConfig.Timeout != client.GetDefaultConfig().Timeout {
|
||||
t.Errorf("ClientConfig.Timeout should have been %v by default, got %v", client.GetDefaultConfig().Timeout, config.Endpoints[0].ClientConfig.Timeout)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -204,7 +204,7 @@ func TestParseAndValidateConfigBytesWithAddress(t *testing.T) {
|
|||
config, err := parseAndValidateConfigBytes([]byte(`
|
||||
web:
|
||||
address: 127.0.0.1
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/actuator/health
|
||||
conditions:
|
||||
|
@ -219,10 +219,10 @@ services:
|
|||
if config.Metrics {
|
||||
t.Error("Metrics should've been false by default")
|
||||
}
|
||||
if config.Services[0].URL != "https://twin.sh/actuator/health" {
|
||||
if config.Endpoints[0].URL != "https://twin.sh/actuator/health" {
|
||||
t.Errorf("URL should have been %s", "https://twin.sh/actuator/health")
|
||||
}
|
||||
if config.Services[0].Interval != 60*time.Second {
|
||||
if config.Endpoints[0].Interval != 60*time.Second {
|
||||
t.Errorf("Interval should have been %s, because it is the default value", 60*time.Second)
|
||||
}
|
||||
if config.Web.Address != "127.0.0.1" {
|
||||
|
@ -237,7 +237,7 @@ func TestParseAndValidateConfigBytesWithPort(t *testing.T) {
|
|||
config, err := parseAndValidateConfigBytes([]byte(`
|
||||
web:
|
||||
port: 12345
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
conditions:
|
||||
|
@ -252,10 +252,10 @@ services:
|
|||
if config.Metrics {
|
||||
t.Error("Metrics should've been false by default")
|
||||
}
|
||||
if config.Services[0].URL != "https://twin.sh/health" {
|
||||
if config.Endpoints[0].URL != "https://twin.sh/health" {
|
||||
t.Errorf("URL should have been %s", "https://twin.sh/health")
|
||||
}
|
||||
if config.Services[0].Interval != 60*time.Second {
|
||||
if config.Endpoints[0].Interval != 60*time.Second {
|
||||
t.Errorf("Interval should have been %s, because it is the default value", 60*time.Second)
|
||||
}
|
||||
if config.Web.Address != web.DefaultAddress {
|
||||
|
@ -271,7 +271,7 @@ func TestParseAndValidateConfigBytesWithPortAndHost(t *testing.T) {
|
|||
web:
|
||||
port: 12345
|
||||
address: 127.0.0.1
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
conditions:
|
||||
|
@ -286,10 +286,10 @@ services:
|
|||
if config.Metrics {
|
||||
t.Error("Metrics should've been false by default")
|
||||
}
|
||||
if config.Services[0].URL != "https://twin.sh/health" {
|
||||
if config.Endpoints[0].URL != "https://twin.sh/health" {
|
||||
t.Errorf("URL should have been %s", "https://twin.sh/health")
|
||||
}
|
||||
if config.Services[0].Interval != 60*time.Second {
|
||||
if config.Endpoints[0].Interval != 60*time.Second {
|
||||
t.Errorf("Interval should have been %s, because it is the default value", 60*time.Second)
|
||||
}
|
||||
if config.Web.Address != "127.0.0.1" {
|
||||
|
@ -305,7 +305,7 @@ func TestParseAndValidateConfigBytesWithInvalidPort(t *testing.T) {
|
|||
web:
|
||||
port: 65536
|
||||
address: 127.0.0.1
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
conditions:
|
||||
|
@ -319,7 +319,7 @@ services:
|
|||
func TestParseAndValidateConfigBytesWithMetricsAndCustomUserAgentHeader(t *testing.T) {
|
||||
config, err := parseAndValidateConfigBytes([]byte(`
|
||||
metrics: true
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
headers:
|
||||
|
@ -336,10 +336,10 @@ services:
|
|||
if !config.Metrics {
|
||||
t.Error("Metrics should have been true")
|
||||
}
|
||||
if config.Services[0].URL != "https://twin.sh/health" {
|
||||
if config.Endpoints[0].URL != "https://twin.sh/health" {
|
||||
t.Errorf("URL should have been %s", "https://twin.sh/health")
|
||||
}
|
||||
if config.Services[0].Interval != 60*time.Second {
|
||||
if config.Endpoints[0].Interval != 60*time.Second {
|
||||
t.Errorf("Interval should have been %s, because it is the default value", 60*time.Second)
|
||||
}
|
||||
if config.Web.Address != web.DefaultAddress {
|
||||
|
@ -348,7 +348,7 @@ services:
|
|||
if config.Web.Port != web.DefaultPort {
|
||||
t.Errorf("Port should have been %d, because it is the default value", web.DefaultPort)
|
||||
}
|
||||
if userAgent := config.Services[0].Headers["User-Agent"]; userAgent != "Test/2.0" {
|
||||
if userAgent := config.Endpoints[0].Headers["User-Agent"]; userAgent != "Test/2.0" {
|
||||
t.Errorf("User-Agent should've been %s, got %s", "Test/2.0", userAgent)
|
||||
}
|
||||
}
|
||||
|
@ -359,7 +359,7 @@ metrics: true
|
|||
web:
|
||||
address: 192.168.0.1
|
||||
port: 9090
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
conditions:
|
||||
|
@ -380,13 +380,13 @@ services:
|
|||
if config.Web.Port != 9090 {
|
||||
t.Errorf("Port should have been %d, because it is specified in config", 9090)
|
||||
}
|
||||
if config.Services[0].URL != "https://twin.sh/health" {
|
||||
if config.Endpoints[0].URL != "https://twin.sh/health" {
|
||||
t.Errorf("URL should have been %s", "https://twin.sh/health")
|
||||
}
|
||||
if config.Services[0].Interval != 60*time.Second {
|
||||
if config.Endpoints[0].Interval != 60*time.Second {
|
||||
t.Errorf("Interval should have been %s, because it is the default value", 60*time.Second)
|
||||
}
|
||||
if userAgent := config.Services[0].Headers["User-Agent"]; userAgent != core.GatusUserAgent {
|
||||
if userAgent := config.Endpoints[0].Headers["User-Agent"]; userAgent != core.GatusUserAgent {
|
||||
t.Errorf("User-Agent should've been %s because it's the default value, got %s", core.GatusUserAgent, userAgent)
|
||||
}
|
||||
}
|
||||
|
@ -402,8 +402,8 @@ badconfig:
|
|||
if err == nil {
|
||||
t.Error("An error should've been returned")
|
||||
}
|
||||
if err != ErrNoServiceInConfig {
|
||||
t.Error("The error returned should have been of type ErrNoServiceInConfig")
|
||||
if err != ErrNoEndpointInConfig {
|
||||
t.Error("The error returned should have been of type ErrNoEndpointInConfig")
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -436,7 +436,7 @@ alerting:
|
|||
teams:
|
||||
webhook-url: "http://example.com"
|
||||
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
alerts:
|
||||
|
@ -477,116 +477,116 @@ services:
|
|||
if config.Alerting.Slack == nil || !config.Alerting.Slack.IsValid() {
|
||||
t.Fatal("Slack alerting config should've been valid")
|
||||
}
|
||||
// Services
|
||||
if len(config.Services) != 1 {
|
||||
t.Error("There should've been 1 service")
|
||||
// Endpoints
|
||||
if len(config.Endpoints) != 1 {
|
||||
t.Error("There should've been 1 endpoint")
|
||||
}
|
||||
if config.Services[0].URL != "https://twin.sh/health" {
|
||||
if config.Endpoints[0].URL != "https://twin.sh/health" {
|
||||
t.Errorf("URL should have been %s", "https://twin.sh/health")
|
||||
}
|
||||
if config.Services[0].Interval != 60*time.Second {
|
||||
if config.Endpoints[0].Interval != 60*time.Second {
|
||||
t.Errorf("Interval should have been %s, because it is the default value", 60*time.Second)
|
||||
}
|
||||
if len(config.Services[0].Alerts) != 8 {
|
||||
if len(config.Endpoints[0].Alerts) != 8 {
|
||||
t.Fatal("There should've been 8 alerts configured")
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[0].Type != alert.TypeSlack {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeSlack, config.Services[0].Alerts[0].Type)
|
||||
if config.Endpoints[0].Alerts[0].Type != alert.TypeSlack {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeSlack, config.Endpoints[0].Alerts[0].Type)
|
||||
}
|
||||
if !config.Services[0].Alerts[0].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[0].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if config.Services[0].Alerts[0].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Services[0].Alerts[0].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[0].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Endpoints[0].Alerts[0].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[0].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Services[0].Alerts[0].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[0].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Endpoints[0].Alerts[0].SuccessThreshold)
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[1].Type != alert.TypePagerDuty {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypePagerDuty, config.Services[0].Alerts[1].Type)
|
||||
if config.Endpoints[0].Alerts[1].Type != alert.TypePagerDuty {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypePagerDuty, config.Endpoints[0].Alerts[1].Type)
|
||||
}
|
||||
if config.Services[0].Alerts[1].GetDescription() != "Healthcheck failed 7 times in a row" {
|
||||
t.Errorf("The description of the alert should've been %s, but it was %s", "Healthcheck failed 7 times in a row", config.Services[0].Alerts[1].GetDescription())
|
||||
if config.Endpoints[0].Alerts[1].GetDescription() != "Healthcheck failed 7 times in a row" {
|
||||
t.Errorf("The description of the alert should've been %s, but it was %s", "Healthcheck failed 7 times in a row", config.Endpoints[0].Alerts[1].GetDescription())
|
||||
}
|
||||
if config.Services[0].Alerts[1].FailureThreshold != 7 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 7, config.Services[0].Alerts[1].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[1].FailureThreshold != 7 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 7, config.Endpoints[0].Alerts[1].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[1].SuccessThreshold != 5 {
|
||||
t.Errorf("The success threshold of the alert should've been %d, but it was %d", 5, config.Services[0].Alerts[1].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[1].SuccessThreshold != 5 {
|
||||
t.Errorf("The success threshold of the alert should've been %d, but it was %d", 5, config.Endpoints[0].Alerts[1].SuccessThreshold)
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[2].Type != alert.TypeMattermost {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeMattermost, config.Services[0].Alerts[2].Type)
|
||||
if config.Endpoints[0].Alerts[2].Type != alert.TypeMattermost {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeMattermost, config.Endpoints[0].Alerts[2].Type)
|
||||
}
|
||||
if !config.Services[0].Alerts[2].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[2].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if config.Services[0].Alerts[2].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Services[0].Alerts[2].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[2].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Endpoints[0].Alerts[2].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[2].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Services[0].Alerts[2].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[2].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Endpoints[0].Alerts[2].SuccessThreshold)
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[3].Type != alert.TypeMessagebird {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeMessagebird, config.Services[0].Alerts[3].Type)
|
||||
if config.Endpoints[0].Alerts[3].Type != alert.TypeMessagebird {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeMessagebird, config.Endpoints[0].Alerts[3].Type)
|
||||
}
|
||||
if config.Services[0].Alerts[3].IsEnabled() {
|
||||
if config.Endpoints[0].Alerts[3].IsEnabled() {
|
||||
t.Error("The alert should've been disabled")
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[4].Type != alert.TypeDiscord {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeDiscord, config.Services[0].Alerts[4].Type)
|
||||
if config.Endpoints[0].Alerts[4].Type != alert.TypeDiscord {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeDiscord, config.Endpoints[0].Alerts[4].Type)
|
||||
}
|
||||
if !config.Services[0].Alerts[4].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[4].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if config.Services[0].Alerts[4].FailureThreshold != 10 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 10, config.Services[0].Alerts[4].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[4].FailureThreshold != 10 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 10, config.Endpoints[0].Alerts[4].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[4].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Services[0].Alerts[4].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[4].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Endpoints[0].Alerts[4].SuccessThreshold)
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[5].Type != alert.TypeTelegram {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeTelegram, config.Services[0].Alerts[5].Type)
|
||||
if config.Endpoints[0].Alerts[5].Type != alert.TypeTelegram {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeTelegram, config.Endpoints[0].Alerts[5].Type)
|
||||
}
|
||||
if !config.Services[0].Alerts[5].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[5].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if config.Services[0].Alerts[5].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Services[0].Alerts[5].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[5].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Endpoints[0].Alerts[5].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[5].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Services[0].Alerts[5].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[5].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Endpoints[0].Alerts[5].SuccessThreshold)
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[6].Type != alert.TypeTwilio {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeTwilio, config.Services[0].Alerts[6].Type)
|
||||
if config.Endpoints[0].Alerts[6].Type != alert.TypeTwilio {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeTwilio, config.Endpoints[0].Alerts[6].Type)
|
||||
}
|
||||
if !config.Services[0].Alerts[6].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[6].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if config.Services[0].Alerts[6].FailureThreshold != 12 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 12, config.Services[0].Alerts[6].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[6].FailureThreshold != 12 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 12, config.Endpoints[0].Alerts[6].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[6].SuccessThreshold != 15 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 15, config.Services[0].Alerts[6].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[6].SuccessThreshold != 15 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 15, config.Endpoints[0].Alerts[6].SuccessThreshold)
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[7].Type != alert.TypeTeams {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeTeams, config.Services[0].Alerts[7].Type)
|
||||
if config.Endpoints[0].Alerts[7].Type != alert.TypeTeams {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeTeams, config.Endpoints[0].Alerts[7].Type)
|
||||
}
|
||||
if !config.Services[0].Alerts[7].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[7].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if config.Services[0].Alerts[7].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Services[0].Alerts[7].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[7].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Endpoints[0].Alerts[7].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[7].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Services[0].Alerts[7].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[7].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Endpoints[0].Alerts[7].SuccessThreshold)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -642,7 +642,7 @@ alerting:
default-alert:
enabled: true

services:
endpoints:
- name: website
url: https://twin.sh/health
alerts:

@ -651,7 +651,7 @@ services:
- type: mattermost
- type: messagebird
- type: discord
success-threshold: 2 # test service alert override
success-threshold: 2 # test endpoint alert override
- type: telegram
- type: twilio
- type: teams
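The config above exercises the default-alert behaviour: provider-level defaults apply to every alert of that type unless the endpoint-level alert sets its own value, which is what the discord entry's success-threshold does. A minimal, self-contained sketch of that precedence follows; the Alert struct, the applyDefaults helper, and the sample threshold values are illustrative stand-ins, not the actual Gatus implementation.

```go
package main

import "fmt"

// Alert is an illustrative stand-in holding only the fields relevant to the
// override shown in the config above.
type Alert struct {
	Type             string
	FailureThreshold int
	SuccessThreshold int
}

// applyDefaults sketches the intent of "default-alert": provider-level
// defaults fill in anything the endpoint-level alert leaves unset, while
// explicit values (like the discord success-threshold above) win.
func applyDefaults(endpointAlert, defaultAlert Alert) Alert {
	if endpointAlert.FailureThreshold == 0 {
		endpointAlert.FailureThreshold = defaultAlert.FailureThreshold
	}
	if endpointAlert.SuccessThreshold == 0 {
		endpointAlert.SuccessThreshold = defaultAlert.SuccessThreshold
	}
	return endpointAlert
}

func main() {
	defaultDiscord := Alert{Type: "discord", FailureThreshold: 3, SuccessThreshold: 5}
	override := Alert{Type: "discord", SuccessThreshold: 2} // per-endpoint override from the config above
	fmt.Printf("%+v\n", applyDefaults(override, defaultDiscord))
	// {Type:discord FailureThreshold:3 SuccessThreshold:2}
}
```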
@ -754,119 +754,119 @@ services:
|
|||
t.Fatal("Teams.GetDefaultAlert() shouldn't have returned nil")
|
||||
}
|
||||
|
||||
// Services
|
||||
if len(config.Services) != 1 {
|
||||
t.Error("There should've been 1 service")
|
||||
// Endpoints
|
||||
if len(config.Endpoints) != 1 {
|
||||
t.Error("There should've been 1 endpoint")
|
||||
}
|
||||
if config.Services[0].URL != "https://twin.sh/health" {
|
||||
if config.Endpoints[0].URL != "https://twin.sh/health" {
|
||||
t.Errorf("URL should have been %s", "https://twin.sh/health")
|
||||
}
|
||||
if config.Services[0].Interval != 60*time.Second {
|
||||
if config.Endpoints[0].Interval != 60*time.Second {
|
||||
t.Errorf("Interval should have been %s, because it is the default value", 60*time.Second)
|
||||
}
|
||||
if len(config.Services[0].Alerts) != 8 {
|
||||
if len(config.Endpoints[0].Alerts) != 8 {
|
||||
t.Fatal("There should've been 8 alerts configured")
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[0].Type != alert.TypeSlack {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeSlack, config.Services[0].Alerts[0].Type)
|
||||
if config.Endpoints[0].Alerts[0].Type != alert.TypeSlack {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeSlack, config.Endpoints[0].Alerts[0].Type)
|
||||
}
|
||||
if !config.Services[0].Alerts[0].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[0].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if config.Services[0].Alerts[0].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Services[0].Alerts[0].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[0].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Endpoints[0].Alerts[0].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[0].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Services[0].Alerts[0].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[0].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Endpoints[0].Alerts[0].SuccessThreshold)
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[1].Type != alert.TypePagerDuty {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypePagerDuty, config.Services[0].Alerts[1].Type)
|
||||
if config.Endpoints[0].Alerts[1].Type != alert.TypePagerDuty {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypePagerDuty, config.Endpoints[0].Alerts[1].Type)
|
||||
}
|
||||
if config.Services[0].Alerts[1].GetDescription() != "default description" {
|
||||
t.Errorf("The description of the alert should've been %s, but it was %s", "default description", config.Services[0].Alerts[1].GetDescription())
|
||||
if config.Endpoints[0].Alerts[1].GetDescription() != "default description" {
|
||||
t.Errorf("The description of the alert should've been %s, but it was %s", "default description", config.Endpoints[0].Alerts[1].GetDescription())
|
||||
}
|
||||
if config.Services[0].Alerts[1].FailureThreshold != 7 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 7, config.Services[0].Alerts[1].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[1].FailureThreshold != 7 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 7, config.Endpoints[0].Alerts[1].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[1].SuccessThreshold != 5 {
|
||||
t.Errorf("The success threshold of the alert should've been %d, but it was %d", 5, config.Services[0].Alerts[1].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[1].SuccessThreshold != 5 {
|
||||
t.Errorf("The success threshold of the alert should've been %d, but it was %d", 5, config.Endpoints[0].Alerts[1].SuccessThreshold)
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[2].Type != alert.TypeMattermost {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeMattermost, config.Services[0].Alerts[2].Type)
|
||||
if config.Endpoints[0].Alerts[2].Type != alert.TypeMattermost {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeMattermost, config.Endpoints[0].Alerts[2].Type)
|
||||
}
|
||||
if !config.Services[0].Alerts[2].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[2].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if config.Services[0].Alerts[2].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Services[0].Alerts[2].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[2].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Endpoints[0].Alerts[2].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[2].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Services[0].Alerts[2].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[2].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Endpoints[0].Alerts[2].SuccessThreshold)
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[3].Type != alert.TypeMessagebird {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeMessagebird, config.Services[0].Alerts[3].Type)
|
||||
if config.Endpoints[0].Alerts[3].Type != alert.TypeMessagebird {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeMessagebird, config.Endpoints[0].Alerts[3].Type)
|
||||
}
|
||||
if config.Services[0].Alerts[3].IsEnabled() {
|
||||
if config.Endpoints[0].Alerts[3].IsEnabled() {
|
||||
t.Error("The alert should've been disabled")
|
||||
}
|
||||
if !config.Services[0].Alerts[3].IsSendingOnResolved() {
|
||||
if !config.Endpoints[0].Alerts[3].IsSendingOnResolved() {
|
||||
t.Error("The alert should be sending on resolve")
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[4].Type != alert.TypeDiscord {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeDiscord, config.Services[0].Alerts[4].Type)
|
||||
if config.Endpoints[0].Alerts[4].Type != alert.TypeDiscord {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeDiscord, config.Endpoints[0].Alerts[4].Type)
|
||||
}
|
||||
if !config.Services[0].Alerts[4].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[4].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if config.Services[0].Alerts[4].FailureThreshold != 10 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 10, config.Services[0].Alerts[4].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[4].FailureThreshold != 10 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 10, config.Endpoints[0].Alerts[4].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[4].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Services[0].Alerts[4].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[4].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Endpoints[0].Alerts[4].SuccessThreshold)
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[5].Type != alert.TypeTelegram {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeTelegram, config.Services[0].Alerts[5].Type)
|
||||
if config.Endpoints[0].Alerts[5].Type != alert.TypeTelegram {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeTelegram, config.Endpoints[0].Alerts[5].Type)
|
||||
}
|
||||
if !config.Services[0].Alerts[5].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[5].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if config.Services[0].Alerts[5].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Services[0].Alerts[5].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[5].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Endpoints[0].Alerts[5].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[5].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Services[0].Alerts[5].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[5].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Endpoints[0].Alerts[5].SuccessThreshold)
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[6].Type != alert.TypeTwilio {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeTwilio, config.Services[0].Alerts[6].Type)
|
||||
if config.Endpoints[0].Alerts[6].Type != alert.TypeTwilio {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeTwilio, config.Endpoints[0].Alerts[6].Type)
|
||||
}
|
||||
if !config.Services[0].Alerts[6].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[6].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if config.Services[0].Alerts[6].FailureThreshold != 12 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 12, config.Services[0].Alerts[6].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[6].FailureThreshold != 12 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 12, config.Endpoints[0].Alerts[6].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[6].SuccessThreshold != 15 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 15, config.Services[0].Alerts[6].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[6].SuccessThreshold != 15 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 15, config.Endpoints[0].Alerts[6].SuccessThreshold)
|
||||
}
|
||||
|
||||
if config.Services[0].Alerts[7].Type != alert.TypeTeams {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeTeams, config.Services[0].Alerts[7].Type)
|
||||
if config.Endpoints[0].Alerts[7].Type != alert.TypeTeams {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeTeams, config.Endpoints[0].Alerts[7].Type)
|
||||
}
|
||||
if !config.Services[0].Alerts[7].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[7].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if config.Services[0].Alerts[7].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Services[0].Alerts[7].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[7].FailureThreshold != 3 {
|
||||
t.Errorf("The default failure threshold of the alert should've been %d, but it was %d", 3, config.Endpoints[0].Alerts[7].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[7].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Services[0].Alerts[7].SuccessThreshold)
|
||||
if config.Endpoints[0].Alerts[7].SuccessThreshold != 2 {
|
||||
t.Errorf("The default success threshold of the alert should've been %d, but it was %d", 2, config.Endpoints[0].Alerts[7].SuccessThreshold)
|
||||
}
|
||||
|
||||
}
|
||||
|
@ -875,12 +875,12 @@ func TestParseAndValidateConfigBytesWithAlertingAndDefaultAlertAndMultipleAlerts
|
|||
config, err := parseAndValidateConfigBytes([]byte(`
|
||||
alerting:
|
||||
slack:
|
||||
webhook-url: "http://example.com"
|
||||
webhook-url: "https://example.com"
|
||||
default-alert:
|
||||
enabled: true
|
||||
description: "description"
|
||||
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
alerts:
|
||||
|
@ -908,45 +908,45 @@ services:
|
|||
if config.Alerting.Slack == nil || !config.Alerting.Slack.IsValid() {
|
||||
t.Fatal("Slack alerting config should've been valid")
|
||||
}
|
||||
// Services
|
||||
if len(config.Services) != 1 {
|
||||
t.Error("There should've been 2 services")
|
||||
// Endpoints
|
||||
if len(config.Endpoints) != 1 {
|
||||
t.Error("There should've been 2 endpoints")
|
||||
}
|
||||
if config.Services[0].Alerts[0].Type != alert.TypeSlack {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeSlack, config.Services[0].Alerts[0].Type)
|
||||
if config.Endpoints[0].Alerts[0].Type != alert.TypeSlack {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeSlack, config.Endpoints[0].Alerts[0].Type)
|
||||
}
|
||||
if config.Services[0].Alerts[1].Type != alert.TypeSlack {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeSlack, config.Services[0].Alerts[1].Type)
|
||||
if config.Endpoints[0].Alerts[1].Type != alert.TypeSlack {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeSlack, config.Endpoints[0].Alerts[1].Type)
|
||||
}
|
||||
if config.Services[0].Alerts[2].Type != alert.TypeSlack {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeSlack, config.Services[0].Alerts[2].Type)
|
||||
if config.Endpoints[0].Alerts[2].Type != alert.TypeSlack {
|
||||
t.Errorf("The type of the alert should've been %s, but it was %s", alert.TypeSlack, config.Endpoints[0].Alerts[2].Type)
|
||||
}
|
||||
if !config.Services[0].Alerts[0].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[0].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if !config.Services[0].Alerts[1].IsEnabled() {
|
||||
if !config.Endpoints[0].Alerts[1].IsEnabled() {
|
||||
t.Error("The alert should've been enabled")
|
||||
}
|
||||
if config.Services[0].Alerts[2].IsEnabled() {
|
||||
if config.Endpoints[0].Alerts[2].IsEnabled() {
|
||||
t.Error("The alert should've been disabled")
|
||||
}
|
||||
if config.Services[0].Alerts[0].GetDescription() != "description" {
|
||||
t.Errorf("The description of the alert should've been %s, but it was %s", "description", config.Services[0].Alerts[0].GetDescription())
|
||||
if config.Endpoints[0].Alerts[0].GetDescription() != "description" {
|
||||
t.Errorf("The description of the alert should've been %s, but it was %s", "description", config.Endpoints[0].Alerts[0].GetDescription())
|
||||
}
|
||||
if config.Services[0].Alerts[1].GetDescription() != "wow" {
|
||||
t.Errorf("The description of the alert should've been %s, but it was %s", "description", config.Services[0].Alerts[1].GetDescription())
|
||||
if config.Endpoints[0].Alerts[1].GetDescription() != "wow" {
|
||||
t.Errorf("The description of the alert should've been %s, but it was %s", "description", config.Endpoints[0].Alerts[1].GetDescription())
|
||||
}
|
||||
if config.Services[0].Alerts[2].GetDescription() != "description" {
|
||||
t.Errorf("The description of the alert should've been %s, but it was %s", "description", config.Services[0].Alerts[2].GetDescription())
|
||||
if config.Endpoints[0].Alerts[2].GetDescription() != "description" {
|
||||
t.Errorf("The description of the alert should've been %s, but it was %s", "description", config.Endpoints[0].Alerts[2].GetDescription())
|
||||
}
|
||||
if config.Services[0].Alerts[0].FailureThreshold != 10 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 10, config.Services[0].Alerts[0].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[0].FailureThreshold != 10 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 10, config.Endpoints[0].Alerts[0].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[1].FailureThreshold != 20 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 20, config.Services[0].Alerts[1].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[1].FailureThreshold != 20 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 20, config.Endpoints[0].Alerts[1].FailureThreshold)
|
||||
}
|
||||
if config.Services[0].Alerts[2].FailureThreshold != 30 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 30, config.Services[0].Alerts[2].FailureThreshold)
|
||||
if config.Endpoints[0].Alerts[2].FailureThreshold != 30 {
|
||||
t.Errorf("The failure threshold of the alert should've been %d, but it was %d", 30, config.Endpoints[0].Alerts[2].FailureThreshold)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -955,7 +955,7 @@ func TestParseAndValidateConfigBytesWithInvalidPagerDutyAlertingConfig(t *testin
|
|||
alerting:
|
||||
pagerduty:
|
||||
integration-key: "INVALID_KEY"
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
alerts:
|
||||
|
@ -987,9 +987,9 @@ alerting:
|
|||
url: "https://example.com"
|
||||
body: |
|
||||
{
|
||||
"text": "[ALERT_TRIGGERED_OR_RESOLVED]: [SERVICE_NAME] - [ALERT_DESCRIPTION]"
|
||||
"text": "[ALERT_TRIGGERED_OR_RESOLVED]: [ENDPOINT_NAME] - [ALERT_DESCRIPTION]"
|
||||
}
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
alerts:
|
||||
|
@ -1033,8 +1033,8 @@ alerting:
|
|||
RESOLVED: "operational"
|
||||
url: "https://example.com"
|
||||
insecure: true
|
||||
body: "[ALERT_TRIGGERED_OR_RESOLVED]: [SERVICE_NAME] - [ALERT_DESCRIPTION]"
|
||||
services:
|
||||
body: "[ALERT_TRIGGERED_OR_RESOLVED]: [ENDPOINT_NAME] - [ALERT_DESCRIPTION]"
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
alerts:
|
||||
|
@ -1073,8 +1073,8 @@ alerting:
|
|||
ALERT_TRIGGERED_OR_RESOLVED:
|
||||
TRIGGERED: "partial_outage"
|
||||
url: "https://example.com"
|
||||
body: "[ALERT_TRIGGERED_OR_RESOLVED]: [SERVICE_NAME] - [ALERT_DESCRIPTION]"
|
||||
services:
|
||||
body: "[ALERT_TRIGGERED_OR_RESOLVED]: [ENDPOINT_NAME] - [ALERT_DESCRIPTION]"
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
alerts:
|
||||
|
@ -1105,15 +1105,15 @@ services:
}
}

func TestParseAndValidateConfigBytesWithInvalidServiceName(t *testing.T) {
func TestParseAndValidateConfigBytesWithInvalidEndpointName(t *testing.T) {
_, err := parseAndValidateConfigBytes([]byte(`
services:
endpoints:
- name: ""
url: https://twin.sh/health
conditions:
- "[STATUS] == 200"
`))
if err != core.ErrServiceWithNoName {
if err != core.ErrEndpointWithNoName {
t.Error("should've returned an error")
}
}
@ -1122,7 +1122,7 @@ func TestParseAndValidateConfigBytesWithInvalidStorageConfig(t *testing.T) {
|
|||
_, err := parseAndValidateConfigBytes([]byte(`
|
||||
storage:
|
||||
type: sqlite
|
||||
services:
|
||||
endpoints:
|
||||
- name: example
|
||||
url: https://example.org
|
||||
conditions:
|
||||
|
@ -1137,7 +1137,7 @@ func TestParseAndValidateConfigBytesWithInvalidYAML(t *testing.T) {
|
|||
_, err := parseAndValidateConfigBytes([]byte(`
|
||||
storage:
|
||||
invalid yaml
|
||||
services:
|
||||
endpoints:
|
||||
- name: example
|
||||
url: https://example.org
|
||||
conditions:
|
||||
|
@ -1154,7 +1154,7 @@ security:
|
|||
basic:
|
||||
username: "admin"
|
||||
password-sha512: "invalid-sha512-hash"
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
conditions:
|
||||
|
@ -1173,7 +1173,7 @@ security:
|
|||
basic:
|
||||
username: "%s"
|
||||
password-sha512: "%s"
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
conditions:
|
||||
|
@ -1202,10 +1202,10 @@ services:
}
}

func TestParseAndValidateConfigBytesWithNoServicesOrAutoDiscovery(t *testing.T) {
func TestParseAndValidateConfigBytesWithNoEndpointsOrAutoDiscovery(t *testing.T) {
_, err := parseAndValidateConfigBytes([]byte(``))
if err != ErrNoServiceInConfig {
t.Error("The error returned should have been of type ErrNoServiceInConfig")
if err != ErrNoEndpointInConfig {
t.Error("The error returned should have been of type ErrNoEndpointInConfig")
}
}

@ -1249,3 +1249,51 @@ func TestGetAlertingProviderByAlertType(t *testing.T) {
t.Error("expected Teams configuration")
}
}

// XXX: Remove this in v5.0.0
func TestParseAndValidateConfigBytes_backwardCompatibleWithServices(t *testing.T) {
config, err := parseAndValidateConfigBytes([]byte(`
services:
- name: website
url: https://twin.sh/actuator/health
conditions:
- "[STATUS] == 200"
`))
if err != nil {
t.Error("expected no error, got", err.Error())
}
if config == nil {
t.Fatal("Config shouldn't have been nil")
}
if config.Endpoints[0].URL != "https://twin.sh/actuator/health" {
t.Errorf("URL should have been %s", "https://twin.sh/actuator/health")
}
if config.Endpoints[0].Interval != 60*time.Second {
t.Errorf("Interval should have been %s, because it is the default value", 60*time.Second)
}
}

// XXX: Remove this in v5.0.0
func TestParseAndValidateConfigBytes_backwardCompatibleMergeServicesInEndpoints(t *testing.T) {
config, err := parseAndValidateConfigBytes([]byte(`
services:
- name: website1
url: https://twin.sh/actuator/health
conditions:
- "[STATUS] == 200"
endpoints:
- name: website2
url: https://twin.sh/actuator/health
conditions:
- "[STATUS] == 200"
`))
if err != nil {
t.Error("expected no error, got", err.Error())
}
if config == nil {
t.Fatal("Config shouldn't have been nil")
}
if len(config.Endpoints) != 2 {
t.Error("services should've been merged in endpoints")
}
}
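The two backward-compatibility tests above only assert the outcome: whatever is declared under the deprecated services key ends up in Endpoints. A self-contained sketch of that kind of merge follows; the Config and Endpoint types are trimmed stand-ins rather than the real structs, and the helper name is invented for illustration, since the parsing code itself is not part of this hunk.

```go
package main

import "fmt"

// Endpoint is a stand-in for core.Endpoint with only the fields the sketch needs.
type Endpoint struct {
	Name string
	URL  string
}

// Config mirrors the relevant part of the configuration: the deprecated
// Services key and its replacement, Endpoints.
type Config struct {
	Endpoints []*Endpoint
	Services  []*Endpoint // Deprecated: kept only for backward compatibility.
}

// mergeDeprecatedServices illustrates what the tests above expect: anything
// declared under "services" is appended to "endpoints" so the rest of the
// application only ever deals with endpoints.
func mergeDeprecatedServices(cfg *Config) {
	if len(cfg.Services) > 0 {
		cfg.Endpoints = append(cfg.Endpoints, cfg.Services...)
		cfg.Services = nil
	}
}

func main() {
	cfg := &Config{
		Services:  []*Endpoint{{Name: "website1", URL: "https://twin.sh/actuator/health"}},
		Endpoints: []*Endpoint{{Name: "website2", URL: "https://twin.sh/actuator/health"}},
	}
	mergeDeprecatedServices(cfg)
	fmt.Println(len(cfg.Endpoints)) // prints 2, matching the merge test above
}
```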
@ -89,7 +89,7 @@ func (c *Config) ValidateAndSetDefaults() error {
return nil
}

// IsUnderMaintenance checks whether the services that Gatus monitors are within the configured maintenance window
// IsUnderMaintenance checks whether the endpoints that Gatus monitors are within the configured maintenance window
func (c Config) IsUnderMaintenance() bool {
if !c.IsEnabled() {
return false

@ -6,10 +6,10 @@ import (
)

const (
// DefaultAddress is the default address the service will bind to
// DefaultAddress is the default address the application will bind to
DefaultAddress = "0.0.0.0"

// DefaultPort is the default port the service will listen on
// DefaultPort is the default port the application will listen on
DefaultPort = 8080
)
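The renamed comments above only touch wording, but the constants they document are the fallbacks the earlier config tests rely on ("Port should have been %d, because it is the default value"). A hedged sketch of how such defaults are typically applied follows; WebConfig and validateAndSetDefaults are illustrative stand-ins, not the actual web package code.

```go
package main

import "fmt"

const (
	DefaultAddress = "0.0.0.0" // value from the hunk above
	DefaultPort    = 8080      // value from the hunk above
)

// WebConfig is an illustrative stand-in for the web configuration struct.
type WebConfig struct {
	Address string
	Port    int
}

// validateAndSetDefaults mirrors the behaviour the config tests assert:
// unset fields fall back to the defaults, explicit values are kept.
func (c *WebConfig) validateAndSetDefaults() {
	if len(c.Address) == 0 {
		c.Address = DefaultAddress
	}
	if c.Port == 0 {
		c.Port = DefaultPort
	}
}

func main() {
	cfg := &WebConfig{Address: "192.168.0.1", Port: 9090} // as in the test config earlier in the diff
	cfg.validateAndSetDefaults()
	fmt.Printf("%s:%d\n", cfg.Address, cfg.Port) // 192.168.0.1:9090
}
```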
@ -18,7 +18,7 @@ func TestHandle(t *testing.T) {
|
|||
Address: "0.0.0.0",
|
||||
Port: rand.Intn(65534),
|
||||
},
|
||||
Services: []*core.Service{
|
||||
Endpoints: []*core.Endpoint{
|
||||
{
|
||||
Name: "frontend",
|
||||
Group: "core",
|
||||
|
|
|
@ -21,7 +21,7 @@ const (
|
|||
badgeColorHexVeryBad = "#c7130a"
|
||||
)
|
||||
|
||||
// UptimeBadge handles the automatic generation of badge based on the group name and service name passed.
|
||||
// UptimeBadge handles the automatic generation of badge based on the group name and endpoint name passed.
|
||||
//
|
||||
// Valid values for {duration}: 7d, 24h, 1h
|
||||
func UptimeBadge(writer http.ResponseWriter, request *http.Request) {
|
||||
|
@ -42,7 +42,7 @@ func UptimeBadge(writer http.ResponseWriter, request *http.Request) {
|
|||
key := variables["key"]
|
||||
uptime, err := storage.Get().GetUptimeByKey(key, from, time.Now())
|
||||
if err != nil {
|
||||
if err == common.ErrServiceNotFound {
|
||||
if err == common.ErrEndpointNotFound {
|
||||
writer.WriteHeader(http.StatusNotFound)
|
||||
} else if err == common.ErrInvalidTimeRange {
|
||||
writer.WriteHeader(http.StatusBadRequest)
|
||||
|
@ -60,7 +60,7 @@ func UptimeBadge(writer http.ResponseWriter, request *http.Request) {
|
|||
_, _ = writer.Write(generateUptimeBadgeSVG(duration, uptime))
|
||||
}
|
||||
|
||||
// ResponseTimeBadge handles the automatic generation of badge based on the group name and service name passed.
|
||||
// ResponseTimeBadge handles the automatic generation of badge based on the group name and endpoint name passed.
|
||||
//
|
||||
// Valid values for {duration}: 7d, 24h, 1h
|
||||
func ResponseTimeBadge(writer http.ResponseWriter, request *http.Request) {
|
||||
|
@ -81,7 +81,7 @@ func ResponseTimeBadge(writer http.ResponseWriter, request *http.Request) {
|
|||
key := variables["key"]
|
||||
averageResponseTime, err := storage.Get().GetAverageResponseTimeByKey(key, from, time.Now())
|
||||
if err != nil {
|
||||
if err == common.ErrServiceNotFound {
|
||||
if err == common.ErrEndpointNotFound {
|
||||
writer.WriteHeader(http.StatusNotFound)
|
||||
} else if err == common.ErrInvalidTimeRange {
|
||||
writer.WriteHeader(http.StatusBadRequest)
|
||||
|
|
|
@ -18,7 +18,7 @@ func TestUptimeBadge(t *testing.T) {
|
|||
defer cache.Clear()
|
||||
cfg := &config.Config{
|
||||
Metrics: true,
|
||||
Services: []*core.Service{
|
||||
Endpoints: []*core.Endpoint{
|
||||
{
|
||||
Name: "frontend",
|
||||
Group: "core",
|
||||
|
@ -29,8 +29,8 @@ func TestUptimeBadge(t *testing.T) {
|
|||
},
|
||||
},
|
||||
}
|
||||
watchdog.UpdateServiceStatuses(cfg.Services[0], &core.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
|
||||
watchdog.UpdateServiceStatuses(cfg.Services[1], &core.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
|
||||
watchdog.UpdateEndpointStatuses(cfg.Endpoints[0], &core.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
|
||||
watchdog.UpdateEndpointStatuses(cfg.Endpoints[1], &core.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
|
||||
router := CreateRouter("../../web/static", cfg.Security, nil, cfg.Metrics)
|
||||
type Scenario struct {
|
||||
Name string
|
||||
|
@ -41,56 +41,66 @@ func TestUptimeBadge(t *testing.T) {
|
|||
scenarios := []Scenario{
|
||||
{
|
||||
Name: "badge-uptime-1h",
|
||||
Path: "/api/v1/services/core_frontend/uptimes/1h/badge.svg",
|
||||
Path: "/api/v1/endpoints/core_frontend/uptimes/1h/badge.svg",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{
|
||||
Name: "badge-uptime-24h",
|
||||
Path: "/api/v1/services/core_backend/uptimes/24h/badge.svg",
|
||||
Path: "/api/v1/endpoints/core_backend/uptimes/24h/badge.svg",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{
|
||||
Name: "badge-uptime-7d",
|
||||
Path: "/api/v1/services/core_frontend/uptimes/7d/badge.svg",
|
||||
Path: "/api/v1/endpoints/core_frontend/uptimes/7d/badge.svg",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{
|
||||
Name: "badge-uptime-with-invalid-duration",
|
||||
Path: "/api/v1/services/core_backend/uptimes/3d/badge.svg",
|
||||
Path: "/api/v1/endpoints/core_backend/uptimes/3d/badge.svg",
|
||||
ExpectedCode: http.StatusBadRequest,
|
||||
},
|
||||
{
|
||||
Name: "badge-uptime-for-invalid-key",
|
||||
Path: "/api/v1/services/invalid_key/uptimes/7d/badge.svg",
|
||||
Path: "/api/v1/endpoints/invalid_key/uptimes/7d/badge.svg",
|
||||
ExpectedCode: http.StatusNotFound,
|
||||
},
|
||||
{
|
||||
Name: "badge-response-time-1h",
|
||||
Path: "/api/v1/services/core_frontend/response-times/1h/badge.svg",
|
||||
Path: "/api/v1/endpoints/core_frontend/response-times/1h/badge.svg",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{
|
||||
Name: "badge-response-time-24h",
|
||||
Path: "/api/v1/services/core_backend/response-times/24h/badge.svg",
|
||||
Path: "/api/v1/endpoints/core_backend/response-times/24h/badge.svg",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{
|
||||
Name: "badge-response-time-7d",
|
||||
Path: "/api/v1/services/core_frontend/response-times/7d/badge.svg",
|
||||
Path: "/api/v1/endpoints/core_frontend/response-times/7d/badge.svg",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{
|
||||
Name: "badge-response-time-with-invalid-duration",
|
||||
Path: "/api/v1/services/core_backend/response-times/3d/badge.svg",
|
||||
Path: "/api/v1/endpoints/core_backend/response-times/3d/badge.svg",
|
||||
ExpectedCode: http.StatusBadRequest,
|
||||
},
|
||||
{
|
||||
Name: "badge-response-time-for-invalid-key",
|
||||
Path: "/api/v1/services/invalid_key/response-times/7d/badge.svg",
|
||||
Path: "/api/v1/endpoints/invalid_key/response-times/7d/badge.svg",
|
||||
ExpectedCode: http.StatusNotFound,
|
||||
},
|
||||
{
|
||||
Name: "chart-response-time-24h",
|
||||
Path: "/api/v1/endpoints/core_backend/response-times/24h/chart.svg",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{ // XXX: Remove this in v4.0.0
|
||||
Name: "backward-compatible-services-badge-uptime-1h",
|
||||
Path: "/api/v1/services/core_frontend/uptimes/1h/badge.svg",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{ // XXX: Remove this in v4.0.0
|
||||
Name: "backward-compatible-services-chart-response-time-24h",
|
||||
Path: "/api/v1/services/core_backend/response-times/24h/chart.svg",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
|
|
|
@ -44,7 +44,7 @@ func ResponseTimeChart(writer http.ResponseWriter, r *http.Request) {
|
|||
}
|
||||
hourlyAverageResponseTime, err := storage.Get().GetHourlyAverageResponseTimeByKey(vars["key"], from, time.Now())
|
||||
if err != nil {
|
||||
if err == common.ErrServiceNotFound {
|
||||
if err == common.ErrEndpointNotFound {
|
||||
writer.WriteHeader(http.StatusNotFound)
|
||||
} else if err == common.ErrInvalidTimeRange {
|
||||
writer.WriteHeader(http.StatusBadRequest)
|
||||
|
|
|
@ -17,7 +17,7 @@ func TestResponseTimeChart(t *testing.T) {
|
|||
defer cache.Clear()
|
||||
cfg := &config.Config{
|
||||
Metrics: true,
|
||||
Services: []*core.Service{
|
||||
Endpoints: []*core.Endpoint{
|
||||
{
|
||||
Name: "frontend",
|
||||
Group: "core",
|
||||
|
@ -28,8 +28,8 @@ func TestResponseTimeChart(t *testing.T) {
|
|||
},
|
||||
},
|
||||
}
|
||||
watchdog.UpdateServiceStatuses(cfg.Services[0], &core.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
|
||||
watchdog.UpdateServiceStatuses(cfg.Services[1], &core.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
|
||||
watchdog.UpdateEndpointStatuses(cfg.Endpoints[0], &core.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
|
||||
watchdog.UpdateEndpointStatuses(cfg.Endpoints[1], &core.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
|
||||
router := CreateRouter("../../web/static", cfg.Security, nil, cfg.Metrics)
|
||||
type Scenario struct {
|
||||
Name string
|
||||
|
@ -40,24 +40,29 @@ func TestResponseTimeChart(t *testing.T) {
|
|||
scenarios := []Scenario{
|
||||
{
|
||||
Name: "chart-response-time-24h",
|
||||
Path: "/api/v1/services/core_backend/response-times/24h/chart.svg",
|
||||
Path: "/api/v1/endpoints/core_backend/response-times/24h/chart.svg",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{
|
||||
Name: "chart-response-time-7d",
|
||||
Path: "/api/v1/services/core_frontend/response-times/7d/chart.svg",
|
||||
Path: "/api/v1/endpoints/core_frontend/response-times/7d/chart.svg",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{
|
||||
Name: "chart-response-time-with-invalid-duration",
|
||||
Path: "/api/v1/services/core_backend/response-times/3d/chart.svg",
|
||||
Path: "/api/v1/endpoints/core_backend/response-times/3d/chart.svg",
|
||||
ExpectedCode: http.StatusBadRequest,
|
||||
},
|
||||
{
|
||||
Name: "chart-response-time-for-invalid-key",
|
||||
Path: "/api/v1/services/invalid_key/response-times/7d/chart.svg",
|
||||
Path: "/api/v1/endpoints/invalid_key/response-times/7d/chart.svg",
|
||||
ExpectedCode: http.StatusNotFound,
|
||||
},
|
||||
{ // XXX: Remove this in v4.0.0
|
||||
Name: "backward-compatible-services-chart-response-time-24h",
|
||||
Path: "/api/v1/services/core_backend/response-times/24h/chart.svg",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
}
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
|
|
|
@ -25,42 +25,42 @@ var (
cache = gocache.NewCache().WithMaxSize(100).WithEvictionPolicy(gocache.FirstInFirstOut)
)

// ServiceStatuses handles requests to retrieve all service statuses
// EndpointStatuses handles requests to retrieve all EndpointStatus
// Due to the size of the response, this function leverages a cache.
// Must not be wrapped by GzipHandler
func ServiceStatuses(writer http.ResponseWriter, r *http.Request) {
func EndpointStatuses(writer http.ResponseWriter, r *http.Request) {
page, pageSize := extractPageAndPageSizeFromRequest(r)
gzipped := strings.Contains(r.Header.Get("Accept-Encoding"), "gzip")
var exists bool
var value interface{}
if gzipped {
writer.Header().Set("Content-Encoding", "gzip")
value, exists = cache.Get(fmt.Sprintf("service-status-%d-%d-gzipped", page, pageSize))
value, exists = cache.Get(fmt.Sprintf("endpoint-status-%d-%d-gzipped", page, pageSize))
} else {
value, exists = cache.Get(fmt.Sprintf("service-status-%d-%d", page, pageSize))
value, exists = cache.Get(fmt.Sprintf("endpoint-status-%d-%d", page, pageSize))
}
var data []byte
if !exists {
var err error
buffer := &bytes.Buffer{}
gzipWriter := gzip.NewWriter(buffer)
serviceStatuses, err := storage.Get().GetAllServiceStatuses(paging.NewServiceStatusParams().WithResults(page, pageSize))
endpointStatuses, err := storage.Get().GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(page, pageSize))
if err != nil {
log.Printf("[handler][ServiceStatuses] Failed to retrieve service statuses: %s", err.Error())
log.Printf("[handler][EndpointStatuses] Failed to retrieve endpoint statuses: %s", err.Error())
http.Error(writer, err.Error(), http.StatusInternalServerError)
return
}
data, err = json.Marshal(serviceStatuses)
data, err = json.Marshal(endpointStatuses)
if err != nil {
log.Printf("[handler][ServiceStatuses] Unable to marshal object to JSON: %s", err.Error())
log.Printf("[handler][EndpointStatuses] Unable to marshal object to JSON: %s", err.Error())
http.Error(writer, "unable to marshal object to JSON", http.StatusInternalServerError)
return
}
_, _ = gzipWriter.Write(data)
_ = gzipWriter.Close()
gzippedData := buffer.Bytes()
cache.SetWithTTL(fmt.Sprintf("service-status-%d-%d", page, pageSize), data, cacheTTL)
cache.SetWithTTL(fmt.Sprintf("service-status-%d-%d-gzipped", page, pageSize), gzippedData, cacheTTL)
cache.SetWithTTL(fmt.Sprintf("endpoint-status-%d-%d", page, pageSize), data, cacheTTL)
cache.SetWithTTL(fmt.Sprintf("endpoint-status-%d-%d-gzipped", page, pageSize), gzippedData, cacheTTL)
if gzipped {
data = gzippedData
}
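The handler above keeps two cache entries per page, the raw JSON and a gzipped copy, so the client's Accept-Encoding header only decides which copy is written back. A small standalone sketch of that keying scheme using the same gocache calls shown in the hunk follows; the import path is an assumption and the cached payloads are placeholders.

```go
package main

import (
	"fmt"
	"time"

	// Import path is an assumption; the gocache module has been published under
	// different owner names depending on the version.
	"github.com/TwiN/gocache"
)

func main() {
	cache := gocache.NewCache().WithMaxSize(100).WithEvictionPolicy(gocache.FirstInFirstOut)
	page, pageSize := 1, 20
	// Two entries per page: the raw JSON and its gzipped counterpart.
	cache.SetWithTTL(fmt.Sprintf("endpoint-status-%d-%d", page, pageSize), []byte(`[]`), 10*time.Second)
	cache.SetWithTTL(fmt.Sprintf("endpoint-status-%d-%d-gzipped", page, pageSize), []byte{0x1f, 0x8b}, 10*time.Second)
	// A client that does not send Accept-Encoding: gzip gets the plain copy.
	if value, exists := cache.Get(fmt.Sprintf("endpoint-status-%d-%d", page, pageSize)); exists {
		fmt.Println(string(value.([]byte))) // []
	}
}
```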
@ -72,28 +72,28 @@ func ServiceStatuses(writer http.ResponseWriter, r *http.Request) {
|
|||
_, _ = writer.Write(data)
|
||||
}
|
||||
|
||||
// ServiceStatus retrieves a single ServiceStatus by group name and service name
|
||||
func ServiceStatus(writer http.ResponseWriter, r *http.Request) {
|
||||
// EndpointStatus retrieves a single core.EndpointStatus by group and endpoint name
|
||||
func EndpointStatus(writer http.ResponseWriter, r *http.Request) {
|
||||
page, pageSize := extractPageAndPageSizeFromRequest(r)
|
||||
vars := mux.Vars(r)
|
||||
serviceStatus, err := storage.Get().GetServiceStatusByKey(vars["key"], paging.NewServiceStatusParams().WithResults(page, pageSize).WithEvents(1, common.MaximumNumberOfEvents))
|
||||
endpointStatus, err := storage.Get().GetEndpointStatusByKey(vars["key"], paging.NewEndpointStatusParams().WithResults(page, pageSize).WithEvents(1, common.MaximumNumberOfEvents))
|
||||
if err != nil {
|
||||
if err == common.ErrServiceNotFound {
|
||||
if err == common.ErrEndpointNotFound {
|
||||
http.Error(writer, err.Error(), http.StatusNotFound)
|
||||
return
|
||||
}
|
||||
log.Printf("[handler][ServiceStatus] Failed to retrieve service status: %s", err.Error())
|
||||
log.Printf("[handler][EndpointStatus] Failed to retrieve endpoint status: %s", err.Error())
|
||||
http.Error(writer, err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
if serviceStatus == nil {
|
||||
log.Printf("[handler][ServiceStatus] Service with key=%s not found", vars["key"])
|
||||
if endpointStatus == nil {
|
||||
log.Printf("[handler][EndpointStatus] Endpoint with key=%s not found", vars["key"])
|
||||
http.Error(writer, "not found", http.StatusNotFound)
|
||||
return
|
||||
}
|
||||
output, err := json.Marshal(serviceStatus)
|
||||
output, err := json.Marshal(endpointStatus)
|
||||
if err != nil {
|
||||
log.Printf("[handler][ServiceStatus] Unable to marshal object to JSON: %s", err.Error())
|
||||
log.Printf("[handler][EndpointStatus] Unable to marshal object to JSON: %s", err.Error())
|
||||
http.Error(writer, "unable to marshal object to JSON", http.StatusInternalServerError)
|
||||
return
|
||||
}
|
|
@ -19,7 +19,7 @@ var (
|
|||
|
||||
timestamp = time.Now()
|
||||
|
||||
testService = core.Service{
|
||||
testEndpoint = core.Endpoint{
|
||||
Name: "name",
|
||||
Group: "group",
|
||||
URL: "https://example.org/what/ever",
|
||||
|
@ -83,12 +83,12 @@ var (
|
|||
}
|
||||
)
|
||||
|
||||
func TestServiceStatus(t *testing.T) {
|
||||
func TestEndpointStatus(t *testing.T) {
|
||||
defer storage.Get().Clear()
|
||||
defer cache.Clear()
|
||||
cfg := &config.Config{
|
||||
Metrics: true,
|
||||
Services: []*core.Service{
|
||||
Endpoints: []*core.Endpoint{
|
||||
{
|
||||
Name: "frontend",
|
||||
Group: "core",
|
||||
|
@ -99,8 +99,8 @@ func TestServiceStatus(t *testing.T) {
|
|||
},
|
||||
},
|
||||
}
|
||||
watchdog.UpdateServiceStatuses(cfg.Services[0], &core.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
|
||||
watchdog.UpdateServiceStatuses(cfg.Services[1], &core.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
|
||||
watchdog.UpdateEndpointStatuses(cfg.Endpoints[0], &core.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
|
||||
watchdog.UpdateEndpointStatuses(cfg.Endpoints[1], &core.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
|
||||
router := CreateRouter("../../web/static", cfg.Security, nil, cfg.Metrics)
|
||||
|
||||
type Scenario struct {
|
||||
|
@ -111,26 +111,31 @@ func TestServiceStatus(t *testing.T) {
|
|||
}
|
||||
scenarios := []Scenario{
|
||||
{
|
||||
Name: "service-status",
|
||||
Path: "/api/v1/services/core_frontend/statuses",
|
||||
Name: "endpoint-status",
|
||||
Path: "/api/v1/endpoints/core_frontend/statuses",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{
|
||||
Name: "service-status-gzip",
|
||||
Path: "/api/v1/services/core_frontend/statuses",
|
||||
Name: "endpoint-status-gzip",
|
||||
Path: "/api/v1/endpoints/core_frontend/statuses",
|
||||
ExpectedCode: http.StatusOK,
|
||||
Gzip: true,
|
||||
},
|
||||
{
|
||||
Name: "service-status-pagination",
|
||||
Path: "/api/v1/services/core_frontend/statuses?page=1&pageSize=20",
|
||||
Name: "endpoint-status-pagination",
|
||||
Path: "/api/v1/endpoints/core_frontend/statuses?page=1&pageSize=20",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{
|
||||
Name: "service-status-for-invalid-key",
|
||||
Path: "/api/v1/services/invalid_key/statuses",
|
||||
Name: "endpoint-status-for-invalid-key",
|
||||
Path: "/api/v1/endpoints/invalid_key/statuses",
|
||||
ExpectedCode: http.StatusNotFound,
|
||||
},
|
||||
{ // XXX: Remove this in v4.0.0
|
||||
Name: "backward-compatible-service-status",
|
||||
Path: "/api/v1/services/core_frontend/statuses",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
}
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
|
@ -147,13 +152,13 @@ func TestServiceStatus(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestServiceStatuses(t *testing.T) {
|
||||
func TestEndpointStatuses(t *testing.T) {
|
||||
defer storage.Get().Clear()
|
||||
defer cache.Clear()
|
||||
firstResult := &testSuccessfulResult
|
||||
secondResult := &testUnsuccessfulResult
|
||||
storage.Get().Insert(&testService, firstResult)
|
||||
storage.Get().Insert(&testService, secondResult)
|
||||
storage.Get().Insert(&testEndpoint, firstResult)
|
||||
storage.Get().Insert(&testEndpoint, secondResult)
|
||||
// Can't be bothered dealing with timezone issues on the worker that runs the automated tests
|
||||
firstResult.Timestamp = time.Time{}
|
||||
secondResult.Timestamp = time.Time{}
|
||||
|
@ -168,31 +173,37 @@ func TestServiceStatuses(t *testing.T) {
|
|||
scenarios := []Scenario{
|
||||
{
|
||||
Name: "no-pagination",
|
||||
Path: "/api/v1/services/statuses",
|
||||
Path: "/api/v1/endpoints/statuses",
|
||||
ExpectedCode: http.StatusOK,
|
||||
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":150000000,"errors":null,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":200,"hostname":"example.org","duration":750000000,"errors":["error-1","error-2"],"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":false},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}],"events":[]}]`,
|
||||
},
|
||||
{
|
||||
Name: "pagination-first-result",
|
||||
Path: "/api/v1/services/statuses?page=1&pageSize=1",
|
||||
Path: "/api/v1/endpoints/statuses?page=1&pageSize=1",
|
||||
ExpectedCode: http.StatusOK,
|
||||
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":750000000,"errors":["error-1","error-2"],"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":false},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}],"events":[]}]`,
|
||||
},
|
||||
{
|
||||
Name: "pagination-second-result",
|
||||
Path: "/api/v1/services/statuses?page=2&pageSize=1",
|
||||
Path: "/api/v1/endpoints/statuses?page=2&pageSize=1",
|
||||
ExpectedCode: http.StatusOK,
|
||||
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":150000000,"errors":null,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"}],"events":[]}]`,
|
||||
},
|
||||
{
|
||||
Name: "pagination-no-results",
|
||||
Path: "/api/v1/services/statuses?page=5&pageSize=20",
|
||||
Path: "/api/v1/endpoints/statuses?page=5&pageSize=20",
|
||||
ExpectedCode: http.StatusOK,
|
||||
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[],"events":[]}]`,
|
||||
},
|
||||
{
|
||||
Name: "invalid-pagination-should-fall-back-to-default",
|
||||
Path: "/api/v1/services/statuses?page=INVALID&pageSize=INVALID",
|
||||
Path: "/api/v1/endpoints/statuses?page=INVALID&pageSize=INVALID",
|
||||
ExpectedCode: http.StatusOK,
|
||||
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":150000000,"errors":null,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":200,"hostname":"example.org","duration":750000000,"errors":["error-1","error-2"],"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":false},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}],"events":[]}]`,
|
||||
},
|
||||
{ // XXX: Remove this in v4.0.0
|
||||
Name: "backward-compatible-service-status",
|
||||
Path: "/api/v1/services/statuses",
|
||||
ExpectedCode: http.StatusOK,
|
||||
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":150000000,"errors":null,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":200,"hostname":"example.org","duration":750000000,"errors":["error-1","error-2"],"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":false},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}],"events":[]}]`,
},
@@ -18,15 +18,21 @@ func CreateRouter(staticFolder string, securityConfig *security.Config, uiConfig
router.Handle("/health", health.Handler().WithJSON(true)).Methods("GET")
router.HandleFunc("/favicon.ico", FavIcon(staticFolder)).Methods("GET")
// Endpoints
router.HandleFunc("/api/v1/services/statuses", secureIfNecessary(securityConfig, ServiceStatuses)).Methods("GET") // No GzipHandler for this one, because we cache the content as Gzipped already
router.HandleFunc("/api/v1/services/{key}/statuses", secureIfNecessary(securityConfig, GzipHandlerFunc(ServiceStatus))).Methods("GET")
// TODO: router.HandleFunc("/api/v1/services/{key}/uptimes", secureIfNecessary(securityConfig, GzipHandlerFunc(serviceUptimesHandler))).Methods("GET")
// TODO: router.HandleFunc("/api/v1/services/{key}/events", secureIfNecessary(securityConfig, GzipHandlerFunc(serviceEventsHandler))).Methods("GET")
router.HandleFunc("/api/v1/endpoints/statuses", secureIfNecessary(securityConfig, EndpointStatuses)).Methods("GET") // No GzipHandler for this one, because we cache the content as Gzipped already
router.HandleFunc("/api/v1/endpoints/{key}/statuses", secureIfNecessary(securityConfig, GzipHandlerFunc(EndpointStatus))).Methods("GET")
router.HandleFunc("/api/v1/endpoints/{key}/uptimes/{duration}/badge.svg", UptimeBadge).Methods("GET")
router.HandleFunc("/api/v1/endpoints/{key}/response-times/{duration}/badge.svg", ResponseTimeBadge).Methods("GET")
router.HandleFunc("/api/v1/endpoints/{key}/response-times/{duration}/chart.svg", ResponseTimeChart).Methods("GET")
// XXX: Remove the lines between this and the next XXX comment in v4.0.0
router.HandleFunc("/api/v1/services/statuses", secureIfNecessary(securityConfig, EndpointStatuses)).Methods("GET") // No GzipHandler for this one, because we cache the content as Gzipped already
router.HandleFunc("/api/v1/services/{key}/statuses", secureIfNecessary(securityConfig, GzipHandlerFunc(EndpointStatus))).Methods("GET")
router.HandleFunc("/api/v1/services/{key}/uptimes/{duration}/badge.svg", UptimeBadge).Methods("GET")
router.HandleFunc("/api/v1/services/{key}/response-times/{duration}/badge.svg", ResponseTimeBadge).Methods("GET")
router.HandleFunc("/api/v1/services/{key}/response-times/{duration}/chart.svg", ResponseTimeChart).Methods("GET")
// XXX: Remove the lines between this and the previous XXX comment in v4.0.0
// SPA
router.HandleFunc("/services/{service}", SinglePageApplication(staticFolder, uiConfig)).Methods("GET")
router.HandleFunc("/services/{name}", SinglePageApplication(staticFolder, uiConfig)).Methods("GET") // XXX: Remove this in v4.0.0
router.HandleFunc("/endpoints/{name}", SinglePageApplication(staticFolder, uiConfig)).Methods("GET")
router.HandleFunc("/", SinglePageApplication(staticFolder, uiConfig)).Methods("GET")
// Everything else falls back on static content
router.PathPrefix("/").Handler(GzipHandler(http.FileServer(http.Dir(staticFolder))))
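To illustrate the backward compatibility kept above, here is a minimal sketch (not part of this commit) that exercises both the legacy `/api/v1/services/statuses` route and its `/api/v1/endpoints/statuses` replacement against the router. It assumes the test sits in the same package as `CreateRouter`, that passing a nil security config and disabled metrics is acceptable, and that an empty store still answers with `200 OK`; the test name is purely illustrative.

```go
package handler // assumption: whichever package declares CreateRouter above

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestLegacyAndNewStatusRoutesServeTheSameHandler(t *testing.T) {
	// Both paths are registered against EndpointStatuses above, so both should respond.
	router := CreateRouter("../../web/static", nil, nil, false)
	for _, path := range []string{"/api/v1/endpoints/statuses", "/api/v1/services/statuses"} {
		request, _ := http.NewRequest("GET", path, nil)
		responseRecorder := httptest.NewRecorder()
		router.ServeHTTP(responseRecorder, request)
		if responseRecorder.Code != http.StatusOK {
			t.Errorf("%s should have returned %d, got %d", path, http.StatusOK, responseRecorder.Code)
		}
	}
}
```

Because both registrations point at `EndpointStatuses`, existing dashboards and badges keep working until the legacy routes are removed in v4.0.0.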
@@ -17,7 +17,7 @@ func TestSinglePageApplication(t *testing.T) {
defer cache.Clear()
cfg := &config.Config{
Metrics: true,
Services: []*core.Service{
Endpoints: []*core.Endpoint{
{
Name: "frontend",
Group: "core",
@@ -28,8 +28,8 @@ func TestSinglePageApplication(t *testing.T) {
},
},
}
watchdog.UpdateServiceStatuses(cfg.Services[0], &core.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
watchdog.UpdateServiceStatuses(cfg.Services[1], &core.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
watchdog.UpdateEndpointStatuses(cfg.Endpoints[0], &core.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
watchdog.UpdateEndpointStatuses(cfg.Endpoints[1], &core.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
router := CreateRouter("../../web/static", cfg.Security, nil, cfg.Metrics)
type Scenario struct {
Name string
@@ -44,6 +44,11 @@ func TestSinglePageApplication(t *testing.T) {
ExpectedCode: http.StatusOK,
},
{
Name: "frontend-endpoint",
Path: "/endpoints/core_frontend",
ExpectedCode: http.StatusOK,
},
{ // XXX: Remove this in v4.0.0
Name: "frontend-service",
Path: "/services/core_frontend",
ExpectedCode: http.StatusOK,
@@ -80,7 +80,7 @@ const (
maximumLengthBeforeTruncatingWhenComparedWithPattern = 25
)

// Condition is a condition that needs to be met in order for a Service to be considered healthy.
// Condition is a condition that needs to be met in order for an Endpoint to be considered healthy.
type Condition string

// evaluate the Condition with the Result of the health check
@@ -20,7 +20,7 @@ const (
dnsPort = 53
)

// DNS is the configuration for a Service of type DNS
// DNS is the configuration for an Endpoint of type DNS
type DNS struct {
// QueryType is the type for the DNS records like A, AAAA, CNAME...
QueryType string `yaml:"query-type"`
@@ -103,7 +103,7 @@ func TestIntegrationQuery(t *testing.T) {
}
}

func TestService_ValidateAndSetDefaultsWithNoDNSQueryName(t *testing.T) {
func TestEndpoint_ValidateAndSetDefaultsWithNoDNSQueryName(t *testing.T) {
defer func() { recover() }()
dns := &DNS{
QueryType: "A",
@@ -111,11 +111,11 @@ func TestService_ValidateAndSetDefaultsWithNoDNSQueryName(t *testing.T) {
}
err := dns.validateAndSetDefault()
if err == nil {
t.Fatal("Should've returned an error because service`s dns didn't have a query name, which is a mandatory field for dns")
t.Fatal("Should've returned an error because endpoint's dns didn't have a query name, which is a mandatory field for dns")
}
}

func TestService_ValidateAndSetDefaultsWithInvalidDNSQueryType(t *testing.T) {
func TestEndpoint_ValidateAndSetDefaultsWithInvalidDNSQueryType(t *testing.T) {
defer func() { recover() }()
dns := &DNS{
QueryType: "B",
@@ -123,6 +123,6 @@ func TestService_ValidateAndSetDefaultsWithInvalidDNSQueryType(t *testing.T) {
}
err := dns.validateAndSetDefault()
if err == nil {
t.Fatal("Should've returned an error because service`s dns query type is invalid, it needs to be a valid query name like A, AAAA, CNAME...")
t.Fatal("Should've returned an error because endpoint's dns query type is invalid, it needs to be a valid query name like A, AAAA, CNAME...")
}
}
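As a rough usage sketch (not part of this commit), the DNS configuration validated above pairs with the new `Endpoint` type added in `core/endpoint.go` below. The values mirror `TestIntegrationEvaluateHealthForDNS` further down in this commit; the example name and printed output are illustrative only.

```go
package core

import "fmt"

// Minimal sketch: an endpoint that monitors a resolver via a DNS query instead of HTTP.
func ExampleEndpoint_EvaluateHealth_dns() {
	condition := Condition("[DNS_RCODE] == NOERROR")
	endpoint := Endpoint{
		Name: "example-dns", // illustrative name
		URL:  "8.8.8.8",     // for DNS endpoints, the URL is the resolver to query
		DNS: &DNS{
			QueryType: "A",
			QueryName: "example.com", // validation appends the trailing dot, as the tests above show
		},
		Conditions: []*Condition{&condition},
	}
	if err := endpoint.ValidateAndSetDefaults(); err != nil {
		fmt.Println("invalid endpoint:", err)
		return
	}
	result := endpoint.EvaluateHealth()
	fmt.Println("success:", result.Success)
}
```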
core/endpoint.go (new file, 299 lines)
@ -0,0 +1,299 @@
|
|||
package core
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/x509"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"io/ioutil"
|
||||
"net"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/TwiN/gatus/v3/alerting/alert"
|
||||
"github.com/TwiN/gatus/v3/client"
|
||||
"github.com/TwiN/gatus/v3/core/ui"
|
||||
"github.com/TwiN/gatus/v3/util"
|
||||
)
|
||||
|
||||
const (
|
||||
// HostHeader is the name of the header used to specify the host
|
||||
HostHeader = "Host"
|
||||
|
||||
// ContentTypeHeader is the name of the header used to specify the content type
|
||||
ContentTypeHeader = "Content-Type"
|
||||
|
||||
// UserAgentHeader is the name of the header used to specify the request's user agent
|
||||
UserAgentHeader = "User-Agent"
|
||||
|
||||
// GatusUserAgent is the default user agent that Gatus uses to send requests.
|
||||
GatusUserAgent = "Gatus/1.0"
|
||||
)
|
||||
|
||||
var (
|
||||
// ErrEndpointWithNoCondition is the error with which Gatus will panic if an endpoint is configured with no conditions
|
||||
ErrEndpointWithNoCondition = errors.New("you must specify at least one condition per endpoint")
|
||||
|
||||
// ErrEndpointWithNoURL is the error with which Gatus will panic if an endpoint is configured with no url
|
||||
ErrEndpointWithNoURL = errors.New("you must specify an url for each endpoint")
|
||||
|
||||
// ErrEndpointWithNoName is the error with which Gatus will panic if an endpoint is configured with no name
|
||||
ErrEndpointWithNoName = errors.New("you must specify a name for each endpoint")
|
||||
)
|
||||
|
||||
// Endpoint is the configuration of a monitored
|
||||
type Endpoint struct {
|
||||
// Enabled defines whether to enable the monitoring of the endpoint
|
||||
Enabled *bool `yaml:"enabled,omitempty"`
|
||||
|
||||
// Name of the endpoint. Can be anything.
|
||||
Name string `yaml:"name"`
|
||||
|
||||
// Group the endpoint is a part of. Used for grouping multiple endpoints together on the front end.
|
||||
Group string `yaml:"group,omitempty"`
|
||||
|
||||
// URL to send the request to
|
||||
URL string `yaml:"url"`
|
||||
|
||||
// DNS is the configuration of DNS monitoring
|
||||
DNS *DNS `yaml:"dns,omitempty"`
|
||||
|
||||
// Method of the request made to the url of the endpoint
|
||||
Method string `yaml:"method,omitempty"`
|
||||
|
||||
// Body of the request
|
||||
Body string `yaml:"body,omitempty"`
|
||||
|
||||
// GraphQL is whether to wrap the body in a query param ({"query":"$body"})
|
||||
GraphQL bool `yaml:"graphql,omitempty"`
|
||||
|
||||
// Headers of the request
|
||||
Headers map[string]string `yaml:"headers,omitempty"`
|
||||
|
||||
// Interval is the duration to wait between every status check
|
||||
Interval time.Duration `yaml:"interval,omitempty"`
|
||||
|
||||
// Conditions used to determine the health of the endpoint
|
||||
Conditions []*Condition `yaml:"conditions"`
|
||||
|
||||
// Alerts is the alerting configuration for the endpoint in case of failure
|
||||
Alerts []*alert.Alert `yaml:"alerts"`
|
||||
|
||||
// ClientConfig is the configuration of the client used to communicate with the endpoint's target
|
||||
ClientConfig *client.Config `yaml:"client"`
|
||||
|
||||
// UIConfig is the configuration for the UI
|
||||
UIConfig *ui.Config `yaml:"ui"`
|
||||
|
||||
// NumberOfFailuresInARow is the number of unsuccessful evaluations in a row
|
||||
NumberOfFailuresInARow int
|
||||
|
||||
// NumberOfSuccessesInARow is the number of successful evaluations in a row
|
||||
NumberOfSuccessesInARow int
|
||||
}
|
||||
|
||||
// IsEnabled returns whether the endpoint is enabled or not
|
||||
func (endpoint Endpoint) IsEnabled() bool {
|
||||
if endpoint.Enabled == nil {
|
||||
return true
|
||||
}
|
||||
return *endpoint.Enabled
|
||||
}
|
||||
|
||||
// ValidateAndSetDefaults validates the endpoint's configuration and sets the default value of fields that have one
|
||||
func (endpoint *Endpoint) ValidateAndSetDefaults() error {
|
||||
// Set default values
|
||||
if endpoint.ClientConfig == nil {
|
||||
endpoint.ClientConfig = client.GetDefaultConfig()
|
||||
} else {
|
||||
endpoint.ClientConfig.ValidateAndSetDefaults()
|
||||
}
|
||||
if endpoint.UIConfig == nil {
|
||||
endpoint.UIConfig = ui.GetDefaultConfig()
|
||||
}
|
||||
if endpoint.Interval == 0 {
|
||||
endpoint.Interval = 1 * time.Minute
|
||||
}
|
||||
if len(endpoint.Method) == 0 {
|
||||
endpoint.Method = http.MethodGet
|
||||
}
|
||||
if len(endpoint.Headers) == 0 {
|
||||
endpoint.Headers = make(map[string]string)
|
||||
}
|
||||
// Automatically add user agent header if there isn't one specified in the endpoint configuration
|
||||
if _, userAgentHeaderExists := endpoint.Headers[UserAgentHeader]; !userAgentHeaderExists {
|
||||
endpoint.Headers[UserAgentHeader] = GatusUserAgent
|
||||
}
|
||||
// Automatically add "Content-Type: application/json" header if there's no Content-Type set
|
||||
// and endpoint.GraphQL is set to true
|
||||
if _, contentTypeHeaderExists := endpoint.Headers[ContentTypeHeader]; !contentTypeHeaderExists && endpoint.GraphQL {
|
||||
endpoint.Headers[ContentTypeHeader] = "application/json"
|
||||
}
|
||||
for _, endpointAlert := range endpoint.Alerts {
|
||||
if endpointAlert.FailureThreshold <= 0 {
|
||||
endpointAlert.FailureThreshold = 3
|
||||
}
|
||||
if endpointAlert.SuccessThreshold <= 0 {
|
||||
endpointAlert.SuccessThreshold = 2
|
||||
}
|
||||
}
|
||||
if len(endpoint.Name) == 0 {
|
||||
return ErrEndpointWithNoName
|
||||
}
|
||||
if len(endpoint.URL) == 0 {
|
||||
return ErrEndpointWithNoURL
|
||||
}
|
||||
if len(endpoint.Conditions) == 0 {
|
||||
return ErrEndpointWithNoCondition
|
||||
}
|
||||
if endpoint.DNS != nil {
|
||||
return endpoint.DNS.validateAndSetDefault()
|
||||
}
|
||||
// Make sure that the request can be created
|
||||
_, err := http.NewRequest(endpoint.Method, endpoint.URL, bytes.NewBuffer([]byte(endpoint.Body)))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Key returns the unique key for the Endpoint
|
||||
func (endpoint Endpoint) Key() string {
|
||||
return util.ConvertGroupAndEndpointNameToKey(endpoint.Group, endpoint.Name)
|
||||
}
|
||||
|
||||
// EvaluateHealth sends a request to the endpoint's URL and evaluates the conditions of the endpoint.
|
||||
func (endpoint *Endpoint) EvaluateHealth() *Result {
|
||||
result := &Result{Success: true, Errors: []string{}}
|
||||
endpoint.getIP(result)
|
||||
if len(result.Errors) == 0 {
|
||||
endpoint.call(result)
|
||||
} else {
|
||||
result.Success = false
|
||||
}
|
||||
for _, condition := range endpoint.Conditions {
|
||||
success := condition.evaluate(result, endpoint.UIConfig.DontResolveFailedConditions)
|
||||
if !success {
|
||||
result.Success = false
|
||||
}
|
||||
}
|
||||
result.Timestamp = time.Now()
|
||||
// No need to keep the body after the endpoint has been evaluated
|
||||
result.body = nil
|
||||
// Clean up parameters that we don't need to keep in the results
|
||||
if endpoint.UIConfig.HideHostname {
|
||||
result.Hostname = ""
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
func (endpoint *Endpoint) getIP(result *Result) {
|
||||
if endpoint.DNS != nil {
|
||||
result.Hostname = strings.TrimSuffix(endpoint.URL, ":53")
|
||||
} else {
|
||||
urlObject, err := url.Parse(endpoint.URL)
|
||||
if err != nil {
|
||||
result.AddError(err.Error())
|
||||
return
|
||||
}
|
||||
result.Hostname = urlObject.Hostname()
|
||||
}
|
||||
ips, err := net.LookupIP(result.Hostname)
|
||||
if err != nil {
|
||||
result.AddError(err.Error())
|
||||
return
|
||||
}
|
||||
result.IP = ips[0].String()
|
||||
}
|
||||
|
||||
func (endpoint *Endpoint) call(result *Result) {
|
||||
var request *http.Request
|
||||
var response *http.Response
|
||||
var err error
|
||||
var certificate *x509.Certificate
|
||||
isTypeDNS := endpoint.DNS != nil
|
||||
isTypeTCP := strings.HasPrefix(endpoint.URL, "tcp://")
|
||||
isTypeICMP := strings.HasPrefix(endpoint.URL, "icmp://")
|
||||
isTypeSTARTTLS := strings.HasPrefix(endpoint.URL, "starttls://")
|
||||
isTypeTLS := strings.HasPrefix(endpoint.URL, "tls://")
|
||||
isTypeHTTP := !isTypeDNS && !isTypeTCP && !isTypeICMP && !isTypeSTARTTLS && !isTypeTLS
|
||||
if isTypeHTTP {
|
||||
request = endpoint.buildHTTPRequest()
|
||||
}
|
||||
startTime := time.Now()
|
||||
if isTypeDNS {
|
||||
endpoint.DNS.query(endpoint.URL, result)
|
||||
result.Duration = time.Since(startTime)
|
||||
} else if isTypeSTARTTLS || isTypeTLS {
|
||||
if isTypeSTARTTLS {
|
||||
result.Connected, certificate, err = client.CanPerformStartTLS(strings.TrimPrefix(endpoint.URL, "starttls://"), endpoint.ClientConfig)
|
||||
} else {
|
||||
result.Connected, certificate, err = client.CanPerformTLS(strings.TrimPrefix(endpoint.URL, "tls://"), endpoint.ClientConfig)
|
||||
}
|
||||
if err != nil {
|
||||
result.AddError(err.Error())
|
||||
return
|
||||
}
|
||||
result.Duration = time.Since(startTime)
|
||||
result.CertificateExpiration = time.Until(certificate.NotAfter)
|
||||
} else if isTypeTCP {
|
||||
result.Connected = client.CanCreateTCPConnection(strings.TrimPrefix(endpoint.URL, "tcp://"), endpoint.ClientConfig)
|
||||
result.Duration = time.Since(startTime)
|
||||
} else if isTypeICMP {
|
||||
result.Connected, result.Duration = client.Ping(strings.TrimPrefix(endpoint.URL, "icmp://"), endpoint.ClientConfig)
|
||||
} else {
|
||||
response, err = client.GetHTTPClient(endpoint.ClientConfig).Do(request)
|
||||
result.Duration = time.Since(startTime)
|
||||
if err != nil {
|
||||
result.AddError(err.Error())
|
||||
return
|
||||
}
|
||||
defer response.Body.Close()
|
||||
if response.TLS != nil && len(response.TLS.PeerCertificates) > 0 {
|
||||
certificate = response.TLS.PeerCertificates[0]
|
||||
result.CertificateExpiration = time.Until(certificate.NotAfter)
|
||||
}
|
||||
result.HTTPStatus = response.StatusCode
|
||||
result.Connected = response.StatusCode > 0
|
||||
// Only read the body if there's a condition that uses the BodyPlaceholder
|
||||
if endpoint.needsToReadBody() {
|
||||
result.body, err = ioutil.ReadAll(response.Body)
|
||||
if err != nil {
|
||||
result.AddError(err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (endpoint *Endpoint) buildHTTPRequest() *http.Request {
|
||||
var bodyBuffer *bytes.Buffer
|
||||
if endpoint.GraphQL {
|
||||
graphQlBody := map[string]string{
|
||||
"query": endpoint.Body,
|
||||
}
|
||||
body, _ := json.Marshal(graphQlBody)
|
||||
bodyBuffer = bytes.NewBuffer(body)
|
||||
} else {
|
||||
bodyBuffer = bytes.NewBuffer([]byte(endpoint.Body))
|
||||
}
|
||||
request, _ := http.NewRequest(endpoint.Method, endpoint.URL, bodyBuffer)
|
||||
for k, v := range endpoint.Headers {
|
||||
request.Header.Set(k, v)
|
||||
if k == HostHeader {
|
||||
request.Host = v
|
||||
}
|
||||
}
|
||||
return request
|
||||
}
|
||||
|
||||
// needsToReadBody checks if there's any conditions that requires the response body to be read
|
||||
func (endpoint *Endpoint) needsToReadBody() bool {
|
||||
for _, condition := range endpoint.Conditions {
|
||||
if condition.hasBodyPlaceholder() {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
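For context, here is a minimal sketch (not part of this commit) of the typical lifecycle of the `Endpoint` type defined above: configure it, apply defaults, then evaluate its health. It mirrors `TestIntegrationEvaluateHealth` from this commit; the example name and printed output are illustrative only.

```go
package core

import "fmt"

// Minimal sketch of configuring and evaluating an Endpoint.
func ExampleEndpoint_EvaluateHealth() {
	statusCondition := Condition("[STATUS] == 200")
	bodyCondition := Condition("[BODY].status == UP")
	endpoint := Endpoint{
		Name:       "website-health",
		URL:        "https://twin.sh/health",
		Conditions: []*Condition{&statusCondition, &bodyCondition},
	}
	// ValidateAndSetDefaults fills in the GET method, the 1m interval and the default headers.
	if err := endpoint.ValidateAndSetDefaults(); err != nil {
		fmt.Println("invalid endpoint:", err)
		return
	}
	result := endpoint.EvaluateHealth()
	fmt.Println("success:", result.Success, "errors:", len(result.Errors))
}
```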
core/endpoint_status.go (new file, 40 lines)
@@ -0,0 +1,40 @@
package core

import "github.com/TwiN/gatus/v3/util"

// EndpointStatus contains the evaluation Results of an Endpoint
type EndpointStatus struct {
// Name of the endpoint
Name string `json:"name,omitempty"`

// Group the endpoint is a part of. Used for grouping multiple endpoints together on the front end.
Group string `json:"group,omitempty"`

// Key is the key representing the EndpointStatus
Key string `json:"key"`

// Results is the list of endpoint evaluation results
Results []*Result `json:"results"`

// Events is a list of events
Events []*Event `json:"events"`

// Uptime information on the endpoint's uptime
//
// Used by the memory store.
//
// To retrieve the uptime between two times, use store.GetUptimeByKey.
Uptime *Uptime `json:"-"`
}

// NewEndpointStatus creates a new EndpointStatus
func NewEndpointStatus(group, name string) *EndpointStatus {
return &EndpointStatus{
Name: name,
Group: group,
Key: util.ConvertGroupAndEndpointNameToKey(group, name),
Results: make([]*Result, 0),
Events: make([]*Event, 0),
Uptime: NewUptime(),
}
}
core/endpoint_status_test.go (new file, 19 lines)
@@ -0,0 +1,19 @@
package core

import (
"testing"
)

func TestNewEndpointStatus(t *testing.T) {
endpoint := &Endpoint{Name: "name", Group: "group"}
status := NewEndpointStatus(endpoint.Group, endpoint.Name)
if status.Name != endpoint.Name {
t.Errorf("expected %s, got %s", endpoint.Name, status.Name)
}
if status.Group != endpoint.Group {
t.Errorf("expected %s, got %s", endpoint.Group, status.Group)
}
if status.Key != "group_name" {
t.Errorf("expected %s, got %s", "group_name", status.Key)
}
}
@ -10,66 +10,66 @@ import (
|
|||
"github.com/TwiN/gatus/v3/client"
|
||||
)
|
||||
|
||||
func TestService_IsEnabled(t *testing.T) {
|
||||
if !(Service{Enabled: nil}).IsEnabled() {
|
||||
t.Error("service.IsEnabled() should've returned true, because Enabled was set to nil")
|
||||
func TestEndpoint_IsEnabled(t *testing.T) {
|
||||
if !(Endpoint{Enabled: nil}).IsEnabled() {
|
||||
t.Error("endpoint.IsEnabled() should've returned true, because Enabled was set to nil")
|
||||
}
|
||||
if value := false; (Service{Enabled: &value}).IsEnabled() {
|
||||
t.Error("service.IsEnabled() should've returned false, because Enabled was set to false")
|
||||
if value := false; (Endpoint{Enabled: &value}).IsEnabled() {
|
||||
t.Error("endpoint.IsEnabled() should've returned false, because Enabled was set to false")
|
||||
}
|
||||
if value := true; !(Service{Enabled: &value}).IsEnabled() {
|
||||
t.Error("Service.IsEnabled() should've returned true, because Enabled was set to true")
|
||||
if value := true; !(Endpoint{Enabled: &value}).IsEnabled() {
|
||||
t.Error("Endpoint.IsEnabled() should've returned true, because Enabled was set to true")
|
||||
}
|
||||
}
|
||||
|
||||
func TestService_ValidateAndSetDefaults(t *testing.T) {
|
||||
func TestEndpoint_ValidateAndSetDefaults(t *testing.T) {
|
||||
condition := Condition("[STATUS] == 200")
|
||||
service := Service{
|
||||
endpoint := Endpoint{
|
||||
Name: "website-health",
|
||||
URL: "https://twin.sh/health",
|
||||
Conditions: []*Condition{&condition},
|
||||
Alerts: []*alert.Alert{{Type: alert.TypePagerDuty}},
|
||||
}
|
||||
service.ValidateAndSetDefaults()
|
||||
if service.ClientConfig == nil {
|
||||
endpoint.ValidateAndSetDefaults()
|
||||
if endpoint.ClientConfig == nil {
|
||||
t.Error("client configuration should've been set to the default configuration")
|
||||
} else {
|
||||
if service.ClientConfig.Insecure != client.GetDefaultConfig().Insecure {
|
||||
t.Errorf("Default client configuration should've set Insecure to %v, got %v", client.GetDefaultConfig().Insecure, service.ClientConfig.Insecure)
|
||||
if endpoint.ClientConfig.Insecure != client.GetDefaultConfig().Insecure {
|
||||
t.Errorf("Default client configuration should've set Insecure to %v, got %v", client.GetDefaultConfig().Insecure, endpoint.ClientConfig.Insecure)
|
||||
}
|
||||
if service.ClientConfig.IgnoreRedirect != client.GetDefaultConfig().IgnoreRedirect {
|
||||
t.Errorf("Default client configuration should've set IgnoreRedirect to %v, got %v", client.GetDefaultConfig().IgnoreRedirect, service.ClientConfig.IgnoreRedirect)
|
||||
if endpoint.ClientConfig.IgnoreRedirect != client.GetDefaultConfig().IgnoreRedirect {
|
||||
t.Errorf("Default client configuration should've set IgnoreRedirect to %v, got %v", client.GetDefaultConfig().IgnoreRedirect, endpoint.ClientConfig.IgnoreRedirect)
|
||||
}
|
||||
if service.ClientConfig.Timeout != client.GetDefaultConfig().Timeout {
|
||||
t.Errorf("Default client configuration should've set Timeout to %v, got %v", client.GetDefaultConfig().Timeout, service.ClientConfig.Timeout)
|
||||
if endpoint.ClientConfig.Timeout != client.GetDefaultConfig().Timeout {
|
||||
t.Errorf("Default client configuration should've set Timeout to %v, got %v", client.GetDefaultConfig().Timeout, endpoint.ClientConfig.Timeout)
|
||||
}
|
||||
}
|
||||
if service.Method != "GET" {
|
||||
t.Error("Service method should've defaulted to GET")
|
||||
if endpoint.Method != "GET" {
|
||||
t.Error("Endpoint method should've defaulted to GET")
|
||||
}
|
||||
if service.Interval != time.Minute {
|
||||
t.Error("Service interval should've defaulted to 1 minute")
|
||||
if endpoint.Interval != time.Minute {
|
||||
t.Error("Endpoint interval should've defaulted to 1 minute")
|
||||
}
|
||||
if service.Headers == nil {
|
||||
t.Error("Service headers should've defaulted to an empty map")
|
||||
if endpoint.Headers == nil {
|
||||
t.Error("Endpoint headers should've defaulted to an empty map")
|
||||
}
|
||||
if len(service.Alerts) != 1 {
|
||||
t.Error("Service should've had 1 alert")
|
||||
if len(endpoint.Alerts) != 1 {
|
||||
t.Error("Endpoint should've had 1 alert")
|
||||
}
|
||||
if service.Alerts[0].IsEnabled() {
|
||||
t.Error("Service alert should've defaulted to disabled")
|
||||
if endpoint.Alerts[0].IsEnabled() {
|
||||
t.Error("Endpoint alert should've defaulted to disabled")
|
||||
}
|
||||
if service.Alerts[0].SuccessThreshold != 2 {
|
||||
t.Error("Service alert should've defaulted to a success threshold of 2")
|
||||
if endpoint.Alerts[0].SuccessThreshold != 2 {
|
||||
t.Error("Endpoint alert should've defaulted to a success threshold of 2")
|
||||
}
|
||||
if service.Alerts[0].FailureThreshold != 3 {
|
||||
t.Error("Service alert should've defaulted to a failure threshold of 3")
|
||||
if endpoint.Alerts[0].FailureThreshold != 3 {
|
||||
t.Error("Endpoint alert should've defaulted to a failure threshold of 3")
|
||||
}
|
||||
}
|
||||
|
||||
func TestService_ValidateAndSetDefaultsWithClientConfig(t *testing.T) {
|
||||
func TestEndpoint_ValidateAndSetDefaultsWithClientConfig(t *testing.T) {
|
||||
condition := Condition("[STATUS] == 200")
|
||||
service := Service{
|
||||
endpoint := Endpoint{
|
||||
Name: "website-health",
|
||||
URL: "https://twin.sh/health",
|
||||
Conditions: []*Condition{&condition},
|
||||
|
@ -79,66 +79,66 @@ func TestService_ValidateAndSetDefaultsWithClientConfig(t *testing.T) {
|
|||
Timeout: 0,
|
||||
},
|
||||
}
|
||||
service.ValidateAndSetDefaults()
|
||||
if service.ClientConfig == nil {
|
||||
endpoint.ValidateAndSetDefaults()
|
||||
if endpoint.ClientConfig == nil {
|
||||
t.Error("client configuration should've been set to the default configuration")
|
||||
} else {
|
||||
if !service.ClientConfig.Insecure {
|
||||
t.Error("service.ClientConfig.Insecure should've been set to true")
|
||||
if !endpoint.ClientConfig.Insecure {
|
||||
t.Error("endpoint.ClientConfig.Insecure should've been set to true")
|
||||
}
|
||||
if !service.ClientConfig.IgnoreRedirect {
|
||||
t.Error("service.ClientConfig.IgnoreRedirect should've been set to true")
|
||||
if !endpoint.ClientConfig.IgnoreRedirect {
|
||||
t.Error("endpoint.ClientConfig.IgnoreRedirect should've been set to true")
|
||||
}
|
||||
if service.ClientConfig.Timeout != client.GetDefaultConfig().Timeout {
|
||||
t.Error("service.ClientConfig.Timeout should've been set to 10s, because the timeout value entered is not set or invalid")
|
||||
if endpoint.ClientConfig.Timeout != client.GetDefaultConfig().Timeout {
|
||||
t.Error("endpoint.ClientConfig.Timeout should've been set to 10s, because the timeout value entered is not set or invalid")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestService_ValidateAndSetDefaultsWithNoName(t *testing.T) {
|
||||
func TestEndpoint_ValidateAndSetDefaultsWithNoName(t *testing.T) {
|
||||
defer func() { recover() }()
|
||||
condition := Condition("[STATUS] == 200")
|
||||
service := &Service{
|
||||
endpoint := &Endpoint{
|
||||
Name: "",
|
||||
URL: "http://example.com",
|
||||
Conditions: []*Condition{&condition},
|
||||
}
|
||||
err := service.ValidateAndSetDefaults()
|
||||
err := endpoint.ValidateAndSetDefaults()
|
||||
if err == nil {
|
||||
t.Fatal("Should've returned an error because service didn't have a name, which is a mandatory field")
|
||||
t.Fatal("Should've returned an error because endpoint didn't have a name, which is a mandatory field")
|
||||
}
|
||||
}
|
||||
|
||||
func TestService_ValidateAndSetDefaultsWithNoUrl(t *testing.T) {
|
||||
func TestEndpoint_ValidateAndSetDefaultsWithNoUrl(t *testing.T) {
|
||||
defer func() { recover() }()
|
||||
condition := Condition("[STATUS] == 200")
|
||||
service := &Service{
|
||||
endpoint := &Endpoint{
|
||||
Name: "example",
|
||||
URL: "",
|
||||
Conditions: []*Condition{&condition},
|
||||
}
|
||||
err := service.ValidateAndSetDefaults()
|
||||
err := endpoint.ValidateAndSetDefaults()
|
||||
if err == nil {
|
||||
t.Fatal("Should've returned an error because service didn't have an url, which is a mandatory field")
|
||||
t.Fatal("Should've returned an error because endpoint didn't have an url, which is a mandatory field")
|
||||
}
|
||||
}
|
||||
|
||||
func TestService_ValidateAndSetDefaultsWithNoConditions(t *testing.T) {
|
||||
func TestEndpoint_ValidateAndSetDefaultsWithNoConditions(t *testing.T) {
|
||||
defer func() { recover() }()
|
||||
service := &Service{
|
||||
endpoint := &Endpoint{
|
||||
Name: "example",
|
||||
URL: "http://example.com",
|
||||
Conditions: nil,
|
||||
}
|
||||
err := service.ValidateAndSetDefaults()
|
||||
err := endpoint.ValidateAndSetDefaults()
|
||||
if err == nil {
|
||||
t.Fatal("Should've returned an error because service didn't have at least 1 condition")
|
||||
t.Fatal("Should've returned an error because endpoint didn't have at least 1 condition")
|
||||
}
|
||||
}
|
||||
|
||||
func TestService_ValidateAndSetDefaultsWithDNS(t *testing.T) {
|
||||
func TestEndpoint_ValidateAndSetDefaultsWithDNS(t *testing.T) {
|
||||
conditionSuccess := Condition("[DNS_RCODE] == NOERROR")
|
||||
service := &Service{
|
||||
endpoint := &Endpoint{
|
||||
Name: "dns-test",
|
||||
URL: "http://example.com",
|
||||
DNS: &DNS{
|
||||
|
@ -147,24 +147,24 @@ func TestService_ValidateAndSetDefaultsWithDNS(t *testing.T) {
|
|||
},
|
||||
Conditions: []*Condition{&conditionSuccess},
|
||||
}
|
||||
err := service.ValidateAndSetDefaults()
|
||||
err := endpoint.ValidateAndSetDefaults()
|
||||
if err != nil {
|
||||
|
||||
}
|
||||
if service.DNS.QueryName != "example.com." {
|
||||
t.Error("Service.dns.query-name should be formatted with . suffix")
|
||||
if endpoint.DNS.QueryName != "example.com." {
|
||||
t.Error("Endpoint.dns.query-name should be formatted with . suffix")
|
||||
}
|
||||
}
|
||||
|
||||
func TestService_buildHTTPRequest(t *testing.T) {
|
||||
func TestEndpoint_buildHTTPRequest(t *testing.T) {
|
||||
condition := Condition("[STATUS] == 200")
|
||||
service := Service{
|
||||
endpoint := Endpoint{
|
||||
Name: "website-health",
|
||||
URL: "https://twin.sh/health",
|
||||
Conditions: []*Condition{&condition},
|
||||
}
|
||||
service.ValidateAndSetDefaults()
|
||||
request := service.buildHTTPRequest()
|
||||
endpoint.ValidateAndSetDefaults()
|
||||
request := endpoint.buildHTTPRequest()
|
||||
if request.Method != "GET" {
|
||||
t.Error("request.Method should've been GET, but was", request.Method)
|
||||
}
|
||||
|
@ -176,9 +176,9 @@ func TestService_buildHTTPRequest(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestService_buildHTTPRequestWithCustomUserAgent(t *testing.T) {
|
||||
func TestEndpoint_buildHTTPRequestWithCustomUserAgent(t *testing.T) {
|
||||
condition := Condition("[STATUS] == 200")
|
||||
service := Service{
|
||||
endpoint := Endpoint{
|
||||
Name: "website-health",
|
||||
URL: "https://twin.sh/health",
|
||||
Conditions: []*Condition{&condition},
|
||||
|
@ -186,8 +186,8 @@ func TestService_buildHTTPRequestWithCustomUserAgent(t *testing.T) {
|
|||
"User-Agent": "Test/2.0",
|
||||
},
|
||||
}
|
||||
service.ValidateAndSetDefaults()
|
||||
request := service.buildHTTPRequest()
|
||||
endpoint.ValidateAndSetDefaults()
|
||||
request := endpoint.buildHTTPRequest()
|
||||
if request.Method != "GET" {
|
||||
t.Error("request.Method should've been GET, but was", request.Method)
|
||||
}
|
||||
|
@ -199,9 +199,9 @@ func TestService_buildHTTPRequestWithCustomUserAgent(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestService_buildHTTPRequestWithHostHeader(t *testing.T) {
|
||||
func TestEndpoint_buildHTTPRequestWithHostHeader(t *testing.T) {
|
||||
condition := Condition("[STATUS] == 200")
|
||||
service := Service{
|
||||
endpoint := Endpoint{
|
||||
Name: "website-health",
|
||||
URL: "https://twin.sh/health",
|
||||
Method: "POST",
|
||||
|
@ -210,8 +210,8 @@ func TestService_buildHTTPRequestWithHostHeader(t *testing.T) {
|
|||
"Host": "example.com",
|
||||
},
|
||||
}
|
||||
service.ValidateAndSetDefaults()
|
||||
request := service.buildHTTPRequest()
|
||||
endpoint.ValidateAndSetDefaults()
|
||||
request := endpoint.buildHTTPRequest()
|
||||
if request.Method != "POST" {
|
||||
t.Error("request.Method should've been POST, but was", request.Method)
|
||||
}
|
||||
|
@ -220,9 +220,9 @@ func TestService_buildHTTPRequestWithHostHeader(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestService_buildHTTPRequestWithGraphQLEnabled(t *testing.T) {
|
||||
func TestEndpoint_buildHTTPRequestWithGraphQLEnabled(t *testing.T) {
|
||||
condition := Condition("[STATUS] == 200")
|
||||
service := Service{
|
||||
endpoint := Endpoint{
|
||||
Name: "website-graphql",
|
||||
URL: "https://twin.sh/graphql",
|
||||
Method: "POST",
|
||||
|
@ -237,8 +237,8 @@ func TestService_buildHTTPRequestWithGraphQLEnabled(t *testing.T) {
|
|||
}
|
||||
}`,
|
||||
}
|
||||
service.ValidateAndSetDefaults()
|
||||
request := service.buildHTTPRequest()
|
||||
endpoint.ValidateAndSetDefaults()
|
||||
request := endpoint.buildHTTPRequest()
|
||||
if request.Method != "POST" {
|
||||
t.Error("request.Method should've been POST, but was", request.Method)
|
||||
}
|
||||
|
@ -254,13 +254,13 @@ func TestService_buildHTTPRequestWithGraphQLEnabled(t *testing.T) {
|
|||
func TestIntegrationEvaluateHealth(t *testing.T) {
|
||||
condition := Condition("[STATUS] == 200")
|
||||
bodyCondition := Condition("[BODY].status == UP")
|
||||
service := Service{
|
||||
endpoint := Endpoint{
|
||||
Name: "website-health",
|
||||
URL: "https://twin.sh/health",
|
||||
Conditions: []*Condition{&condition, &bodyCondition},
|
||||
}
|
||||
service.ValidateAndSetDefaults()
|
||||
result := service.EvaluateHealth()
|
||||
endpoint.ValidateAndSetDefaults()
|
||||
result := endpoint.EvaluateHealth()
|
||||
if !result.ConditionResults[0].Success {
|
||||
t.Errorf("Condition '%s' should have been a success", condition)
|
||||
}
|
||||
|
@ -274,13 +274,13 @@ func TestIntegrationEvaluateHealth(t *testing.T) {
|
|||
|
||||
func TestIntegrationEvaluateHealthWithFailure(t *testing.T) {
|
||||
condition := Condition("[STATUS] == 500")
|
||||
service := Service{
|
||||
endpoint := Endpoint{
|
||||
Name: "website-health",
|
||||
URL: "https://twin.sh/health",
|
||||
Conditions: []*Condition{&condition},
|
||||
}
|
||||
service.ValidateAndSetDefaults()
|
||||
result := service.EvaluateHealth()
|
||||
endpoint.ValidateAndSetDefaults()
|
||||
result := endpoint.EvaluateHealth()
|
||||
if result.ConditionResults[0].Success {
|
||||
t.Errorf("Condition '%s' should have been a failure", condition)
|
||||
}
|
||||
|
@ -295,7 +295,7 @@ func TestIntegrationEvaluateHealthWithFailure(t *testing.T) {
|
|||
func TestIntegrationEvaluateHealthForDNS(t *testing.T) {
|
||||
conditionSuccess := Condition("[DNS_RCODE] == NOERROR")
|
||||
conditionBody := Condition("[BODY] == 93.184.216.34")
|
||||
service := Service{
|
||||
endpoint := Endpoint{
|
||||
Name: "example",
|
||||
URL: "8.8.8.8",
|
||||
DNS: &DNS{
|
||||
|
@ -304,8 +304,8 @@ func TestIntegrationEvaluateHealthForDNS(t *testing.T) {
|
|||
},
|
||||
Conditions: []*Condition{&conditionSuccess, &conditionBody},
|
||||
}
|
||||
service.ValidateAndSetDefaults()
|
||||
result := service.EvaluateHealth()
|
||||
endpoint.ValidateAndSetDefaults()
|
||||
result := endpoint.EvaluateHealth()
|
||||
if !result.ConditionResults[0].Success {
|
||||
t.Errorf("Conditions '%s' and %s should have been a success", conditionSuccess, conditionBody)
|
||||
}
|
||||
|
@ -319,13 +319,13 @@ func TestIntegrationEvaluateHealthForDNS(t *testing.T) {
|
|||
|
||||
func TestIntegrationEvaluateHealthForICMP(t *testing.T) {
|
||||
conditionSuccess := Condition("[CONNECTED] == true")
|
||||
service := Service{
|
||||
endpoint := Endpoint{
|
||||
Name: "icmp-test",
|
||||
URL: "icmp://127.0.0.1",
|
||||
Conditions: []*Condition{&conditionSuccess},
|
||||
}
|
||||
service.ValidateAndSetDefaults()
|
||||
result := service.EvaluateHealth()
|
||||
endpoint.ValidateAndSetDefaults()
|
||||
result := endpoint.EvaluateHealth()
|
||||
if !result.ConditionResults[0].Success {
|
||||
t.Errorf("Conditions '%s' should have been a success", conditionSuccess)
|
||||
}
|
||||
|
@ -337,40 +337,40 @@ func TestIntegrationEvaluateHealthForICMP(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestService_getIP(t *testing.T) {
|
||||
func TestEndpoint_getIP(t *testing.T) {
|
||||
conditionSuccess := Condition("[CONNECTED] == true")
|
||||
service := Service{
|
||||
endpoint := Endpoint{
|
||||
Name: "invalid-url-test",
|
||||
URL: "",
|
||||
Conditions: []*Condition{&conditionSuccess},
|
||||
}
|
||||
result := &Result{}
|
||||
service.getIP(result)
|
||||
endpoint.getIP(result)
|
||||
if len(result.Errors) == 0 {
|
||||
t.Error("service.getIP(result) should've thrown an error because the URL is invalid, thus cannot be parsed")
|
||||
t.Error("endpoint.getIP(result) should've thrown an error because the URL is invalid, thus cannot be parsed")
|
||||
}
|
||||
}
|
||||
|
||||
func TestService_NeedsToReadBody(t *testing.T) {
|
||||
func TestEndpoint_NeedsToReadBody(t *testing.T) {
|
||||
statusCondition := Condition("[STATUS] == 200")
|
||||
bodyCondition := Condition("[BODY].status == UP")
|
||||
bodyConditionWithLength := Condition("len([BODY].tags) > 0")
|
||||
if (&Service{Conditions: []*Condition{&statusCondition}}).needsToReadBody() {
|
||||
if (&Endpoint{Conditions: []*Condition{&statusCondition}}).needsToReadBody() {
|
||||
t.Error("expected false, got true")
|
||||
}
|
||||
if !(&Service{Conditions: []*Condition{&bodyCondition}}).needsToReadBody() {
|
||||
if !(&Endpoint{Conditions: []*Condition{&bodyCondition}}).needsToReadBody() {
|
||||
t.Error("expected true, got false")
|
||||
}
|
||||
if !(&Service{Conditions: []*Condition{&bodyConditionWithLength}}).needsToReadBody() {
|
||||
if !(&Endpoint{Conditions: []*Condition{&bodyConditionWithLength}}).needsToReadBody() {
|
||||
t.Error("expected true, got false")
|
||||
}
|
||||
if !(&Service{Conditions: []*Condition{&statusCondition, &bodyCondition}}).needsToReadBody() {
|
||||
if !(&Endpoint{Conditions: []*Condition{&statusCondition, &bodyCondition}}).needsToReadBody() {
|
||||
t.Error("expected true, got false")
|
||||
}
|
||||
if !(&Service{Conditions: []*Condition{&bodyCondition, &statusCondition}}).needsToReadBody() {
|
||||
if !(&Endpoint{Conditions: []*Condition{&bodyCondition, &statusCondition}}).needsToReadBody() {
|
||||
t.Error("expected true, got false")
|
||||
}
|
||||
if !(&Service{Conditions: []*Condition{&bodyConditionWithLength, &statusCondition}}).needsToReadBody() {
|
||||
if !(&Endpoint{Conditions: []*Condition{&bodyConditionWithLength, &statusCondition}}).needsToReadBody() {
|
||||
t.Error("expected true, got false")
|
||||
}
|
||||
}
|
|
@@ -15,13 +15,13 @@ type Event struct {
type EventType string

var (
// EventStart is a type of event that represents when a service starts being monitored
// EventStart is a type of event that represents when an endpoint starts being monitored
EventStart EventType = "START"

// EventHealthy is a type of event that represents a service passing all of its conditions
// EventHealthy is a type of event that represents an endpoint passing all of its conditions
EventHealthy EventType = "HEALTHY"

// EventUnhealthy is a type of event that represents a service failing one or more of its conditions
// EventUnhealthy is a type of event that represents an endpoint failing one or more of its conditions
EventUnhealthy EventType = "UNHEALTHY"
)
@@ -4,18 +4,18 @@ import (
"time"
)

// Result of the evaluation of a Service
// Result of the evaluation of an Endpoint
type Result struct {
// HTTPStatus is the HTTP response status code
HTTPStatus int `json:"status"`

// DNSRCode is the response code of a DNS query in a human readable format
// DNSRCode is the response code of a DNS query in a human-readable format
DNSRCode string `json:"-"`

// Hostname extracted from Service.URL
// Hostname extracted from Endpoint.URL
Hostname string `json:"hostname"`

// IP resolved from the Service URL
// IP resolved from the Endpoint URL
IP string `json:"-"`

// Connected whether a connection to the host was established successfully
@@ -24,10 +24,10 @@ type Result struct {
// Duration time that the request took
Duration time.Duration `json:"duration"`

// Errors encountered during the evaluation of the service's health
// Errors encountered during the evaluation of the Endpoint's health
Errors []string `json:"errors"`

// ConditionResults results of the service's conditions
// ConditionResults results of the Endpoint's conditions
ConditionResults []*ConditionResult `json:"conditionResults"`

// Success whether the result signifies a success or not
@@ -41,8 +41,8 @@ type Result struct {

// body is the response body
//
// Note that this variable is only used during the evaluation of a service's health.
// This means that the call Service.EvaluateHealth both populates the body (if necessary)
// Note that this variable is only used during the evaluation of an Endpoint's health.
// This means that the call Endpoint.EvaluateHealth both populates the body (if necessary)
// and sets it to nil after the evaluation has been completed.
body []byte
}
core/service.go (deleted, 300 lines)
@ -1,300 +0,0 @@
|
|||
package core
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/x509"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"io/ioutil"
|
||||
"net"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/TwiN/gatus/v3/alerting/alert"
|
||||
"github.com/TwiN/gatus/v3/client"
|
||||
"github.com/TwiN/gatus/v3/core/ui"
|
||||
"github.com/TwiN/gatus/v3/util"
|
||||
)
|
||||
|
||||
const (
|
||||
// HostHeader is the name of the header used to specify the host
|
||||
HostHeader = "Host"
|
||||
|
||||
// ContentTypeHeader is the name of the header used to specify the content type
|
||||
ContentTypeHeader = "Content-Type"
|
||||
|
||||
// UserAgentHeader is the name of the header used to specify the request's user agent
|
||||
UserAgentHeader = "User-Agent"
|
||||
|
||||
// GatusUserAgent is the default user agent that Gatus uses to send requests.
|
||||
GatusUserAgent = "Gatus/1.0"
|
||||
)
|
||||
|
||||
var (
|
||||
// ErrServiceWithNoCondition is the error with which Gatus will panic if a service is configured with no conditions
|
||||
ErrServiceWithNoCondition = errors.New("you must specify at least one condition per service")
|
||||
|
||||
// ErrServiceWithNoURL is the error with which Gatus will panic if a service is configured with no url
|
||||
ErrServiceWithNoURL = errors.New("you must specify an url for each service")
|
||||
|
||||
// ErrServiceWithNoName is the error with which Gatus will panic if a service is configured with no name
|
||||
ErrServiceWithNoName = errors.New("you must specify a name for each service")
|
||||
)
|
||||
|
||||
// Service is the configuration of a monitored endpoint
|
||||
// XXX: Rename this to Endpoint in v4.0.0?
|
||||
type Service struct {
|
||||
// Enabled defines whether to enable the service
|
||||
Enabled *bool `yaml:"enabled,omitempty"`
|
||||
|
||||
// Name of the service. Can be anything.
|
||||
Name string `yaml:"name"`
|
||||
|
||||
// Group the service is a part of. Used for grouping multiple services together on the front end.
|
||||
Group string `yaml:"group,omitempty"`
|
||||
|
||||
// URL to send the request to
|
||||
URL string `yaml:"url"`
|
||||
|
||||
// DNS is the configuration of DNS monitoring
|
||||
DNS *DNS `yaml:"dns,omitempty"`
|
||||
|
||||
// Method of the request made to the url of the service
|
||||
Method string `yaml:"method,omitempty"`
|
||||
|
||||
// Body of the request
|
||||
Body string `yaml:"body,omitempty"`
|
||||
|
||||
// GraphQL is whether to wrap the body in a query param ({"query":"$body"})
|
||||
GraphQL bool `yaml:"graphql,omitempty"`
|
||||
|
||||
// Headers of the request
|
||||
Headers map[string]string `yaml:"headers,omitempty"`
|
||||
|
||||
// Interval is the duration to wait between every status check
|
||||
Interval time.Duration `yaml:"interval,omitempty"`
|
||||
|
||||
// Conditions used to determine the health of the service
|
||||
Conditions []*Condition `yaml:"conditions"`
|
||||
|
||||
// Alerts is the alerting configuration for the service in case of failure
|
||||
Alerts []*alert.Alert `yaml:"alerts"`
|
||||
|
||||
// ClientConfig is the configuration of the client used to communicate with the service's target
|
||||
ClientConfig *client.Config `yaml:"client"`
|
||||
|
||||
// UIConfig is the configuration for the UI
|
||||
UIConfig *ui.Config `yaml:"ui"`
|
||||
|
||||
// NumberOfFailuresInARow is the number of unsuccessful evaluations in a row
|
||||
NumberOfFailuresInARow int
|
||||
|
||||
// NumberOfSuccessesInARow is the number of successful evaluations in a row
|
||||
NumberOfSuccessesInARow int
|
||||
}
|
||||
|
||||
// IsEnabled returns whether the service is enabled or not
|
||||
func (service Service) IsEnabled() bool {
|
||||
if service.Enabled == nil {
|
||||
return true
|
||||
}
|
||||
return *service.Enabled
|
||||
}
|
||||
|
||||
// ValidateAndSetDefaults validates the service's configuration and sets the default value of fields that have one
|
||||
func (service *Service) ValidateAndSetDefaults() error {
|
||||
// Set default values
|
||||
if service.ClientConfig == nil {
|
||||
service.ClientConfig = client.GetDefaultConfig()
|
||||
} else {
|
||||
service.ClientConfig.ValidateAndSetDefaults()
|
||||
}
|
||||
if service.UIConfig == nil {
|
||||
service.UIConfig = ui.GetDefaultConfig()
|
||||
}
|
||||
if service.Interval == 0 {
|
||||
service.Interval = 1 * time.Minute
|
||||
}
|
||||
if len(service.Method) == 0 {
|
||||
service.Method = http.MethodGet
|
||||
}
|
||||
if len(service.Headers) == 0 {
|
||||
service.Headers = make(map[string]string)
|
||||
}
|
||||
// Automatically add user agent header if there isn't one specified in the service configuration
|
||||
if _, userAgentHeaderExists := service.Headers[UserAgentHeader]; !userAgentHeaderExists {
|
||||
service.Headers[UserAgentHeader] = GatusUserAgent
|
||||
}
|
||||
// Automatically add "Content-Type: application/json" header if there's no Content-Type set
|
||||
// and service.GraphQL is set to true
|
||||
if _, contentTypeHeaderExists := service.Headers[ContentTypeHeader]; !contentTypeHeaderExists && service.GraphQL {
|
||||
service.Headers[ContentTypeHeader] = "application/json"
|
||||
}
|
||||
for _, serviceAlert := range service.Alerts {
|
||||
if serviceAlert.FailureThreshold <= 0 {
|
||||
serviceAlert.FailureThreshold = 3
|
||||
}
|
||||
if serviceAlert.SuccessThreshold <= 0 {
|
||||
serviceAlert.SuccessThreshold = 2
|
||||
}
|
||||
}
|
||||
if len(service.Name) == 0 {
|
||||
return ErrServiceWithNoName
|
||||
}
|
||||
if len(service.URL) == 0 {
|
||||
return ErrServiceWithNoURL
|
||||
}
|
||||
if len(service.Conditions) == 0 {
|
||||
return ErrServiceWithNoCondition
|
||||
}
|
||||
if service.DNS != nil {
|
||||
return service.DNS.validateAndSetDefault()
|
||||
}
|
||||
// Make sure that the request can be created
|
||||
_, err := http.NewRequest(service.Method, service.URL, bytes.NewBuffer([]byte(service.Body)))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Key returns the unique key for the Service
|
||||
func (service Service) Key() string {
|
||||
return util.ConvertGroupAndServiceToKey(service.Group, service.Name)
|
||||
}
|
||||
|
||||
// EvaluateHealth sends a request to the service's URL and evaluates the conditions of the service.
|
||||
func (service *Service) EvaluateHealth() *Result {
|
||||
result := &Result{Success: true, Errors: []string{}}
|
||||
service.getIP(result)
|
||||
if len(result.Errors) == 0 {
|
||||
service.call(result)
|
||||
} else {
|
||||
result.Success = false
|
||||
}
|
||||
for _, condition := range service.Conditions {
|
||||
success := condition.evaluate(result, service.UIConfig.DontResolveFailedConditions)
|
||||
if !success {
|
||||
result.Success = false
|
||||
}
|
||||
}
|
||||
result.Timestamp = time.Now()
|
||||
// No need to keep the body after the service has been evaluated
|
||||
result.body = nil
|
||||
// Clean up parameters that we don't need to keep in the results
|
||||
if service.UIConfig.HideHostname {
|
||||
result.Hostname = ""
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
func (service *Service) getIP(result *Result) {
|
||||
if service.DNS != nil {
|
||||
result.Hostname = strings.TrimSuffix(service.URL, ":53")
|
||||
} else {
|
||||
urlObject, err := url.Parse(service.URL)
|
||||
if err != nil {
|
||||
result.AddError(err.Error())
|
||||
return
|
||||
}
|
||||
result.Hostname = urlObject.Hostname()
|
||||
}
|
||||
ips, err := net.LookupIP(result.Hostname)
|
||||
if err != nil {
|
||||
result.AddError(err.Error())
|
||||
return
|
||||
}
|
||||
result.IP = ips[0].String()
|
||||
}
|
||||
|
||||
func (service *Service) call(result *Result) {
|
||||
var request *http.Request
|
||||
var response *http.Response
|
||||
var err error
|
||||
var certificate *x509.Certificate
|
||||
isServiceDNS := service.DNS != nil
|
||||
isServiceTCP := strings.HasPrefix(service.URL, "tcp://")
|
||||
isServiceICMP := strings.HasPrefix(service.URL, "icmp://")
|
||||
isServiceStartTLS := strings.HasPrefix(service.URL, "starttls://")
|
||||
isServiceTLS := strings.HasPrefix(service.URL, "tls://")
|
||||
isServiceHTTP := !isServiceDNS && !isServiceTCP && !isServiceICMP && !isServiceStartTLS && !isServiceTLS
|
||||
if isServiceHTTP {
|
||||
request = service.buildHTTPRequest()
|
||||
}
|
||||
startTime := time.Now()
|
||||
if isServiceDNS {
|
||||
service.DNS.query(service.URL, result)
|
||||
result.Duration = time.Since(startTime)
|
||||
} else if isServiceStartTLS || isServiceTLS {
|
||||
if isServiceStartTLS {
|
||||
result.Connected, certificate, err = client.CanPerformStartTLS(strings.TrimPrefix(service.URL, "starttls://"), service.ClientConfig)
|
||||
} else {
|
||||
result.Connected, certificate, err = client.CanPerformTLS(strings.TrimPrefix(service.URL, "tls://"), service.ClientConfig)
|
||||
}
|
||||
if err != nil {
|
||||
result.AddError(err.Error())
|
||||
return
|
||||
}
|
||||
result.Duration = time.Since(startTime)
|
||||
result.CertificateExpiration = time.Until(certificate.NotAfter)
|
||||
} else if isServiceTCP {
|
||||
result.Connected = client.CanCreateTCPConnection(strings.TrimPrefix(service.URL, "tcp://"), service.ClientConfig)
|
||||
result.Duration = time.Since(startTime)
|
||||
} else if isServiceICMP {
|
||||
result.Connected, result.Duration = client.Ping(strings.TrimPrefix(service.URL, "icmp://"), service.ClientConfig)
|
||||
} else {
|
||||
response, err = client.GetHTTPClient(service.ClientConfig).Do(request)
|
||||
result.Duration = time.Since(startTime)
|
||||
if err != nil {
|
||||
result.AddError(err.Error())
|
||||
return
|
||||
}
|
||||
defer response.Body.Close()
|
||||
if response.TLS != nil && len(response.TLS.PeerCertificates) > 0 {
|
||||
certificate = response.TLS.PeerCertificates[0]
|
||||
result.CertificateExpiration = time.Until(certificate.NotAfter)
|
||||
}
|
||||
result.HTTPStatus = response.StatusCode
|
||||
result.Connected = response.StatusCode > 0
|
||||
// Only read the body if there's a condition that uses the BodyPlaceholder
|
||||
if service.needsToReadBody() {
|
||||
result.body, err = ioutil.ReadAll(response.Body)
|
||||
if err != nil {
|
||||
result.AddError(err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (service *Service) buildHTTPRequest() *http.Request {
|
||||
var bodyBuffer *bytes.Buffer
|
||||
if service.GraphQL {
|
||||
graphQlBody := map[string]string{
|
||||
"query": service.Body,
|
||||
}
|
||||
body, _ := json.Marshal(graphQlBody)
|
||||
bodyBuffer = bytes.NewBuffer(body)
|
||||
} else {
|
||||
bodyBuffer = bytes.NewBuffer([]byte(service.Body))
|
||||
}
|
||||
request, _ := http.NewRequest(service.Method, service.URL, bodyBuffer)
|
||||
for k, v := range service.Headers {
|
||||
request.Header.Set(k, v)
|
||||
if k == HostHeader {
|
||||
request.Host = v
|
||||
}
|
||||
}
|
||||
return request
|
||||
}
|
||||
|
||||
// needsToReadBody checks if there's any conditions that requires the response body to be read
|
||||
func (service *Service) needsToReadBody() bool {
|
||||
for _, condition := range service.Conditions {
|
||||
if condition.hasBodyPlaceholder() {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
|
@ -1,38 +0,0 @@
|
|||
package core
|
||||
|
||||
// ServiceStatus contains the evaluation Results of a Service
|
||||
type ServiceStatus struct {
|
||||
// Name of the service
|
||||
Name string `json:"name,omitempty"`
|
||||
|
||||
// Group the service is a part of. Used for grouping multiple services together on the front end.
|
||||
Group string `json:"group,omitempty"`
|
||||
|
||||
// Key is the key representing the ServiceStatus
|
||||
Key string `json:"key"`
|
||||
|
||||
// Results is the list of service evaluation results
|
||||
Results []*Result `json:"results"`
|
||||
|
||||
// Events is a list of events
|
||||
Events []*Event `json:"events"`
|
||||
|
||||
// Uptime information on the service's uptime
|
||||
//
|
||||
// Used by the memory store.
|
||||
//
|
||||
// To retrieve the uptime between two time, use store.GetUptimeByKey.
|
||||
Uptime *Uptime `json:"-"`
|
||||
}
|
||||
|
||||
// NewServiceStatus creates a new ServiceStatus
|
||||
func NewServiceStatus(serviceKey, serviceGroup, serviceName string) *ServiceStatus {
|
||||
return &ServiceStatus{
|
||||
Name: serviceName,
|
||||
Group: serviceGroup,
|
||||
Key: serviceKey,
|
||||
Results: make([]*Result, 0),
|
||||
Events: make([]*Event, 0),
|
||||
Uptime: NewUptime(),
|
||||
}
|
||||
}
|
|
@ -1,19 +0,0 @@
|
|||
package core
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestNewServiceStatus(t *testing.T) {
|
||||
service := &Service{Name: "name", Group: "group"}
|
||||
serviceStatus := NewServiceStatus(service.Key(), service.Group, service.Name)
|
||||
if serviceStatus.Name != service.Name {
|
||||
t.Errorf("expected %s, got %s", service.Name, serviceStatus.Name)
|
||||
}
|
||||
if serviceStatus.Group != service.Group {
|
||||
t.Errorf("expected %s, got %s", service.Group, serviceStatus.Group)
|
||||
}
|
||||
if serviceStatus.Key != "group_name" {
|
||||
t.Errorf("expected %s, got %s", "group_name", serviceStatus.Key)
|
||||
}
|
||||
}
|
|
@@ -1,6 +1,6 @@
package ui

// Config is the UI configuration for services
// Config is the UI configuration for core.Endpoint
type Config struct {
// HideHostname whether to hide the hostname in the Result
HideHostname bool `yaml:"hide-hostname"`
@@ -1,11 +1,11 @@
# PagerDuty + Gatus Integration Benefits
- Notify on-call responders based on alerts sent from Gatus.
- Incidents will automatically resolve in PagerDuty when the service that caused the incident in Gatus returns to a healthy state.
- Incidents will automatically resolve in PagerDuty when the endpoint that caused the incident in Gatus returns to a healthy state.


# How it Works
- Services that do not meet the user-specified conditions and that are configured with alerts of type `pagerduty` will trigger a new incident on the corresponding PagerDuty service when the alert's defined `failure-threshold` has been reached.
- Once the unhealthy services have returned to a healthy state for the number of executions defined in `success-threshold`, the previously triggered incident will be automatically resolved.
- Endpoints that do not meet the user-specified conditions and that are configured with alerts of type `pagerduty` will trigger a new incident on the corresponding PagerDuty service when the alert's defined `failure-threshold` has been reached.
- Once the unhealthy endpoints have returned to a healthy state for the number of executions defined in `success-threshold`, the previously triggered incident will be automatically resolved.


# Requirements
@@ -36,9 +36,9 @@ alerting:
pagerduty:
integration-key: "********************************"
```
You can now add alerts of type `pagerduty` in the services you've defined, like so:
You can now add alerts of type `pagerduty` in the endpoints you've defined, like so:
```yaml
services:
endpoints:
- name: website
interval: 30s
url: "https://twin.sh/health"
@@ -56,12 +56,12 @@ services:
```

The sample above will do the following:
- Send a request to the `https://twin.sh/health` (`services[].url`) specified every **30s** (`services[].interval`)
- Evaluate the conditions to determine whether the service is "healthy" or not
- **If all conditions are not met 3 (`services[].alerts[].failure-threshold`) times in a row**: Gatus will create a new incident
- **If, after an incident has been triggered, all conditions are met 5 (`services[].alerts[].success-threshold`) times in a row _AND_ `services[].alerts[].send-on-resolved` is set to `true`**: Gatus will resolve the triggered incident
- Send a request to the `https://twin.sh/health` (`endpoints[].url`) specified every **30s** (`endpoints[].interval`)
- Evaluate the conditions to determine whether the endpoint is "healthy" or not
- **If all conditions are not met 3 (`endpoints[].alerts[].failure-threshold`) times in a row**: Gatus will create a new incident
- **If, after an incident has been triggered, all conditions are met 5 (`endpoints[].alerts[].success-threshold`) times in a row _AND_ `endpoints[].alerts[].send-on-resolved` is set to `true`**: Gatus will resolve the triggered incident

It is highly recommended to set `services[].alerts[].send-on-resolved` to true for alerts of type `pagerduty`.
It is highly recommended to set `endpoints[].alerts[].send-on-resolved` to true for alerts of type `pagerduty`.


# How to Uninstall
@ -1,5 +1,5 @@
|
|||
metrics: true
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
interval: 30s
|
||||
|
|
|
@ -58,9 +58,9 @@
|
|||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(rate(gatus_tasks[30s])) by (service)",
|
||||
"expr": "sum(rate(gatus_tasks[30s])) by (endpoint)",
|
||||
"interval": "30s",
|
||||
"legendFormat": "{{service}}",
|
||||
"legendFormat": "{{endpoint}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
|
@ -145,9 +145,9 @@
|
|||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(rate(gatus_tasks{success=\"false\"}[30s])) by (service)",
|
||||
"expr": "sum(rate(gatus_tasks{success=\"false\"}[30s])) by (endpoint)",
|
||||
"interval": "30s",
|
||||
"legendFormat": "{{service}}",
|
||||
"legendFormat": "{{endpoint}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
|
@ -232,10 +232,10 @@
|
|||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(rate(gatus_tasks{success=\"true\"}[30s])) by (service)",
|
||||
"expr": "sum(rate(gatus_tasks{success=\"true\"}[30s])) by (endpoint)",
|
||||
"instant": false,
|
||||
"interval": "30s",
|
||||
"legendFormat": "{{service}}",
|
||||
"legendFormat": "{{endpoint}}",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
|
|
|
@ -3,9 +3,9 @@ alerting:
|
|||
webhook-url: "http://mattermost:8065/hooks/tokengoeshere"
|
||||
insecure: true
|
||||
|
||||
services:
|
||||
endpoints:
|
||||
- name: example
|
||||
url: http://example.org
|
||||
url: https://example.org
|
||||
interval: 1m
|
||||
alerts:
|
||||
- type: mattermost
|
||||
|
|
|
@ -2,7 +2,7 @@ storage:
|
|||
type: postgres
|
||||
file: "postgres://username:password@postgres:5432/gatus?sslmode=disable"
|
||||
|
||||
services:
|
||||
endpoints:
|
||||
- name: back-end
|
||||
group: core
|
||||
url: "https://example.org/"
|
||||
|
|
|
@ -2,7 +2,7 @@ storage:
|
|||
type: sqlite
|
||||
file: /data/data.db
|
||||
|
||||
services:
|
||||
endpoints:
|
||||
- name: back-end
|
||||
group: core
|
||||
url: "https://example.org/"
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
services:
|
||||
endpoints:
|
||||
- name: example
|
||||
url: http://example.org
|
||||
url: https://example.org
|
||||
interval: 30s
|
||||
conditions:
|
||||
- "[STATUS] == 200"
|
||||
|
|
|
@ -1,8 +1,12 @@
|
|||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: gatus
|
||||
namespace: kube-system
|
||||
data:
|
||||
config.yaml: |
|
||||
metrics: true
|
||||
services:
|
||||
endpoints:
|
||||
- name: website
|
||||
url: https://twin.sh/health
|
||||
interval: 1m
|
||||
|
@ -27,10 +31,6 @@ data:
|
|||
url: https://example.com/
|
||||
conditions:
|
||||
- "[STATUS] == 200"
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: gatus
|
||||
namespace: kube-system
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
services:
|
||||
endpoints:
|
||||
- name: example
|
||||
url: http://example.org
|
||||
url: https://example.org
|
||||
interval: 30s
|
||||
conditions:
|
||||
- "[STATUS] == 200"
|
||||
|
|
|
@ -15,18 +15,19 @@ var (
|
|||
rwLock sync.RWMutex
|
||||
)
|
||||
|
||||
// PublishMetricsForService publishes metrics for the given service and its result.
|
||||
// PublishMetricsForEndpoint publishes metrics for the given endpoint and its result.
|
||||
// These metrics will be exposed at /metrics if the metrics are enabled
|
||||
func PublishMetricsForService(service *core.Service, result *core.Result) {
|
||||
func PublishMetricsForEndpoint(endpoint *core.Endpoint, result *core.Result) {
|
||||
rwLock.Lock()
|
||||
gauge, exists := gauges[fmt.Sprintf("%s_%s", service.Name, service.URL)]
|
||||
gauge, exists := gauges[fmt.Sprintf("%s_%s", endpoint.Name, endpoint.URL)]
|
||||
if !exists {
|
||||
gauge = promauto.NewGaugeVec(prometheus.GaugeOpts{
|
||||
Subsystem: "gatus",
|
||||
Name: "tasks",
|
||||
ConstLabels: prometheus.Labels{"service": service.Name, "url": service.URL},
|
||||
// TODO: remove the "service" key in v4.0.0, as it is only kept for backward compatibility
|
||||
ConstLabels: prometheus.Labels{"service": endpoint.Name, "endpoint": endpoint.Name, "url": endpoint.URL},
|
||||
}, []string{"status", "success"})
|
||||
gauges[fmt.Sprintf("%s_%s", service.Name, service.URL)] = gauge
|
||||
gauges[fmt.Sprintf("%s_%s", endpoint.Name, endpoint.URL)] = gauge
|
||||
}
|
||||
rwLock.Unlock()
|
||||
gauge.WithLabelValues(strconv.Itoa(result.HTTPStatus), strconv.FormatBool(result.Success)).Inc()
|
||||
|
|
|
@ -3,6 +3,6 @@ package common
|
|||
import "errors"
|
||||
|
||||
var (
|
||||
ErrServiceNotFound = errors.New("service not found") // When a service does not exist in the store
|
||||
ErrEndpointNotFound = errors.New("endpoint not found") // When an endpoint does not exist in the store
|
||||
ErrInvalidTimeRange = errors.New("'from' cannot be older than 'to'") // When an invalid time range is provided
|
||||
)
|
||||
|
|
|
@ -1,9 +1,9 @@
|
|||
package common
|
||||
|
||||
const (
|
||||
// MaximumNumberOfResults is the maximum number of results that a service can have
|
||||
// MaximumNumberOfResults is the maximum number of results that an endpoint can have
|
||||
MaximumNumberOfResults = 100
|
||||
|
||||
// MaximumNumberOfEvents is the maximum number of events that a service can have
|
||||
// MaximumNumberOfEvents is the maximum number of events that an endpoint can have
|
||||
MaximumNumberOfEvents = 50
|
||||
)
|
||||
|
|
|
@ -1,27 +1,27 @@
|
|||
package paging
|
||||
|
||||
// ServiceStatusParams represents all parameters that can be used for paging purposes
|
||||
type ServiceStatusParams struct {
|
||||
// EndpointStatusParams represents all parameters that can be used for paging purposes
|
||||
type EndpointStatusParams struct {
|
||||
EventsPage int // Number of the event page
|
||||
EventsPageSize int // Size of the event page
|
||||
ResultsPage int // Number of the result page
|
||||
ResultsPageSize int // Size of the result page
|
||||
}
|
||||
|
||||
// NewServiceStatusParams creates a new ServiceStatusParams
|
||||
func NewServiceStatusParams() *ServiceStatusParams {
|
||||
return &ServiceStatusParams{}
|
||||
// NewEndpointStatusParams creates a new EndpointStatusParams
|
||||
func NewEndpointStatusParams() *EndpointStatusParams {
|
||||
return &EndpointStatusParams{}
|
||||
}
|
||||
|
||||
// WithEvents sets the values for EventsPage and EventsPageSize
|
||||
func (params *ServiceStatusParams) WithEvents(page, pageSize int) *ServiceStatusParams {
|
||||
func (params *EndpointStatusParams) WithEvents(page, pageSize int) *EndpointStatusParams {
|
||||
params.EventsPage = page
|
||||
params.EventsPageSize = pageSize
|
||||
return params
|
||||
}
|
||||
|
||||
// WithResults sets the values for ResultsPage and ResultsPageSize
|
||||
func (params *ServiceStatusParams) WithResults(page, pageSize int) *ServiceStatusParams {
|
||||
func (params *EndpointStatusParams) WithResults(page, pageSize int) *EndpointStatusParams {
|
||||
params.ResultsPage = page
|
||||
params.ResultsPageSize = pageSize
|
||||
return params
|
||||
|
|
|
@ -2,10 +2,10 @@ package paging
|
|||
|
||||
import "testing"
|
||||
|
||||
func TestNewServiceStatusParams(t *testing.T) {
|
||||
func TestNewEndpointStatusParams(t *testing.T) {
|
||||
type Scenario struct {
|
||||
Name string
|
||||
Params *ServiceStatusParams
|
||||
Params *EndpointStatusParams
|
||||
ExpectedEventsPage int
|
||||
ExpectedEventsPageSize int
|
||||
ExpectedResultsPage int
|
||||
|
@ -14,7 +14,7 @@ func TestNewServiceStatusParams(t *testing.T) {
|
|||
scenarios := []Scenario{
|
||||
{
|
||||
Name: "empty-params",
|
||||
Params: NewServiceStatusParams(),
|
||||
Params: NewEndpointStatusParams(),
|
||||
ExpectedEventsPage: 0,
|
||||
ExpectedEventsPageSize: 0,
|
||||
ExpectedResultsPage: 0,
|
||||
|
@ -22,7 +22,7 @@ func TestNewServiceStatusParams(t *testing.T) {
|
|||
},
|
||||
{
|
||||
Name: "with-events-page-2-size-7",
|
||||
Params: NewServiceStatusParams().WithEvents(2, 7),
|
||||
Params: NewEndpointStatusParams().WithEvents(2, 7),
|
||||
ExpectedEventsPage: 2,
|
||||
ExpectedEventsPageSize: 7,
|
||||
ExpectedResultsPage: 0,
|
||||
|
@ -30,7 +30,7 @@ func TestNewServiceStatusParams(t *testing.T) {
|
|||
},
|
||||
{
|
||||
Name: "with-events-page-4-size-3-uptime",
|
||||
Params: NewServiceStatusParams().WithEvents(4, 3),
|
||||
Params: NewEndpointStatusParams().WithEvents(4, 3),
|
||||
ExpectedEventsPage: 4,
|
||||
ExpectedEventsPageSize: 3,
|
||||
ExpectedResultsPage: 0,
|
||||
|
@ -38,7 +38,7 @@ func TestNewServiceStatusParams(t *testing.T) {
|
|||
},
|
||||
{
|
||||
Name: "with-results-page-1-size-20-uptime",
|
||||
Params: NewServiceStatusParams().WithResults(1, 20),
|
||||
Params: NewEndpointStatusParams().WithResults(1, 20),
|
||||
ExpectedEventsPage: 0,
|
||||
ExpectedEventsPageSize: 0,
|
||||
ExpectedResultsPage: 1,
|
||||
|
@ -46,7 +46,7 @@ func TestNewServiceStatusParams(t *testing.T) {
|
|||
},
|
||||
{
|
||||
Name: "with-results-page-2-size-10-events-page-3-size-50",
|
||||
Params: NewServiceStatusParams().WithResults(2, 10).WithEvents(3, 50),
|
||||
Params: NewEndpointStatusParams().WithResults(2, 10).WithEvents(3, 50),
|
||||
ExpectedEventsPage: 3,
|
||||
ExpectedEventsPageSize: 50,
|
||||
ExpectedResultsPage: 2,
|
||||
|
|
|
@ -14,7 +14,7 @@ import (
|
|||
)
|
||||
|
||||
func init() {
|
||||
gob.Register(&core.ServiceStatus{})
|
||||
gob.Register(&core.EndpointStatus{})
|
||||
gob.Register(&core.HourlyUptimeStatistics{})
|
||||
gob.Register(&core.Uptime{})
|
||||
gob.Register(&core.Result{})
|
||||
|
@ -46,32 +46,32 @@ func NewStore(file string) (*Store, error) {
|
|||
return store, nil
|
||||
}
|
||||
|
||||
// GetAllServiceStatuses returns all monitored core.ServiceStatus
|
||||
// GetAllEndpointStatuses returns all monitored core.EndpointStatus
|
||||
// with a subset of core.Result defined by the page and pageSize parameters
|
||||
func (s *Store) GetAllServiceStatuses(params *paging.ServiceStatusParams) ([]*core.ServiceStatus, error) {
|
||||
serviceStatuses := s.cache.GetAll()
|
||||
pagedServiceStatuses := make([]*core.ServiceStatus, 0, len(serviceStatuses))
|
||||
for _, v := range serviceStatuses {
|
||||
pagedServiceStatuses = append(pagedServiceStatuses, ShallowCopyServiceStatus(v.(*core.ServiceStatus), params))
|
||||
func (s *Store) GetAllEndpointStatuses(params *paging.EndpointStatusParams) ([]*core.EndpointStatus, error) {
|
||||
endpointStatuses := s.cache.GetAll()
|
||||
pagedEndpointStatuses := make([]*core.EndpointStatus, 0, len(endpointStatuses))
|
||||
for _, v := range endpointStatuses {
|
||||
pagedEndpointStatuses = append(pagedEndpointStatuses, ShallowCopyEndpointStatus(v.(*core.EndpointStatus), params))
|
||||
}
|
||||
sort.Slice(pagedServiceStatuses, func(i, j int) bool {
|
||||
return pagedServiceStatuses[i].Key < pagedServiceStatuses[j].Key
|
||||
sort.Slice(pagedEndpointStatuses, func(i, j int) bool {
|
||||
return pagedEndpointStatuses[i].Key < pagedEndpointStatuses[j].Key
|
||||
})
|
||||
return pagedServiceStatuses, nil
|
||||
return pagedEndpointStatuses, nil
|
||||
}
|
||||
|
||||
// GetServiceStatus returns the service status for a given service name in the given group
|
||||
func (s *Store) GetServiceStatus(groupName, serviceName string, params *paging.ServiceStatusParams) (*core.ServiceStatus, error) {
|
||||
return s.GetServiceStatusByKey(util.ConvertGroupAndServiceToKey(groupName, serviceName), params)
|
||||
// GetEndpointStatus returns the endpoint status for a given endpoint name in the given group
|
||||
func (s *Store) GetEndpointStatus(groupName, endpointName string, params *paging.EndpointStatusParams) (*core.EndpointStatus, error) {
|
||||
return s.GetEndpointStatusByKey(util.ConvertGroupAndEndpointNameToKey(groupName, endpointName), params)
|
||||
}
|
||||
|
||||
// GetServiceStatusByKey returns the service status for a given key
|
||||
func (s *Store) GetServiceStatusByKey(key string, params *paging.ServiceStatusParams) (*core.ServiceStatus, error) {
|
||||
serviceStatus := s.cache.GetValue(key)
|
||||
if serviceStatus == nil {
|
||||
return nil, common.ErrServiceNotFound
|
||||
// GetEndpointStatusByKey returns the endpoint status for a given key
|
||||
func (s *Store) GetEndpointStatusByKey(key string, params *paging.EndpointStatusParams) (*core.EndpointStatus, error) {
|
||||
endpointStatus := s.cache.GetValue(key)
|
||||
if endpointStatus == nil {
|
||||
return nil, common.ErrEndpointNotFound
|
||||
}
|
||||
return ShallowCopyServiceStatus(serviceStatus.(*core.ServiceStatus), params), nil
|
||||
return ShallowCopyEndpointStatus(endpointStatus.(*core.EndpointStatus), params), nil
|
||||
}
|
||||
|
||||
// GetUptimeByKey returns the uptime percentage during a time range
|
||||
|
@ -79,16 +79,16 @@ func (s *Store) GetUptimeByKey(key string, from, to time.Time) (float64, error)
|
|||
if from.After(to) {
|
||||
return 0, common.ErrInvalidTimeRange
|
||||
}
|
||||
serviceStatus := s.cache.GetValue(key)
|
||||
if serviceStatus == nil || serviceStatus.(*core.ServiceStatus).Uptime == nil {
|
||||
return 0, common.ErrServiceNotFound
|
||||
endpointStatus := s.cache.GetValue(key)
|
||||
if endpointStatus == nil || endpointStatus.(*core.EndpointStatus).Uptime == nil {
|
||||
return 0, common.ErrEndpointNotFound
|
||||
}
|
||||
successfulExecutions := uint64(0)
|
||||
totalExecutions := uint64(0)
|
||||
current := from
|
||||
for to.Sub(current) >= 0 {
|
||||
hourlyUnixTimestamp := current.Truncate(time.Hour).Unix()
|
||||
hourlyStats := serviceStatus.(*core.ServiceStatus).Uptime.HourlyStatistics[hourlyUnixTimestamp]
|
||||
hourlyStats := endpointStatus.(*core.EndpointStatus).Uptime.HourlyStatistics[hourlyUnixTimestamp]
|
||||
if hourlyStats == nil || hourlyStats.TotalExecutions == 0 {
|
||||
current = current.Add(time.Hour)
|
||||
continue
|
||||
|
@ -108,15 +108,15 @@ func (s *Store) GetAverageResponseTimeByKey(key string, from, to time.Time) (int
|
|||
if from.After(to) {
|
||||
return 0, common.ErrInvalidTimeRange
|
||||
}
|
||||
serviceStatus := s.cache.GetValue(key)
|
||||
if serviceStatus == nil || serviceStatus.(*core.ServiceStatus).Uptime == nil {
|
||||
return 0, common.ErrServiceNotFound
|
||||
endpointStatus := s.cache.GetValue(key)
|
||||
if endpointStatus == nil || endpointStatus.(*core.EndpointStatus).Uptime == nil {
|
||||
return 0, common.ErrEndpointNotFound
|
||||
}
|
||||
current := from
|
||||
var totalExecutions, totalResponseTime uint64
|
||||
for to.Sub(current) >= 0 {
|
||||
hourlyUnixTimestamp := current.Truncate(time.Hour).Unix()
|
||||
hourlyStats := serviceStatus.(*core.ServiceStatus).Uptime.HourlyStatistics[hourlyUnixTimestamp]
|
||||
hourlyStats := endpointStatus.(*core.EndpointStatus).Uptime.HourlyStatistics[hourlyUnixTimestamp]
|
||||
if hourlyStats == nil || hourlyStats.TotalExecutions == 0 {
|
||||
current = current.Add(time.Hour)
|
||||
continue
|
||||
|
@ -136,15 +136,15 @@ func (s *Store) GetHourlyAverageResponseTimeByKey(key string, from, to time.Time
|
|||
if from.After(to) {
|
||||
return nil, common.ErrInvalidTimeRange
|
||||
}
|
||||
serviceStatus := s.cache.GetValue(key)
|
||||
if serviceStatus == nil || serviceStatus.(*core.ServiceStatus).Uptime == nil {
|
||||
return nil, common.ErrServiceNotFound
|
||||
endpointStatus := s.cache.GetValue(key)
|
||||
if endpointStatus == nil || endpointStatus.(*core.EndpointStatus).Uptime == nil {
|
||||
return nil, common.ErrEndpointNotFound
|
||||
}
|
||||
hourlyAverageResponseTimes := make(map[int64]int)
|
||||
current := from
|
||||
for to.Sub(current) >= 0 {
|
||||
hourlyUnixTimestamp := current.Truncate(time.Hour).Unix()
|
||||
hourlyStats := serviceStatus.(*core.ServiceStatus).Uptime.HourlyStatistics[hourlyUnixTimestamp]
|
||||
hourlyStats := endpointStatus.(*core.EndpointStatus).Uptime.HourlyStatistics[hourlyUnixTimestamp]
|
||||
if hourlyStats == nil || hourlyStats.TotalExecutions == 0 {
|
||||
current = current.Add(time.Hour)
|
||||
continue
|
||||
|
@ -155,26 +155,26 @@ func (s *Store) GetHourlyAverageResponseTimeByKey(key string, from, to time.Time
|
|||
return hourlyAverageResponseTimes, nil
|
||||
}
|
||||
|
||||
// Insert adds the observed result for the specified service into the store
|
||||
func (s *Store) Insert(service *core.Service, result *core.Result) error {
|
||||
key := service.Key()
|
||||
// Insert adds the observed result for the specified endpoint into the store
|
||||
func (s *Store) Insert(endpoint *core.Endpoint, result *core.Result) error {
|
||||
key := endpoint.Key()
|
||||
s.Lock()
|
||||
serviceStatus, exists := s.cache.Get(key)
|
||||
status, exists := s.cache.Get(key)
|
||||
if !exists {
|
||||
serviceStatus = core.NewServiceStatus(key, service.Group, service.Name)
|
||||
serviceStatus.(*core.ServiceStatus).Events = append(serviceStatus.(*core.ServiceStatus).Events, &core.Event{
|
||||
status = core.NewEndpointStatus(endpoint.Group, endpoint.Name)
|
||||
status.(*core.EndpointStatus).Events = append(status.(*core.EndpointStatus).Events, &core.Event{
|
||||
Type: core.EventStart,
|
||||
Timestamp: time.Now(),
|
||||
})
|
||||
}
|
||||
AddResult(serviceStatus.(*core.ServiceStatus), result)
|
||||
s.cache.Set(key, serviceStatus)
|
||||
AddResult(status.(*core.EndpointStatus), result)
|
||||
s.cache.Set(key, status)
|
||||
s.Unlock()
|
||||
return nil
|
||||
}
|
||||
|
||||
// DeleteAllServiceStatusesNotInKeys removes all ServiceStatus that are not within the keys provided
|
||||
func (s *Store) DeleteAllServiceStatusesNotInKeys(keys []string) int {
|
||||
// DeleteAllEndpointStatusesNotInKeys removes all EndpointStatus that are not within the keys provided
|
||||
func (s *Store) DeleteAllEndpointStatusesNotInKeys(keys []string) int {
|
||||
var keysToDelete []string
|
||||
for _, existingKey := range s.cache.GetKeysByPattern("*", 0) {
|
||||
shouldDelete := true
|
||||
|
|
|
@ -15,7 +15,7 @@ var (
|
|||
|
||||
now = time.Now()
|
||||
|
||||
testService = core.Service{
|
||||
testEndpoint = core.Endpoint{
|
||||
Name: "name",
|
||||
Group: "group",
|
||||
URL: "https://example.org/what/ever",
|
||||
|
@ -84,39 +84,39 @@ var (
|
|||
func TestStore_SanityCheck(t *testing.T) {
|
||||
store, _ := NewStore("")
|
||||
defer store.Close()
|
||||
store.Insert(&testService, &testSuccessfulResult)
|
||||
serviceStatuses, _ := store.GetAllServiceStatuses(paging.NewServiceStatusParams())
|
||||
if numberOfServiceStatuses := len(serviceStatuses); numberOfServiceStatuses != 1 {
|
||||
t.Fatalf("expected 1 ServiceStatus, got %d", numberOfServiceStatuses)
|
||||
store.Insert(&testEndpoint, &testSuccessfulResult)
|
||||
endpointStatuses, _ := store.GetAllEndpointStatuses(paging.NewEndpointStatusParams())
|
||||
if numberOfEndpointStatuses := len(endpointStatuses); numberOfEndpointStatuses != 1 {
|
||||
t.Fatalf("expected 1 EndpointStatus, got %d", numberOfEndpointStatuses)
|
||||
}
|
||||
store.Insert(&testService, &testUnsuccessfulResult)
|
||||
// Both results inserted are for the same service, therefore, the count shouldn't have increased
|
||||
serviceStatuses, _ = store.GetAllServiceStatuses(paging.NewServiceStatusParams())
|
||||
if numberOfServiceStatuses := len(serviceStatuses); numberOfServiceStatuses != 1 {
|
||||
t.Fatalf("expected 1 ServiceStatus, got %d", numberOfServiceStatuses)
|
||||
store.Insert(&testEndpoint, &testUnsuccessfulResult)
|
||||
// Both results inserted are for the same endpoint, therefore, the count shouldn't have increased
|
||||
endpointStatuses, _ = store.GetAllEndpointStatuses(paging.NewEndpointStatusParams())
|
||||
if numberOfEndpointStatuses := len(endpointStatuses); numberOfEndpointStatuses != 1 {
|
||||
t.Fatalf("expected 1 EndpointStatus, got %d", numberOfEndpointStatuses)
|
||||
}
|
||||
if hourlyAverageResponseTime, err := store.GetHourlyAverageResponseTimeByKey(testService.Key(), time.Now().Add(-24*time.Hour), time.Now()); err != nil {
|
||||
if hourlyAverageResponseTime, err := store.GetHourlyAverageResponseTimeByKey(testEndpoint.Key(), time.Now().Add(-24*time.Hour), time.Now()); err != nil {
|
||||
t.Errorf("expected no error, got %v", err)
|
||||
} else if len(hourlyAverageResponseTime) != 1 {
|
||||
t.Errorf("expected 1 hour to have had a result in the past 24 hours, got %d", len(hourlyAverageResponseTime))
|
||||
}
|
||||
if uptime, _ := store.GetUptimeByKey(testService.Key(), time.Now().Add(-24*time.Hour), time.Now()); uptime != 0.5 {
|
||||
if uptime, _ := store.GetUptimeByKey(testEndpoint.Key(), time.Now().Add(-24*time.Hour), time.Now()); uptime != 0.5 {
|
||||
t.Errorf("expected uptime of last 24h to be 0.5, got %f", uptime)
|
||||
}
|
||||
if averageResponseTime, _ := store.GetAverageResponseTimeByKey(testService.Key(), time.Now().Add(-24*time.Hour), time.Now()); averageResponseTime != 450 {
|
||||
if averageResponseTime, _ := store.GetAverageResponseTimeByKey(testEndpoint.Key(), time.Now().Add(-24*time.Hour), time.Now()); averageResponseTime != 450 {
|
||||
t.Errorf("expected average response time of last 24h to be 450, got %d", averageResponseTime)
|
||||
}
|
||||
ss, _ := store.GetServiceStatus(testService.Group, testService.Name, paging.NewServiceStatusParams().WithResults(1, 20).WithEvents(1, 20))
|
||||
ss, _ := store.GetEndpointStatus(testEndpoint.Group, testEndpoint.Name, paging.NewEndpointStatusParams().WithResults(1, 20).WithEvents(1, 20))
|
||||
if ss == nil {
|
||||
t.Fatalf("Store should've had key '%s', but didn't", testService.Key())
|
||||
t.Fatalf("Store should've had key '%s', but didn't", testEndpoint.Key())
|
||||
}
|
||||
if len(ss.Events) != 3 {
|
||||
t.Errorf("Service '%s' should've had 3 events, got %d", ss.Name, len(ss.Events))
|
||||
t.Errorf("Endpoint '%s' should've had 3 events, got %d", ss.Name, len(ss.Events))
|
||||
}
|
||||
if len(ss.Results) != 2 {
|
||||
t.Errorf("Service '%s' should've had 2 results, got %d", ss.Name, len(ss.Results))
|
||||
t.Errorf("Endpoint '%s' should've had 2 results, got %d", ss.Name, len(ss.Results))
|
||||
}
|
||||
if deleted := store.DeleteAllServiceStatusesNotInKeys([]string{}); deleted != 1 {
|
||||
if deleted := store.DeleteAllEndpointStatusesNotInKeys([]string{}); deleted != 1 {
|
||||
t.Errorf("%d entries should've been deleted, got %d", 1, deleted)
|
||||
}
|
||||
}
|
||||
|
|
|
@ -19,7 +19,7 @@ func BenchmarkProcessUptimeAfterResult(b *testing.B) {
|
|||
Success: n%15 == 0,
|
||||
Timestamp: timestamp,
|
||||
})
|
||||
// Simulate service with an interval of 3 minutes
|
||||
// Simulate an endpoint with an interval of 3 minutes
|
||||
timestamp = timestamp.Add(3 * time.Minute)
|
||||
}
|
||||
b.ReportAllocs()
|
||||
|
|
|
@ -8,9 +8,9 @@ import (
|
|||
)
|
||||
|
||||
func TestProcessUptimeAfterResult(t *testing.T) {
|
||||
service := &core.Service{Name: "name", Group: "group"}
|
||||
serviceStatus := core.NewServiceStatus(service.Key(), service.Group, service.Name)
|
||||
uptime := serviceStatus.Uptime
|
||||
endpoint := &core.Endpoint{Name: "name", Group: "group"}
|
||||
status := core.NewEndpointStatus(endpoint.Group, endpoint.Name)
|
||||
uptime := status.Uptime
|
||||
|
||||
now := time.Now()
|
||||
now = time.Date(now.Year(), now.Month(), now.Day(), now.Hour(), 0, 0, 0, now.Location())
|
||||
|
@ -43,18 +43,18 @@ func TestProcessUptimeAfterResult(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestAddResultUptimeIsCleaningUpAfterItself(t *testing.T) {
|
||||
service := &core.Service{Name: "name", Group: "group"}
|
||||
serviceStatus := core.NewServiceStatus(service.Key(), service.Group, service.Name)
|
||||
endpoint := &core.Endpoint{Name: "name", Group: "group"}
|
||||
status := core.NewEndpointStatus(endpoint.Group, endpoint.Name)
|
||||
now := time.Now()
|
||||
now = time.Date(now.Year(), now.Month(), now.Day(), now.Hour(), 0, 0, 0, now.Location())
|
||||
// Start 12 days ago
|
||||
timestamp := now.Add(-12 * 24 * time.Hour)
|
||||
for timestamp.Unix() <= now.Unix() {
|
||||
AddResult(serviceStatus, &core.Result{Timestamp: timestamp, Success: true})
|
||||
if len(serviceStatus.Uptime.HourlyStatistics) > numberOfHoursInTenDays {
|
||||
t.Errorf("At no point in time should there be more than %d entries in serviceStatus.SuccessfulExecutionsPerHour, but there are %d", numberOfHoursInTenDays, len(serviceStatus.Uptime.HourlyStatistics))
|
||||
AddResult(status, &core.Result{Timestamp: timestamp, Success: true})
|
||||
if len(status.Uptime.HourlyStatistics) > numberOfHoursInTenDays {
|
||||
t.Errorf("At no point in time should there be more than %d entries in status.SuccessfulExecutionsPerHour, but there are %d", numberOfHoursInTenDays, len(status.Uptime.HourlyStatistics))
|
||||
}
|
||||
// Simulate service with an interval of 3 minutes
|
||||
// Simulate endpoint with an interval of 3 minutes
|
||||
timestamp = timestamp.Add(3 * time.Minute)
|
||||
}
|
||||
}
|
||||
|
|
|
@ -6,10 +6,10 @@ import (
|
|||
"github.com/TwiN/gatus/v3/storage/store/common/paging"
|
||||
)
|
||||
|
||||
// ShallowCopyServiceStatus returns a shallow copy of a ServiceStatus with only the results
|
||||
// ShallowCopyEndpointStatus returns a shallow copy of a EndpointStatus with only the results
|
||||
// within the range defined by the page and pageSize parameters
|
||||
func ShallowCopyServiceStatus(ss *core.ServiceStatus, params *paging.ServiceStatusParams) *core.ServiceStatus {
|
||||
shallowCopy := &core.ServiceStatus{
|
||||
func ShallowCopyEndpointStatus(ss *core.EndpointStatus, params *paging.EndpointStatusParams) *core.EndpointStatus {
|
||||
shallowCopy := &core.EndpointStatus{
|
||||
Name: ss.Name,
|
||||
Group: ss.Group,
|
||||
Key: ss.Key,
|
||||
|
@ -49,9 +49,9 @@ func getStartAndEndIndex(numberOfResults int, page, pageSize int) (int, int) {
|
|||
return start, end
|
||||
}
|
||||
|
||||
// AddResult adds a Result to ServiceStatus.Results and makes sure that there are
|
||||
// AddResult adds a Result to EndpointStatus.Results and makes sure that there are
|
||||
// no more than MaximumNumberOfResults results in the Results slice
|
||||
func AddResult(ss *core.ServiceStatus, result *core.Result) {
|
||||
func AddResult(ss *core.EndpointStatus, result *core.Result) {
|
||||
if ss == nil {
|
||||
return
|
||||
}
|
||||
|
|
|
@ -8,14 +8,14 @@ import (
|
|||
"github.com/TwiN/gatus/v3/storage/store/common/paging"
|
||||
)
|
||||
|
||||
func BenchmarkShallowCopyServiceStatus(b *testing.B) {
|
||||
service := &testService
|
||||
serviceStatus := core.NewServiceStatus(service.Key(), service.Group, service.Name)
|
||||
func BenchmarkShallowCopyEndpointStatus(b *testing.B) {
|
||||
endpoint := &testEndpoint
|
||||
status := core.NewEndpointStatus(endpoint.Group, endpoint.Name)
|
||||
for i := 0; i < common.MaximumNumberOfResults; i++ {
|
||||
AddResult(serviceStatus, &testSuccessfulResult)
|
||||
AddResult(status, &testSuccessfulResult)
|
||||
}
|
||||
for n := 0; n < b.N; n++ {
|
||||
ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(1, 20))
|
||||
ShallowCopyEndpointStatus(status, paging.NewEndpointStatusParams().WithResults(1, 20))
|
||||
}
|
||||
b.ReportAllocs()
|
||||
}
|
||||
|
|
|
@ -10,57 +10,57 @@ import (
|
|||
)
|
||||
|
||||
func TestAddResult(t *testing.T) {
|
||||
service := &core.Service{Name: "name", Group: "group"}
|
||||
serviceStatus := core.NewServiceStatus(service.Key(), service.Group, service.Name)
|
||||
endpoint := &core.Endpoint{Name: "name", Group: "group"}
|
||||
endpointStatus := core.NewEndpointStatus(endpoint.Group, endpoint.Name)
|
||||
for i := 0; i < (common.MaximumNumberOfResults+common.MaximumNumberOfEvents)*2; i++ {
|
||||
AddResult(serviceStatus, &core.Result{Success: i%2 == 0, Timestamp: time.Now()})
|
||||
AddResult(endpointStatus, &core.Result{Success: i%2 == 0, Timestamp: time.Now()})
|
||||
}
|
||||
if len(serviceStatus.Results) != common.MaximumNumberOfResults {
|
||||
t.Errorf("expected serviceStatus.Results to not exceed a length of %d", common.MaximumNumberOfResults)
|
||||
if len(endpointStatus.Results) != common.MaximumNumberOfResults {
|
||||
t.Errorf("expected endpointStatus.Results to not exceed a length of %d", common.MaximumNumberOfResults)
|
||||
}
|
||||
if len(serviceStatus.Events) != common.MaximumNumberOfEvents {
|
||||
t.Errorf("expected serviceStatus.Events to not exceed a length of %d", common.MaximumNumberOfEvents)
|
||||
if len(endpointStatus.Events) != common.MaximumNumberOfEvents {
|
||||
t.Errorf("expected endpointStatus.Events to not exceed a length of %d", common.MaximumNumberOfEvents)
|
||||
}
|
||||
// Try to add nil serviceStatus
|
||||
// Try to add nil endpointStatus
|
||||
AddResult(nil, &core.Result{Timestamp: time.Now()})
|
||||
}
|
||||
|
||||
func TestShallowCopyServiceStatus(t *testing.T) {
|
||||
service := &core.Service{Name: "name", Group: "group"}
|
||||
serviceStatus := core.NewServiceStatus(service.Key(), service.Group, service.Name)
|
||||
func TestShallowCopyEndpointStatus(t *testing.T) {
|
||||
endpoint := &core.Endpoint{Name: "name", Group: "group"}
|
||||
endpointStatus := core.NewEndpointStatus(endpoint.Group, endpoint.Name)
|
||||
ts := time.Now().Add(-25 * time.Hour)
|
||||
for i := 0; i < 25; i++ {
|
||||
AddResult(serviceStatus, &core.Result{Success: i%2 == 0, Timestamp: ts})
|
||||
AddResult(endpointStatus, &core.Result{Success: i%2 == 0, Timestamp: ts})
|
||||
ts = ts.Add(time.Hour)
|
||||
}
|
||||
if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(-1, -1)).Results) != 0 {
|
||||
if len(ShallowCopyEndpointStatus(endpointStatus, paging.NewEndpointStatusParams().WithResults(-1, -1)).Results) != 0 {
|
||||
t.Error("expected to have 0 result")
|
||||
}
|
||||
if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(1, 1)).Results) != 1 {
|
||||
if len(ShallowCopyEndpointStatus(endpointStatus, paging.NewEndpointStatusParams().WithResults(1, 1)).Results) != 1 {
|
||||
t.Error("expected to have 1 result")
|
||||
}
|
||||
if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(5, 0)).Results) != 0 {
|
||||
if len(ShallowCopyEndpointStatus(endpointStatus, paging.NewEndpointStatusParams().WithResults(5, 0)).Results) != 0 {
|
||||
t.Error("expected to have 0 results")
|
||||
}
|
||||
if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(-1, 20)).Results) != 0 {
|
||||
if len(ShallowCopyEndpointStatus(endpointStatus, paging.NewEndpointStatusParams().WithResults(-1, 20)).Results) != 0 {
|
||||
t.Error("expected to have 0 result, because the page was invalid")
|
||||
}
|
||||
if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(1, -1)).Results) != 0 {
|
||||
if len(ShallowCopyEndpointStatus(endpointStatus, paging.NewEndpointStatusParams().WithResults(1, -1)).Results) != 0 {
|
||||
t.Error("expected to have 0 result, because the page size was invalid")
|
||||
}
|
||||
if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(1, 10)).Results) != 10 {
|
||||
if len(ShallowCopyEndpointStatus(endpointStatus, paging.NewEndpointStatusParams().WithResults(1, 10)).Results) != 10 {
|
||||
t.Error("expected to have 10 results, because given a page size of 10, page 1 should have 10 elements")
|
||||
}
|
||||
if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(2, 10)).Results) != 10 {
|
||||
if len(ShallowCopyEndpointStatus(endpointStatus, paging.NewEndpointStatusParams().WithResults(2, 10)).Results) != 10 {
|
||||
t.Error("expected to have 10 results, because given a page size of 10, page 2 should have 10 elements")
|
||||
}
|
||||
if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(3, 10)).Results) != 5 {
|
||||
if len(ShallowCopyEndpointStatus(endpointStatus, paging.NewEndpointStatusParams().WithResults(3, 10)).Results) != 5 {
|
||||
t.Error("expected to have 5 results, because given a page size of 10, page 3 should have 5 elements")
|
||||
}
|
||||
if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(4, 10)).Results) != 0 {
|
||||
if len(ShallowCopyEndpointStatus(endpointStatus, paging.NewEndpointStatusParams().WithResults(4, 10)).Results) != 0 {
|
||||
t.Error("expected to have 0 results, because given a page size of 10, page 4 should have 0 elements")
|
||||
}
|
||||
if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(1, 50)).Results) != 25 {
|
||||
if len(ShallowCopyEndpointStatus(endpointStatus, paging.NewEndpointStatusParams().WithResults(1, 50)).Results) != 25 {
|
||||
t.Error("expected to have 25 results, because there's only 25 results")
|
||||
}
|
||||
}
|
||||
|
|
|
@ -2,21 +2,21 @@ package sql
|
|||
|
||||
func (s *Store) createPostgresSchema() error {
|
||||
_, err := s.db.Exec(`
|
||||
CREATE TABLE IF NOT EXISTS service (
|
||||
service_id BIGSERIAL PRIMARY KEY,
|
||||
service_key TEXT UNIQUE,
|
||||
service_name TEXT,
|
||||
service_group TEXT,
|
||||
UNIQUE(service_name, service_group)
|
||||
CREATE TABLE IF NOT EXISTS endpoints (
|
||||
endpoint_id BIGSERIAL PRIMARY KEY,
|
||||
endpoint_key TEXT UNIQUE,
|
||||
endpoint_name TEXT,
|
||||
endpoint_group TEXT,
|
||||
UNIQUE(endpoint_name, endpoint_group)
|
||||
)
|
||||
`)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = s.db.Exec(`
|
||||
CREATE TABLE IF NOT EXISTS service_event (
|
||||
service_event_id BIGSERIAL PRIMARY KEY,
|
||||
service_id INTEGER REFERENCES service(service_id) ON DELETE CASCADE,
|
||||
CREATE TABLE IF NOT EXISTS endpoint_events (
|
||||
endpoint_event_id BIGSERIAL PRIMARY KEY,
|
||||
endpoint_id INTEGER REFERENCES endpoints(endpoint_id) ON DELETE CASCADE,
|
||||
event_type TEXT,
|
||||
event_timestamp TIMESTAMP
|
||||
)
|
||||
|
@ -25,9 +25,9 @@ func (s *Store) createPostgresSchema() error {
|
|||
return err
|
||||
}
|
||||
_, err = s.db.Exec(`
|
||||
CREATE TABLE IF NOT EXISTS service_result (
|
||||
service_result_id BIGSERIAL PRIMARY KEY,
|
||||
service_id BIGINT REFERENCES service(service_id) ON DELETE CASCADE,
|
||||
CREATE TABLE IF NOT EXISTS endpoint_results (
|
||||
endpoint_result_id BIGSERIAL PRIMARY KEY,
|
||||
endpoint_id BIGINT REFERENCES endpoints(endpoint_id) ON DELETE CASCADE,
|
||||
success BOOLEAN,
|
||||
errors TEXT,
|
||||
connected BOOLEAN,
|
||||
|
@ -44,9 +44,9 @@ func (s *Store) createPostgresSchema() error {
|
|||
return err
|
||||
}
|
||||
_, err = s.db.Exec(`
|
||||
CREATE TABLE IF NOT EXISTS service_result_condition (
|
||||
service_result_condition_id BIGSERIAL PRIMARY KEY,
|
||||
service_result_id BIGINT REFERENCES service_result(service_result_id) ON DELETE CASCADE,
|
||||
CREATE TABLE IF NOT EXISTS endpoint_result_conditions (
|
||||
endpoint_result_condition_id BIGSERIAL PRIMARY KEY,
|
||||
endpoint_result_id BIGINT REFERENCES endpoint_results(endpoint_result_id) ON DELETE CASCADE,
|
||||
condition TEXT,
|
||||
success BOOLEAN
|
||||
)
|
||||
|
@ -55,14 +55,14 @@ func (s *Store) createPostgresSchema() error {
|
|||
return err
|
||||
}
|
||||
_, err = s.db.Exec(`
|
||||
CREATE TABLE IF NOT EXISTS service_uptime (
|
||||
service_uptime_id BIGSERIAL PRIMARY KEY,
|
||||
service_id BIGINT REFERENCES service(service_id) ON DELETE CASCADE,
|
||||
CREATE TABLE IF NOT EXISTS endpoint_uptimes (
|
||||
endpoint_uptime_id BIGSERIAL PRIMARY KEY,
|
||||
endpoint_id BIGINT REFERENCES endpoints(endpoint_id) ON DELETE CASCADE,
|
||||
hour_unix_timestamp BIGINT,
|
||||
total_executions BIGINT,
|
||||
successful_executions BIGINT,
|
||||
total_response_time BIGINT,
|
||||
UNIQUE(service_id, hour_unix_timestamp)
|
||||
UNIQUE(endpoint_id, hour_unix_timestamp)
|
||||
)
|
||||
`)
|
||||
return err
|
||||
|
|
|
@ -2,21 +2,21 @@ package sql
|
|||
|
||||
func (s *Store) createSQLiteSchema() error {
|
||||
_, err := s.db.Exec(`
|
||||
CREATE TABLE IF NOT EXISTS service (
|
||||
service_id INTEGER PRIMARY KEY,
|
||||
service_key TEXT UNIQUE,
|
||||
service_name TEXT,
|
||||
service_group TEXT,
|
||||
UNIQUE(service_name, service_group)
|
||||
CREATE TABLE IF NOT EXISTS endpoints (
|
||||
endpoint_id INTEGER PRIMARY KEY,
|
||||
endpoint_key TEXT UNIQUE,
|
||||
endpoint_name TEXT,
|
||||
endpoint_group TEXT,
|
||||
UNIQUE(endpoint_name, endpoint_group)
|
||||
)
|
||||
`)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = s.db.Exec(`
|
||||
CREATE TABLE IF NOT EXISTS service_event (
|
||||
service_event_id INTEGER PRIMARY KEY,
|
||||
service_id INTEGER REFERENCES service(service_id) ON DELETE CASCADE,
|
||||
CREATE TABLE IF NOT EXISTS endpoint_events (
|
||||
endpoint_event_id INTEGER PRIMARY KEY,
|
||||
endpoint_id INTEGER REFERENCES endpoints(endpoint_id) ON DELETE CASCADE,
|
||||
event_type TEXT,
|
||||
event_timestamp TIMESTAMP
|
||||
)
|
||||
|
@ -25,9 +25,9 @@ func (s *Store) createSQLiteSchema() error {
|
|||
return err
|
||||
}
|
||||
_, err = s.db.Exec(`
|
||||
CREATE TABLE IF NOT EXISTS service_result (
|
||||
service_result_id INTEGER PRIMARY KEY,
|
||||
service_id INTEGER REFERENCES service(service_id) ON DELETE CASCADE,
|
||||
CREATE TABLE IF NOT EXISTS endpoint_results (
|
||||
endpoint_result_id INTEGER PRIMARY KEY,
|
||||
endpoint_id INTEGER REFERENCES endpoints(endpoint_id) ON DELETE CASCADE,
|
||||
success INTEGER,
|
||||
errors TEXT,
|
||||
connected INTEGER,
|
||||
|
@ -44,9 +44,9 @@ func (s *Store) createSQLiteSchema() error {
|
|||
return err
|
||||
}
|
||||
_, err = s.db.Exec(`
|
||||
CREATE TABLE IF NOT EXISTS service_result_condition (
|
||||
service_result_condition_id INTEGER PRIMARY KEY,
|
||||
service_result_id INTEGER REFERENCES service_result(service_result_id) ON DELETE CASCADE,
|
||||
CREATE TABLE IF NOT EXISTS endpoint_result_conditions (
|
||||
endpoint_result_condition_id INTEGER PRIMARY KEY,
|
||||
endpoint_result_id INTEGER REFERENCES endpoint_results(endpoint_result_id) ON DELETE CASCADE,
|
||||
condition TEXT,
|
||||
success INTEGER
|
||||
)
|
||||
|
@ -55,14 +55,14 @@ func (s *Store) createSQLiteSchema() error {
|
|||
return err
|
||||
}
|
||||
_, err = s.db.Exec(`
|
||||
CREATE TABLE IF NOT EXISTS service_uptime (
|
||||
service_uptime_id INTEGER PRIMARY KEY,
|
||||
service_id INTEGER REFERENCES service(service_id) ON DELETE CASCADE,
|
||||
CREATE TABLE IF NOT EXISTS endpoint_uptimes (
|
||||
endpoint_uptime_id INTEGER PRIMARY KEY,
|
||||
endpoint_id INTEGER REFERENCES endpoints(endpoint_id) ON DELETE CASCADE,
|
||||
hour_unix_timestamp INTEGER,
|
||||
total_executions INTEGER,
|
||||
successful_executions INTEGER,
|
||||
total_response_time INTEGER,
|
||||
UNIQUE(service_id, hour_unix_timestamp)
|
||||
UNIQUE(endpoint_id, hour_unix_timestamp)
|
||||
)
|
||||
`)
|
||||
return err
|
||||
|
|
|
@ -89,44 +89,44 @@ func (s *Store) createSchema() error {
|
|||
return s.createPostgresSchema()
|
||||
}
|
||||
|
||||
// GetAllServiceStatuses returns all monitored core.ServiceStatus
|
||||
// GetAllEndpointStatuses returns all monitored core.EndpointStatus
|
||||
// with a subset of core.Result defined by the page and pageSize parameters
|
||||
func (s *Store) GetAllServiceStatuses(params *paging.ServiceStatusParams) ([]*core.ServiceStatus, error) {
|
||||
func (s *Store) GetAllEndpointStatuses(params *paging.EndpointStatusParams) ([]*core.EndpointStatus, error) {
|
||||
tx, err := s.db.Begin()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
keys, err := s.getAllServiceKeys(tx)
|
||||
keys, err := s.getAllEndpointKeys(tx)
|
||||
if err != nil {
|
||||
_ = tx.Rollback()
|
||||
return nil, err
|
||||
}
|
||||
serviceStatuses := make([]*core.ServiceStatus, 0, len(keys))
|
||||
endpointStatuses := make([]*core.EndpointStatus, 0, len(keys))
|
||||
for _, key := range keys {
|
||||
serviceStatus, err := s.getServiceStatusByKey(tx, key, params)
|
||||
endpointStatus, err := s.getEndpointStatusByKey(tx, key, params)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
serviceStatuses = append(serviceStatuses, serviceStatus)
|
||||
endpointStatuses = append(endpointStatuses, endpointStatus)
|
||||
}
|
||||
if err = tx.Commit(); err != nil {
|
||||
_ = tx.Rollback()
|
||||
}
|
||||
return serviceStatuses, err
|
||||
return endpointStatuses, err
|
||||
}
|
||||
|
||||
// GetServiceStatus returns the service status for a given service name in the given group
|
||||
func (s *Store) GetServiceStatus(groupName, serviceName string, params *paging.ServiceStatusParams) (*core.ServiceStatus, error) {
|
||||
return s.GetServiceStatusByKey(util.ConvertGroupAndServiceToKey(groupName, serviceName), params)
|
||||
// GetEndpointStatus returns the endpoint status for a given endpoint name in the given group
|
||||
func (s *Store) GetEndpointStatus(groupName, endpointName string, params *paging.EndpointStatusParams) (*core.EndpointStatus, error) {
|
||||
return s.GetEndpointStatusByKey(util.ConvertGroupAndEndpointNameToKey(groupName, endpointName), params)
|
||||
}
|
||||
|
||||
// GetServiceStatusByKey returns the service status for a given key
|
||||
func (s *Store) GetServiceStatusByKey(key string, params *paging.ServiceStatusParams) (*core.ServiceStatus, error) {
|
||||
// GetEndpointStatusByKey returns the endpoint status for a given key
|
||||
func (s *Store) GetEndpointStatusByKey(key string, params *paging.EndpointStatusParams) (*core.EndpointStatus, error) {
|
||||
tx, err := s.db.Begin()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
serviceStatus, err := s.getServiceStatusByKey(tx, key, params)
|
||||
endpointStatus, err := s.getEndpointStatusByKey(tx, key, params)
|
||||
if err != nil {
|
||||
_ = tx.Rollback()
|
||||
return nil, err
|
||||
|
@ -134,7 +134,7 @@ func (s *Store) GetServiceStatusByKey(key string, params *paging.ServiceStatusPa
|
|||
if err = tx.Commit(); err != nil {
|
||||
_ = tx.Rollback()
|
||||
}
|
||||
return serviceStatus, err
|
||||
return endpointStatus, err
|
||||
}
|
||||
|
||||
// GetUptimeByKey returns the uptime percentage during a time range
|
||||
|
@ -146,12 +146,12 @@ func (s *Store) GetUptimeByKey(key string, from, to time.Time) (float64, error)
|
|||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
serviceID, _, _, err := s.getServiceIDGroupAndNameByKey(tx, key)
|
||||
endpointID, _, _, err := s.getEndpointIDGroupAndNameByKey(tx, key)
|
||||
if err != nil {
|
||||
_ = tx.Rollback()
|
||||
return 0, err
|
||||
}
|
||||
uptime, _, err := s.getServiceUptime(tx, serviceID, from, to)
|
||||
uptime, _, err := s.getEndpointUptime(tx, endpointID, from, to)
|
||||
if err != nil {
|
||||
_ = tx.Rollback()
|
||||
return 0, err
|
||||
|
@ -171,12 +171,12 @@ func (s *Store) GetAverageResponseTimeByKey(key string, from, to time.Time) (int
|
|||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
serviceID, _, _, err := s.getServiceIDGroupAndNameByKey(tx, key)
|
||||
endpointID, _, _, err := s.getEndpointIDGroupAndNameByKey(tx, key)
|
||||
if err != nil {
|
||||
_ = tx.Rollback()
|
||||
return 0, err
|
||||
}
|
||||
averageResponseTime, err := s.getServiceAverageResponseTime(tx, serviceID, from, to)
|
||||
averageResponseTime, err := s.getEndpointAverageResponseTime(tx, endpointID, from, to)
|
||||
if err != nil {
|
||||
_ = tx.Rollback()
|
||||
return 0, err
|
||||
|
@ -196,12 +196,12 @@ func (s *Store) GetHourlyAverageResponseTimeByKey(key string, from, to time.Time
|
|||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
serviceID, _, _, err := s.getServiceIDGroupAndNameByKey(tx, key)
|
||||
endpointID, _, _, err := s.getEndpointIDGroupAndNameByKey(tx, key)
|
||||
if err != nil {
|
||||
_ = tx.Rollback()
|
||||
return nil, err
|
||||
}
|
||||
hourlyAverageResponseTimes, err := s.getServiceHourlyAverageResponseTimes(tx, serviceID, from, to)
|
||||
hourlyAverageResponseTimes, err := s.getEndpointHourlyAverageResponseTimes(tx, endpointID, from, to)
|
||||
if err != nil {
|
||||
_ = tx.Rollback()
|
||||
return nil, err
|
||||
|
@ -212,71 +212,71 @@ func (s *Store) GetHourlyAverageResponseTimeByKey(key string, from, to time.Time
|
|||
return hourlyAverageResponseTimes, nil
|
||||
}
|
||||
|
||||
// Insert adds the observed result for the specified service into the store
|
||||
func (s *Store) Insert(service *core.Service, result *core.Result) error {
|
||||
// Insert adds the observed result for the specified endpoint into the store
|
||||
func (s *Store) Insert(endpoint *core.Endpoint, result *core.Result) error {
|
||||
tx, err := s.db.Begin()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
serviceID, err := s.getServiceID(tx, service)
|
||||
endpointID, err := s.getEndpointID(tx, endpoint)
|
||||
if err != nil {
|
||||
if err == common.ErrServiceNotFound {
|
||||
// Service doesn't exist in the database, insert it
|
||||
if serviceID, err = s.insertService(tx, service); err != nil {
|
||||
if err == common.ErrEndpointNotFound {
|
||||
// Endpoint doesn't exist in the database, insert it
|
||||
if endpointID, err = s.insertEndpoint(tx, endpoint); err != nil {
|
||||
_ = tx.Rollback()
|
||||
log.Printf("[sql][Insert] Failed to create service with group=%s; service=%s: %s", service.Group, service.Name, err.Error())
|
||||
log.Printf("[sql][Insert] Failed to create endpoint with group=%s; endpoint=%s: %s", endpoint.Group, endpoint.Name, err.Error())
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
_ = tx.Rollback()
|
||||
log.Printf("[sql][Insert] Failed to retrieve id of service with group=%s; service=%s: %s", service.Group, service.Name, err.Error())
|
||||
log.Printf("[sql][Insert] Failed to retrieve id of endpoint with group=%s; endpoint=%s: %s", endpoint.Group, endpoint.Name, err.Error())
|
||||
return err
|
||||
}
|
||||
}
|
||||
// First, we need to check if we need to insert a new event.
|
||||
//
|
||||
// A new event must be added if either of the following cases happen:
|
||||
// 1. There is only 1 event. The total number of events for a service can only be 1 if the only existing event is
|
||||
// 1. There is only 1 event. The total number of events for a endpoint can only be 1 if the only existing event is
|
||||
// of type EventStart, in which case we will have to create a new event of type EventHealthy or EventUnhealthy
|
||||
// based on result.Success.
|
||||
// 2. The lastResult.Success != result.Success. This implies that the service went from healthy to unhealthy or
|
||||
// 2. The lastResult.Success != result.Success. This implies that the endpoint went from healthy to unhealthy or
|
||||
// vice-versa, in which case we will have to create a new event of type EventHealthy or EventUnhealthy
|
||||
// based on result.Success.
|
||||
numberOfEvents, err := s.getNumberOfEventsByServiceID(tx, serviceID)
|
||||
numberOfEvents, err := s.getNumberOfEventsByEndpointID(tx, endpointID)
|
||||
if err != nil {
|
||||
// Silently fail
|
||||
log.Printf("[sql][Insert] Failed to retrieve total number of events for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
|
||||
log.Printf("[sql][Insert] Failed to retrieve total number of events for group=%s; endpoint=%s: %s", endpoint.Group, endpoint.Name, err.Error())
|
||||
}
|
||||
if numberOfEvents == 0 {
|
||||
// There's no events yet, which means we need to add the EventStart and the first healthy/unhealthy event
|
||||
err = s.insertEvent(tx, serviceID, &core.Event{
|
||||
err = s.insertEndpointEvent(tx, endpointID, &core.Event{
|
||||
Type: core.EventStart,
|
||||
Timestamp: result.Timestamp.Add(-50 * time.Millisecond),
|
||||
})
|
||||
if err != nil {
|
||||
// Silently fail
|
||||
log.Printf("[sql][Insert] Failed to insert event=%s for group=%s; service=%s: %s", core.EventStart, service.Group, service.Name, err.Error())
|
||||
log.Printf("[sql][Insert] Failed to insert event=%s for group=%s; endpoint=%s: %s", core.EventStart, endpoint.Group, endpoint.Name, err.Error())
|
||||
}
|
||||
event := core.NewEventFromResult(result)
|
||||
if err = s.insertEvent(tx, serviceID, event); err != nil {
|
||||
if err = s.insertEndpointEvent(tx, endpointID, event); err != nil {
|
||||
// Silently fail
|
||||
log.Printf("[sql][Insert] Failed to insert event=%s for group=%s; service=%s: %s", event.Type, service.Group, service.Name, err.Error())
|
||||
log.Printf("[sql][Insert] Failed to insert event=%s for group=%s; endpoint=%s: %s", event.Type, endpoint.Group, endpoint.Name, err.Error())
|
||||
}
|
||||
} else {
|
||||
// Get the success value of the previous result
|
||||
var lastResultSuccess bool
|
||||
if lastResultSuccess, err = s.getLastServiceResultSuccessValue(tx, serviceID); err != nil {
|
||||
log.Printf("[sql][Insert] Failed to retrieve outcome of previous result for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
|
||||
if lastResultSuccess, err = s.getLastEndpointResultSuccessValue(tx, endpointID); err != nil {
|
||||
log.Printf("[sql][Insert] Failed to retrieve outcome of previous result for group=%s; endpoint=%s: %s", endpoint.Group, endpoint.Name, err.Error())
|
||||
} else {
|
||||
// If we managed to retrieve the outcome of the previous result, we'll compare it with the new result.
|
||||
// If the final outcome (success or failure) of the previous and the new result aren't the same, it means
|
||||
// that the service either went from Healthy to Unhealthy or Unhealthy -> Healthy, therefore, we'll add
|
||||
// that the endpoint either went from Healthy to Unhealthy or Unhealthy -> Healthy, therefore, we'll add
|
||||
// an event to mark the change in state
|
||||
if lastResultSuccess != result.Success {
|
||||
event := core.NewEventFromResult(result)
|
||||
if err = s.insertEvent(tx, serviceID, event); err != nil {
|
||||
if err = s.insertEndpointEvent(tx, endpointID, event); err != nil {
|
||||
// Silently fail
|
||||
log.Printf("[sql][Insert] Failed to insert event=%s for group=%s; service=%s: %s", event.Type, service.Group, service.Name, err.Error())
|
||||
log.Printf("[sql][Insert] Failed to insert event=%s for group=%s; endpoint=%s: %s", event.Type, endpoint.Group, endpoint.Name, err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -284,41 +284,41 @@ func (s *Store) Insert(service *core.Service, result *core.Result) error {
|
|||
// This lets us both keep the table clean without impacting performance too much
|
||||
// (since we're only deleting MaximumNumberOfEvents at a time instead of 1)
|
||||
if numberOfEvents > eventsCleanUpThreshold {
|
||||
if err = s.deleteOldServiceEvents(tx, serviceID); err != nil {
|
||||
log.Printf("[sql][Insert] Failed to delete old events for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
|
||||
if err = s.deleteOldEndpointEvents(tx, endpointID); err != nil {
|
||||
log.Printf("[sql][Insert] Failed to delete old events for group=%s; endpoint=%s: %s", endpoint.Group, endpoint.Name, err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
// Second, we need to insert the result.
|
||||
if err = s.insertResult(tx, serviceID, result); err != nil {
|
||||
log.Printf("[sql][Insert] Failed to insert result for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
|
||||
if err = s.insertEndpointResult(tx, endpointID, result); err != nil {
|
||||
log.Printf("[sql][Insert] Failed to insert result for group=%s; endpoint=%s: %s", endpoint.Group, endpoint.Name, err.Error())
|
||||
_ = tx.Rollback() // If we can't insert the result, we'll rollback now since there's no point continuing
|
||||
return err
|
||||
}
|
||||
// Clean up old results
|
||||
numberOfResults, err := s.getNumberOfResultsByServiceID(tx, serviceID)
|
||||
numberOfResults, err := s.getNumberOfResultsByEndpointID(tx, endpointID)
|
||||
if err != nil {
|
||||
log.Printf("[sql][Insert] Failed to retrieve total number of results for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
|
||||
log.Printf("[sql][Insert] Failed to retrieve total number of results for group=%s; endpoint=%s: %s", endpoint.Group, endpoint.Name, err.Error())
|
||||
} else {
|
||||
if numberOfResults > resultsCleanUpThreshold {
|
||||
if err = s.deleteOldServiceResults(tx, serviceID); err != nil {
|
||||
log.Printf("[sql][Insert] Failed to delete old results for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
|
||||
if err = s.deleteOldEndpointResults(tx, endpointID); err != nil {
|
||||
log.Printf("[sql][Insert] Failed to delete old results for group=%s; endpoint=%s: %s", endpoint.Group, endpoint.Name, err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
// Finally, we need to insert the uptime data.
|
||||
// Because the uptime data significantly outlives the results, we can't rely on the results for determining the uptime
|
||||
if err = s.updateServiceUptime(tx, serviceID, result); err != nil {
|
||||
log.Printf("[sql][Insert] Failed to update uptime for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
|
||||
if err = s.updateEndpointUptime(tx, endpointID, result); err != nil {
|
||||
log.Printf("[sql][Insert] Failed to update uptime for group=%s; endpoint=%s: %s", endpoint.Group, endpoint.Name, err.Error())
|
||||
}
|
||||
// Clean up old uptime entries
|
||||
ageOfOldestUptimeEntry, err := s.getAgeOfOldestServiceUptimeEntry(tx, serviceID)
|
||||
ageOfOldestUptimeEntry, err := s.getAgeOfOldestEndpointUptimeEntry(tx, endpointID)
|
||||
if err != nil {
|
||||
log.Printf("[sql][Insert] Failed to retrieve oldest service uptime entry for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
|
||||
log.Printf("[sql][Insert] Failed to retrieve oldest endpoint uptime entry for group=%s; endpoint=%s: %s", endpoint.Group, endpoint.Name, err.Error())
|
||||
} else {
|
||||
if ageOfOldestUptimeEntry > uptimeCleanUpThreshold {
|
||||
if err = s.deleteOldUptimeEntries(tx, serviceID, time.Now().Add(-(uptimeRetention + time.Hour))); err != nil {
|
||||
log.Printf("[sql][Insert] Failed to delete old uptime entries for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
|
||||
if err = s.deleteOldUptimeEntries(tx, endpointID, time.Now().Add(-(uptimeRetention + time.Hour))); err != nil {
|
||||
log.Printf("[sql][Insert] Failed to delete old uptime entries for group=%s; endpoint=%s: %s", endpoint.Group, endpoint.Name, err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -328,25 +328,25 @@ func (s *Store) Insert(service *core.Service, result *core.Result) error {
return err
}

// DeleteAllServiceStatusesNotInKeys removes all rows owned by a service whose key is not within the keys provided
func (s *Store) DeleteAllServiceStatusesNotInKeys(keys []string) int {
// DeleteAllEndpointStatusesNotInKeys removes all rows owned by an endpoint whose key is not within the keys provided
func (s *Store) DeleteAllEndpointStatusesNotInKeys(keys []string) int {
var err error
var result sql.Result
if len(keys) == 0 {
// Delete everything
result, err = s.db.Exec("DELETE FROM service")
result, err = s.db.Exec("DELETE FROM endpoints")
} else {
args := make([]interface{}, 0, len(keys))
query := "DELETE FROM service WHERE service_key NOT IN ("
query := "DELETE FROM endpoints WHERE endpoint_key NOT IN ("
for i := range keys {
query += fmt.Sprintf("$%d,", i+1)
args = append(args, keys[i])
}
query = query[:len(query)-1] + ")"
query = query[:len(query)-1] + ")" // Remove the last comma and close the parenthesis
result, err = s.db.Exec(query, args...)
}
if err != nil {
log.Printf("[sql][DeleteAllServiceStatusesNotInKeys] Failed to delete rows that do not belong to any of keys=%v: %s", keys, err.Error())
log.Printf("[sql][DeleteAllEndpointStatusesNotInKeys] Failed to delete rows that do not belong to any of keys=%v: %s", keys, err.Error())
return 0
}
rowsAffects, _ := result.RowsAffected()
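The NOT IN branch above builds its placeholder list by hand, appending $1, $2, ... and trimming the trailing comma, so the keys are passed as bound arguments instead of being concatenated into the SQL. The same technique, extracted into a small stand-alone sketch with hypothetical names:

package main

import (
	"fmt"
	"strings"
)

// notInClause returns e.g. "endpoint_key NOT IN ($1,$2)" plus the bound args,
// mirroring the loop in DeleteAllEndpointStatusesNotInKeys above.
// Callers are expected to handle the empty-keys case separately, as the method above does.
func notInClause(column string, keys []string) (string, []interface{}) {
	placeholders := make([]string, 0, len(keys))
	args := make([]interface{}, 0, len(keys))
	for i, key := range keys {
		placeholders = append(placeholders, fmt.Sprintf("$%d", i+1))
		args = append(args, key)
	}
	return fmt.Sprintf("%s NOT IN (%s)", column, strings.Join(placeholders, ",")), args
}

func main() {
	clause, args := notInClause("endpoint_key", []string{"core_website-external", "core_api"})
	fmt.Println(clause, args) // endpoint_key NOT IN ($1,$2) [core_website-external core_api]
}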
@ -355,7 +355,7 @@ func (s *Store) DeleteAllServiceStatusesNotInKeys(keys []string) int {
|
|||
|
||||
// Clear deletes everything from the store
|
||||
func (s *Store) Clear() {
|
||||
_, _ = s.db.Exec("DELETE FROM service")
|
||||
_, _ = s.db.Exec("DELETE FROM endpoints")
|
||||
}
|
||||
|
||||
// Save does nothing, because this store is immediately persistent.
|
||||
|
@ -368,15 +368,15 @@ func (s *Store) Close() {
|
|||
_ = s.db.Close()
|
||||
}
|
||||
|
||||
// insertService inserts a service in the store and returns the generated id of said service
|
||||
func (s *Store) insertService(tx *sql.Tx, service *core.Service) (int64, error) {
|
||||
//log.Printf("[sql][insertService] Inserting service with group=%s and name=%s", service.Group, service.Name)
|
||||
// insertEndpoint inserts an endpoint in the store and returns the generated id of said endpoint
|
||||
func (s *Store) insertEndpoint(tx *sql.Tx, endpoint *core.Endpoint) (int64, error) {
|
||||
//log.Printf("[sql][insertEndpoint] Inserting endpoint with group=%s and name=%s", endpoint.Group, endpoint.Name)
|
||||
var id int64
|
||||
err := tx.QueryRow(
|
||||
"INSERT INTO service (service_key, service_name, service_group) VALUES ($1, $2, $3) RETURNING service_id",
|
||||
service.Key(),
|
||||
service.Name,
|
||||
service.Group,
|
||||
"INSERT INTO endpoints (endpoint_key, endpoint_name, endpoint_group) VALUES ($1, $2, $3) RETURNING endpoint_id",
|
||||
endpoint.Key(),
|
||||
endpoint.Name,
|
||||
endpoint.Group,
|
||||
).Scan(&id)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
|
@ -384,11 +384,11 @@ func (s *Store) insertService(tx *sql.Tx, service *core.Service) (int64, error)
|
|||
return id, nil
|
||||
}
|
||||
|
||||
// insertEvent inserts a service event in the store
|
||||
func (s *Store) insertEvent(tx *sql.Tx, serviceID int64, event *core.Event) error {
|
||||
// insertEndpointEvent inserts an event in the store
|
||||
func (s *Store) insertEndpointEvent(tx *sql.Tx, endpointID int64, event *core.Event) error {
|
||||
_, err := tx.Exec(
|
||||
"INSERT INTO service_event (service_id, event_type, event_timestamp) VALUES ($1, $2, $3)",
|
||||
serviceID,
|
||||
"INSERT INTO endpoint_events (endpoint_id, event_type, event_timestamp) VALUES ($1, $2, $3)",
|
||||
endpointID,
|
||||
event.Type,
|
||||
event.Timestamp.UTC(),
|
||||
)
|
||||
|
@ -398,16 +398,16 @@ func (s *Store) insertEvent(tx *sql.Tx, serviceID int64, event *core.Event) erro
|
|||
return nil
|
||||
}
|
||||
|
||||
// insertResult inserts a result in the store
|
||||
func (s *Store) insertResult(tx *sql.Tx, serviceID int64, result *core.Result) error {
|
||||
var serviceResultID int64
|
||||
// insertEndpointResult inserts a result in the store
|
||||
func (s *Store) insertEndpointResult(tx *sql.Tx, endpointID int64, result *core.Result) error {
|
||||
var endpointResultID int64
|
||||
err := tx.QueryRow(
|
||||
`
|
||||
INSERT INTO service_result (service_id, success, errors, connected, status, dns_rcode, certificate_expiration, hostname, ip, duration, timestamp)
|
||||
INSERT INTO endpoint_results (endpoint_id, success, errors, connected, status, dns_rcode, certificate_expiration, hostname, ip, duration, timestamp)
|
||||
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11)
|
||||
RETURNING service_result_id
|
||||
RETURNING endpoint_result_id
|
||||
`,
|
||||
serviceID,
|
||||
endpointID,
|
||||
result.Success,
|
||||
strings.Join(result.Errors, arraySeparator),
|
||||
result.Connected,
|
||||
|
@ -418,18 +418,18 @@ func (s *Store) insertResult(tx *sql.Tx, serviceID int64, result *core.Result) e
|
|||
result.IP,
|
||||
result.Duration,
|
||||
result.Timestamp.UTC(),
|
||||
).Scan(&serviceResultID)
|
||||
).Scan(&endpointResultID)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return s.insertConditionResults(tx, serviceResultID, result.ConditionResults)
|
||||
return s.insertConditionResults(tx, endpointResultID, result.ConditionResults)
|
||||
}
|
||||
|
||||
func (s *Store) insertConditionResults(tx *sql.Tx, serviceResultID int64, conditionResults []*core.ConditionResult) error {
|
||||
func (s *Store) insertConditionResults(tx *sql.Tx, endpointResultID int64, conditionResults []*core.ConditionResult) error {
|
||||
var err error
|
||||
for _, cr := range conditionResults {
|
||||
_, err = tx.Exec("INSERT INTO service_result_condition (service_result_id, condition, success) VALUES ($1, $2, $3)",
|
||||
serviceResultID,
|
||||
_, err = tx.Exec("INSERT INTO endpoint_result_conditions (endpoint_result_id, condition, success) VALUES ($1, $2, $3)",
|
||||
endpointResultID,
|
||||
cr.Condition,
|
||||
cr.Success,
|
||||
)
|
||||
|
@ -440,7 +440,7 @@ func (s *Store) insertConditionResults(tx *sql.Tx, serviceResultID int64, condit
|
|||
return nil
|
||||
}
|
||||
|
||||
func (s *Store) updateServiceUptime(tx *sql.Tx, serviceID int64, result *core.Result) error {
|
||||
func (s *Store) updateEndpointUptime(tx *sql.Tx, endpointID int64, result *core.Result) error {
|
||||
unixTimestampFlooredAtHour := result.Timestamp.Truncate(time.Hour).Unix()
|
||||
var successfulExecutions int
|
||||
if result.Success {
|
||||
|
@ -448,14 +448,14 @@ func (s *Store) updateServiceUptime(tx *sql.Tx, serviceID int64, result *core.Re
}
_, err := tx.Exec(
`
INSERT INTO service_uptime (service_id, hour_unix_timestamp, total_executions, successful_executions, total_response_time)
INSERT INTO endpoint_uptimes (endpoint_id, hour_unix_timestamp, total_executions, successful_executions, total_response_time)
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT(service_id, hour_unix_timestamp) DO UPDATE SET
total_executions = excluded.total_executions + service_uptime.total_executions,
successful_executions = excluded.successful_executions + service_uptime.successful_executions,
total_response_time = excluded.total_response_time + service_uptime.total_response_time
ON CONFLICT(endpoint_id, hour_unix_timestamp) DO UPDATE SET
total_executions = excluded.total_executions + endpoint_uptimes.total_executions,
successful_executions = excluded.successful_executions + endpoint_uptimes.successful_executions,
total_response_time = excluded.total_response_time + endpoint_uptimes.total_response_time
`,
serviceID,
endpointID,
unixTimestampFlooredAtHour,
1,
successfulExecutions,
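The upsert above is what keeps the hourly aggregation cheap: each result is floored to the top of its hour, and results landing in the same bucket are added onto the existing counters through excluded.*. A short sketch of the bucketing arithmetic, using made-up timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Two results observed within the same hour...
	first := time.Date(2021, time.October, 23, 14, 5, 0, 0, time.UTC)
	second := time.Date(2021, time.October, 23, 14, 50, 0, 0, time.UTC)
	// ...are floored to the same hourly bucket, exactly like unixTimestampFlooredAtHour above
	fmt.Println(first.Truncate(time.Hour).Unix() == second.Truncate(time.Hour).Unix()) // true

	// After both inserts, the endpoint_uptimes row for that bucket would hold
	// total_executions=2 and, if only the first result succeeded, successful_executions=1:
	totalExecutions, successfulExecutions := 1+1, 1+0
	fmt.Printf("uptime for that hour: %.2f\n", float64(successfulExecutions)/float64(totalExecutions)) // 0.50
}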
@ -464,8 +464,8 @@ func (s *Store) updateServiceUptime(tx *sql.Tx, serviceID int64, result *core.Re
|
|||
return err
|
||||
}
|
||||
|
||||
func (s *Store) getAllServiceKeys(tx *sql.Tx) (keys []string, err error) {
|
||||
rows, err := tx.Query("SELECT service_key FROM service ORDER BY service_key")
|
||||
func (s *Store) getAllEndpointKeys(tx *sql.Tx) (keys []string, err error) {
|
||||
rows, err := tx.Query("SELECT endpoint_key FROM endpoints ORDER BY endpoint_key")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -477,60 +477,60 @@ func (s *Store) getAllServiceKeys(tx *sql.Tx) (keys []string, err error) {
|
|||
return
|
||||
}
|
||||
|
||||
func (s *Store) getServiceStatusByKey(tx *sql.Tx, key string, parameters *paging.ServiceStatusParams) (*core.ServiceStatus, error) {
|
||||
serviceID, serviceGroup, serviceName, err := s.getServiceIDGroupAndNameByKey(tx, key)
|
||||
func (s *Store) getEndpointStatusByKey(tx *sql.Tx, key string, parameters *paging.EndpointStatusParams) (*core.EndpointStatus, error) {
|
||||
endpointID, group, endpointName, err := s.getEndpointIDGroupAndNameByKey(tx, key)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
serviceStatus := core.NewServiceStatus(key, serviceGroup, serviceName)
|
||||
endpointStatus := core.NewEndpointStatus(group, endpointName)
|
||||
if parameters.EventsPageSize > 0 {
|
||||
if serviceStatus.Events, err = s.getEventsByServiceID(tx, serviceID, parameters.EventsPage, parameters.EventsPageSize); err != nil {
|
||||
log.Printf("[sql][getServiceStatusByKey] Failed to retrieve events for key=%s: %s", key, err.Error())
|
||||
if endpointStatus.Events, err = s.getEndpointEventsByEndpointID(tx, endpointID, parameters.EventsPage, parameters.EventsPageSize); err != nil {
|
||||
log.Printf("[sql][getEndpointStatusByKey] Failed to retrieve events for key=%s: %s", key, err.Error())
|
||||
}
|
||||
}
|
||||
if parameters.ResultsPageSize > 0 {
|
||||
if serviceStatus.Results, err = s.getResultsByServiceID(tx, serviceID, parameters.ResultsPage, parameters.ResultsPageSize); err != nil {
|
||||
log.Printf("[sql][getServiceStatusByKey] Failed to retrieve results for key=%s: %s", key, err.Error())
|
||||
if endpointStatus.Results, err = s.getEndpointResultsByEndpointID(tx, endpointID, parameters.ResultsPage, parameters.ResultsPageSize); err != nil {
|
||||
log.Printf("[sql][getEndpointStatusByKey] Failed to retrieve results for key=%s: %s", key, err.Error())
|
||||
}
|
||||
}
|
||||
//if parameters.IncludeUptime {
|
||||
// now := time.Now()
|
||||
// serviceStatus.Uptime.LastHour, _, err = s.getServiceUptime(tx, serviceID, now.Add(-time.Hour), now)
|
||||
// serviceStatus.Uptime.LastTwentyFourHours, _, err = s.getServiceUptime(tx, serviceID, now.Add(-24*time.Hour), now)
|
||||
// serviceStatus.Uptime.LastSevenDays, _, err = s.getServiceUptime(tx, serviceID, now.Add(-7*24*time.Hour), now)
|
||||
// endpointStatus.Uptime.LastHour, _, err = s.getEndpointUptime(tx, endpointID, now.Add(-time.Hour), now)
|
||||
// endpointStatus.Uptime.LastTwentyFourHours, _, err = s.getEndpointUptime(tx, endpointID, now.Add(-24*time.Hour), now)
|
||||
// endpointStatus.Uptime.LastSevenDays, _, err = s.getEndpointUptime(tx, endpointID, now.Add(-7*24*time.Hour), now)
|
||||
//}
|
||||
return serviceStatus, nil
|
||||
return endpointStatus, nil
|
||||
}
|
||||
|
||||
func (s *Store) getServiceIDGroupAndNameByKey(tx *sql.Tx, key string) (id int64, group, name string, err error) {
|
||||
func (s *Store) getEndpointIDGroupAndNameByKey(tx *sql.Tx, key string) (id int64, group, name string, err error) {
|
||||
err = tx.QueryRow(
|
||||
`
|
||||
SELECT service_id, service_group, service_name
|
||||
FROM service
|
||||
WHERE service_key = $1
|
||||
SELECT endpoint_id, endpoint_group, endpoint_name
|
||||
FROM endpoints
|
||||
WHERE endpoint_key = $1
|
||||
LIMIT 1
|
||||
`,
|
||||
key,
|
||||
).Scan(&id, &group, &name)
|
||||
if err != nil {
|
||||
if err == sql.ErrNoRows {
|
||||
return 0, "", "", common.ErrServiceNotFound
|
||||
return 0, "", "", common.ErrEndpointNotFound
|
||||
}
|
||||
return 0, "", "", err
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (s *Store) getEventsByServiceID(tx *sql.Tx, serviceID int64, page, pageSize int) (events []*core.Event, err error) {
|
||||
func (s *Store) getEndpointEventsByEndpointID(tx *sql.Tx, endpointID int64, page, pageSize int) (events []*core.Event, err error) {
|
||||
rows, err := tx.Query(
|
||||
`
|
||||
SELECT event_type, event_timestamp
|
||||
FROM service_event
|
||||
WHERE service_id = $1
|
||||
ORDER BY service_event_id ASC
|
||||
FROM endpoint_events
|
||||
WHERE endpoint_id = $1
|
||||
ORDER BY endpoint_event_id ASC
|
||||
LIMIT $2 OFFSET $3
|
||||
`,
|
||||
serviceID,
|
||||
endpointID,
|
||||
pageSize,
|
||||
(page-1)*pageSize,
|
||||
)
|
||||
|
@ -545,16 +545,16 @@ func (s *Store) getEventsByServiceID(tx *sql.Tx, serviceID int64, page, pageSize
|
|||
return
|
||||
}
|
||||
|
||||
func (s *Store) getResultsByServiceID(tx *sql.Tx, serviceID int64, page, pageSize int) (results []*core.Result, err error) {
|
||||
func (s *Store) getEndpointResultsByEndpointID(tx *sql.Tx, endpointID int64, page, pageSize int) (results []*core.Result, err error) {
|
||||
rows, err := tx.Query(
|
||||
`
|
||||
SELECT service_result_id, success, errors, connected, status, dns_rcode, certificate_expiration, hostname, ip, duration, timestamp
|
||||
FROM service_result
|
||||
WHERE service_id = $1
|
||||
ORDER BY service_result_id DESC -- Normally, we'd sort by timestamp, but sorting by service_result_id is faster
|
||||
SELECT endpoint_result_id, success, errors, connected, status, dns_rcode, certificate_expiration, hostname, ip, duration, timestamp
|
||||
FROM endpoint_results
|
||||
WHERE endpoint_id = $1
|
||||
ORDER BY endpoint_result_id DESC -- Normally, we'd sort by timestamp, but sorting by endpoint_result_id is faster
|
||||
LIMIT $2 OFFSET $3
|
||||
`,
|
||||
serviceID,
|
||||
endpointID,
|
||||
pageSize,
|
||||
(page-1)*pageSize,
|
||||
)
|
||||
|
@ -576,13 +576,13 @@ func (s *Store) getResultsByServiceID(tx *sql.Tx, serviceID int64, page, pageSiz
|
|||
}
|
||||
// Get condition results
|
||||
args := make([]interface{}, 0, len(idResultMap))
|
||||
query := `SELECT service_result_id, condition, success
|
||||
FROM service_result_condition
|
||||
WHERE service_result_id IN (`
|
||||
query := `SELECT endpoint_result_id, condition, success
|
||||
FROM endpoint_result_conditions
|
||||
WHERE endpoint_result_id IN (`
|
||||
index := 1
|
||||
for serviceResultID := range idResultMap {
|
||||
for endpointResultID := range idResultMap {
|
||||
query += fmt.Sprintf("$%d,", index)
|
||||
args = append(args, serviceResultID)
|
||||
args = append(args, endpointResultID)
|
||||
index++
|
||||
}
|
||||
query = query[:len(query)-1] + ")"
|
||||
|
@ -593,25 +593,25 @@ func (s *Store) getResultsByServiceID(tx *sql.Tx, serviceID int64, page, pageSiz
|
|||
defer rows.Close() // explicitly defer the close in case an error happens during the scan
|
||||
for rows.Next() {
|
||||
conditionResult := &core.ConditionResult{}
|
||||
var serviceResultID int64
|
||||
if err = rows.Scan(&serviceResultID, &conditionResult.Condition, &conditionResult.Success); err != nil {
|
||||
var endpointResultID int64
|
||||
if err = rows.Scan(&endpointResultID, &conditionResult.Condition, &conditionResult.Success); err != nil {
|
||||
return
|
||||
}
|
||||
idResultMap[serviceResultID].ConditionResults = append(idResultMap[serviceResultID].ConditionResults, conditionResult)
|
||||
idResultMap[endpointResultID].ConditionResults = append(idResultMap[endpointResultID].ConditionResults, conditionResult)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (s *Store) getServiceUptime(tx *sql.Tx, serviceID int64, from, to time.Time) (uptime float64, avgResponseTime time.Duration, err error) {
|
||||
func (s *Store) getEndpointUptime(tx *sql.Tx, endpointID int64, from, to time.Time) (uptime float64, avgResponseTime time.Duration, err error) {
|
||||
rows, err := tx.Query(
|
||||
`
|
||||
SELECT SUM(total_executions), SUM(successful_executions), SUM(total_response_time)
|
||||
FROM service_uptime
|
||||
WHERE service_id = $1
|
||||
FROM endpoint_uptimes
|
||||
WHERE endpoint_id = $1
|
||||
AND hour_unix_timestamp >= $2
|
||||
AND hour_unix_timestamp <= $3
|
||||
`,
|
||||
serviceID,
|
||||
endpointID,
|
||||
from.Unix(),
|
||||
to.Unix(),
|
||||
)
|
||||
|
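This query, like getEndpointAverageResponseTime below, only fetches sums; the reported figures are derived from them afterwards. Roughly, and assuming total_response_time is accumulated in milliseconds (which is what the 450ms expectation in the tests further down suggests):

package main

import "fmt"

// derive shows how the scanned sums translate into the values reported:
// uptime as a ratio of successful to total executions, and the average
// response time in milliseconds. Illustrative only, not code from this commit.
func derive(totalExecutions, successfulExecutions, totalResponseTime int64) (uptime float64, avgResponseTimeMs int) {
	if totalExecutions == 0 {
		return 0, 0
	}
	uptime = float64(successfulExecutions) / float64(totalExecutions)
	avgResponseTimeMs = int(float64(totalResponseTime) / float64(totalExecutions))
	return uptime, avgResponseTimeMs
}

func main() {
	// e.g. 2 executions, 1 success, 900ms of accumulated response time -> 0.5 uptime, 450ms average
	fmt.Println(derive(2, 1, 900))
}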
@ -629,17 +629,17 @@ func (s *Store) getServiceUptime(tx *sql.Tx, serviceID int64, from, to time.Time
|
|||
return
|
||||
}
|
||||
|
||||
func (s *Store) getServiceAverageResponseTime(tx *sql.Tx, serviceID int64, from, to time.Time) (int, error) {
|
||||
func (s *Store) getEndpointAverageResponseTime(tx *sql.Tx, endpointID int64, from, to time.Time) (int, error) {
|
||||
rows, err := tx.Query(
|
||||
`
|
||||
SELECT SUM(total_executions), SUM(total_response_time)
|
||||
FROM service_uptime
|
||||
WHERE service_id = $1
|
||||
FROM endpoint_uptimes
|
||||
WHERE endpoint_id = $1
|
||||
AND total_executions > 0
|
||||
AND hour_unix_timestamp >= $2
|
||||
AND hour_unix_timestamp <= $3
|
||||
`,
|
||||
serviceID,
|
||||
endpointID,
|
||||
from.Unix(),
|
||||
to.Unix(),
|
||||
)
|
||||
|
@ -656,17 +656,17 @@ func (s *Store) getServiceAverageResponseTime(tx *sql.Tx, serviceID int64, from,
|
|||
return int(float64(totalResponseTime) / float64(totalExecutions)), nil
|
||||
}
|
||||
|
||||
func (s *Store) getServiceHourlyAverageResponseTimes(tx *sql.Tx, serviceID int64, from, to time.Time) (map[int64]int, error) {
|
||||
func (s *Store) getEndpointHourlyAverageResponseTimes(tx *sql.Tx, endpointID int64, from, to time.Time) (map[int64]int, error) {
|
||||
rows, err := tx.Query(
|
||||
`
|
||||
SELECT hour_unix_timestamp, total_executions, total_response_time
|
||||
FROM service_uptime
|
||||
WHERE service_id = $1
|
||||
FROM endpoint_uptimes
|
||||
WHERE endpoint_id = $1
|
||||
AND total_executions > 0
|
||||
AND hour_unix_timestamp >= $2
|
||||
AND hour_unix_timestamp <= $3
|
||||
`,
|
||||
serviceID,
|
||||
endpointID,
|
||||
from.Unix(),
|
||||
to.Unix(),
|
||||
)
|
||||
|
@ -683,59 +683,59 @@ func (s *Store) getServiceHourlyAverageResponseTimes(tx *sql.Tx, serviceID int64
|
|||
return hourlyAverageResponseTimes, nil
|
||||
}
|
||||
|
||||
func (s *Store) getServiceID(tx *sql.Tx, service *core.Service) (int64, error) {
|
||||
func (s *Store) getEndpointID(tx *sql.Tx, endpoint *core.Endpoint) (int64, error) {
|
||||
var id int64
|
||||
err := tx.QueryRow("SELECT service_id FROM service WHERE service_key = $1", service.Key()).Scan(&id)
|
||||
err := tx.QueryRow("SELECT endpoint_id FROM endpoints WHERE endpoint_key = $1", endpoint.Key()).Scan(&id)
|
||||
if err != nil {
|
||||
if err == sql.ErrNoRows {
|
||||
return 0, common.ErrServiceNotFound
|
||||
return 0, common.ErrEndpointNotFound
|
||||
}
|
||||
return 0, err
|
||||
}
|
||||
return id, nil
|
||||
}
|
||||
|
||||
func (s *Store) getNumberOfEventsByServiceID(tx *sql.Tx, serviceID int64) (int64, error) {
|
||||
func (s *Store) getNumberOfEventsByEndpointID(tx *sql.Tx, endpointID int64) (int64, error) {
|
||||
var numberOfEvents int64
|
||||
err := tx.QueryRow("SELECT COUNT(1) FROM service_event WHERE service_id = $1", serviceID).Scan(&numberOfEvents)
|
||||
err := tx.QueryRow("SELECT COUNT(1) FROM endpoint_events WHERE endpoint_id = $1", endpointID).Scan(&numberOfEvents)
|
||||
return numberOfEvents, err
|
||||
}
|
||||
|
||||
func (s *Store) getNumberOfResultsByServiceID(tx *sql.Tx, serviceID int64) (int64, error) {
|
||||
func (s *Store) getNumberOfResultsByEndpointID(tx *sql.Tx, endpointID int64) (int64, error) {
|
||||
var numberOfResults int64
|
||||
err := tx.QueryRow("SELECT COUNT(1) FROM service_result WHERE service_id = $1", serviceID).Scan(&numberOfResults)
|
||||
err := tx.QueryRow("SELECT COUNT(1) FROM endpoint_results WHERE endpoint_id = $1", endpointID).Scan(&numberOfResults)
|
||||
return numberOfResults, err
|
||||
}
|
||||
|
||||
func (s *Store) getAgeOfOldestServiceUptimeEntry(tx *sql.Tx, serviceID int64) (time.Duration, error) {
|
||||
func (s *Store) getAgeOfOldestEndpointUptimeEntry(tx *sql.Tx, endpointID int64) (time.Duration, error) {
|
||||
rows, err := tx.Query(
|
||||
`
|
||||
SELECT hour_unix_timestamp
|
||||
FROM service_uptime
|
||||
WHERE service_id = $1
|
||||
FROM endpoint_uptimes
|
||||
WHERE endpoint_id = $1
|
||||
ORDER BY hour_unix_timestamp
|
||||
LIMIT 1
|
||||
`,
|
||||
serviceID,
|
||||
endpointID,
|
||||
)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
var oldestServiceUptimeUnixTimestamp int64
|
||||
var oldestEndpointUptimeUnixTimestamp int64
|
||||
var found bool
|
||||
for rows.Next() {
|
||||
_ = rows.Scan(&oldestServiceUptimeUnixTimestamp)
|
||||
_ = rows.Scan(&oldestEndpointUptimeUnixTimestamp)
|
||||
found = true
|
||||
}
|
||||
if !found {
|
||||
return 0, errNoRowsReturned
|
||||
}
|
||||
return time.Since(time.Unix(oldestServiceUptimeUnixTimestamp, 0)), nil
|
||||
return time.Since(time.Unix(oldestEndpointUptimeUnixTimestamp, 0)), nil
|
||||
}
|
||||
|
||||
func (s *Store) getLastServiceResultSuccessValue(tx *sql.Tx, serviceID int64) (bool, error) {
|
||||
func (s *Store) getLastEndpointResultSuccessValue(tx *sql.Tx, endpointID int64) (bool, error) {
|
||||
var success bool
|
||||
err := tx.QueryRow("SELECT success FROM service_result WHERE service_id = $1 ORDER BY service_result_id DESC LIMIT 1", serviceID).Scan(&success)
|
||||
err := tx.QueryRow("SELECT success FROM endpoint_results WHERE endpoint_id = $1 ORDER BY endpoint_result_id DESC LIMIT 1", endpointID).Scan(&success)
|
||||
if err != nil {
|
||||
if err == sql.ErrNoRows {
|
||||
return false, errNoRowsReturned
|
||||
|
@ -745,47 +745,47 @@ func (s *Store) getLastServiceResultSuccessValue(tx *sql.Tx, serviceID int64) (b
|
|||
return success, nil
|
||||
}
|
||||
|
||||
// deleteOldServiceEvents deletes old service events that are no longer needed
|
||||
func (s *Store) deleteOldServiceEvents(tx *sql.Tx, serviceID int64) error {
|
||||
// deleteOldEndpointEvents deletes endpoint events that are no longer needed
|
||||
func (s *Store) deleteOldEndpointEvents(tx *sql.Tx, endpointID int64) error {
|
||||
_, err := tx.Exec(
|
||||
`
|
||||
DELETE FROM service_event
|
||||
WHERE service_id = $1
|
||||
AND service_event_id NOT IN (
|
||||
SELECT service_event_id
|
||||
FROM service_event
|
||||
WHERE service_id = $1
|
||||
ORDER BY service_event_id DESC
|
||||
DELETE FROM endpoint_events
|
||||
WHERE endpoint_id = $1
|
||||
AND endpoint_event_id NOT IN (
|
||||
SELECT endpoint_event_id
|
||||
FROM endpoint_events
|
||||
WHERE endpoint_id = $1
|
||||
ORDER BY endpoint_event_id DESC
|
||||
LIMIT $2
|
||||
)
|
||||
`,
|
||||
serviceID,
|
||||
endpointID,
|
||||
common.MaximumNumberOfEvents,
|
||||
)
|
||||
return err
|
||||
}
|
||||
|
||||
// deleteOldServiceResults deletes old service results that are no longer needed
|
||||
func (s *Store) deleteOldServiceResults(tx *sql.Tx, serviceID int64) error {
|
||||
// deleteOldEndpointResults deletes endpoint results that are no longer needed
|
||||
func (s *Store) deleteOldEndpointResults(tx *sql.Tx, endpointID int64) error {
|
||||
_, err := tx.Exec(
|
||||
`
|
||||
DELETE FROM service_result
|
||||
WHERE service_id = $1
|
||||
AND service_result_id NOT IN (
|
||||
SELECT service_result_id
|
||||
FROM service_result
|
||||
WHERE service_id = $1
|
||||
ORDER BY service_result_id DESC
|
||||
DELETE FROM endpoint_results
|
||||
WHERE endpoint_id = $1
|
||||
AND endpoint_result_id NOT IN (
|
||||
SELECT endpoint_result_id
|
||||
FROM endpoint_results
|
||||
WHERE endpoint_id = $1
|
||||
ORDER BY endpoint_result_id DESC
|
||||
LIMIT $2
|
||||
)
|
||||
`,
|
||||
serviceID,
|
||||
endpointID,
|
||||
common.MaximumNumberOfResults,
|
||||
)
|
||||
return err
|
||||
}
|
||||
|
||||
func (s *Store) deleteOldUptimeEntries(tx *sql.Tx, serviceID int64, maxAge time.Time) error {
|
||||
_, err := tx.Exec("DELETE FROM service_uptime WHERE service_id = $1 AND hour_unix_timestamp < $2", serviceID, maxAge.Unix())
|
||||
func (s *Store) deleteOldUptimeEntries(tx *sql.Tx, endpointID int64, maxAge time.Time) error {
|
||||
_, err := tx.Exec("DELETE FROM endpoint_uptimes WHERE endpoint_id = $1 AND hour_unix_timestamp < $2", endpointID, maxAge.Unix())
|
||||
return err
|
||||
}
|
||||
|
|
|
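Every helper in this file takes a *sql.Tx rather than the *sql.DB, so the exported methods own the transaction boundaries. A minimal sketch of that calling convention; the wrapper name is hypothetical and not part of this commit:

package main

import "database/sql"

// withTransaction runs fn inside a transaction, committing on success and
// rolling back on error. This is the shape the exported Store methods follow
// when they call tx-scoped helpers such as getEndpointID or insertEndpointResult.
func withTransaction(db *sql.DB, fn func(tx *sql.Tx) error) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	if err := fn(tx); err != nil {
		_ = tx.Rollback()
		return err
	}
	return tx.Commit()
}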
@ -16,7 +16,7 @@ var (
|
|||
|
||||
now = time.Now()
|
||||
|
||||
testService = core.Service{
|
||||
testEndpoint = core.Endpoint{
|
||||
Name: "name",
|
||||
Group: "group",
|
||||
URL: "https://example.org/what/ever",
|
||||
|
@ -100,54 +100,54 @@ func TestStore_InsertCleansUpOldUptimeEntriesProperly(t *testing.T) {
|
|||
now := time.Now().Round(time.Minute)
|
||||
now = time.Date(now.Year(), now.Month(), now.Day(), now.Hour(), 0, 0, 0, now.Location())
|
||||
|
||||
store.Insert(&testService, &core.Result{Timestamp: now.Add(-5 * time.Hour), Success: true})
|
||||
store.Insert(&testEndpoint, &core.Result{Timestamp: now.Add(-5 * time.Hour), Success: true})
|
||||
|
||||
tx, _ := store.db.Begin()
|
||||
oldest, _ := store.getAgeOfOldestServiceUptimeEntry(tx, 1)
|
||||
oldest, _ := store.getAgeOfOldestEndpointUptimeEntry(tx, 1)
|
||||
_ = tx.Commit()
|
||||
if oldest.Truncate(time.Hour) != 5*time.Hour {
|
||||
t.Errorf("oldest service uptime entry should've been ~5 hours old, was %s", oldest)
|
||||
t.Errorf("oldest endpoint uptime entry should've been ~5 hours old, was %s", oldest)
|
||||
}
|
||||
|
||||
// The oldest cache entry should remain at ~5 hours old, because this entry is more recent
|
||||
store.Insert(&testService, &core.Result{Timestamp: now.Add(-3 * time.Hour), Success: true})
|
||||
store.Insert(&testEndpoint, &core.Result{Timestamp: now.Add(-3 * time.Hour), Success: true})
|
||||
|
||||
tx, _ = store.db.Begin()
|
||||
oldest, _ = store.getAgeOfOldestServiceUptimeEntry(tx, 1)
|
||||
oldest, _ = store.getAgeOfOldestEndpointUptimeEntry(tx, 1)
|
||||
_ = tx.Commit()
|
||||
if oldest.Truncate(time.Hour) != 5*time.Hour {
|
||||
t.Errorf("oldest service uptime entry should've been ~5 hours old, was %s", oldest)
|
||||
t.Errorf("oldest endpoint uptime entry should've been ~5 hours old, was %s", oldest)
|
||||
}
|
||||
|
||||
// The oldest cache entry should now become at ~8 hours old, because this entry is older
|
||||
store.Insert(&testService, &core.Result{Timestamp: now.Add(-8 * time.Hour), Success: true})
|
||||
store.Insert(&testEndpoint, &core.Result{Timestamp: now.Add(-8 * time.Hour), Success: true})
|
||||
|
||||
tx, _ = store.db.Begin()
|
||||
oldest, _ = store.getAgeOfOldestServiceUptimeEntry(tx, 1)
|
||||
oldest, _ = store.getAgeOfOldestEndpointUptimeEntry(tx, 1)
|
||||
_ = tx.Commit()
|
||||
if oldest.Truncate(time.Hour) != 8*time.Hour {
|
||||
t.Errorf("oldest service uptime entry should've been ~8 hours old, was %s", oldest)
|
||||
t.Errorf("oldest endpoint uptime entry should've been ~8 hours old, was %s", oldest)
|
||||
}
|
||||
|
||||
// Since this is one hour before reaching the clean up threshold, the oldest entry should now be this one
|
||||
store.Insert(&testService, &core.Result{Timestamp: now.Add(-(uptimeCleanUpThreshold - time.Hour)), Success: true})
|
||||
store.Insert(&testEndpoint, &core.Result{Timestamp: now.Add(-(uptimeCleanUpThreshold - time.Hour)), Success: true})
|
||||
|
||||
tx, _ = store.db.Begin()
|
||||
oldest, _ = store.getAgeOfOldestServiceUptimeEntry(tx, 1)
|
||||
oldest, _ = store.getAgeOfOldestEndpointUptimeEntry(tx, 1)
|
||||
_ = tx.Commit()
|
||||
if oldest.Truncate(time.Hour) != uptimeCleanUpThreshold-time.Hour {
|
||||
t.Errorf("oldest service uptime entry should've been ~%s hours old, was %s", uptimeCleanUpThreshold-time.Hour, oldest)
|
||||
t.Errorf("oldest endpoint uptime entry should've been ~%s hours old, was %s", uptimeCleanUpThreshold-time.Hour, oldest)
|
||||
}
|
||||
|
||||
// Since this entry is after the uptimeCleanUpThreshold, both this entry as well as the previous
|
||||
// one should be deleted since they both surpass uptimeRetention
|
||||
store.Insert(&testService, &core.Result{Timestamp: now.Add(-(uptimeCleanUpThreshold + time.Hour)), Success: true})
|
||||
store.Insert(&testEndpoint, &core.Result{Timestamp: now.Add(-(uptimeCleanUpThreshold + time.Hour)), Success: true})
|
||||
|
||||
tx, _ = store.db.Begin()
|
||||
oldest, _ = store.getAgeOfOldestServiceUptimeEntry(tx, 1)
|
||||
oldest, _ = store.getAgeOfOldestEndpointUptimeEntry(tx, 1)
|
||||
_ = tx.Commit()
|
||||
if oldest.Truncate(time.Hour) != 8*time.Hour {
|
||||
t.Errorf("oldest service uptime entry should've been ~8 hours old, was %s", oldest)
|
||||
t.Errorf("oldest endpoint uptime entry should've been ~8 hours old, was %s", oldest)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -155,9 +155,9 @@ func TestStore_InsertCleansUpEventsAndResultsProperly(t *testing.T) {
|
|||
store, _ := NewStore("sqlite", t.TempDir()+"/TestStore_InsertCleansUpEventsAndResultsProperly.db")
|
||||
defer store.Close()
|
||||
for i := 0; i < resultsCleanUpThreshold+eventsCleanUpThreshold; i++ {
|
||||
store.Insert(&testService, &testSuccessfulResult)
|
||||
store.Insert(&testService, &testUnsuccessfulResult)
|
||||
ss, _ := store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithResults(1, common.MaximumNumberOfResults*5).WithEvents(1, common.MaximumNumberOfEvents*5))
|
||||
store.Insert(&testEndpoint, &testSuccessfulResult)
|
||||
store.Insert(&testEndpoint, &testUnsuccessfulResult)
|
||||
ss, _ := store.GetEndpointStatusByKey(testEndpoint.Key(), paging.NewEndpointStatusParams().WithResults(1, common.MaximumNumberOfResults*5).WithEvents(1, common.MaximumNumberOfEvents*5))
|
||||
if len(ss.Results) > resultsCleanUpThreshold+1 {
|
||||
t.Errorf("number of results shouldn't have exceeded %d, reached %d", resultsCleanUpThreshold, len(ss.Results))
|
||||
}
|
||||
|
@ -171,18 +171,18 @@ func TestStore_InsertCleansUpEventsAndResultsProperly(t *testing.T) {
|
|||
func TestStore_Persistence(t *testing.T) {
|
||||
file := t.TempDir() + "/TestStore_Persistence.db"
|
||||
store, _ := NewStore("sqlite", file)
|
||||
store.Insert(&testService, &testSuccessfulResult)
|
||||
store.Insert(&testService, &testUnsuccessfulResult)
|
||||
if uptime, _ := store.GetUptimeByKey(testService.Key(), time.Now().Add(-time.Hour), time.Now()); uptime != 0.5 {
|
||||
store.Insert(&testEndpoint, &testSuccessfulResult)
|
||||
store.Insert(&testEndpoint, &testUnsuccessfulResult)
|
||||
if uptime, _ := store.GetUptimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); uptime != 0.5 {
|
||||
t.Errorf("the uptime over the past 1h should've been 0.5, got %f", uptime)
|
||||
}
|
||||
if uptime, _ := store.GetUptimeByKey(testService.Key(), time.Now().Add(-time.Hour*24), time.Now()); uptime != 0.5 {
|
||||
if uptime, _ := store.GetUptimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour*24), time.Now()); uptime != 0.5 {
|
||||
t.Errorf("the uptime over the past 24h should've been 0.5, got %f", uptime)
|
||||
}
|
||||
if uptime, _ := store.GetUptimeByKey(testService.Key(), time.Now().Add(-time.Hour*24*7), time.Now()); uptime != 0.5 {
|
||||
if uptime, _ := store.GetUptimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour*24*7), time.Now()); uptime != 0.5 {
|
||||
t.Errorf("the uptime over the past 7d should've been 0.5, got %f", uptime)
|
||||
}
|
||||
ssFromOldStore, _ := store.GetServiceStatus(testService.Group, testService.Name, paging.NewServiceStatusParams().WithResults(1, common.MaximumNumberOfResults).WithEvents(1, common.MaximumNumberOfEvents))
|
||||
ssFromOldStore, _ := store.GetEndpointStatus(testEndpoint.Group, testEndpoint.Name, paging.NewEndpointStatusParams().WithResults(1, common.MaximumNumberOfResults).WithEvents(1, common.MaximumNumberOfEvents))
|
||||
if ssFromOldStore == nil || ssFromOldStore.Group != "group" || ssFromOldStore.Name != "name" || len(ssFromOldStore.Events) != 3 || len(ssFromOldStore.Results) != 2 {
|
||||
store.Close()
|
||||
t.Fatal("sanity check failed")
|
||||
|
@ -190,7 +190,7 @@ func TestStore_Persistence(t *testing.T) {
|
|||
store.Close()
|
||||
store, _ = NewStore("sqlite", file)
|
||||
defer store.Close()
|
||||
ssFromNewStore, _ := store.GetServiceStatus(testService.Group, testService.Name, paging.NewServiceStatusParams().WithResults(1, common.MaximumNumberOfResults).WithEvents(1, common.MaximumNumberOfEvents))
|
||||
ssFromNewStore, _ := store.GetEndpointStatus(testEndpoint.Group, testEndpoint.Name, paging.NewEndpointStatusParams().WithResults(1, common.MaximumNumberOfResults).WithEvents(1, common.MaximumNumberOfEvents))
|
||||
if ssFromNewStore == nil || ssFromNewStore.Group != "group" || ssFromNewStore.Name != "name" || len(ssFromNewStore.Events) != 3 || len(ssFromNewStore.Results) != 2 {
|
||||
t.Fatal("failed sanity check")
|
||||
}
|
||||
|
@ -264,42 +264,42 @@ func TestStore_Save(t *testing.T) {
|
|||
func TestStore_SanityCheck(t *testing.T) {
|
||||
store, _ := NewStore("sqlite", t.TempDir()+"/TestStore_SanityCheck.db")
|
||||
defer store.Close()
|
||||
store.Insert(&testService, &testSuccessfulResult)
|
||||
serviceStatuses, _ := store.GetAllServiceStatuses(paging.NewServiceStatusParams())
|
||||
if numberOfServiceStatuses := len(serviceStatuses); numberOfServiceStatuses != 1 {
|
||||
t.Fatalf("expected 1 ServiceStatus, got %d", numberOfServiceStatuses)
|
||||
store.Insert(&testEndpoint, &testSuccessfulResult)
|
||||
endpointStatuses, _ := store.GetAllEndpointStatuses(paging.NewEndpointStatusParams())
|
||||
if numberOfEndpointStatuses := len(endpointStatuses); numberOfEndpointStatuses != 1 {
|
||||
t.Fatalf("expected 1 EndpointStatus, got %d", numberOfEndpointStatuses)
|
||||
}
|
||||
store.Insert(&testService, &testUnsuccessfulResult)
|
||||
// Both results inserted are for the same service, therefore, the count shouldn't have increased
|
||||
serviceStatuses, _ = store.GetAllServiceStatuses(paging.NewServiceStatusParams())
|
||||
if numberOfServiceStatuses := len(serviceStatuses); numberOfServiceStatuses != 1 {
|
||||
t.Fatalf("expected 1 ServiceStatus, got %d", numberOfServiceStatuses)
|
||||
store.Insert(&testEndpoint, &testUnsuccessfulResult)
|
||||
// Both results inserted are for the same endpoint, therefore, the count shouldn't have increased
|
||||
endpointStatuses, _ = store.GetAllEndpointStatuses(paging.NewEndpointStatusParams())
|
||||
if numberOfEndpointStatuses := len(endpointStatuses); numberOfEndpointStatuses != 1 {
|
||||
t.Fatalf("expected 1 EndpointStatus, got %d", numberOfEndpointStatuses)
|
||||
}
|
||||
if hourlyAverageResponseTime, err := store.GetHourlyAverageResponseTimeByKey(testService.Key(), time.Now().Add(-24*time.Hour), time.Now()); err != nil {
|
||||
if hourlyAverageResponseTime, err := store.GetHourlyAverageResponseTimeByKey(testEndpoint.Key(), time.Now().Add(-24*time.Hour), time.Now()); err != nil {
|
||||
t.Errorf("expected no error, got %v", err)
|
||||
} else if len(hourlyAverageResponseTime) != 1 {
|
||||
t.Errorf("expected 1 hour to have had a result in the past 24 hours, got %d", len(hourlyAverageResponseTime))
|
||||
}
|
||||
if uptime, _ := store.GetUptimeByKey(testService.Key(), time.Now().Add(-24*time.Hour), time.Now()); uptime != 0.5 {
|
||||
if uptime, _ := store.GetUptimeByKey(testEndpoint.Key(), time.Now().Add(-24*time.Hour), time.Now()); uptime != 0.5 {
|
||||
t.Errorf("expected uptime of last 24h to be 0.5, got %f", uptime)
|
||||
}
|
||||
if averageResponseTime, _ := store.GetAverageResponseTimeByKey(testService.Key(), time.Now().Add(-24*time.Hour), time.Now()); averageResponseTime != 450 {
|
||||
if averageResponseTime, _ := store.GetAverageResponseTimeByKey(testEndpoint.Key(), time.Now().Add(-24*time.Hour), time.Now()); averageResponseTime != 450 {
|
||||
t.Errorf("expected average response time of last 24h to be 450, got %d", averageResponseTime)
|
||||
}
|
||||
ss, _ := store.GetServiceStatus(testService.Group, testService.Name, paging.NewServiceStatusParams().WithResults(1, 20).WithEvents(1, 20))
|
||||
ss, _ := store.GetEndpointStatus(testEndpoint.Group, testEndpoint.Name, paging.NewEndpointStatusParams().WithResults(1, 20).WithEvents(1, 20))
|
||||
if ss == nil {
|
||||
t.Fatalf("Store should've had key '%s', but didn't", testService.Key())
|
||||
t.Fatalf("Store should've had key '%s', but didn't", testEndpoint.Key())
|
||||
}
|
||||
if len(ss.Events) != 3 {
|
||||
t.Errorf("Service '%s' should've had 3 events, got %d", ss.Name, len(ss.Events))
|
||||
t.Errorf("Endpoint '%s' should've had 3 events, got %d", ss.Name, len(ss.Events))
|
||||
}
|
||||
if len(ss.Results) != 2 {
|
||||
t.Errorf("Service '%s' should've had 2 results, got %d", ss.Name, len(ss.Results))
|
||||
t.Errorf("Endpoint '%s' should've had 2 results, got %d", ss.Name, len(ss.Results))
|
||||
}
|
||||
if deleted := store.DeleteAllServiceStatusesNotInKeys([]string{"invalid-key-which-means-everything-should-get-deleted"}); deleted != 1 {
|
||||
if deleted := store.DeleteAllEndpointStatusesNotInKeys([]string{"invalid-key-which-means-everything-should-get-deleted"}); deleted != 1 {
|
||||
t.Errorf("%d entries should've been deleted, got %d", 1, deleted)
|
||||
}
|
||||
if deleted := store.DeleteAllServiceStatusesNotInKeys([]string{}); deleted != 0 {
|
||||
if deleted := store.DeleteAllEndpointStatusesNotInKeys([]string{}); deleted != 0 {
|
||||
t.Errorf("There should've been no entries left to delete, got %d", deleted)
|
||||
}
|
||||
}
|
||||
|
@ -310,55 +310,55 @@ func TestStore_InvalidTransaction(t *testing.T) {
|
|||
defer store.Close()
|
||||
tx, _ := store.db.Begin()
|
||||
tx.Commit()
|
||||
if _, err := store.insertService(tx, &testService); err == nil {
|
||||
if _, err := store.insertEndpoint(tx, &testEndpoint); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if err := store.insertEvent(tx, 1, core.NewEventFromResult(&testSuccessfulResult)); err == nil {
|
||||
if err := store.insertEndpointEvent(tx, 1, core.NewEventFromResult(&testSuccessfulResult)); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if err := store.insertResult(tx, 1, &testSuccessfulResult); err == nil {
|
||||
if err := store.insertEndpointResult(tx, 1, &testSuccessfulResult); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if err := store.insertConditionResults(tx, 1, testSuccessfulResult.ConditionResults); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if err := store.updateServiceUptime(tx, 1, &testSuccessfulResult); err == nil {
|
||||
if err := store.updateEndpointUptime(tx, 1, &testSuccessfulResult); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if _, err := store.getAllServiceKeys(tx); err == nil {
|
||||
if _, err := store.getAllEndpointKeys(tx); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if _, err := store.getServiceStatusByKey(tx, testService.Key(), paging.NewServiceStatusParams().WithResults(1, 20)); err == nil {
|
||||
if _, err := store.getEndpointStatusByKey(tx, testEndpoint.Key(), paging.NewEndpointStatusParams().WithResults(1, 20)); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if _, err := store.getEventsByServiceID(tx, 1, 1, 50); err == nil {
|
||||
if _, err := store.getEndpointEventsByEndpointID(tx, 1, 1, 50); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if _, err := store.getResultsByServiceID(tx, 1, 1, 50); err == nil {
|
||||
if _, err := store.getEndpointResultsByEndpointID(tx, 1, 1, 50); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if err := store.deleteOldServiceEvents(tx, 1); err == nil {
|
||||
if err := store.deleteOldEndpointEvents(tx, 1); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if err := store.deleteOldServiceResults(tx, 1); err == nil {
|
||||
if err := store.deleteOldEndpointResults(tx, 1); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if _, _, err := store.getServiceUptime(tx, 1, time.Now(), time.Now()); err == nil {
|
||||
if _, _, err := store.getEndpointUptime(tx, 1, time.Now(), time.Now()); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if _, err := store.getServiceID(tx, &testService); err == nil {
|
||||
if _, err := store.getEndpointID(tx, &testEndpoint); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if _, err := store.getNumberOfEventsByServiceID(tx, 1); err == nil {
|
||||
if _, err := store.getNumberOfEventsByEndpointID(tx, 1); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if _, err := store.getNumberOfResultsByServiceID(tx, 1); err == nil {
|
||||
if _, err := store.getNumberOfResultsByEndpointID(tx, 1); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if _, err := store.getAgeOfOldestServiceUptimeEntry(tx, 1); err == nil {
|
||||
if _, err := store.getAgeOfOldestEndpointUptimeEntry(tx, 1); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
if _, err := store.getLastServiceResultSuccessValue(tx, 1); err == nil {
|
||||
if _, err := store.getLastEndpointResultSuccessValue(tx, 1); err == nil {
|
||||
t.Error("should've returned an error, because the transaction was already committed")
|
||||
}
|
||||
}
|
||||
|
@ -368,10 +368,10 @@ func TestStore_NoRows(t *testing.T) {
|
|||
defer store.Close()
|
||||
tx, _ := store.db.Begin()
|
||||
defer tx.Rollback()
|
||||
if _, err := store.getLastServiceResultSuccessValue(tx, 1); err != errNoRowsReturned {
|
||||
if _, err := store.getLastEndpointResultSuccessValue(tx, 1); err != errNoRowsReturned {
|
||||
t.Errorf("should've %v, got %v", errNoRowsReturned, err)
|
||||
}
|
||||
if _, err := store.getAgeOfOldestServiceUptimeEntry(tx, 1); err != errNoRowsReturned {
|
||||
if _, err := store.getAgeOfOldestEndpointUptimeEntry(tx, 1); err != errNoRowsReturned {
|
||||
t.Errorf("should've %v, got %v", errNoRowsReturned, err)
|
||||
}
|
||||
}
|
||||
|
@ -380,33 +380,33 @@ func TestStore_NoRows(t *testing.T) {
|
|||
func TestStore_BrokenSchema(t *testing.T) {
|
||||
store, _ := NewStore("sqlite", t.TempDir()+"/TestStore_BrokenSchema.db")
|
||||
defer store.Close()
|
||||
if err := store.Insert(&testService, &testSuccessfulResult); err != nil {
|
||||
if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
|
||||
t.Fatal("expected no error, got", err.Error())
|
||||
}
|
||||
if _, err := store.GetAverageResponseTimeByKey(testService.Key(), time.Now().Add(-time.Hour), time.Now()); err != nil {
|
||||
if _, err := store.GetAverageResponseTimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); err != nil {
|
||||
t.Fatal("expected no error, got", err.Error())
|
||||
}
|
||||
if _, err := store.GetAllServiceStatuses(paging.NewServiceStatusParams()); err != nil {
|
||||
if _, err := store.GetAllEndpointStatuses(paging.NewEndpointStatusParams()); err != nil {
|
||||
t.Fatal("expected no error, got", err.Error())
|
||||
}
|
||||
// Break
|
||||
_, _ = store.db.Exec("DROP TABLE service")
|
||||
if err := store.Insert(&testService, &testSuccessfulResult); err == nil {
|
||||
_, _ = store.db.Exec("DROP TABLE endpoints")
|
||||
if err := store.Insert(&testEndpoint, &testSuccessfulResult); err == nil {
|
||||
t.Fatal("expected an error")
|
||||
}
|
||||
if _, err := store.GetAverageResponseTimeByKey(testService.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
|
||||
if _, err := store.GetAverageResponseTimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
|
||||
t.Fatal("expected an error")
|
||||
}
|
||||
if _, err := store.GetHourlyAverageResponseTimeByKey(testService.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
|
||||
if _, err := store.GetHourlyAverageResponseTimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
|
||||
t.Fatal("expected an error")
|
||||
}
|
||||
if _, err := store.GetAllServiceStatuses(paging.NewServiceStatusParams()); err == nil {
|
||||
if _, err := store.GetAllEndpointStatuses(paging.NewEndpointStatusParams()); err == nil {
|
||||
t.Fatal("expected an error")
|
||||
}
|
||||
if _, err := store.GetUptimeByKey(testService.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
|
||||
if _, err := store.GetUptimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
|
||||
t.Fatal("expected an error")
|
||||
}
|
||||
if _, err := store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams()); err == nil {
|
||||
if _, err := store.GetEndpointStatusByKey(testEndpoint.Key(), paging.NewEndpointStatusParams()); err == nil {
|
||||
t.Fatal("expected an error")
|
||||
}
|
||||
// Repair
|
||||
|
@ -414,15 +414,15 @@ func TestStore_BrokenSchema(t *testing.T) {
|
|||
t.Fatal("schema should've been repaired")
|
||||
}
|
||||
store.Clear()
|
||||
if err := store.Insert(&testService, &testSuccessfulResult); err != nil {
|
||||
if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
|
||||
t.Fatal("expected no error, got", err.Error())
|
||||
}
|
||||
// Break
|
||||
_, _ = store.db.Exec("DROP TABLE service_event")
|
||||
if err := store.Insert(&testService, &testSuccessfulResult); err != nil {
|
||||
_, _ = store.db.Exec("DROP TABLE endpoint_events")
|
||||
if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
|
||||
t.Fatal("expected no error, because this should silently fail, got", err.Error())
|
||||
}
|
||||
if _, err := store.GetAllServiceStatuses(paging.NewServiceStatusParams().WithResults(1, 1).WithEvents(1, 1)); err != nil {
|
||||
if _, err := store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(1, 1).WithEvents(1, 1)); err != nil {
|
||||
t.Fatal("expected no error, because this should silently fail, got", err.Error())
|
||||
}
|
||||
// Repair
|
||||
|
@ -430,15 +430,15 @@ func TestStore_BrokenSchema(t *testing.T) {
|
|||
t.Fatal("schema should've been repaired")
|
||||
}
|
||||
store.Clear()
|
||||
if err := store.Insert(&testService, &testSuccessfulResult); err != nil {
|
||||
if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
|
||||
t.Fatal("expected no error, got", err.Error())
|
||||
}
|
||||
// Break
|
||||
_, _ = store.db.Exec("DROP TABLE service_result")
|
||||
if err := store.Insert(&testService, &testSuccessfulResult); err == nil {
|
||||
_, _ = store.db.Exec("DROP TABLE endpoint_results")
|
||||
if err := store.Insert(&testEndpoint, &testSuccessfulResult); err == nil {
|
||||
t.Fatal("expected an error")
|
||||
}
|
||||
if _, err := store.GetAllServiceStatuses(paging.NewServiceStatusParams().WithResults(1, 1).WithEvents(1, 1)); err != nil {
|
||||
if _, err := store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(1, 1).WithEvents(1, 1)); err != nil {
|
||||
t.Fatal("expected no error, because this should silently fail, got", err.Error())
|
||||
}
|
||||
// Repair
|
||||
|
@ -446,12 +446,12 @@ func TestStore_BrokenSchema(t *testing.T) {
|
|||
t.Fatal("schema should've been repaired")
|
||||
}
|
||||
store.Clear()
|
||||
if err := store.Insert(&testService, &testSuccessfulResult); err != nil {
|
||||
if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
|
||||
t.Fatal("expected no error, got", err.Error())
|
||||
}
|
||||
// Break
|
||||
_, _ = store.db.Exec("DROP TABLE service_result_condition")
|
||||
if err := store.Insert(&testService, &testSuccessfulResult); err == nil {
|
||||
_, _ = store.db.Exec("DROP TABLE endpoint_result_conditions")
|
||||
if err := store.Insert(&testEndpoint, &testSuccessfulResult); err == nil {
|
||||
t.Fatal("expected an error")
|
||||
}
|
||||
// Repair
|
||||
|
@ -459,21 +459,21 @@ func TestStore_BrokenSchema(t *testing.T) {
|
|||
t.Fatal("schema should've been repaired")
|
||||
}
|
||||
store.Clear()
|
||||
if err := store.Insert(&testService, &testSuccessfulResult); err != nil {
|
||||
if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
|
||||
t.Fatal("expected no error, got", err.Error())
|
||||
}
|
||||
// Break
|
||||
_, _ = store.db.Exec("DROP TABLE service_uptime")
|
||||
if err := store.Insert(&testService, &testSuccessfulResult); err != nil {
|
||||
_, _ = store.db.Exec("DROP TABLE endpoint_uptimes")
|
||||
if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
|
||||
t.Fatal("expected no error, because this should silently fail, got", err.Error())
|
||||
}
|
||||
if _, err := store.GetAverageResponseTimeByKey(testService.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
|
||||
if _, err := store.GetAverageResponseTimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
|
||||
t.Fatal("expected an error")
|
||||
}
|
||||
if _, err := store.GetHourlyAverageResponseTimeByKey(testService.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
|
||||
if _, err := store.GetHourlyAverageResponseTimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
|
||||
t.Fatal("expected an error")
|
||||
}
|
||||
if _, err := store.GetUptimeByKey(testService.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
|
||||
if _, err := store.GetUptimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
|
||||
t.Fatal("expected an error")
|
||||
}
|
||||
}
|
||||
|
|
|
@ -11,15 +11,15 @@ import (

// Store is the interface that each stores should implement
type Store interface {
// GetAllServiceStatuses returns the JSON encoding of all monitored core.ServiceStatus
// GetAllEndpointStatuses returns the JSON encoding of all monitored core.EndpointStatus
// with a subset of core.Result defined by the page and pageSize parameters
GetAllServiceStatuses(params *paging.ServiceStatusParams) ([]*core.ServiceStatus, error)
GetAllEndpointStatuses(params *paging.EndpointStatusParams) ([]*core.EndpointStatus, error)

// GetServiceStatus returns the service status for a given service name in the given group
GetServiceStatus(groupName, serviceName string, params *paging.ServiceStatusParams) (*core.ServiceStatus, error)
// GetEndpointStatus returns the endpoint status for a given endpoint name in the given group
GetEndpointStatus(groupName, endpointName string, params *paging.EndpointStatusParams) (*core.EndpointStatus, error)

// GetServiceStatusByKey returns the service status for a given key
GetServiceStatusByKey(key string, params *paging.ServiceStatusParams) (*core.ServiceStatus, error)
// GetEndpointStatusByKey returns the endpoint status for a given key
GetEndpointStatusByKey(key string, params *paging.EndpointStatusParams) (*core.EndpointStatus, error)

// GetUptimeByKey returns the uptime percentage during a time range
GetUptimeByKey(key string, from, to time.Time) (float64, error)

@ -30,13 +30,13 @@ type Store interface {
// GetHourlyAverageResponseTimeByKey returns a map of hourly (key) average response time in milliseconds (value) during a time range
GetHourlyAverageResponseTimeByKey(key string, from, to time.Time) (map[int64]int, error)

// Insert adds the observed result for the specified service into the store
Insert(service *core.Service, result *core.Result) error
// Insert adds the observed result for the specified endpoint into the store
Insert(endpoint *core.Endpoint, result *core.Result) error

// DeleteAllServiceStatusesNotInKeys removes all ServiceStatus that are not within the keys provided
// DeleteAllEndpointStatusesNotInKeys removes all EndpointStatus that are not within the keys provided
//
// Used to delete services that have been persisted but are no longer part of the configured services
DeleteAllServiceStatusesNotInKeys(keys []string) int
// Used to delete endpoints that have been persisted but are no longer part of the configured endpoints
DeleteAllEndpointStatusesNotInKeys(keys []string) int

// Clear deletes everything from the store
Clear()
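For orientation, this is roughly how a caller exercises the renamed interface. Every identifier below appears elsewhere in this commit, but the snippet itself is only an illustration, and the core and paging import paths are assumed to follow the sql store import used in the benchmark file below:

package main

import (
	"log"
	"time"

	"github.com/TwiN/gatus/v3/core"
	"github.com/TwiN/gatus/v3/storage/store/paging"
	"github.com/TwiN/gatus/v3/storage/store/sql"
)

func main() {
	// Open a sqlite-backed store, record one result for an endpoint, then read its status back
	store, err := sql.NewStore("sqlite", "/tmp/example.db")
	if err != nil {
		log.Fatal(err)
	}
	defer store.Close()
	endpoint := &core.Endpoint{Name: "name", Group: "group", URL: "https://example.org/what/ever"}
	result := &core.Result{Success: true, Timestamp: time.Now()}
	if err := store.Insert(endpoint, result); err != nil {
		log.Fatal(err)
	}
	status, err := store.GetEndpointStatusByKey(endpoint.Key(), paging.NewEndpointStatusParams().WithResults(1, 20).WithEvents(1, 20))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("endpoint=%s group=%s results=%d", status.Name, status.Group, len(status.Results))
}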
@ -10,12 +10,12 @@ import (
|
|||
"github.com/TwiN/gatus/v3/storage/store/sql"
|
||||
)
|
||||
|
||||
func BenchmarkStore_GetAllServiceStatuses(b *testing.B) {
|
||||
func BenchmarkStore_GetAllEndpointStatuses(b *testing.B) {
|
||||
memoryStore, err := memory.NewStore("")
|
||||
if err != nil {
|
||||
b.Fatal("failed to create store:", err.Error())
|
||||
}
|
||||
sqliteStore, err := sql.NewStore("sqlite", b.TempDir()+"/BenchmarkStore_GetAllServiceStatuses.db")
|
||||
sqliteStore, err := sql.NewStore("sqlite", b.TempDir()+"/BenchmarkStore_GetAllEndpointStatuses.db")
|
||||
if err != nil {
|
||||
b.Fatal("failed to create store:", err.Error())
|
||||
}
|
||||
|
@ -48,18 +48,18 @@ func BenchmarkStore_GetAllServiceStatuses(b *testing.B) {
|
|||
},
|
||||
}
|
||||
for _, scenario := range scenarios {
|
||||
scenario.Store.Insert(&testService, &testSuccessfulResult)
|
||||
scenario.Store.Insert(&testService, &testUnsuccessfulResult)
|
||||
scenario.Store.Insert(&testEndpoint, &testSuccessfulResult)
|
||||
scenario.Store.Insert(&testEndpoint, &testUnsuccessfulResult)
|
||||
b.Run(scenario.Name, func(b *testing.B) {
|
||||
if scenario.Parallel {
|
||||
b.RunParallel(func(pb *testing.PB) {
|
||||
for pb.Next() {
|
||||
scenario.Store.GetAllServiceStatuses(paging.NewServiceStatusParams().WithResults(1, 20))
|
||||
scenario.Store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(1, 20))
|
||||
}
|
||||
})
|
||||
} else {
|
||||
for n := 0; n < b.N; n++ {
|
||||
scenario.Store.GetAllServiceStatuses(paging.NewServiceStatusParams().WithResults(1, 20))
|
||||
scenario.Store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(1, 20))
|
||||
}
|
||||
}
|
||||
b.ReportAllocs()
|
||||
|
@ -118,7 +118,7 @@ func BenchmarkStore_Insert(b *testing.B) {
|
|||
result = testSuccessfulResult
|
||||
}
|
||||
result.Timestamp = time.Now()
|
||||
scenario.Store.Insert(&testService, &result)
|
||||
scenario.Store.Insert(&testEndpoint, &result)
|
||||
n++
|
||||
}
|
||||
})
|
||||
|
@ -131,7 +131,7 @@ func BenchmarkStore_Insert(b *testing.B) {
|
|||
result = testSuccessfulResult
|
||||
}
|
||||
result.Timestamp = time.Now()
|
||||
scenario.Store.Insert(&testService, &result)
|
||||
scenario.Store.Insert(&testEndpoint, &result)
|
||||
}
|
||||
}
|
||||
b.ReportAllocs()
|
||||
|
@ -140,12 +140,12 @@ func BenchmarkStore_Insert(b *testing.B) {
|
|||
}
|
||||
}
|
||||
|
||||
func BenchmarkStore_GetServiceStatusByKey(b *testing.B) {
|
||||
func BenchmarkStore_GetEndpointStatusByKey(b *testing.B) {
|
||||
memoryStore, err := memory.NewStore("")
|
||||
if err != nil {
|
||||
b.Fatal("failed to create store:", err.Error())
|
||||
}
|
||||
sqliteStore, err := sql.NewStore("sqlite", b.TempDir()+"/BenchmarkStore_GetServiceStatusByKey.db")
|
||||
sqliteStore, err := sql.NewStore("sqlite", b.TempDir()+"/BenchmarkStore_GetEndpointStatusByKey.db")
|
||||
if err != nil {
|
||||
b.Fatal("failed to create store:", err.Error())
|
||||
}
|
||||
|
@ -179,19 +179,19 @@ func BenchmarkStore_GetServiceStatusByKey(b *testing.B) {
|
|||
}
|
||||
for _, scenario := range scenarios {
|
||||
for i := 0; i < 50; i++ {
|
||||
scenario.Store.Insert(&testService, &testSuccessfulResult)
|
||||
scenario.Store.Insert(&testService, &testUnsuccessfulResult)
|
||||
scenario.Store.Insert(&testEndpoint, &testSuccessfulResult)
|
||||
scenario.Store.Insert(&testEndpoint, &testUnsuccessfulResult)
|
||||
}
|
||||
b.Run(scenario.Name, func(b *testing.B) {
|
||||
if scenario.Parallel {
|
||||
b.RunParallel(func(pb *testing.PB) {
|
||||
for pb.Next() {
|
||||
scenario.Store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithResults(1, 20))
|
||||
scenario.Store.GetEndpointStatusByKey(testEndpoint.Key(), paging.NewEndpointStatusParams().WithResults(1, 20))
|
||||
}
|
||||
})
|
||||
} else {
|
||||
for n := 0; n < b.N; n++ {
|
||||
scenario.Store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithResults(1, 20))
|
||||
scenario.Store.GetEndpointStatusByKey(testEndpoint.Key(), paging.NewEndpointStatusParams().WithResults(1, 20))
|
||||
}
|
||||
}
|
||||
b.ReportAllocs()
|
||||
|
|
|
@ -18,7 +18,7 @@ var (
|
|||
|
||||
now = time.Now().Truncate(time.Hour)
|
||||
|
||||
testService = core.Service{
|
||||
testEndpoint = core.Endpoint{
|
||||
Name: "name",
|
||||
Group: "group",
|
||||
URL: "https://example.org/what/ever",
|
||||
|
@ -114,8 +114,8 @@ func cleanUp(scenarios []*Scenario) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestStore_GetServiceStatusByKey(t *testing.T) {
|
||||
scenarios := initStoresAndBaseScenarios(t, "TestStore_GetServiceStatusByKey")
|
||||
func TestStore_GetEndpointStatusByKey(t *testing.T) {
|
||||
scenarios := initStoresAndBaseScenarios(t, "TestStore_GetEndpointStatusByKey")
|
||||
defer cleanUp(scenarios)
|
||||
firstResult := testSuccessfulResult
|
||||
firstResult.Timestamp = now.Add(-time.Minute)
|
||||
|
@@ -123,25 +123,25 @@ func TestStore_GetServiceStatusByKey(t *testing.T) {
|
|||
secondResult.Timestamp = now
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
scenario.Store.Insert(&testService, &firstResult)
|
||||
scenario.Store.Insert(&testService, &secondResult)
|
||||
serviceStatus, err := scenario.Store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithEvents(1, common.MaximumNumberOfEvents).WithResults(1, common.MaximumNumberOfResults))
|
||||
scenario.Store.Insert(&testEndpoint, &firstResult)
|
||||
scenario.Store.Insert(&testEndpoint, &secondResult)
|
||||
endpointStatus, err := scenario.Store.GetEndpointStatusByKey(testEndpoint.Key(), paging.NewEndpointStatusParams().WithEvents(1, common.MaximumNumberOfEvents).WithResults(1, common.MaximumNumberOfResults))
|
||||
if err != nil {
|
||||
t.Fatal("shouldn't have returned an error, got", err.Error())
|
||||
}
|
||||
if serviceStatus == nil {
|
||||
t.Fatalf("serviceStatus shouldn't have been nil")
|
||||
if endpointStatus == nil {
|
||||
t.Fatalf("endpointStatus shouldn't have been nil")
|
||||
}
|
||||
if serviceStatus.Name != testService.Name {
|
||||
t.Fatalf("serviceStatus.Name should've been %s, got %s", testService.Name, serviceStatus.Name)
|
||||
if endpointStatus.Name != testEndpoint.Name {
|
||||
t.Fatalf("endpointStatus.Name should've been %s, got %s", testEndpoint.Name, endpointStatus.Name)
|
||||
}
|
||||
if serviceStatus.Group != testService.Group {
|
||||
t.Fatalf("serviceStatus.Group should've been %s, got %s", testService.Group, serviceStatus.Group)
|
||||
if endpointStatus.Group != testEndpoint.Group {
|
||||
t.Fatalf("endpointStatus.Group should've been %s, got %s", testEndpoint.Group, endpointStatus.Group)
|
||||
}
|
||||
if len(serviceStatus.Results) != 2 {
|
||||
t.Fatalf("serviceStatus.Results should've had 2 entries")
|
||||
if len(endpointStatus.Results) != 2 {
|
||||
t.Fatalf("endpointStatus.Results should've had 2 entries")
|
||||
}
|
||||
if serviceStatus.Results[0].Timestamp.After(serviceStatus.Results[1].Timestamp) {
|
||||
if endpointStatus.Results[0].Timestamp.After(endpointStatus.Results[1].Timestamp) {
|
||||
t.Error("The result at index 0 should've been older than the result at index 1")
|
||||
}
|
||||
scenario.Store.Clear()
|
||||
|
@@ -149,57 +149,57 @@ func TestStore_GetServiceStatusByKey(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestStore_GetServiceStatusForMissingStatusReturnsNil(t *testing.T) {
|
||||
scenarios := initStoresAndBaseScenarios(t, "TestStore_GetServiceStatusForMissingStatusReturnsNil")
|
||||
func TestStore_GetEndpointStatusForMissingStatusReturnsNil(t *testing.T) {
|
||||
scenarios := initStoresAndBaseScenarios(t, "TestStore_GetEndpointStatusForMissingStatusReturnsNil")
|
||||
defer cleanUp(scenarios)
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
scenario.Store.Insert(&testService, &testSuccessfulResult)
|
||||
serviceStatus, err := scenario.Store.GetServiceStatus("nonexistantgroup", "nonexistantname", paging.NewServiceStatusParams().WithEvents(1, common.MaximumNumberOfEvents).WithResults(1, common.MaximumNumberOfResults))
|
||||
if err != common.ErrServiceNotFound {
|
||||
t.Error("should've returned ErrServiceNotFound, got", err)
|
||||
scenario.Store.Insert(&testEndpoint, &testSuccessfulResult)
|
||||
endpointStatus, err := scenario.Store.GetEndpointStatus("nonexistantgroup", "nonexistantname", paging.NewEndpointStatusParams().WithEvents(1, common.MaximumNumberOfEvents).WithResults(1, common.MaximumNumberOfResults))
|
||||
if err != common.ErrEndpointNotFound {
|
||||
t.Error("should've returned ErrEndpointNotFound, got", err)
|
||||
}
|
||||
if serviceStatus != nil {
|
||||
t.Errorf("Returned service status for group '%s' and name '%s' not nil after inserting the service into the store", testService.Group, testService.Name)
|
||||
if endpointStatus != nil {
|
||||
t.Errorf("Returned endpoint status for group '%s' and name '%s' not nil after inserting the endpoint into the store", testEndpoint.Group, testEndpoint.Name)
|
||||
}
|
||||
serviceStatus, err = scenario.Store.GetServiceStatus(testService.Group, "nonexistantname", paging.NewServiceStatusParams().WithEvents(1, common.MaximumNumberOfEvents).WithResults(1, common.MaximumNumberOfResults))
|
||||
if err != common.ErrServiceNotFound {
|
||||
t.Error("should've returned ErrServiceNotFound, got", err)
|
||||
endpointStatus, err = scenario.Store.GetEndpointStatus(testEndpoint.Group, "nonexistantname", paging.NewEndpointStatusParams().WithEvents(1, common.MaximumNumberOfEvents).WithResults(1, common.MaximumNumberOfResults))
|
||||
if err != common.ErrEndpointNotFound {
|
||||
t.Error("should've returned ErrEndpointNotFound, got", err)
|
||||
}
|
||||
if serviceStatus != nil {
|
||||
t.Errorf("Returned service status for group '%s' and name '%s' not nil after inserting the service into the store", testService.Group, "nonexistantname")
|
||||
if endpointStatus != nil {
|
||||
t.Errorf("Returned endpoint status for group '%s' and name '%s' not nil after inserting the endpoint into the store", testEndpoint.Group, "nonexistantname")
|
||||
}
|
||||
serviceStatus, err = scenario.Store.GetServiceStatus("nonexistantgroup", testService.Name, paging.NewServiceStatusParams().WithEvents(1, common.MaximumNumberOfEvents).WithResults(1, common.MaximumNumberOfResults))
|
||||
if err != common.ErrServiceNotFound {
|
||||
t.Error("should've returned ErrServiceNotFound, got", err)
|
||||
endpointStatus, err = scenario.Store.GetEndpointStatus("nonexistantgroup", testEndpoint.Name, paging.NewEndpointStatusParams().WithEvents(1, common.MaximumNumberOfEvents).WithResults(1, common.MaximumNumberOfResults))
|
||||
if err != common.ErrEndpointNotFound {
|
||||
t.Error("should've returned ErrEndpointNotFound, got", err)
|
||||
}
|
||||
if serviceStatus != nil {
|
||||
t.Errorf("Returned service status for group '%s' and name '%s' not nil after inserting the service into the store", "nonexistantgroup", testService.Name)
|
||||
if endpointStatus != nil {
|
||||
t.Errorf("Returned endpoint status for group '%s' and name '%s' not nil after inserting the endpoint into the store", "nonexistantgroup", testEndpoint.Name)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_GetAllServiceStatuses(t *testing.T) {
|
||||
scenarios := initStoresAndBaseScenarios(t, "TestStore_GetAllServiceStatuses")
|
||||
func TestStore_GetAllEndpointStatuses(t *testing.T) {
|
||||
scenarios := initStoresAndBaseScenarios(t, "TestStore_GetAllEndpointStatuses")
|
||||
defer cleanUp(scenarios)
|
||||
firstResult := testSuccessfulResult
|
||||
secondResult := testUnsuccessfulResult
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
scenario.Store.Insert(&testService, &firstResult)
|
||||
scenario.Store.Insert(&testService, &secondResult)
|
||||
scenario.Store.Insert(&testEndpoint, &firstResult)
|
||||
scenario.Store.Insert(&testEndpoint, &secondResult)
|
||||
// Can't be bothered dealing with timezone issues on the worker that runs the automated tests
|
||||
serviceStatuses, err := scenario.Store.GetAllServiceStatuses(paging.NewServiceStatusParams().WithResults(1, 20))
|
||||
endpointStatuses, err := scenario.Store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(1, 20))
|
||||
if err != nil {
|
||||
t.Error("shouldn't have returned an error, got", err.Error())
|
||||
}
|
||||
if len(serviceStatuses) != 1 {
|
||||
t.Fatal("expected 1 service status")
|
||||
if len(endpointStatuses) != 1 {
|
||||
t.Fatal("expected 1 endpoint status")
|
||||
}
|
||||
actual := serviceStatuses[0]
|
||||
actual := endpointStatuses[0]
|
||||
if actual == nil {
|
||||
t.Fatal("expected service status to exist")
|
||||
t.Fatal("expected endpoint status to exist")
|
||||
}
|
||||
if len(actual.Results) != 2 {
|
||||
t.Error("expected 2 results, got", len(actual.Results))
|
||||
|
@@ -212,26 +212,26 @@ func TestStore_GetAllServiceStatuses(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestStore_GetAllServiceStatusesWithResultsAndEvents(t *testing.T) {
|
||||
scenarios := initStoresAndBaseScenarios(t, "TestStore_GetAllServiceStatusesWithResultsAndEvents")
|
||||
func TestStore_GetAllEndpointStatusesWithResultsAndEvents(t *testing.T) {
|
||||
scenarios := initStoresAndBaseScenarios(t, "TestStore_GetAllEndpointStatusesWithResultsAndEvents")
|
||||
defer cleanUp(scenarios)
|
||||
firstResult := testSuccessfulResult
|
||||
secondResult := testUnsuccessfulResult
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
scenario.Store.Insert(&testService, &firstResult)
|
||||
scenario.Store.Insert(&testService, &secondResult)
|
||||
scenario.Store.Insert(&testEndpoint, &firstResult)
|
||||
scenario.Store.Insert(&testEndpoint, &secondResult)
|
||||
// Can't be bothered dealing with timezone issues on the worker that runs the automated tests
|
||||
serviceStatuses, err := scenario.Store.GetAllServiceStatuses(paging.NewServiceStatusParams().WithResults(1, 20).WithEvents(1, 50))
|
||||
endpointStatuses, err := scenario.Store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(1, 20).WithEvents(1, 50))
|
||||
if err != nil {
|
||||
t.Error("shouldn't have returned an error, got", err.Error())
|
||||
}
|
||||
if len(serviceStatuses) != 1 {
|
||||
t.Fatal("expected 1 service status")
|
||||
if len(endpointStatuses) != 1 {
|
||||
t.Fatal("expected 1 endpoint status")
|
||||
}
|
||||
actual := serviceStatuses[0]
|
||||
actual := endpointStatuses[0]
|
||||
if actual == nil {
|
||||
t.Fatal("expected service status to exist")
|
||||
t.Fatal("expected endpoint status to exist")
|
||||
}
|
||||
if len(actual.Results) != 2 {
|
||||
t.Error("expected 2 results, got", len(actual.Results))
|
||||
|
@@ -244,8 +244,8 @@ func TestStore_GetAllServiceStatusesWithResultsAndEvents(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestStore_GetServiceStatusPage1IsHasMoreRecentResultsThanPage2(t *testing.T) {
|
||||
scenarios := initStoresAndBaseScenarios(t, "TestStore_GetServiceStatusPage1IsHasMoreRecentResultsThanPage2")
|
||||
func TestStore_GetEndpointStatusPage1IsHasMoreRecentResultsThanPage2(t *testing.T) {
|
||||
scenarios := initStoresAndBaseScenarios(t, "TestStore_GetEndpointStatusPage1IsHasMoreRecentResultsThanPage2")
|
||||
defer cleanUp(scenarios)
|
||||
firstResult := testSuccessfulResult
|
||||
firstResult.Timestamp = now.Add(-time.Minute)
|
||||
|
@@ -253,30 +253,30 @@ func TestStore_GetServiceStatusPage1IsHasMoreRecentResultsThanPage2(t *testing.T
|
|||
secondResult.Timestamp = now
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
scenario.Store.Insert(&testService, &firstResult)
|
||||
scenario.Store.Insert(&testService, &secondResult)
|
||||
serviceStatusPage1, err := scenario.Store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithResults(1, 1))
|
||||
scenario.Store.Insert(&testEndpoint, &firstResult)
|
||||
scenario.Store.Insert(&testEndpoint, &secondResult)
|
||||
endpointStatusPage1, err := scenario.Store.GetEndpointStatusByKey(testEndpoint.Key(), paging.NewEndpointStatusParams().WithResults(1, 1))
|
||||
if err != nil {
|
||||
t.Error("shouldn't have returned an error, got", err.Error())
|
||||
}
|
||||
if serviceStatusPage1 == nil {
|
||||
t.Fatalf("serviceStatusPage1 shouldn't have been nil")
|
||||
if endpointStatusPage1 == nil {
|
||||
t.Fatalf("endpointStatusPage1 shouldn't have been nil")
|
||||
}
|
||||
if len(serviceStatusPage1.Results) != 1 {
|
||||
t.Fatalf("serviceStatusPage1 should've had 1 result")
|
||||
if len(endpointStatusPage1.Results) != 1 {
|
||||
t.Fatalf("endpointStatusPage1 should've had 1 result")
|
||||
}
|
||||
serviceStatusPage2, err := scenario.Store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithResults(2, 1))
|
||||
endpointStatusPage2, err := scenario.Store.GetEndpointStatusByKey(testEndpoint.Key(), paging.NewEndpointStatusParams().WithResults(2, 1))
|
||||
if err != nil {
|
||||
t.Error("shouldn't have returned an error, got", err.Error())
|
||||
}
|
||||
if serviceStatusPage2 == nil {
|
||||
t.Fatalf("serviceStatusPage2 shouldn't have been nil")
|
||||
if endpointStatusPage2 == nil {
|
||||
t.Fatalf("endpointStatusPage2 shouldn't have been nil")
|
||||
}
|
||||
if len(serviceStatusPage2.Results) != 1 {
|
||||
t.Fatalf("serviceStatusPage2 should've had 1 result")
|
||||
if len(endpointStatusPage2.Results) != 1 {
|
||||
t.Fatalf("endpointStatusPage2 should've had 1 result")
|
||||
}
|
||||
// Compare the timestamp of both pages
|
||||
if !serviceStatusPage1.Results[0].Timestamp.After(serviceStatusPage2.Results[0].Timestamp) {
|
||||
if !endpointStatusPage1.Results[0].Timestamp.After(endpointStatusPage2.Results[0].Timestamp) {
|
||||
t.Errorf("The result from the first page should've been more recent than the results from the second page")
|
||||
}
|
||||
scenario.Store.Clear()
|
||||
|
@@ -293,21 +293,21 @@ func TestStore_GetUptimeByKey(t *testing.T) {
|
|||
secondResult.Timestamp = now
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
if _, err := scenario.Store.GetUptimeByKey(testService.Key(), time.Now().Add(-time.Hour), time.Now()); err != common.ErrServiceNotFound {
|
||||
if _, err := scenario.Store.GetUptimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); err != common.ErrEndpointNotFound {
|
||||
t.Errorf("should've returned not found because there's nothing yet, got %v", err)
|
||||
}
|
||||
scenario.Store.Insert(&testService, &firstResult)
|
||||
scenario.Store.Insert(&testService, &secondResult)
|
||||
if uptime, _ := scenario.Store.GetUptimeByKey(testService.Key(), now.Add(-time.Hour), time.Now()); uptime != 0.5 {
|
||||
scenario.Store.Insert(&testEndpoint, &firstResult)
|
||||
scenario.Store.Insert(&testEndpoint, &secondResult)
|
||||
if uptime, _ := scenario.Store.GetUptimeByKey(testEndpoint.Key(), now.Add(-time.Hour), time.Now()); uptime != 0.5 {
|
||||
t.Errorf("the uptime over the past 1h should've been 0.5, got %f", uptime)
|
||||
}
|
||||
if uptime, _ := scenario.Store.GetUptimeByKey(testService.Key(), now.Add(-time.Hour*24), time.Now()); uptime != 0.5 {
|
||||
if uptime, _ := scenario.Store.GetUptimeByKey(testEndpoint.Key(), now.Add(-time.Hour*24), time.Now()); uptime != 0.5 {
|
||||
t.Errorf("the uptime over the past 24h should've been 0.5, got %f", uptime)
|
||||
}
|
||||
if uptime, _ := scenario.Store.GetUptimeByKey(testService.Key(), now.Add(-time.Hour*24*7), time.Now()); uptime != 0.5 {
|
||||
if uptime, _ := scenario.Store.GetUptimeByKey(testEndpoint.Key(), now.Add(-time.Hour*24*7), time.Now()); uptime != 0.5 {
|
||||
t.Errorf("the uptime over the past 7d should've been 0.5, got %f", uptime)
|
||||
}
|
||||
if _, err := scenario.Store.GetUptimeByKey(testService.Key(), now, time.Now().Add(-time.Hour)); err == nil {
|
||||
if _, err := scenario.Store.GetUptimeByKey(testEndpoint.Key(), now, time.Now().Add(-time.Hour)); err == nil {
|
||||
t.Error("should've returned an error because the parameter 'from' cannot be older than 'to'")
|
||||
}
|
||||
})
|
||||
|
@@ -331,39 +331,39 @@ func TestStore_GetAverageResponseTimeByKey(t *testing.T) {
|
|||
fourthResult.Timestamp = now
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
scenario.Store.Insert(&testService, &firstResult)
|
||||
scenario.Store.Insert(&testService, &secondResult)
|
||||
scenario.Store.Insert(&testService, &thirdResult)
|
||||
scenario.Store.Insert(&testService, &fourthResult)
|
||||
if averageResponseTime, err := scenario.Store.GetAverageResponseTimeByKey(testService.Key(), now.Add(-48*time.Hour), now.Add(-24*time.Hour)); err == nil {
|
||||
scenario.Store.Insert(&testEndpoint, &firstResult)
|
||||
scenario.Store.Insert(&testEndpoint, &secondResult)
|
||||
scenario.Store.Insert(&testEndpoint, &thirdResult)
|
||||
scenario.Store.Insert(&testEndpoint, &fourthResult)
|
||||
if averageResponseTime, err := scenario.Store.GetAverageResponseTimeByKey(testEndpoint.Key(), now.Add(-48*time.Hour), now.Add(-24*time.Hour)); err == nil {
|
||||
if averageResponseTime != 0 {
|
||||
t.Errorf("expected average response time to be 0ms, got %v", averageResponseTime)
|
||||
}
|
||||
} else {
|
||||
t.Error("shouldn't have returned an error, got", err)
|
||||
}
|
||||
if averageResponseTime, err := scenario.Store.GetAverageResponseTimeByKey(testService.Key(), now.Add(-24*time.Hour), now); err == nil {
|
||||
if averageResponseTime, err := scenario.Store.GetAverageResponseTimeByKey(testEndpoint.Key(), now.Add(-24*time.Hour), now); err == nil {
|
||||
if averageResponseTime != 287 {
|
||||
t.Errorf("expected average response time to be 287ms, got %v", averageResponseTime)
|
||||
}
|
||||
} else {
|
||||
t.Error("shouldn't have returned an error, got", err)
|
||||
}
|
||||
if averageResponseTime, err := scenario.Store.GetAverageResponseTimeByKey(testService.Key(), now.Add(-time.Hour), now); err == nil {
|
||||
if averageResponseTime, err := scenario.Store.GetAverageResponseTimeByKey(testEndpoint.Key(), now.Add(-time.Hour), now); err == nil {
|
||||
if averageResponseTime != 350 {
|
||||
t.Errorf("expected average response time to be 350ms, got %v", averageResponseTime)
|
||||
}
|
||||
} else {
|
||||
t.Error("shouldn't have returned an error, got", err)
|
||||
}
|
||||
if averageResponseTime, err := scenario.Store.GetAverageResponseTimeByKey(testService.Key(), now.Add(-2*time.Hour), now.Add(-time.Hour)); err == nil {
|
||||
if averageResponseTime, err := scenario.Store.GetAverageResponseTimeByKey(testEndpoint.Key(), now.Add(-2*time.Hour), now.Add(-time.Hour)); err == nil {
|
||||
if averageResponseTime != 216 {
|
||||
t.Errorf("expected average response time to be 216ms, got %v", averageResponseTime)
|
||||
}
|
||||
} else {
|
||||
t.Error("shouldn't have returned an error, got", err)
|
||||
}
|
||||
if _, err := scenario.Store.GetAverageResponseTimeByKey(testService.Key(), now, now.Add(-2*time.Hour)); err == nil {
|
||||
if _, err := scenario.Store.GetAverageResponseTimeByKey(testEndpoint.Key(), now, now.Add(-2*time.Hour)); err == nil {
|
||||
t.Error("expected an error because from > to, got nil")
|
||||
}
|
||||
scenario.Store.Clear()
|
||||
|
@@ -388,11 +388,11 @@ func TestStore_GetHourlyAverageResponseTimeByKey(t *testing.T) {
|
|||
fourthResult.Timestamp = now
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
scenario.Store.Insert(&testService, &firstResult)
|
||||
scenario.Store.Insert(&testService, &secondResult)
|
||||
scenario.Store.Insert(&testService, &thirdResult)
|
||||
scenario.Store.Insert(&testService, &fourthResult)
|
||||
hourlyAverageResponseTime, err := scenario.Store.GetHourlyAverageResponseTimeByKey(testService.Key(), now.Add(-24*time.Hour), now)
|
||||
scenario.Store.Insert(&testEndpoint, &firstResult)
|
||||
scenario.Store.Insert(&testEndpoint, &secondResult)
|
||||
scenario.Store.Insert(&testEndpoint, &thirdResult)
|
||||
scenario.Store.Insert(&testEndpoint, &fourthResult)
|
||||
hourlyAverageResponseTime, err := scenario.Store.GetHourlyAverageResponseTimeByKey(testEndpoint.Key(), now.Add(-24*time.Hour), now)
|
||||
if err != nil {
|
||||
t.Error("shouldn't have returned an error, got", err)
|
||||
}
|
||||
|
@@ -419,20 +419,20 @@ func TestStore_Insert(t *testing.T) {
|
|||
secondResult.Timestamp = now
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
scenario.Store.Insert(&testService, &testSuccessfulResult)
|
||||
scenario.Store.Insert(&testService, &testUnsuccessfulResult)
|
||||
ss, err := scenario.Store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithEvents(1, common.MaximumNumberOfEvents).WithResults(1, common.MaximumNumberOfResults))
|
||||
scenario.Store.Insert(&testEndpoint, &testSuccessfulResult)
|
||||
scenario.Store.Insert(&testEndpoint, &testUnsuccessfulResult)
|
||||
ss, err := scenario.Store.GetEndpointStatusByKey(testEndpoint.Key(), paging.NewEndpointStatusParams().WithEvents(1, common.MaximumNumberOfEvents).WithResults(1, common.MaximumNumberOfResults))
|
||||
if err != nil {
|
||||
t.Error("shouldn't have returned an error, got", err)
|
||||
}
|
||||
if ss == nil {
|
||||
t.Fatalf("Store should've had key '%s', but didn't", testService.Key())
|
||||
t.Fatalf("Store should've had key '%s', but didn't", testEndpoint.Key())
|
||||
}
|
||||
if len(ss.Events) != 3 {
|
||||
t.Fatalf("Service '%s' should've had 3 events, got %d", ss.Name, len(ss.Events))
|
||||
t.Fatalf("Endpoint '%s' should've had 3 events, got %d", ss.Name, len(ss.Events))
|
||||
}
|
||||
if len(ss.Results) != 2 {
|
||||
t.Fatalf("Service '%s' should've had 2 results, got %d", ss.Name, len(ss.Results))
|
||||
t.Fatalf("Endpoint '%s' should've had 2 results, got %d", ss.Name, len(ss.Results))
|
||||
}
|
||||
for i, expectedResult := range []core.Result{testSuccessfulResult, testUnsuccessfulResult} {
|
||||
if expectedResult.HTTPStatus != ss.Results[i].HTTPStatus {
|
||||
|
@@ -488,33 +488,33 @@ func TestStore_Insert(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestStore_DeleteAllServiceStatusesNotInKeys(t *testing.T) {
|
||||
scenarios := initStoresAndBaseScenarios(t, "TestStore_DeleteAllServiceStatusesNotInKeys")
|
||||
func TestStore_DeleteAllEndpointStatusesNotInKeys(t *testing.T) {
|
||||
scenarios := initStoresAndBaseScenarios(t, "TestStore_DeleteAllEndpointStatusesNotInKeys")
|
||||
defer cleanUp(scenarios)
|
||||
firstService := core.Service{Name: "service-1", Group: "group"}
|
||||
secondService := core.Service{Name: "service-2", Group: "group"}
|
||||
firstEndpoint := core.Endpoint{Name: "endpoint-1", Group: "group"}
|
||||
secondEndpoint := core.Endpoint{Name: "endpoint-2", Group: "group"}
|
||||
result := &testSuccessfulResult
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
scenario.Store.Insert(&firstService, result)
|
||||
scenario.Store.Insert(&secondService, result)
|
||||
if ss, _ := scenario.Store.GetServiceStatusByKey(firstService.Key(), paging.NewServiceStatusParams()); ss == nil {
|
||||
t.Fatal("firstService should exist")
|
||||
scenario.Store.Insert(&firstEndpoint, result)
|
||||
scenario.Store.Insert(&secondEndpoint, result)
|
||||
if ss, _ := scenario.Store.GetEndpointStatusByKey(firstEndpoint.Key(), paging.NewEndpointStatusParams()); ss == nil {
|
||||
t.Fatal("firstEndpoint should exist")
|
||||
}
|
||||
if ss, _ := scenario.Store.GetServiceStatusByKey(secondService.Key(), paging.NewServiceStatusParams()); ss == nil {
|
||||
t.Fatal("secondService should exist")
|
||||
if ss, _ := scenario.Store.GetEndpointStatusByKey(secondEndpoint.Key(), paging.NewEndpointStatusParams()); ss == nil {
|
||||
t.Fatal("secondEndpoint should exist")
|
||||
}
|
||||
scenario.Store.DeleteAllServiceStatusesNotInKeys([]string{firstService.Key()})
|
||||
if ss, _ := scenario.Store.GetServiceStatusByKey(firstService.Key(), paging.NewServiceStatusParams()); ss == nil {
|
||||
t.Error("secondService should've been deleted")
|
||||
scenario.Store.DeleteAllEndpointStatusesNotInKeys([]string{firstEndpoint.Key()})
|
||||
if ss, _ := scenario.Store.GetEndpointStatusByKey(firstEndpoint.Key(), paging.NewEndpointStatusParams()); ss == nil {
|
||||
t.Error("secondEndpoint should've been deleted")
|
||||
}
|
||||
if ss, _ := scenario.Store.GetServiceStatusByKey(secondService.Key(), paging.NewServiceStatusParams()); ss != nil {
|
||||
t.Error("firstService should still exist")
|
||||
if ss, _ := scenario.Store.GetEndpointStatusByKey(secondEndpoint.Key(), paging.NewEndpointStatusParams()); ss != nil {
|
||||
t.Error("firstEndpoint should still exist")
|
||||
}
|
||||
// Delete everything
|
||||
scenario.Store.DeleteAllServiceStatusesNotInKeys([]string{})
|
||||
serviceStatuses, _ := scenario.Store.GetAllServiceStatuses(paging.NewServiceStatusParams())
|
||||
if len(serviceStatuses) != 0 {
|
||||
scenario.Store.DeleteAllEndpointStatusesNotInKeys([]string{})
|
||||
endpointStatuses, _ := scenario.Store.GetAllEndpointStatuses(paging.NewEndpointStatusParams())
|
||||
if len(endpointStatuses) != 0 {
|
||||
t.Errorf("everything should've been deleted")
|
||||
}
|
||||
})
|
||||
@@ -2,9 +2,9 @@ package util

import "strings"

// ConvertGroupAndServiceToKey converts a group and a service to a key
func ConvertGroupAndServiceToKey(group, service string) string {
return sanitize(group) + "_" + sanitize(service)
// ConvertGroupAndEndpointNameToKey converts a group and an endpoint to a key
func ConvertGroupAndEndpointNameToKey(group, endpoint string) string {
return sanitize(group) + "_" + sanitize(endpoint)
}

func sanitize(s string) string {
@@ -4,8 +4,8 @@ import (
"testing"
)

func BenchmarkConvertGroupAndServiceToKey(b *testing.B) {
func BenchmarkConvertGroupAndEndpointNameToKey(b *testing.B) {
for n := 0; n < b.N; n++ {
ConvertGroupAndServiceToKey("group", "service")
ConvertGroupAndEndpointNameToKey("group", "name")
}
}
@@ -2,34 +2,34 @@ package util

import "testing"

func TestConvertGroupAndServiceToKey(t *testing.T) {
func TestConvertGroupAndEndpointNameToKey(t *testing.T) {
type Scenario struct {
GroupName string
ServiceName string
EndpointName string
ExpectedOutput string
}
scenarios := []Scenario{
{
GroupName: "Core",
ServiceName: "Front End",
EndpointName: "Front End",
ExpectedOutput: "core_front-end",
},
{
GroupName: "Load balancers",
ServiceName: "us-west-2",
EndpointName: "us-west-2",
ExpectedOutput: "load-balancers_us-west-2",
},
{
GroupName: "a/b test",
ServiceName: "a",
EndpointName: "a",
ExpectedOutput: "a-b-test_a",
},
}
for _, scenario := range scenarios {
t.Run(scenario.ExpectedOutput, func(t *testing.T) {
output := ConvertGroupAndServiceToKey(scenario.GroupName, scenario.ServiceName)
output := ConvertGroupAndEndpointNameToKey(scenario.GroupName, scenario.EndpointName)
if output != scenario.ExpectedOutput {
t.Errorf("Expected '%s', got '%s'", scenario.ExpectedOutput, output)
t.Errorf("expected '%s', got '%s'", scenario.ExpectedOutput, output)
}
})
}
@@ -10,93 +10,93 @@ import (
)

// HandleAlerting takes care of alerts to resolve and alerts to trigger based on result success or failure
func HandleAlerting(service *core.Service, result *core.Result, alertingConfig *alerting.Config, debug bool) {
func HandleAlerting(endpoint *core.Endpoint, result *core.Result, alertingConfig *alerting.Config, debug bool) {
if alertingConfig == nil {
return
}
if result.Success {
handleAlertsToResolve(service, result, alertingConfig, debug)
handleAlertsToResolve(endpoint, result, alertingConfig, debug)
} else {
handleAlertsToTrigger(service, result, alertingConfig, debug)
handleAlertsToTrigger(endpoint, result, alertingConfig, debug)
}
}

func handleAlertsToTrigger(service *core.Service, result *core.Result, alertingConfig *alerting.Config, debug bool) {
service.NumberOfSuccessesInARow = 0
service.NumberOfFailuresInARow++
for _, serviceAlert := range service.Alerts {
// If the serviceAlert hasn't been triggered, move to the next one
if !serviceAlert.IsEnabled() || serviceAlert.FailureThreshold > service.NumberOfFailuresInARow {
func handleAlertsToTrigger(endpoint *core.Endpoint, result *core.Result, alertingConfig *alerting.Config, debug bool) {
endpoint.NumberOfSuccessesInARow = 0
endpoint.NumberOfFailuresInARow++
for _, endpointAlert := range endpoint.Alerts {
// If the alert hasn't been triggered, move to the next one
if !endpointAlert.IsEnabled() || endpointAlert.FailureThreshold > endpoint.NumberOfFailuresInARow {
continue
}
if serviceAlert.Triggered {
if endpointAlert.Triggered {
if debug {
log.Printf("[watchdog][handleAlertsToTrigger] Alert for service=%s with description='%s' has already been TRIGGERED, skipping", service.Name, serviceAlert.GetDescription())
log.Printf("[watchdog][handleAlertsToTrigger] Alert for endpoint=%s with description='%s' has already been TRIGGERED, skipping", endpoint.Name, endpointAlert.GetDescription())
}
continue
}
alertProvider := alertingConfig.GetAlertingProviderByAlertType(serviceAlert.Type)
alertProvider := alertingConfig.GetAlertingProviderByAlertType(endpointAlert.Type)
if alertProvider != nil && alertProvider.IsValid() {
log.Printf("[watchdog][handleAlertsToTrigger] Sending %s serviceAlert because serviceAlert for service=%s with description='%s' has been TRIGGERED", serviceAlert.Type, service.Name, serviceAlert.GetDescription())
customAlertProvider := alertProvider.ToCustomAlertProvider(service, serviceAlert, result, false)
log.Printf("[watchdog][handleAlertsToTrigger] Sending %s alert because alert for endpoint=%s with description='%s' has been TRIGGERED", endpointAlert.Type, endpoint.Name, endpointAlert.GetDescription())
customAlertProvider := alertProvider.ToCustomAlertProvider(endpoint, endpointAlert, result, false)
// TODO: retry on error
var err error
// We need to extract the DedupKey from PagerDuty's response
if serviceAlert.Type == alert.TypePagerDuty {
if endpointAlert.Type == alert.TypePagerDuty {
var body []byte
if body, err = customAlertProvider.Send(service.Name, serviceAlert.GetDescription(), false); err == nil {
if body, err = customAlertProvider.Send(endpoint.Name, endpointAlert.GetDescription(), false); err == nil {
var response pagerDutyResponse
if err = json.Unmarshal(body, &response); err != nil {
log.Printf("[watchdog][handleAlertsToTrigger] Ran into error unmarshaling pagerduty response: %s", err.Error())
} else {
serviceAlert.ResolveKey = response.DedupKey
endpointAlert.ResolveKey = response.DedupKey
}
}
} else {
// All other serviceAlert types don't need to extract anything from the body, so we can just send the request right away
_, err = customAlertProvider.Send(service.Name, serviceAlert.GetDescription(), false)
// All other alert types don't need to extract anything from the body, so we can just send the request right away
_, err = customAlertProvider.Send(endpoint.Name, endpointAlert.GetDescription(), false)
}
if err != nil {
log.Printf("[watchdog][handleAlertsToTrigger] Failed to send an serviceAlert for service=%s: %s", service.Name, err.Error())
log.Printf("[watchdog][handleAlertsToTrigger] Failed to send an alert for endpoint=%s: %s", endpoint.Name, err.Error())
} else {
serviceAlert.Triggered = true
endpointAlert.Triggered = true
}
} else {
log.Printf("[watchdog][handleAlertsToResolve] Not sending serviceAlert of type=%s despite being TRIGGERED, because the provider wasn't configured properly", serviceAlert.Type)
log.Printf("[watchdog][handleAlertsToResolve] Not sending alert of type=%s despite being TRIGGERED, because the provider wasn't configured properly", endpointAlert.Type)
}
}
}

func handleAlertsToResolve(service *core.Service, result *core.Result, alertingConfig *alerting.Config, debug bool) {
service.NumberOfSuccessesInARow++
for _, serviceAlert := range service.Alerts {
if !serviceAlert.IsEnabled() || !serviceAlert.Triggered || serviceAlert.SuccessThreshold > service.NumberOfSuccessesInARow {
func handleAlertsToResolve(endpoint *core.Endpoint, result *core.Result, alertingConfig *alerting.Config, debug bool) {
endpoint.NumberOfSuccessesInARow++
for _, endpointAlert := range endpoint.Alerts {
if !endpointAlert.IsEnabled() || !endpointAlert.Triggered || endpointAlert.SuccessThreshold > endpoint.NumberOfSuccessesInARow {
continue
}
// Even if the serviceAlert provider returns an error, we still set the serviceAlert's Triggered variable to false.
// Even if the alert provider returns an error, we still set the alert's Triggered variable to false.
// Further explanation can be found on Alert's Triggered field.
serviceAlert.Triggered = false
if !serviceAlert.IsSendingOnResolved() {
endpointAlert.Triggered = false
if !endpointAlert.IsSendingOnResolved() {
continue
}
alertProvider := alertingConfig.GetAlertingProviderByAlertType(serviceAlert.Type)
alertProvider := alertingConfig.GetAlertingProviderByAlertType(endpointAlert.Type)
if alertProvider != nil && alertProvider.IsValid() {
log.Printf("[watchdog][handleAlertsToResolve] Sending %s serviceAlert because serviceAlert for service=%s with description='%s' has been RESOLVED", serviceAlert.Type, service.Name, serviceAlert.GetDescription())
customAlertProvider := alertProvider.ToCustomAlertProvider(service, serviceAlert, result, true)
log.Printf("[watchdog][handleAlertsToResolve] Sending %s alert because alert for endpoint=%s with description='%s' has been RESOLVED", endpointAlert.Type, endpoint.Name, endpointAlert.GetDescription())
customAlertProvider := alertProvider.ToCustomAlertProvider(endpoint, endpointAlert, result, true)
// TODO: retry on error
_, err := customAlertProvider.Send(service.Name, serviceAlert.GetDescription(), true)
_, err := customAlertProvider.Send(endpoint.Name, endpointAlert.GetDescription(), true)
if err != nil {
log.Printf("[watchdog][handleAlertsToResolve] Failed to send an serviceAlert for service=%s: %s", service.Name, err.Error())
log.Printf("[watchdog][handleAlertsToResolve] Failed to send an alert for endpoint=%s: %s", endpoint.Name, err.Error())
} else {
if serviceAlert.Type == alert.TypePagerDuty {
serviceAlert.ResolveKey = ""
if endpointAlert.Type == alert.TypePagerDuty {
endpointAlert.ResolveKey = ""
}
}
} else {
log.Printf("[watchdog][handleAlertsToResolve] Not sending serviceAlert of type=%s despite being RESOLVED, because the provider wasn't configured properly", serviceAlert.Type)
log.Printf("[watchdog][handleAlertsToResolve] Not sending alert of type=%s despite being RESOLVED, because the provider wasn't configured properly", endpointAlert.Type)
}
}
service.NumberOfFailuresInARow = 0
endpoint.NumberOfFailuresInARow = 0
}

type pagerDutyResponse struct {
@@ -26,8 +26,8 @@ func TestHandleAlerting(t *testing.T) {
|
|||
},
|
||||
}
|
||||
enabled := true
|
||||
service := &core.Service{
|
||||
URL: "http://example.com",
|
||||
endpoint := &core.Endpoint{
|
||||
URL: "https://example.com",
|
||||
Alerts: []*alert.Alert{
|
||||
{
|
||||
Type: alert.TypeCustom,
|
||||
|
@@ -40,23 +40,23 @@ func TestHandleAlerting(t *testing.T) {
|
|||
},
|
||||
}
|
||||
|
||||
verify(t, service, 0, 0, false, "The alert shouldn't start triggered")
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 1, 0, false, "The alert shouldn't have triggered")
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 2, 0, true, "The alert should've triggered")
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 3, 0, true, "The alert should still be triggered")
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 4, 0, true, "The alert should still be triggered")
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 1, true, "The alert should still be triggered (because service.Alerts[0].SuccessThreshold is 3)")
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 2, true, "The alert should still be triggered (because service.Alerts[0].SuccessThreshold is 3)")
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 3, false, "The alert should've been resolved")
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 4, false, "The alert should no longer be triggered")
|
||||
verify(t, endpoint, 0, 0, false, "The alert shouldn't start triggered")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 1, 0, false, "The alert shouldn't have triggered")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 2, 0, true, "The alert should've triggered")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 3, 0, true, "The alert should still be triggered")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 4, 0, true, "The alert should still be triggered")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 1, true, "The alert should still be triggered (because endpoint.Alerts[0].SuccessThreshold is 3)")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 2, true, "The alert should still be triggered (because endpoint.Alerts[0].SuccessThreshold is 3)")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 3, false, "The alert should've been resolved")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 4, false, "The alert should no longer be triggered")
|
||||
}
|
||||
|
||||
func TestHandleAlertingWhenAlertingConfigIsNil(t *testing.T) {
|
||||
|
@@ -70,7 +70,7 @@ func TestHandleAlertingWithBadAlertProvider(t *testing.T) {
|
|||
defer os.Clearenv()
|
||||
|
||||
enabled := true
|
||||
service := &core.Service{
|
||||
endpoint := &core.Endpoint{
|
||||
URL: "http://example.com",
|
||||
Alerts: []*alert.Alert{
|
||||
{
|
||||
|
@@ -84,14 +84,14 @@ func TestHandleAlertingWithBadAlertProvider(t *testing.T) {
|
|||
},
|
||||
}
|
||||
|
||||
verify(t, service, 0, 0, false, "The alert shouldn't start triggered")
|
||||
HandleAlerting(service, &core.Result{Success: false}, &alerting.Config{}, false)
|
||||
verify(t, service, 1, 0, false, "The alert shouldn't have triggered")
|
||||
HandleAlerting(service, &core.Result{Success: false}, &alerting.Config{}, false)
|
||||
verify(t, service, 2, 0, false, "The alert shouldn't have triggered, because the provider wasn't configured properly")
|
||||
verify(t, endpoint, 0, 0, false, "The alert shouldn't start triggered")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, &alerting.Config{}, false)
|
||||
verify(t, endpoint, 1, 0, false, "The alert shouldn't have triggered")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, &alerting.Config{}, false)
|
||||
verify(t, endpoint, 2, 0, false, "The alert shouldn't have triggered, because the provider wasn't configured properly")
|
||||
}
|
||||
|
||||
func TestHandleAlertingWhenTriggeredAlertIsAlmostResolvedButServiceStartFailingAgain(t *testing.T) {
|
||||
func TestHandleAlertingWhenTriggeredAlertIsAlmostResolvedButEndpointStartFailingAgain(t *testing.T) {
|
||||
_ = os.Setenv("MOCK_ALERT_PROVIDER", "true")
|
||||
defer os.Clearenv()
|
||||
|
||||
|
@@ -105,7 +105,7 @@ func TestHandleAlertingWhenTriggeredAlertIsAlmostResolvedButServiceStartFailingA
|
|||
},
|
||||
}
|
||||
enabled := true
|
||||
service := &core.Service{
|
||||
endpoint := &core.Endpoint{
|
||||
URL: "http://example.com",
|
||||
Alerts: []*alert.Alert{
|
||||
{
|
||||
|
@@ -121,8 +121,8 @@ func TestHandleAlertingWhenTriggeredAlertIsAlmostResolvedButServiceStartFailingA
|
|||
}
|
||||
|
||||
// This test simulate an alert that was already triggered
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 2, 0, true, "The alert was already triggered at the beginning of this test")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 2, 0, true, "The alert was already triggered at the beginning of this test")
|
||||
}
|
||||
|
||||
func TestHandleAlertingWhenTriggeredAlertIsResolvedButSendOnResolvedIsFalse(t *testing.T) {
|
||||
|
@@ -140,7 +140,7 @@ func TestHandleAlertingWhenTriggeredAlertIsResolvedButSendOnResolvedIsFalse(t *t
|
|||
}
|
||||
enabled := true
|
||||
disabled := false
|
||||
service := &core.Service{
|
||||
endpoint := &core.Endpoint{
|
||||
URL: "http://example.com",
|
||||
Alerts: []*alert.Alert{
|
||||
{
|
||||
|
@@ -155,8 +155,8 @@ func TestHandleAlertingWhenTriggeredAlertIsResolvedButSendOnResolvedIsFalse(t *t
|
|||
NumberOfFailuresInARow: 1,
|
||||
}
|
||||
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 1, false, "The alert should've been resolved")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 1, false, "The alert should've been resolved")
|
||||
}
|
||||
|
||||
func TestHandleAlertingWhenTriggeredAlertIsResolvedPagerDuty(t *testing.T) {
|
||||
|
@@ -172,7 +172,7 @@ func TestHandleAlertingWhenTriggeredAlertIsResolvedPagerDuty(t *testing.T) {
|
|||
},
|
||||
}
|
||||
enabled := true
|
||||
service := &core.Service{
|
||||
endpoint := &core.Endpoint{
|
||||
URL: "http://example.com",
|
||||
Alerts: []*alert.Alert{
|
||||
{
|
||||
|
@@ -187,11 +187,11 @@ func TestHandleAlertingWhenTriggeredAlertIsResolvedPagerDuty(t *testing.T) {
|
|||
NumberOfFailuresInARow: 0,
|
||||
}
|
||||
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 1, 0, true, "")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 1, 0, true, "")
|
||||
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 1, false, "The alert should've been resolved")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 1, false, "The alert should've been resolved")
|
||||
}
|
||||
|
||||
func TestHandleAlertingWithProviderThatReturnsAnError(t *testing.T) {
|
||||
|
@@ -208,7 +208,7 @@ func TestHandleAlertingWithProviderThatReturnsAnError(t *testing.T) {
|
|||
},
|
||||
}
|
||||
enabled := true
|
||||
service := &core.Service{
|
||||
endpoint := &core.Endpoint{
|
||||
URL: "http://example.com",
|
||||
Alerts: []*alert.Alert{
|
||||
{
|
||||
|
@@ -223,33 +223,33 @@ func TestHandleAlertingWithProviderThatReturnsAnError(t *testing.T) {
|
|||
}
|
||||
|
||||
_ = os.Setenv("MOCK_ALERT_PROVIDER_ERROR", "true")
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 1, 0, false, "")
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 2, 0, false, "The alert should have failed to trigger, because the alert provider is returning an error")
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 3, 0, false, "The alert should still not be triggered, because the alert provider is still returning an error")
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 4, 0, false, "The alert should still not be triggered, because the alert provider is still returning an error")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 1, 0, false, "")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 2, 0, false, "The alert should have failed to trigger, because the alert provider is returning an error")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 3, 0, false, "The alert should still not be triggered, because the alert provider is still returning an error")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 4, 0, false, "The alert should still not be triggered, because the alert provider is still returning an error")
|
||||
_ = os.Setenv("MOCK_ALERT_PROVIDER_ERROR", "false")
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 5, 0, true, "The alert should've been triggered because the alert provider is no longer returning an error")
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 1, true, "The alert should've still been triggered")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 5, 0, true, "The alert should've been triggered because the alert provider is no longer returning an error")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 1, true, "The alert should've still been triggered")
|
||||
_ = os.Setenv("MOCK_ALERT_PROVIDER_ERROR", "true")
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 2, false, "The alert should've been resolved DESPITE THE ALERT PROVIDER RETURNING AN ERROR. See Alert.Triggered for further explanation.")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 2, false, "The alert should've been resolved DESPITE THE ALERT PROVIDER RETURNING AN ERROR. See Alert.Triggered for further explanation.")
|
||||
_ = os.Setenv("MOCK_ALERT_PROVIDER_ERROR", "false")
|
||||
|
||||
// Make sure that everything's working as expected after a rough patch
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 1, 0, false, "")
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 2, 0, true, "The alert should have triggered")
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 1, true, "The alert should still be triggered")
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 2, false, "The alert should have been resolved")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 1, 0, false, "")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 2, 0, true, "The alert should have triggered")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 1, true, "The alert should still be triggered")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 2, false, "The alert should have been resolved")
|
||||
}
|
||||
|
||||
func TestHandleAlertingWithProviderThatOnlyReturnsErrorOnResolve(t *testing.T) {
|
||||
|
@@ -266,7 +266,7 @@ func TestHandleAlertingWithProviderThatOnlyReturnsErrorOnResolve(t *testing.T) {
|
|||
},
|
||||
}
|
||||
enabled := true
|
||||
service := &core.Service{
|
||||
endpoint := &core.Endpoint{
|
||||
URL: "http://example.com",
|
||||
Alerts: []*alert.Alert{
|
||||
{
|
||||
|
@@ -280,38 +280,38 @@ func TestHandleAlertingWithProviderThatOnlyReturnsErrorOnResolve(t *testing.T) {
|
|||
},
|
||||
}
|
||||
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 1, 0, true, "")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 1, 0, true, "")
|
||||
_ = os.Setenv("MOCK_ALERT_PROVIDER_ERROR", "true")
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 1, false, "")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 1, false, "")
|
||||
_ = os.Setenv("MOCK_ALERT_PROVIDER_ERROR", "false")
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 1, 0, true, "")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 1, 0, true, "")
|
||||
_ = os.Setenv("MOCK_ALERT_PROVIDER_ERROR", "true")
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 1, false, "")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 1, false, "")
|
||||
_ = os.Setenv("MOCK_ALERT_PROVIDER_ERROR", "false")
|
||||
|
||||
// Make sure that everything's working as expected after a rough patch
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 1, 0, true, "")
|
||||
HandleAlerting(service, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 2, 0, true, "")
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 1, false, "")
|
||||
HandleAlerting(service, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, service, 0, 2, false, "")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 1, 0, true, "")
|
||||
HandleAlerting(endpoint, &core.Result{Success: false}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 2, 0, true, "")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 1, false, "")
|
||||
HandleAlerting(endpoint, &core.Result{Success: true}, cfg.Alerting, cfg.Debug)
|
||||
verify(t, endpoint, 0, 2, false, "")
|
||||
}
|
||||
|
||||
func verify(t *testing.T, service *core.Service, expectedNumberOfFailuresInARow, expectedNumberOfSuccessInARow int, expectedTriggered bool, expectedTriggeredReason string) {
|
||||
if service.NumberOfFailuresInARow != expectedNumberOfFailuresInARow {
|
||||
t.Fatalf("service.NumberOfFailuresInARow should've been %d, got %d", expectedNumberOfFailuresInARow, service.NumberOfFailuresInARow)
|
||||
func verify(t *testing.T, endpoint *core.Endpoint, expectedNumberOfFailuresInARow, expectedNumberOfSuccessInARow int, expectedTriggered bool, expectedTriggeredReason string) {
|
||||
if endpoint.NumberOfFailuresInARow != expectedNumberOfFailuresInARow {
|
||||
t.Fatalf("endpoint.NumberOfFailuresInARow should've been %d, got %d", expectedNumberOfFailuresInARow, endpoint.NumberOfFailuresInARow)
|
||||
}
|
||||
if service.NumberOfSuccessesInARow != expectedNumberOfSuccessInARow {
|
||||
t.Fatalf("service.NumberOfSuccessesInARow should've been %d, got %d", expectedNumberOfSuccessInARow, service.NumberOfSuccessesInARow)
|
||||
if endpoint.NumberOfSuccessesInARow != expectedNumberOfSuccessInARow {
|
||||
t.Fatalf("endpoint.NumberOfSuccessesInARow should've been %d, got %d", expectedNumberOfSuccessInARow, endpoint.NumberOfSuccessesInARow)
|
||||
}
|
||||
if service.Alerts[0].Triggered != expectedTriggered {
|
||||
if endpoint.Alerts[0].Triggered != expectedTriggered {
|
||||
if len(expectedTriggeredReason) != 0 {
|
||||
t.Fatal(expectedTriggeredReason)
|
||||
} else {
|
||||
@@ -15,7 +15,7 @@ import (
)

var (
// monitoringMutex is used to prevent multiple services from being evaluated at the same time.
// monitoringMutex is used to prevent multiple endpoints from being evaluated at the same time.
// Without this, conditions using response time may become inaccurate.
monitoringMutex sync.Mutex

@@ -23,77 +23,77 @@ var (
cancelFunc context.CancelFunc
)

// Monitor loops over each services and starts a goroutine to monitor each services separately
// Monitor loops over each endpoint and starts a goroutine to monitor each endpoint separately
func Monitor(cfg *config.Config) {
ctx, cancelFunc = context.WithCancel(context.Background())
for _, service := range cfg.Services {
if service.IsEnabled() {
for _, endpoint := range cfg.Endpoints {
if endpoint.IsEnabled() {
// To prevent multiple requests from running at the same time, we'll wait for a little before each iteration
time.Sleep(1111 * time.Millisecond)
go monitor(service, cfg.Alerting, cfg.Maintenance, cfg.DisableMonitoringLock, cfg.Metrics, cfg.Debug, ctx)
go monitor(endpoint, cfg.Alerting, cfg.Maintenance, cfg.DisableMonitoringLock, cfg.Metrics, cfg.Debug, ctx)
}
}
}

// monitor monitors a single service in a loop
func monitor(service *core.Service, alertingConfig *alerting.Config, maintenanceConfig *maintenance.Config, disableMonitoringLock, enabledMetrics, debug bool, ctx context.Context) {
// monitor monitors a single endpoint in a loop
func monitor(endpoint *core.Endpoint, alertingConfig *alerting.Config, maintenanceConfig *maintenance.Config, disableMonitoringLock, enabledMetrics, debug bool, ctx context.Context) {
// Run it immediately on start
execute(service, alertingConfig, maintenanceConfig, disableMonitoringLock, enabledMetrics, debug)
execute(endpoint, alertingConfig, maintenanceConfig, disableMonitoringLock, enabledMetrics, debug)
// Loop for the next executions
for {
select {
case <-ctx.Done():
log.Printf("[watchdog][monitor] Canceling current execution of group=%s; service=%s", service.Group, service.Name)
log.Printf("[watchdog][monitor] Canceling current execution of group=%s; endpoint=%s", endpoint.Group, endpoint.Name)
return
case <-time.After(service.Interval):
execute(service, alertingConfig, maintenanceConfig, disableMonitoringLock, enabledMetrics, debug)
case <-time.After(endpoint.Interval):
execute(endpoint, alertingConfig, maintenanceConfig, disableMonitoringLock, enabledMetrics, debug)
}
}
}

func execute(service *core.Service, alertingConfig *alerting.Config, maintenanceConfig *maintenance.Config, disableMonitoringLock, enabledMetrics, debug bool) {
func execute(endpoint *core.Endpoint, alertingConfig *alerting.Config, maintenanceConfig *maintenance.Config, disableMonitoringLock, enabledMetrics, debug bool) {
if !disableMonitoringLock {
// By placing the lock here, we prevent multiple services from being monitored at the exact same time, which
// By placing the lock here, we prevent multiple endpoints from being monitored at the exact same time, which
// could cause performance issues and return inaccurate results
monitoringMutex.Lock()
}
if debug {
log.Printf("[watchdog][execute] Monitoring group=%s; service=%s", service.Group, service.Name)
log.Printf("[watchdog][execute] Monitoring group=%s; endpoint=%s", endpoint.Group, endpoint.Name)
}
result := service.EvaluateHealth()
result := endpoint.EvaluateHealth()
if enabledMetrics {
metric.PublishMetricsForService(service, result)
metric.PublishMetricsForEndpoint(endpoint, result)
}
UpdateServiceStatuses(service, result)
UpdateEndpointStatuses(endpoint, result)
log.Printf(
"[watchdog][execute] Monitored group=%s; service=%s; success=%v; errors=%d; duration=%s",
service.Group,
service.Name,
"[watchdog][execute] Monitored group=%s; endpoint=%s; success=%v; errors=%d; duration=%s",
endpoint.Group,
endpoint.Name,
result.Success,
len(result.Errors),
result.Duration.Round(time.Millisecond),
)
if !maintenanceConfig.IsUnderMaintenance() {
HandleAlerting(service, result, alertingConfig, debug)
HandleAlerting(endpoint, result, alertingConfig, debug)
} else if debug {
log.Println("[watchdog][execute] Not handling alerting because currently in the maintenance window")
}
if debug {
log.Printf("[watchdog][execute] Waiting for interval=%s before monitoring group=%s service=%s again", service.Interval, service.Group, service.Name)
log.Printf("[watchdog][execute] Waiting for interval=%s before monitoring group=%s endpoint=%s again", endpoint.Interval, endpoint.Group, endpoint.Name)
}
if !disableMonitoringLock {
monitoringMutex.Unlock()
}
}

// UpdateServiceStatuses updates the slice of service statuses
func UpdateServiceStatuses(service *core.Service, result *core.Result) {
if err := storage.Get().Insert(service, result); err != nil {
log.Println("[watchdog][UpdateServiceStatuses] Failed to insert data in storage:", err.Error())
// UpdateEndpointStatuses updates the slice of endpoint statuses
func UpdateEndpointStatuses(endpoint *core.Endpoint, result *core.Result) {
if err := storage.Get().Insert(endpoint, result); err != nil {
log.Println("[watchdog][UpdateEndpointStatuses] Failed to insert data in storage:", err.Error())
}
}

// Shutdown stops monitoring all services
// Shutdown stops monitoring all endpoints
func Shutdown() {
cancelFunc()
}
|
|
|
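The hunk above keeps the watchdog's control flow intact and only renames the types it operates on: `Monitor` staggers one goroutine per enabled endpoint, each goroutine runs a check immediately and then once per interval, and `Shutdown` cancels the shared context to stop every loop. The standalone Go sketch below reproduces that pattern for illustration; `Endpoint` and `EvaluateHealth` here are simplified stand-ins rather than the real `core.Endpoint` API.

```go
package main

import (
	"context"
	"log"
	"time"
)

// Endpoint is a simplified stand-in for core.Endpoint: just enough to show the loop.
type Endpoint struct {
	Name     string
	Interval time.Duration
}

// EvaluateHealth is a placeholder for the real health evaluation logic.
func (e *Endpoint) EvaluateHealth() bool { return true }

// monitor runs one check immediately, then one per interval, until ctx is cancelled.
func monitor(ctx context.Context, e *Endpoint) {
	log.Printf("first check of %s: success=%v", e.Name, e.EvaluateHealth())
	for {
		select {
		case <-ctx.Done():
			log.Printf("stopping monitoring of %s", e.Name)
			return
		case <-time.After(e.Interval):
			log.Printf("check of %s: success=%v", e.Name, e.EvaluateHealth())
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	endpoints := []*Endpoint{
		{Name: "front-end", Interval: 2 * time.Second},
		{Name: "back-end", Interval: 3 * time.Second},
	}
	for _, e := range endpoints {
		// Stagger goroutine start-up so checks don't all fire at the same instant.
		time.Sleep(1111 * time.Millisecond)
		go monitor(ctx, e)
	}
	time.Sleep(10 * time.Second)
	cancel() // what Shutdown() does in the real package
	time.Sleep(100 * time.Millisecond)
}
```

Note that the `disable-monitoring-lock` option seen in the diff only controls whether a shared mutex wraps each execution; it does not change this loop structure.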
@@ -1,8 +1,8 @@
 <template>
-  <div class='service px-3 py-3 border-l border-r border-t rounded-none hover:bg-gray-100 dark:hover:bg-gray-700 dark:border-gray-500' v-if="data">
+  <div class='endpoint px-3 py-3 border-l border-r border-t rounded-none hover:bg-gray-100 dark:hover:bg-gray-700 dark:border-gray-500' v-if="data">
     <div class='flex flex-wrap mb-2'>
       <div class='w-3/4'>
-        <router-link :to="generatePath()" class="font-bold hover:text-blue-800 hover:underline dark:hover:text-blue-400" title="View detailed service health">
+        <router-link :to="generatePath()" class="font-bold hover:text-blue-800 hover:underline dark:hover:text-blue-400" title="View detailed endpoint health">
           {{ data.name }}
         </router-link>
         <span v-if="data.results && data.results.length && data.results[data.results.length - 1].hostname" class='text-gray-500 font-light'> | {{ data.results[data.results.length - 1].hostname }}</span>

@@ -60,7 +60,7 @@
 import {helper} from "@/mixins/helper";

 export default {
-  name: 'Service',
+  name: 'Endpoint',
   props: {
     maximumNumberOfResults: Number,
     data: Object,

@@ -97,7 +97,7 @@ export default {
       if (!this.data) {
         return '/';
       }
-      return `/services/${this.data.key}`;
+      return `/endpoints/${this.data.key}`;
     },
     showTooltip(result, event) {
       this.$emit('showTooltip', result, event);

@@ -126,12 +126,12 @@ export default {


 <style>
-.service:first-child {
+.endpoint:first-child {
   border-top-left-radius: 3px;
   border-top-right-radius: 3px;
 }

-.service:last-child {
+.endpoint:last-child {
   border-bottom-left-radius: 3px;
   border-bottom-right-radius: 3px;
   border-bottom-width: 3px;
@@ -1,21 +1,21 @@
 <template>
-  <div :class="services.length === 0 ? 'mt-3' : 'mt-4'">
+  <div :class="endpoints.length === 0 ? 'mt-3' : 'mt-4'">
     <slot v-if="name !== 'undefined'">
-      <div class="service-group pt-2 border dark:bg-gray-800 dark:border-gray-500" @click="toggleGroup">
+      <div class="endpoint-group pt-2 border dark:bg-gray-800 dark:border-gray-500" @click="toggleGroup">
         <h5 class='font-mono text-gray-400 text-xl font-medium pb-2 px-3 dark:text-gray-200 dark:hover:text-gray-500 dark:border-gray-500'>
           <span v-if="healthy" class='text-green-600'>✓</span>
           <span v-else class='text-yellow-400'>~</span>
           {{ name }}
-          <span class='float-right service-group-arrow'>
+          <span class='float-right endpoint-group-arrow'>
             {{ collapsed ? '▼' : '▲' }}
           </span>
         </h5>
       </div>
     </slot>
-    <div v-if="!collapsed" :class="name === 'undefined' ? '' : 'service-group-content'">
-      <slot v-for="service in services" :key="service">
-        <Service
-          :data="service"
+    <div v-if="!collapsed" :class="name === 'undefined' ? '' : 'endpoint-group-content'">
+      <slot v-for="(endpoint, idx) in endpoints" :key="idx">
+        <Endpoint
+          :data="endpoint"
           :maximumNumberOfResults="20"
           @showTooltip="showTooltip"
           @toggleShowAverageResponseTime="toggleShowAverageResponseTime" :showAverageResponseTime="showAverageResponseTime"

@@ -27,26 +27,26 @@


 <script>
-import Service from './Service.vue';
+import Endpoint from './Endpoint.vue';

 export default {
-  name: 'ServiceGroup',
+  name: 'EndpointGroup',
   components: {
-    Service
+    Endpoint
   },
   props: {
     name: String,
-    services: Array,
+    endpoints: Array,
     showAverageResponseTime: Boolean
   },
   emits: ['showTooltip', 'toggleShowAverageResponseTime'],
   methods: {
     healthCheck() {
-      if (this.services) {
-        for (let i in this.services) {
-          for (let j in this.services[i].results) {
-            if (!this.services[i].results[j].success) {
-              // Set the service group to unhealthy (only if it's currently healthy)
+      if (this.endpoints) {
+        for (let i in this.endpoints) {
+          for (let j in this.endpoints[i].results) {
+            if (!this.endpoints[i].results[j].success) {
+              // Set the endpoint group to unhealthy (only if it's currently healthy)
               if (this.healthy) {
                 this.healthy = false;
               }

@@ -55,14 +55,14 @@ export default {
         }
       }
     }
-    // Set the service group to healthy (only if it's currently unhealthy)
+    // Set the endpoint group to healthy (only if it's currently unhealthy)
     if (!this.healthy) {
       this.healthy = true;
     }
   },
   toggleGroup() {
     this.collapsed = !this.collapsed;
-    sessionStorage.setItem(`gatus:service-group:${this.name}:collapsed`, this.collapsed);
+    sessionStorage.setItem(`gatus:endpoint-group:${this.name}:collapsed`, this.collapsed);
   },
   showTooltip(result, event) {
     this.$emit('showTooltip', result, event);

@@ -72,7 +72,7 @@ export default {
     }
   },
   watch: {
-    services: function () {
+    endpoints: function () {
       this.healthCheck();
     }
   },

@@ -82,7 +82,7 @@ export default {
   data() {
     return {
       healthy: true,
-      collapsed: sessionStorage.getItem(`gatus:service-group:${this.name}:collapsed`) === "true"
+      collapsed: sessionStorage.getItem(`gatus:endpoint-group:${this.name}:collapsed`) === "true"
     }
   }
 }

@@ -90,12 +90,12 @@ export default {


 <style>
-.service-group {
+.endpoint-group {
   cursor: pointer;
   user-select: none;
 }

-.service-group h5:hover {
+.endpoint-group h5:hover {
   color: #1b1e21;
 }
 </style>
web/app/src/components/Endpoints.vue (new file, 74 lines)
@@ -0,0 +1,74 @@
+<template>
+  <div id="results">
+    <slot v-for="endpointGroup in endpointGroups" :key="endpointGroup">
+      <EndpointGroup :endpoints="endpointGroup.endpoints" :name="endpointGroup.name" @showTooltip="showTooltip" @toggleShowAverageResponseTime="toggleShowAverageResponseTime" :showAverageResponseTime="showAverageResponseTime" />
+    </slot>
+  </div>
+</template>
+
+
+<script>
+import EndpointGroup from './EndpointGroup.vue';
+
+export default {
+  name: 'Endpoints',
+  components: {
+    EndpointGroup
+  },
+  props: {
+    showStatusOnHover: Boolean,
+    endpointStatuses: Object,
+    showAverageResponseTime: Boolean
+  },
+  emits: ['showTooltip', 'toggleShowAverageResponseTime'],
+  methods: {
+    process() {
+      let outputByGroup = {};
+      for (let endpointStatusIndex in this.endpointStatuses) {
+        let endpointStatus = this.endpointStatuses[endpointStatusIndex];
+        // create an empty entry if this group is new
+        if (!outputByGroup[endpointStatus.group] || outputByGroup[endpointStatus.group].length === 0) {
+          outputByGroup[endpointStatus.group] = [];
+        }
+        outputByGroup[endpointStatus.group].push(endpointStatus);
+      }
+      let endpointGroups = [];
+      for (let name in outputByGroup) {
+        if (name !== 'undefined') {
+          endpointGroups.push({name: name, endpoints: outputByGroup[name]})
+        }
+      }
+      // Add all endpoints that don't have a group at the end
+      if (outputByGroup['undefined']) {
+        endpointGroups.push({name: 'undefined', endpoints: outputByGroup['undefined']})
+      }
+      this.endpointGroups = endpointGroups;
+    },
+    showTooltip(result, event) {
+      this.$emit('showTooltip', result, event);
+    },
+    toggleShowAverageResponseTime() {
+      this.$emit('toggleShowAverageResponseTime');
+    }
+  },
+  watch: {
+    endpointStatuses: function () {
+      this.process();
+    }
+  },
+  data() {
+    return {
+      userClickedStatus: false,
+      endpointGroups: []
+    }
+  }
+}
+</script>
+
+
+<style>
+.endpoint-group-content > div:nth-child(1) {
+  border-top-left-radius: 0;
+  border-top-right-radius: 0;
+}
+</style>
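The `process()` method in the new Endpoints.vue is the only non-trivial piece: statuses are bucketed by their `group` field, named groups are emitted first, and ungrouped endpoints (which end up under the `'undefined'` key) are appended last. The Go sketch below mirrors that grouping logic for illustration only; the `EndpointStatus` struct is a simplified assumption, not the actual API payload.

```go
package main

import "fmt"

// EndpointStatus is a simplified stand-in for the objects returned by the statuses API.
type EndpointStatus struct {
	Name  string
	Group string // empty when the endpoint has no group
}

// Group mirrors the {name, endpoints} objects built by process().
type Group struct {
	Name      string
	Endpoints []EndpointStatus
}

// groupStatuses buckets statuses by group and appends ungrouped endpoints at the end.
func groupStatuses(statuses []EndpointStatus) []Group {
	byGroup := map[string][]EndpointStatus{}
	var order []string // preserve first-seen order of named groups
	for _, s := range statuses {
		if _, seen := byGroup[s.Group]; !seen && s.Group != "" {
			order = append(order, s.Group)
		}
		byGroup[s.Group] = append(byGroup[s.Group], s)
	}
	var groups []Group
	for _, name := range order {
		groups = append(groups, Group{Name: name, Endpoints: byGroup[name]})
	}
	if ungrouped, ok := byGroup[""]; ok {
		groups = append(groups, Group{Name: "undefined", Endpoints: ungrouped})
	}
	return groups
}

func main() {
	statuses := []EndpointStatus{
		{Name: "api", Group: "core"},
		{Name: "blog", Group: ""},
		{Name: "db", Group: "core"},
	}
	for _, g := range groupStatuses(statuses) {
		fmt.Println(g.Name, len(g.Endpoints))
	}
}
```

Running this prints `core 2` followed by `undefined 1`, matching the order the dashboard renders groups in.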
@@ -1,74 +0,0 @@
-<template>
-  <div id="results">
-    <slot v-for="serviceGroup in serviceGroups" :key="serviceGroup">
-      <ServiceGroup :services="serviceGroup.services" :name="serviceGroup.name" @showTooltip="showTooltip" @toggleShowAverageResponseTime="toggleShowAverageResponseTime" :showAverageResponseTime="showAverageResponseTime" />
-    </slot>
-  </div>
-</template>
-
-
-<script>
-import ServiceGroup from './ServiceGroup.vue';
-
-export default {
-  name: 'Services',
-  components: {
-    ServiceGroup
-  },
-  props: {
-    showStatusOnHover: Boolean,
-    serviceStatuses: Object,
-    showAverageResponseTime: Boolean
-  },
-  emits: ['showTooltip', 'toggleShowAverageResponseTime'],
-  methods: {
-    process() {
-      let outputByGroup = {};
-      for (let serviceStatusIndex in this.serviceStatuses) {
-        let serviceStatus = this.serviceStatuses[serviceStatusIndex];
-        // create an empty entry if this group is new
-        if (!outputByGroup[serviceStatus.group] || outputByGroup[serviceStatus.group].length === 0) {
-          outputByGroup[serviceStatus.group] = [];
-        }
-        outputByGroup[serviceStatus.group].push(serviceStatus);
-      }
-      let serviceGroups = [];
-      for (let name in outputByGroup) {
-        if (name !== 'undefined') {
-          serviceGroups.push({name: name, services: outputByGroup[name]})
-        }
-      }
-      // Add all services that don't have a group at the end
-      if (outputByGroup['undefined']) {
-        serviceGroups.push({name: 'undefined', services: outputByGroup['undefined']})
-      }
-      this.serviceGroups = serviceGroups;
-    },
-    showTooltip(result, event) {
-      this.$emit('showTooltip', result, event);
-    },
-    toggleShowAverageResponseTime() {
-      this.$emit('toggleShowAverageResponseTime');
-    }
-  },
-  watch: {
-    serviceStatuses: function () {
-      this.process();
-    }
-  },
-  data() {
-    return {
-      userClickedStatus: false,
-      serviceGroups: []
-    }
-  }
-}
-</script>
-
-
-<style>
-.service-group-content > div:nth-child(1) {
-  border-top-left-radius: 0;
-  border-top-right-radius: 0;
-}
-</style>
@@ -26,7 +26,7 @@

 <script>
 export default {
-  name: 'Services',
+  name: 'Endpoints',
   props: {
     event: Event,
     result: Object
@@ -9,15 +9,19 @@ const routes = [
     component: Home
   },
   {
-    path: '/services/:key',
+    path: '/endpoints/:key',
     name: 'Details',
     component: Details,
   },
-]
+  { // XXX: Remove in v4.0.0
+    path: '/services/:key',
+    redirect: {name: 'Details'}
+  },
+];

 const router = createRouter({
   history: createWebHistory(process.env.BASE_URL),
   routes
-})
+});

-export default router
+export default router;
@@ -4,11 +4,11 @@
 			←
 		</router-link>
 		<div>
-			<slot v-if="serviceStatus">
+			<slot v-if="endpointStatus">
 				<h1 class="text-xl xl:text-3xl font-mono text-gray-400">RECENT CHECKS</h1>
 				<hr class="mb-4"/>
-				<Service
-					:data="serviceStatus"
+				<Endpoint
+					:data="endpointStatus"
 					:maximumNumberOfResults="20"
 					@showTooltip="showTooltip"
 					@toggleShowAverageResponseTime="toggleShowAverageResponseTime"

@@ -16,7 +16,7 @@
 				/>
 				<Pagination @page="changePage"/>
 			</slot>
-			<div v-if="serviceStatus && serviceStatus.key" class="mt-12">
+			<div v-if="endpointStatus && endpointStatus.key" class="mt-12">
 				<h1 class="text-xl xl:text-3xl font-mono text-gray-400">UPTIME</h1>
 				<hr/>
 				<div class="flex space-x-4 text-center text-2xl mt-6 relative bottom-2 mb-10">

@@ -34,7 +34,7 @@
 					</div>
 				</div>
 			</div>
-			<div v-if="serviceStatus && serviceStatus.key" class="mt-12">
+			<div v-if="endpointStatus && endpointStatus.key" class="mt-12">
 				<h1 class="text-xl xl:text-3xl font-mono text-gray-400">RESPONSE TIME</h1>
 				<hr/>
 				<img :src="generateResponseTimeChartImageURL()" alt="response time chart" class="mt-6" />

@@ -53,7 +53,7 @@
 					</div>
 				</div>
 			</div>
-			<div v-if="serviceStatus && serviceStatus.key">
+			<div v-if="endpointStatus && endpointStatus.key">
 				<h1 class="text-xl xl:text-3xl font-mono text-gray-400 mt-4">EVENTS</h1>
 				<hr class="mb-4"/>
 				<div>

@@ -87,7 +87,7 @@
 <script>
 import Settings from '@/components/Settings.vue'
-import Service from '@/components/Service.vue';
+import Endpoint from '@/components/Endpoint.vue';
 import {SERVER_URL} from "@/main.js";
 import {helper} from "@/mixins/helper.js";
 import Pagination from "@/components/Pagination";

@@ -96,7 +96,7 @@ export default {
   name: 'Details',
   components: {
     Pagination,
-    Service,
+    Endpoint,
     Settings,
   },
   emits: ['showTooltip'],

@@ -104,32 +104,32 @@ export default {
   methods: {
     fetchData() {
       //console.log("[Details][fetchData] Fetching data");
-      fetch(`${this.serverUrl}/api/v1/services/${this.$route.params.key}/statuses?page=${this.currentPage}`)
+      fetch(`${this.serverUrl}/api/v1/endpoints/${this.$route.params.key}/statuses?page=${this.currentPage}`)
         .then(response => response.json())
         .then(data => {
-          if (JSON.stringify(this.serviceStatus) !== JSON.stringify(data)) {
-            this.serviceStatus = data;
+          if (JSON.stringify(this.endpointStatus) !== JSON.stringify(data)) {
+            this.endpointStatus = data;
             this.uptime = data.uptime;
             let events = [];
             for (let i = data.events.length - 1; i >= 0; i--) {
               let event = data.events[i];
               if (i === data.events.length - 1) {
                 if (event.type === 'UNHEALTHY') {
-                  event.fancyText = 'Service is unhealthy';
+                  event.fancyText = 'Endpoint is unhealthy';
                 } else if (event.type === 'HEALTHY') {
-                  event.fancyText = 'Service is healthy';
+                  event.fancyText = 'Endpoint is healthy';
                 } else if (event.type === 'START') {
                   event.fancyText = 'Monitoring started';
                 }
               } else {
                 let nextEvent = data.events[i + 1];
                 if (event.type === 'HEALTHY') {
-                  event.fancyText = 'Service became healthy';
+                  event.fancyText = 'Endpoint became healthy';
                 } else if (event.type === 'UNHEALTHY') {
                   if (nextEvent) {
-                    event.fancyText = 'Service was unhealthy for ' + this.prettifyTimeDifference(nextEvent.timestamp, event.timestamp);
+                    event.fancyText = 'Endpoint was unhealthy for ' + this.prettifyTimeDifference(nextEvent.timestamp, event.timestamp);
                   } else {
-                    event.fancyText = 'Service became unhealthy';
+                    event.fancyText = 'Endpoint became unhealthy';
                   }
                 } else if (event.type === 'START') {
                   event.fancyText = 'Monitoring started';

@@ -143,13 +143,13 @@ export default {
         });
     },
     generateUptimeBadgeImageURL(duration) {
-      return `${this.serverUrl}/api/v1/services/${this.serviceStatus.key}/uptimes/${duration}/badge.svg`;
+      return `${this.serverUrl}/api/v1/endpoints/${this.endpointStatus.key}/uptimes/${duration}/badge.svg`;
     },
     generateResponseTimeBadgeImageURL(duration) {
-      return `${this.serverUrl}/api/v1/services/${this.serviceStatus.key}/response-times/${duration}/badge.svg`;
+      return `${this.serverUrl}/api/v1/endpoints/${this.endpointStatus.key}/response-times/${duration}/badge.svg`;
     },
     generateResponseTimeChartImageURL() {
-      return `${this.serverUrl}/api/v1/services/${this.serviceStatus.key}/response-times/24h/chart.svg`;
+      return `${this.serverUrl}/api/v1/endpoints/${this.endpointStatus.key}/response-times/24h/chart.svg`;
     },
     prettifyUptime(uptime) {
       if (!uptime) {

@@ -174,7 +174,7 @@ export default {
     },
     data() {
       return {
-        serviceStatus: {},
+        endpointStatus: {},
         uptime: {},
         events: [],
         hourlyAverageResponseTime: {},

@@ -193,7 +193,7 @@ export default {
 </script>

 <style scoped>
-.service {
+.endpoint {
   border-radius: 3px;
   border-bottom-width: 3px;
 }
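Every URL the details view builds simply moves from `/api/v1/services/...` to `/api/v1/endpoints/...`; the badge and chart paths shown in the hunks above are otherwise unchanged. As a quick sanity check after upgrading, a small Go client can fetch an uptime badge from the new path; the host and endpoint key below are placeholders, not values taken from this diff.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// Placeholder host and endpoint key; replace with a real Gatus instance and a real key.
	url := "https://status.example.org/api/v1/endpoints/core_my-endpoint/uptimes/7d/badge.svg"
	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("unexpected status: %s", resp.Status)
	}
	out, err := os.Create("uptime-7d-badge.svg")
	if err != nil {
		log.Fatalf("creating file failed: %v", err)
	}
	defer out.Close()
	n, err := io.Copy(out, resp.Body)
	if err != nil {
		log.Fatalf("saving badge failed: %v", err)
	}
	fmt.Printf("saved %d bytes to uptime-7d-badge.svg\n", n)
}
```

The same pattern applies to the response-time badge and the 24h response-time chart, which only differ in the path suffix.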
@@ -1,6 +1,6 @@
 <template>
-  <Services
-    :serviceStatuses="serviceStatuses"
+  <Endpoints
+    :endpointStatuses="endpointStatuses"
     :showStatusOnHover="true"
     @showTooltip="showTooltip"
     @toggleShowAverageResponseTime="toggleShowAverageResponseTime" :showAverageResponseTime="showAverageResponseTime"

@@ -11,7 +11,7 @@

 <script>
 import Settings from '@/components/Settings.vue'
-import Services from '@/components/Services.vue';
+import Endpoints from '@/components/Endpoints.vue';
 import Pagination from "@/components/Pagination";
 import {SERVER_URL} from "@/main.js";

@@ -19,18 +19,18 @@ export default {
   name: 'Home',
   components: {
     Pagination,
-    Services,
+    Endpoints,
     Settings,
   },
   emits: ['showTooltip', 'toggleShowAverageResponseTime'],
   methods: {
     fetchData() {
       //console.log("[Home][fetchData] Fetching data");
-      fetch(`${SERVER_URL}/api/v1/services/statuses?page=${this.currentPage}`)
+      fetch(`${SERVER_URL}/api/v1/endpoints/statuses?page=${this.currentPage}`)
         .then(response => response.json())
         .then(data => {
-          if (JSON.stringify(this.serviceStatuses) !== JSON.stringify(data)) {
-            this.serviceStatuses = data;
+          if (JSON.stringify(this.endpointStatuses) !== JSON.stringify(data)) {
+            this.endpointStatuses = data;
           }
         });
     },

@@ -47,7 +47,7 @@ export default {
   },
   data() {
     return {
-      serviceStatuses: [],
+      endpointStatuses: [],
       currentPage: 1,
       showAverageResponseTime: true
     }
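The home view's only backend dependency is this paginated statuses call, which moves from `/api/v1/services/statuses` to `/api/v1/endpoints/statuses`. The Go sketch below consumes the new path and decodes only the fields the dashboard components actually read (`name`, `group`, `key`, and each result's `success`); the host is a placeholder and the struct is deliberately partial, so treat it as an assumption about the payload rather than its full schema.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// EndpointStatus decodes only the fields used by the dashboard; the real payload has more.
type EndpointStatus struct {
	Name    string `json:"name"`
	Group   string `json:"group"`
	Key     string `json:"key"`
	Results []struct {
		Success bool `json:"success"`
	} `json:"results"`
}

func main() {
	// Placeholder host; replace with a real Gatus instance.
	resp, err := http.Get("https://status.example.org/api/v1/endpoints/statuses?page=1")
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()
	var statuses []EndpointStatus
	if err := json.NewDecoder(resp.Body).Decode(&statuses); err != nil {
		log.Fatalf("decoding response failed: %v", err)
	}
	for _, s := range statuses {
		healthy := true
		for _, r := range s.Results {
			if !r.Success {
				healthy = false
				break
			}
		}
		fmt.Printf("group=%s endpoint=%s key=%s healthy=%v\n", s.Group, s.Name, s.Key, healthy)
	}
}
```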
File diff suppressed because one or more lines are too long (5 files)