mirror of https://github.com/prometheus-operator/prometheus-operator.git
synced 2025-04-16 01:06:27 +00:00

doc: rephrase sharding documentation

Signed-off-by: Simon Pasquier <spasquie@redhat.com>

This commit is contained in:
parent 4da36fdd23
commit e6a6c47afe

9 changed files with 62 additions and 62 deletions

@@ -1919,17 +1919,17 @@ int32
 </td>
 <td>
 <em>(Optional)</em>
-<p>Number of shards to distribute scraped targets onto.</p>
+<p>Number of shards to distribute the scraped targets onto.</p>
 <p><code>spec.replicas</code> multiplied by <code>spec.shards</code> is the total number of Pods
 being created.</p>
 <p>When not defined, the operator assumes only one shard.</p>
 <p>Note that scaling down shards will not reshard data onto the remaining
 instances, it must be manually moved. Increasing shards will not reshard
 data either but it will continue to be available from the same
-instances. To query globally, use Thanos sidecar and Thanos querier or
-remote write data to a central location.
-Alerting and recording rules</p>
-<p>By default, the sharding is performed on:
+instances. To query globally, use either
+* Thanos sidecar + querier for query federation and Thanos Ruler for rules.
+* Remote-write to send metrics to a central location.</p>
+<p>By default, the sharding of targets is performed on:
 * The <code>__address__</code> target’s metadata label for PodMonitor,
 ServiceMonitor and ScrapeConfig resources.
 * The <code>__param_target__</code> label for Probe resources.</p>

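To illustrate the `spec.shards` / `spec.replicas` relationship documented above, here is a minimal sketch of a Prometheus resource (the name and selector are placeholders, not part of this commit): with `replicas: 2` and `shards: 3`, the operator creates 2 x 3 = 6 Pods in total, 2 per shard.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example                # placeholder name
spec:
  replicas: 2                  # Pods per shard
  shards: 3                    # number of shards; 2 x 3 = 6 Pods in total
  serviceMonitorSelector: {}   # select every ServiceMonitor in the namespace
```
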
@@ -7098,17 +7098,17 @@ int32
 </td>
 <td>
 <em>(Optional)</em>
-<p>Number of shards to distribute scraped targets onto.</p>
+<p>Number of shards to distribute the scraped targets onto.</p>
 <p><code>spec.replicas</code> multiplied by <code>spec.shards</code> is the total number of Pods
 being created.</p>
 <p>When not defined, the operator assumes only one shard.</p>
 <p>Note that scaling down shards will not reshard data onto the remaining
 instances, it must be manually moved. Increasing shards will not reshard
 data either but it will continue to be available from the same
-instances. To query globally, use Thanos sidecar and Thanos querier or
-remote write data to a central location.
-Alerting and recording rules</p>
-<p>By default, the sharding is performed on:
+instances. To query globally, use either
+* Thanos sidecar + querier for query federation and Thanos Ruler for rules.
+* Remote-write to send metrics to a central location.</p>
+<p>By default, the sharding of targets is performed on:
 * The <code>__address__</code> target’s metadata label for PodMonitor,
 ServiceMonitor and ScrapeConfig resources.
 * The <code>__param_target__</code> label for Probe resources.</p>

@@ -11995,17 +11995,17 @@ int32
 </td>
 <td>
 <em>(Optional)</em>
-<p>Number of shards to distribute scraped targets onto.</p>
+<p>Number of shards to distribute the scraped targets onto.</p>
 <p><code>spec.replicas</code> multiplied by <code>spec.shards</code> is the total number of Pods
 being created.</p>
 <p>When not defined, the operator assumes only one shard.</p>
 <p>Note that scaling down shards will not reshard data onto the remaining
 instances, it must be manually moved. Increasing shards will not reshard
 data either but it will continue to be available from the same
-instances. To query globally, use Thanos sidecar and Thanos querier or
-remote write data to a central location.
-Alerting and recording rules</p>
-<p>By default, the sharding is performed on:
+instances. To query globally, use either
+* Thanos sidecar + querier for query federation and Thanos Ruler for rules.
+* Remote-write to send metrics to a central location.</p>
+<p>By default, the sharding of targets is performed on:
 * The <code>__address__</code> target’s metadata label for PodMonitor,
 ServiceMonitor and ScrapeConfig resources.
 * The <code>__param_target__</code> label for Probe resources.</p>

@@ -18846,17 +18846,17 @@ int32
 </td>
 <td>
 <em>(Optional)</em>
-<p>Number of shards to distribute scraped targets onto.</p>
+<p>Number of shards to distribute the scraped targets onto.</p>
 <p><code>spec.replicas</code> multiplied by <code>spec.shards</code> is the total number of Pods
 being created.</p>
 <p>When not defined, the operator assumes only one shard.</p>
 <p>Note that scaling down shards will not reshard data onto the remaining
 instances, it must be manually moved. Increasing shards will not reshard
 data either but it will continue to be available from the same
-instances. To query globally, use Thanos sidecar and Thanos querier or
-remote write data to a central location.
-Alerting and recording rules</p>
-<p>By default, the sharding is performed on:
+instances. To query globally, use either
+* Thanos sidecar + querier for query federation and Thanos Ruler for rules.
+* Remote-write to send metrics to a central location.</p>
+<p>By default, the sharding of targets is performed on:
 * The <code>__address__</code> target’s metadata label for PodMonitor,
 ServiceMonitor and ScrapeConfig resources.
 * The <code>__param_target__</code> label for Probe resources.</p>

@@ -27021,17 +27021,17 @@ int32
 </td>
 <td>
 <em>(Optional)</em>
-<p>Number of shards to distribute scraped targets onto.</p>
+<p>Number of shards to distribute the scraped targets onto.</p>
 <p><code>spec.replicas</code> multiplied by <code>spec.shards</code> is the total number of Pods
 being created.</p>
 <p>When not defined, the operator assumes only one shard.</p>
 <p>Note that scaling down shards will not reshard data onto the remaining
 instances, it must be manually moved. Increasing shards will not reshard
 data either but it will continue to be available from the same
-instances. To query globally, use Thanos sidecar and Thanos querier or
-remote write data to a central location.
-Alerting and recording rules</p>
-<p>By default, the sharding is performed on:
+instances. To query globally, use either
+* Thanos sidecar + querier for query federation and Thanos Ruler for rules.
+* Remote-write to send metrics to a central location.</p>
+<p>By default, the sharding of targets is performed on:
 * The <code>__address__</code> target’s metadata label for PodMonitor,
 ServiceMonitor and ScrapeConfig resources.
 * The <code>__param_target__</code> label for Probe resources.</p>

bundle.yaml (generated): 20 changes

@@ -28479,7 +28479,7 @@ spec:
 type: string
 shards:
 description: |-
-Number of shards to distribute scraped targets onto.
+Number of shards to distribute the scraped targets onto.
 
 `spec.replicas` multiplied by `spec.shards` is the total number of Pods
 being created.
@@ -28489,11 +28489,11 @@ spec:
 Note that scaling down shards will not reshard data onto the remaining
 instances, it must be manually moved. Increasing shards will not reshard
 data either but it will continue to be available from the same
-instances. To query globally, use Thanos sidecar and Thanos querier or
-remote write data to a central location.
-Alerting and recording rules
+instances. To query globally, use either
+* Thanos sidecar + querier for query federation and Thanos Ruler for rules.
+* Remote-write to send metrics to a central location.
 
-By default, the sharding is performed on:
+By default, the sharding of targets is performed on:
 * The `__address__` target's metadata label for PodMonitor,
 ServiceMonitor and ScrapeConfig resources.
 * The `__param_target__` label for Probe resources.

@@ -41026,7 +41026,7 @@ spec:
 type: string
 shards:
 description: |-
-Number of shards to distribute scraped targets onto.
+Number of shards to distribute the scraped targets onto.
 
 `spec.replicas` multiplied by `spec.shards` is the total number of Pods
 being created.
@@ -41036,11 +41036,11 @@ spec:
 Note that scaling down shards will not reshard data onto the remaining
 instances, it must be manually moved. Increasing shards will not reshard
 data either but it will continue to be available from the same
-instances. To query globally, use Thanos sidecar and Thanos querier or
-remote write data to a central location.
-Alerting and recording rules
+instances. To query globally, use either
+* Thanos sidecar + querier for query federation and Thanos Ruler for rules.
+* Remote-write to send metrics to a central location.
 
-By default, the sharding is performed on:
+By default, the sharding of targets is performed on:
 * The `__address__` target's metadata label for PodMonitor,
 ServiceMonitor and ScrapeConfig resources.
 * The `__param_target__` label for Probe resources.

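As a sketch of the "Remote-write to send metrics to a central location" option from the rewritten text, a sharded Prometheus could forward its samples to one central endpoint, which then serves the global query view (the resource name and URL are placeholders, not part of this commit):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: sharded                # placeholder name
spec:
  shards: 3
  remoteWrite:
    - url: https://central-prometheus.example.com/api/v1/write   # placeholder endpoint
```
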
@@ -7319,7 +7319,7 @@ spec:
 type: string
 shards:
 description: |-
-Number of shards to distribute scraped targets onto.
+Number of shards to distribute the scraped targets onto.
 
 `spec.replicas` multiplied by `spec.shards` is the total number of Pods
 being created.
@@ -7329,11 +7329,11 @@ spec:
 Note that scaling down shards will not reshard data onto the remaining
 instances, it must be manually moved. Increasing shards will not reshard
 data either but it will continue to be available from the same
-instances. To query globally, use Thanos sidecar and Thanos querier or
-remote write data to a central location.
-Alerting and recording rules
+instances. To query globally, use either
+* Thanos sidecar + querier for query federation and Thanos Ruler for rules.
+* Remote-write to send metrics to a central location.
 
-By default, the sharding is performed on:
+By default, the sharding of targets is performed on:
 * The `__address__` target's metadata label for PodMonitor,
 ServiceMonitor and ScrapeConfig resources.
 * The `__param_target__` label for Probe resources.

@@ -9032,7 +9032,7 @@ spec:
 type: string
 shards:
 description: |-
-Number of shards to distribute scraped targets onto.
+Number of shards to distribute the scraped targets onto.
 
 `spec.replicas` multiplied by `spec.shards` is the total number of Pods
 being created.
@@ -9042,11 +9042,11 @@ spec:
 Note that scaling down shards will not reshard data onto the remaining
 instances, it must be manually moved. Increasing shards will not reshard
 data either but it will continue to be available from the same
-instances. To query globally, use Thanos sidecar and Thanos querier or
-remote write data to a central location.
-Alerting and recording rules
+instances. To query globally, use either
+* Thanos sidecar + querier for query federation and Thanos Ruler for rules.
+* Remote-write to send metrics to a central location.
 
-By default, the sharding is performed on:
+By default, the sharding of targets is performed on:
 * The `__address__` target's metadata label for PodMonitor,
 ServiceMonitor and ScrapeConfig resources.
 * The `__param_target__` label for Probe resources.

@@ -7320,7 +7320,7 @@ spec:
 type: string
 shards:
 description: |-
-Number of shards to distribute scraped targets onto.
+Number of shards to distribute the scraped targets onto.
 
 `spec.replicas` multiplied by `spec.shards` is the total number of Pods
 being created.
@@ -7330,11 +7330,11 @@ spec:
 Note that scaling down shards will not reshard data onto the remaining
 instances, it must be manually moved. Increasing shards will not reshard
 data either but it will continue to be available from the same
-instances. To query globally, use Thanos sidecar and Thanos querier or
-remote write data to a central location.
-Alerting and recording rules
+instances. To query globally, use either
+* Thanos sidecar + querier for query federation and Thanos Ruler for rules.
+* Remote-write to send metrics to a central location.
 
-By default, the sharding is performed on:
+By default, the sharding of targets is performed on:
 * The `__address__` target's metadata label for PodMonitor,
 ServiceMonitor and ScrapeConfig resources.
 * The `__param_target__` label for Probe resources.

@@ -9033,7 +9033,7 @@ spec:
 type: string
 shards:
 description: |-
-Number of shards to distribute scraped targets onto.
+Number of shards to distribute the scraped targets onto.
 
 `spec.replicas` multiplied by `spec.shards` is the total number of Pods
 being created.
@@ -9043,11 +9043,11 @@ spec:
 Note that scaling down shards will not reshard data onto the remaining
 instances, it must be manually moved. Increasing shards will not reshard
 data either but it will continue to be available from the same
-instances. To query globally, use Thanos sidecar and Thanos querier or
-remote write data to a central location.
-Alerting and recording rules
+instances. To query globally, use either
+* Thanos sidecar + querier for query federation and Thanos Ruler for rules.
+* Remote-write to send metrics to a central location.
 
-By default, the sharding is performed on:
+By default, the sharding of targets is performed on:
 * The `__address__` target's metadata label for PodMonitor,
 ServiceMonitor and ScrapeConfig resources.
 * The `__param_target__` label for Probe resources.

@@ -6177,7 +6177,7 @@
 "type": "string"
 },
 "shards": {
-"description": "Number of shards to distribute scraped targets onto.\n\n`spec.replicas` multiplied by `spec.shards` is the total number of Pods\nbeing created.\n\nWhen not defined, the operator assumes only one shard.\n\nNote that scaling down shards will not reshard data onto the remaining\ninstances, it must be manually moved. Increasing shards will not reshard\ndata either but it will continue to be available from the same\ninstances. To query globally, use Thanos sidecar and Thanos querier or\nremote write data to a central location.\nAlerting and recording rules\n\nBy default, the sharding is performed on:\n* The `__address__` target's metadata label for PodMonitor,\nServiceMonitor and ScrapeConfig resources.\n* The `__param_target__` label for Probe resources.\n\nUsers can define their own sharding implementation by setting the\n`__tmp_hash` label during the target discovery with relabeling\nconfiguration (either in the monitoring resources or via scrape class).",
+"description": "Number of shards to distribute the scraped targets onto.\n\n`spec.replicas` multiplied by `spec.shards` is the total number of Pods\nbeing created.\n\nWhen not defined, the operator assumes only one shard.\n\nNote that scaling down shards will not reshard data onto the remaining\ninstances, it must be manually moved. Increasing shards will not reshard\ndata either but it will continue to be available from the same\ninstances. To query globally, use either\n* Thanos sidecar + querier for query federation and Thanos Ruler for rules.\n* Remote-write to send metrics to a central location.\n\nBy default, the sharding of targets is performed on:\n* The `__address__` target's metadata label for PodMonitor,\nServiceMonitor and ScrapeConfig resources.\n* The `__param_target__` label for Probe resources.\n\nUsers can define their own sharding implementation by setting the\n`__tmp_hash` label during the target discovery with relabeling\nconfiguration (either in the monitoring resources or via scrape class).",
 "format": "int32",
 "type": "integer"
 },

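The description above also mentions that users can override the default sharding by setting the `__tmp_hash` label during target discovery. A hedged sketch of what that could look like in a ServiceMonitor, sharding targets by node name via `hashmod` relabeling (the name, selector and port are placeholders; the modulus is assumed to match `spec.shards`):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: custom-sharding              # placeholder name
spec:
  selector:
    matchLabels:
      app: example                   # placeholder selector
  endpoints:
    - port: metrics                  # placeholder port name
      relabelings:
        # Hash the node name modulo the shard count so that all targets on
        # the same node are scraped by the same Prometheus shard.
        - sourceLabels: [__meta_kubernetes_pod_node_name]
          targetLabel: __tmp_hash
          action: hashmod
          modulus: 3                 # assumed to equal spec.shards
```
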
@@ -7689,7 +7689,7 @@
 "type": "string"
 },
 "shards": {
-"description": "Number of shards to distribute scraped targets onto.\n\n`spec.replicas` multiplied by `spec.shards` is the total number of Pods\nbeing created.\n\nWhen not defined, the operator assumes only one shard.\n\nNote that scaling down shards will not reshard data onto the remaining\ninstances, it must be manually moved. Increasing shards will not reshard\ndata either but it will continue to be available from the same\ninstances. To query globally, use Thanos sidecar and Thanos querier or\nremote write data to a central location.\nAlerting and recording rules\n\nBy default, the sharding is performed on:\n* The `__address__` target's metadata label for PodMonitor,\nServiceMonitor and ScrapeConfig resources.\n* The `__param_target__` label for Probe resources.\n\nUsers can define their own sharding implementation by setting the\n`__tmp_hash` label during the target discovery with relabeling\nconfiguration (either in the monitoring resources or via scrape class).",
+"description": "Number of shards to distribute the scraped targets onto.\n\n`spec.replicas` multiplied by `spec.shards` is the total number of Pods\nbeing created.\n\nWhen not defined, the operator assumes only one shard.\n\nNote that scaling down shards will not reshard data onto the remaining\ninstances, it must be manually moved. Increasing shards will not reshard\ndata either but it will continue to be available from the same\ninstances. To query globally, use either\n* Thanos sidecar + querier for query federation and Thanos Ruler for rules.\n* Remote-write to send metrics to a central location.\n\nBy default, the sharding of targets is performed on:\n* The `__address__` target's metadata label for PodMonitor,\nServiceMonitor and ScrapeConfig resources.\n* The `__param_target__` label for Probe resources.\n\nUsers can define their own sharding implementation by setting the\n`__tmp_hash` label during the target discovery with relabeling\nconfiguration (either in the monitoring resources or via scrape class).",
 "format": "int32",
 "type": "integer"
 },

@@ -242,7 +242,7 @@ type CommonPrometheusFields struct {
 // +optional
 Replicas *int32 `json:"replicas,omitempty"`
 
-// Number of shards to distribute scraped targets onto.
+// Number of shards to distribute the scraped targets onto.
 //
 // `spec.replicas` multiplied by `spec.shards` is the total number of Pods
 // being created.
@@ -252,11 +252,11 @@ type CommonPrometheusFields struct {
 // Note that scaling down shards will not reshard data onto the remaining
 // instances, it must be manually moved. Increasing shards will not reshard
 // data either but it will continue to be available from the same
-// instances. To query globally, use Thanos sidecar and Thanos querier or
-// remote write data to a central location.
-// Alerting and recording rules
+// instances. To query globally, use either
+// * Thanos sidecar + querier for query federation and Thanos Ruler for rules.
+// * Remote-write to send metrics to a central location.
 //
-// By default, the sharding is performed on:
+// By default, the sharding of targets is performed on:
 // * The `__address__` target's metadata label for PodMonitor,
 // ServiceMonitor and ScrapeConfig resources.
 // * The `__param_target__` label for Probe resources.