
(Documentation) Update ArangoDeployment CR auto-generated docs (#1451)

Authored by Nikita Vaniasin on 2023-10-20 09:28:44 +02:00, committed by GitHub
parent fe66d98444
commit f28c6981dc
37 changed files with 1961 additions and 1590 deletions


@@ -6,6 +6,7 @@
- (Maintenance) Remove obsolete docs, restructure for better UX, generate index files
- (Feature) Add `spec.upgrade.debugLog` option to configure upgrade container logging
- (Documentation) Move documentation from ArangoDB into this repo, update and improve structure
- (Documentation) Update ArangoDeployment CR auto-generated docs
## [1.2.34](https://github.com/arangodb/kube-arangodb/tree/1.2.34) (2023-10-16)
- (Bugfix) Fix make manifests-crd-file command


@@ -40,18 +40,15 @@ that you deploy in your Kubernetes cluster to:
Each of these uses involves a different custom resource.
- Use an [ArangoDeployment resource](deployment-resource-reference.md) to create an ArangoDB database deployment.
- Use an [ArangoMember resource](api/ArangoMember.V1.md) to observe and adjust individual deployment members.
- Use an [ArangoBackup](backup-resource.md) and [ArangoBackupPolicy](backuppolicy-resource.md) resources to create ArangoDB backups.
- Use an [ArangoLocalStorage resource](storage-resource.md) to provide local `PersistentVolumes` for optimal I/O performance.
- Use an [ArangoDeploymentReplication resource](deployment-replication-resource-reference.md) to configure ArangoDB Datacenter-to-Datacenter Replication.
Continue with [Using the ArangoDB Kubernetes Operator](using-the-operator.md)
to learn how to install the ArangoDB Kubernetes operator and create
your first deployment.
For more information about the production readiness state, please refer to the
[main README file](https://github.com/arangodb/kube-arangodb#production-readiness-state).

File diff suppressed because it is too large.


@@ -1,4 +1,6 @@
# ArangoDeployment Custom Resource Overview
[Full CustomResourceDefinition reference ->](./api/ArangoDeployment.V1.md)
The ArangoDB Deployment Operator creates and maintains ArangoDB deployments
in a Kubernetes cluster, given a deployment specification.
@@ -44,797 +46,3 @@ spec:
    count: 3
  image: "arangodb/arangodb:3.9.3"
```
## Specification reference
Below you'll find all settings of the `ArangoDeployment` custom resource.
Several settings are for various groups of servers. These are indicated
with `<group>` where `<group>` can be any of:
- `agents` for all Agents of a `Cluster` or `ActiveFailover` pair.
- `dbservers` for all DB-Servers of a `Cluster`.
- `coordinators` for all Coordinators of a `Cluster`.
- `single` for all single servers of a `Single` instance or `ActiveFailover` pair.
- `syncmasters` for all syncmasters of a `Cluster`.
- `syncworkers` for all syncworkers of a `Cluster`.
Special group `id` can be used for image discovery and testing affinity/toleration settings.
### `spec.architecture: []string`
This setting specifies a CPU architecture for the deployment.
Possible values are:
- `amd64` (default): Use processors with the x86-64 architecture.
- `arm64`: Use processors with the 64-bit ARM architecture.
The setting expects a list of strings, but you should only specify a single
list item for the architecture, except when you want to migrate from one
architecture to the other. The first list item defines the new default
architecture for the deployment that you want to migrate to.
_Tip:_
To use the ARM architecture, you need to enable it in the operator first using
`--set "operator.architectures={amd64,arm64}"`. See
[Installation with Helm](using-the-operator.md#installation-with-helm).
To create a new deployment with `arm64` nodes, specify the architecture in the
deployment specification as follows:
```yaml
spec:
  architecture:
    - arm64
```
To migrate nodes of an existing deployment from `amd64` to `arm64`, modify the
deployment specification so that both architectures are listed:
```diff
 spec:
   architecture:
+    - arm64
     - amd64
```
This lets new members as well as recreated members use `arm64` nodes.
To replace an existing member so that it is recreated on an `arm64` node, run the following command:
```bash
kubectl annotate pod $POD "deployment.arangodb.com/replace=true"
```
To change an existing member to `arm64`, annotate the pod as follows:
```bash
kubectl annotate pod $POD "deployment.arangodb.com/arch=arm64"
```
An `ArchitectureMismatch` condition occurs in the deployment:
```yaml
members:
  single:
    - arango-version: 3.10.0
      architecture: arm64
      conditions:
        reason: Member has a different architecture than the deployment
        status: "True"
        type: ArchitectureMismatch
```
Restart the pod using this command:
```bash
kubectl annotate pod $POD "deployment.arangodb.com/rotate=true"
```
### `spec.mode: string`
This setting specifies the type of deployment you want to create.
Possible values are:
- `Cluster` (default) Full cluster. Defaults to 3 Agents, 3 DB-Servers & 3 Coordinators.
- `ActiveFailover` Active-failover single pair. Defaults to 3 Agents and 2 single servers.
- `Single` Single server only (note this does not provide high availability or reliability).
This setting cannot be changed after the deployment has been created.
### `spec.environment: string`
This setting specifies the type of environment in which the deployment is created.
Possible values are:
- `Development` (default) This value optimizes the deployment for development
use. It is possible to run a deployment on a small number of nodes (e.g. minikube).
- `Production` This value optimizes the deployment for production use.
It puts required affinity constraints on all pods to prevent Agents & DB-Servers
from running on the same machine.
### `spec.image: string`
This setting specifies the docker image to use for all ArangoDB servers.
In a `development` environment this setting defaults to `arangodb/arangodb:latest`.
For `production` environments this is a required setting without a default value.
It is highly recommended to use an explicit version (not `latest`) for production
environments.
### `spec.imagePullPolicy: string`
This setting specifies the pull policy for the docker image to use for all ArangoDB servers.
Possible values are:
- `IfNotPresent` (default) to pull only when the image is not found on the node.
- `Always` to always pull the image before using it.
### `spec.imagePullSecrets: []string`
This setting specifies the list of image pull secrets for the docker image to use for all ArangoDB servers.
### `spec.annotations: map[string]string`
This setting adds the specified annotations to all resources owned by the ArangoDeployment (pods, services, PVCs, PDBs).
### `spec.storageEngine: string`
This setting specifies the type of storage engine used for all servers
in the cluster.
Possible values are:
- `MMFiles` To use the MMFiles storage engine.
- `RocksDB` (default) To use the RocksDB storage engine.
This setting cannot be changed after the cluster has been created.
### `spec.downtimeAllowed: bool`
This setting is used to allow automatic reconciliation actions that yield
some downtime of the ArangoDB deployment.
When this setting is set to `false` (the default), no automatic action that
may result in downtime is allowed.
If the need for such an action is detected, an event is added to the `ArangoDeployment`.
Once this setting is set to `true`, the automatic action is executed.
Operations that may result in downtime are:
- Rotating TLS CA certificate
Note: It is still possible that there is some downtime when the Kubernetes
cluster is down, or in a bad state, irrespective of the value of this setting.
### `spec.memberPropagationMode`
Changes to a pod's configuration require a restart of that pod in almost all
cases. Pods are restarted eagerly by default, which can cause more restarts than
desired, especially when updating _arangod_ as well as the operator.
The propagation of the configuration changes can be deferred to the next restart,
either triggered manually by the user or by another operation like an upgrade.
This reduces the number of restarts for upgrading both the server and the
operator from two to one.
- `always`: Restart the member as soon as a configuration change is discovered
- `on-restart`: Wait until the next restart to change the member configuration
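For example, a minimal sketch of deferring configuration changes to the next restart, using one of the values listed above:
```yaml
spec:
  memberPropagationMode: on-restart
```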
### `spec.rocksdb.encryption.keySecretName`
This setting specifies the name of a Kubernetes `Secret` that contains
an encryption key used for encrypting all data stored by ArangoDB servers.
When an encryption key is used, encryption of the data in the cluster is enabled,
without it encryption is disabled.
The default value is empty.
This requires the Enterprise Edition.
The encryption key cannot be changed after the cluster has been created.
The secret specified by this setting must have a data field named 'key' containing
an encryption key that is exactly 32 bytes long.
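As an illustrative sketch (the Secret name and key value are placeholders, not taken from this repository), the key can be provided and referenced like this:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-encryption-key
type: Opaque
data:
  # base64-encoded value of an exactly 32-byte key
  key: <base64-encoded-32-byte-key>
---
apiVersion: "database.arangodb.com/v1"
kind: "ArangoDeployment"
metadata:
  name: "example"
spec:
  rocksdb:
    encryption:
      keySecretName: my-encryption-key
```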
### `spec.networkAttachedVolumes: bool`
The default of this option is `false`. If set to `true`, a `ResignLeaderShip`
operation will be triggered when a DB-Server pod is evicted (rather than a
`CleanOutServer` operation). Furthermore, the pod will simply be
redeployed on a different node, rather than cleaned and retired and
replaced by a new member. You must only set this option to `true` if
your persistent volumes are "movable" in the sense that they can be
mounted from a different k8s node, like in the case of network attached
volumes. If your persistent volumes are tied to a specific pod, you
must leave this option on `false`.
### `spec.externalAccess.type: string`
This setting specifies the type of `Service` that will be created to provide
access to the ArangoDB deployment from outside the Kubernetes cluster.
Possible values are:
- `None` To limit access to applications running inside the Kubernetes cluster.
- `LoadBalancer` To create a `Service` of type `LoadBalancer` for the ArangoDB deployment.
- `NodePort` To create a `Service` of type `NodePort` for the ArangoDB deployment.
- `Auto` (default) To create a `Service` of type `LoadBalancer` and fall back to a `Service` of type `NodePort` when the
`LoadBalancer` is not assigned an IP address.
### `spec.externalAccess.loadBalancerIP: string`
This setting specifies the IP used for the LoadBalancer to expose the ArangoDB deployment on.
This setting is used when `spec.externalAccess.type` is set to `LoadBalancer` or `Auto`.
If you do not specify this setting, an IP will be chosen automatically by the load-balancer provisioner.
### `spec.externalAccess.loadBalancerSourceRanges: []string`
If specified and supported by the platform (cloud provider), traffic through the cloud-provider
load-balancer is restricted to the specified client IPs. This field is ignored if the
cloud provider does not support the feature.
More info: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/
### `spec.externalAccess.nodePort: int`
This setting specifies the port used to expose the ArangoDB deployment on.
This setting is used when `spec.externalAccess.type` is set to `NodePort` or `Auto`.
If you do not specify this setting, a random port will be chosen automatically.
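As a sketch, a deployment exposed through a fixed node port (the port number here is only an example) could be configured as follows:
```yaml
spec:
  externalAccess:
    type: NodePort
    nodePort: 30529
```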
### `spec.externalAccess.advertisedEndpoint: string`
This setting specifies the advertised endpoint for all Coordinators.
### `spec.auth.jwtSecretName: string`
This setting specifies the name of a kubernetes `Secret` that contains
the JWT token used for accessing all ArangoDB servers.
When no name is specified, it defaults to `<deployment-name>-jwt`.
To disable authentication, set this value to `None`.
If you specify a name of a `Secret`, that secret must have the token
in a data field named `token`.
If you specify a name of a `Secret` that does not exist, a random token is created
and stored in a `Secret` with given name.
Changing a JWT token results in stopping the entire cluster
and restarting it.
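A minimal sketch (the Secret name is hypothetical); set the value to `None` instead to disable authentication:
```yaml
spec:
  auth:
    jwtSecretName: my-deployment-jwt
```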
### `spec.tls.caSecretName: string`
This setting specifies the name of a kubernetes `Secret` that contains
a standard CA certificate + private key used to sign certificates for individual
ArangoDB servers.
When no name is specified, it defaults to `<deployment-name>-ca`.
To disable TLS, set this value to `None`.
If you specify a name of a `Secret` that does not exist, a self-signed CA certificate + key is created
and stored in a `Secret` with given name.
The specified `Secret` must contain the following data fields:
- `ca.crt` PEM encoded public key of the CA certificate
- `ca.key` PEM encoded private key of the CA certificate
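For illustration, such a CA `Secret` and the deployment referencing it might look as follows (names and values are placeholders):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-ca
type: Opaque
data:
  ca.crt: <base64-encoded PEM certificate>
  ca.key: <base64-encoded PEM private key>
---
apiVersion: "database.arangodb.com/v1"
kind: "ArangoDeployment"
metadata:
  name: "example"
spec:
  tls:
    caSecretName: my-ca
```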
### `spec.tls.altNames: []string`
This setting specifies a list of alternate names that will be added to all generated
certificates. These names can be DNS names or email addresses.
The default value is empty.
### `spec.tls.ttl: duration`
This setting specifies the time to live of all generated
server certificates.
The default value is `2160h` (about 3 months).
When the server certificate is about to expire, it will be automatically replaced
by a new one and the affected server will be restarted.
Note: The time to live of the CA certificate (when created automatically)
will be set to 10 years.
### `spec.sync.enabled: bool`
This setting enables/disables support for datacenter-to-datacenter
replication in the cluster. When enabled, the cluster will contain
a number of `syncmaster` & `syncworker` servers.
The default value is `false`.
### `spec.sync.externalAccess.type: string`
This setting specifies the type of `Service` that will be created to provide
access to the ArangoSync syncMasters from outside the Kubernetes cluster.
Possible values are:
- `None` To limit access to applications running inside the Kubernetes cluster.
- `LoadBalancer` To create a `Service` of type `LoadBalancer` for the ArangoSync SyncMasters.
- `NodePort` To create a `Service` of type `NodePort` for the ArangoSync SyncMasters.
- `Auto` (default) To create a `Service` of type `LoadBalancer` and fall back to a `Service` of type `NodePort` when the
`LoadBalancer` is not assigned an IP address.
Note that when you specify a value of `None`, a `Service` will still be created, but of type `ClusterIP`.
### `spec.sync.externalAccess.loadBalancerIP: string`
This setting specifies the IP used for the LoadBalancer to expose the ArangoSync SyncMasters on.
This setting is used when `spec.sync.externalAccess.type` is set to `LoadBalancer` or `Auto`.
If you do not specify this setting, an IP will be chosen automatically by the load-balancer provisioner.
### `spec.sync.externalAccess.nodePort: int`
This setting specifies the port used to expose the ArangoSync SyncMasters on.
This setting is used when `spec.sync.externalAccess.type` is set to `NodePort` or `Auto`.
If you do not specify this setting, a random port will be chosen automatically.
### `spec.sync.externalAccess.loadBalancerSourceRanges: []string`
If specified and supported by the platform (cloud provider), traffic through the cloud-provider
load-balancer is restricted to the specified client IPs. This field is ignored if the
cloud provider does not support the feature.
More info: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/
### `spec.sync.externalAccess.masterEndpoint: []string`
This setting specifies the master endpoint(s) advertised by the ArangoSync SyncMasters.
If not set, this setting defaults to:
- If `spec.sync.externalAccess.loadBalancerIP` is set, it defaults to `https://<load-balancer-ip>:<8629>`.
- Otherwise it defaults to `https://<sync-service-dns-name>:<8629>`.
### `spec.sync.externalAccess.accessPackageSecretNames: []string`
This setting specifies the names of zero or more `Secrets` that will be created by the deployment
operator containing "access packages". An access package contains those `Secrets` that are needed
to access the SyncMasters of this `ArangoDeployment`.
By removing a name from this setting, the corresponding `Secret` is also deleted.
Note that to remove all access packages, leave an empty array in place (`[]`).
Completely removing the setting results in not modifying the list.
See [the `ArangoDeploymentReplication` specification](deployment-replication-resource-reference.md) for more information
on access packages.
### `spec.sync.auth.jwtSecretName: string`
This setting specifies the name of a kubernetes `Secret` that contains
the JWT token used for accessing all ArangoSync master servers.
When not specified, the `spec.auth.jwtSecretName` value is used.
If you specify a name of a `Secret` that does not exist, a random token is created
and stored in a `Secret` with given name.
### `spec.sync.auth.clientCASecretName: string`
This setting specifies the name of a kubernetes `Secret` that contains
a PEM encoded CA certificate used for client certificate verification
in all ArangoSync master servers.
This is a required setting when `spec.sync.enabled` is `true`.
The default value is empty.
### `spec.sync.mq.type: string`
This setting sets the type of message queue used by ArangoSync.
Possible values are:
- `Direct` (default) for direct HTTP connections between the 2 data centers.
### `spec.sync.tls.caSecretName: string`
This setting specifies the name of a kubernetes `Secret` that contains
a standard CA certificate + private key used to sign certificates for individual
ArangoSync master servers.
When no name is specified, it defaults to `<deployment-name>-sync-ca`.
If you specify a name of a `Secret` that does not exist, a self-signed CA certificate + key is created
and stored in a `Secret` with given name.
The specified `Secret` must contain the following data fields:
- `ca.crt` PEM encoded public key of the CA certificate
- `ca.key` PEM encoded private key of the CA certificate
### `spec.sync.tls.altNames: []string`
This setting specifies a list of alternate names that will be added to all generated
certificates. These names can be DNS names or email addresses.
The default value is empty.
### `spec.sync.monitoring.tokenSecretName: string`
This setting specifies the name of a kubernetes `Secret` that contains
the bearer token used for accessing all monitoring endpoints of all ArangoSync
servers.
When not specified, no monitoring token is used.
The default value is empty.
### `spec.disableIPv6: bool`
This setting prevents the use of IPv6 addresses by ArangoDB servers.
The default is `false`.
This setting cannot be changed after the deployment has been created.
### `spec.restoreFrom: string`
This setting specifies an `ArangoBackup` resource name the cluster should be restored from.
After a restore or failure to do so, the status of the deployment contains information about the
restore operation in the `restore` key.
It will contain some of the following fields:
- _requestedFrom_: name of the `ArangoBackup` used to restore from.
- _message_: optional message explaining why the restore failed.
- _state_: state indicating if the restore was successful or not. Possible values: `Restoring`, `Restored`, `RestoreFailed`
If the `restoreFrom` key is removed from the spec, the `restore` key is deleted as well.
A new restore attempt is made if and only if the `restore` key is not set in the status, or `spec.restoreFrom` differs from the `requestedFrom` value recorded in the status.
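For example, to restore from a previously created backup (the backup name is hypothetical):
```yaml
spec:
  restoreFrom: my-nightly-backup
```
After the operation finishes, the `requestedFrom` field in the status holds this name and `state` reports `Restored` or `RestoreFailed`.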
### `spec.license.secretName: string`
This setting specifies the name of a kubernetes `Secret` that contains
the license key token used for enterprise images. This value is not used for
the Community Edition.
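A minimal sketch (the Secret name is an example; create the Secret with your license token beforehand):
```yaml
spec:
  license:
    secretName: arango-license-key
```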
### `spec.bootstrap.passwordSecretNames.root: string`
This setting specifies a secret name for the credentials of the root user.
When a deployment is created the operator will set up the root user account
according to the credentials given by the secret. If the secret doesn't exist
the operator creates a secret with a random password.
There are two magic values for the secret name:
- `None` specifies no action. This disables root password randomization. This is the default value. (Thus the root password is empty - not recommended)
- `Auto` specifies automatic name generation, which is `<deploymentname>-root-password`.
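For example, to point the root credentials at an explicitly created secret (the Secret name is only an example):
```yaml
spec:
  bootstrap:
    passwordSecretNames:
      root: arango-root-pwd
```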
### `spec.metrics.enabled: bool`
If this is set to `true`, the operator runs a sidecar container for
every Agent, DB-Server, Coordinator and Single server.
In addition to the sidecar containers the operator will deploy a service
to access the exporter ports (from within the k8s cluster), and a
resource of type `ServiceMonitor`, provided the corresponding custom
resource definition is deployed in the k8s cluster. If you are running
Prometheus in the same k8s cluster with the Prometheus operator, this
will be the case. The `ServiceMonitor` will have the following labels
set:
- `app: arangodb`
- `arango_deployment: YOUR_DEPLOYMENT_NAME`
- `context: metrics`
- `metrics: prometheus`
This makes it possible to configure your Prometheus deployment to
automatically start monitoring the available Prometheus feeds. To
this end, you must configure the `serviceMonitorSelector` in the specs
of your Prometheus deployment to match these labels. For example:
```yaml
serviceMonitorSelector:
  matchLabels:
    metrics: prometheus
```
would automatically select all pods of all ArangoDB cluster deployments
which have metrics enabled.
### `spec.metrics.image: string`
<small>Deprecated in: v1.2.0 (kube-arangodb)</small>
See above, this is the name of the Docker image for the ArangoDB
exporter to expose metrics. If empty, the same image as for the main
deployment is used.
### `spec.metrics.resources: ResourceRequirements`
<small>Introduced in: v0.4.3 (kube-arangodb)</small>
This setting specifies the resources required by the metrics container.
This includes requests and limits.
See [Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container).
### `spec.metrics.mode: string`
<small>Introduced in: v1.0.2 (kube-arangodb)</small>
Defines metrics exporter mode.
Possible values:
- `exporter` (default): adds a sidecar to pods (except Agency pods) and exposes
metrics collected by the exporter from the ArangoDB container. The exporter in this mode
exposes metrics which are accessible without authentication.
- `sidecar`: adds a sidecar to all pods and exposes metrics from the ArangoDB metrics
endpoint. The exporter in this mode exposes metrics which are accessible without
authentication.
- `internal`: configures the ServiceMonitor to use the internal ArangoDB metrics endpoint
(a proper JWT token is generated for this endpoint).
### `spec.metrics.tls: bool`
<small>Introduced in: v1.1.0 (kube-arangodb)</small>
Defines if TLS should be enabled on Metrics exporter endpoint.
The default is `true`.
This option will enable TLS only if TLS is enabled on the ArangoDeployment;
otherwise a `true` value has no effect.
### `spec.lifecycle.resources: ResourceRequirements`
<small>Introduced in: v0.4.3 (kube-arangodb)</small>
This setting specifies the resources required by the lifecycle init container.
This includes requests and limits.
See [Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container).
### `spec.<group>.count: number`
This setting specifies the number of servers to start for the given group.
For the Agent group, this value must be a positive, odd number.
The default value is `3` for all groups except `single` (there the default is `1`
for `spec.mode: Single` and `2` for `spec.mode: ActiveFailover`).
For the `syncworkers` group, it is highly recommended to use the same number
as for the `dbservers` group.
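For instance, a sketch of a `Cluster` deployment that scales DB-Servers to five members while keeping the defaults elsewhere:
```yaml
spec:
  mode: Cluster
  agents:
    count: 3
  dbservers:
    count: 5
  coordinators:
    count: 3
```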
### `spec.<group>.minCount: number`
Specifies a minimum for the count of servers. If set, a specification is invalid if `count < minCount`.
### `spec.<group>.maxCount: number`
Specifies a maximum for the count of servers. If set, a specification is invalid if `count > maxCount`.
### `spec.<group>.args: []string`
This setting specifies additional command-line arguments passed to all servers of this group.
The default value is an empty array.
### `spec.<group>.resources: ResourceRequirements`
This setting specifies the resources required by pods of this group. This includes requests and limits.
See https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ for details.
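A sketch with example values (adjust them to your workload):
```yaml
spec:
  dbservers:
    resources:
      requests:
        cpu: "2"
        memory: 4Gi
      limits:
        cpu: "4"
        memory: 4Gi
```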
### `spec.<group>.overrideDetectedTotalMemory: bool`
<small>Introduced in: v1.0.1 (kube-arangodb)</small>
Set additional flag in ArangoDeployment pods to propagate Memory resource limits
### `spec.<group>.volumeClaimTemplate.Spec: PersistentVolumeClaimSpec`
Specifies a volumeClaimTemplate used by the operator to create volume claims for pods of this group.
This setting is not available for group `coordinators`, `syncmasters` & `syncworkers`.
The default value describes a volume with `8Gi` storage, `ReadWriteOnce` access mode and volume mode set to `PersistentVolumeFilesystem`.
If this field is not set and `spec.<group>.resources.requests.storage` is set, then a default volume claim
with size as specified by `spec.<group>.resources.requests.storage` will be created. In that case `storage`
and `iops` are not forwarded to the pod's resource requirements.
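As a sketch (the storage class name is a placeholder for one available in your cluster):
```yaml
spec:
  dbservers:
    volumeClaimTemplate:
      spec:
        storageClassName: my-fast-ssd
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
```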
### `spec.<group>.pvcResizeMode: string`
Specifies the resize mode used by the operator to resize PVCs and PVs.
Supported modes:
- `runtime` (default) - the PVC is resized while the Pod is running (EKS, GKE)
- `rotate` - the Pod is shut down and the PVC is resized (AKS)
### `spec.<group>.serviceAccountName: string`
This setting specifies the `serviceAccountName` for the `Pods` created
for each server of this group. If empty, it defaults to using the
`default` service account.
Using an alternative `ServiceAccount` is typically used to separate access rights.
The ArangoDB deployments need some very minimal access rights. With the
deployment of the operator, we grant the following rights for the `default`
service account:
```yaml
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
```
If you are using a different service account, please grant these rights
to that service account.
### `spec.<group>.annotations: map[string]string`
This setting sets annotation overrides for pods in this group. Annotations are merged with `spec.annotations`.
### `spec.<group>.priorityClassName: string`
Priority class name for pods of this group. Will be forwarded to the pod spec. [Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/)
### `spec.<group>.probes.livenessProbeDisabled: bool`
If set to true, the operator does not generate a liveness probe for new pods belonging to this group.
### `spec.<group>.probes.livenessProbeSpec.initialDelaySeconds: int`
Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 2 seconds. Minimum value is 0.
### `spec.<group>.probes.livenessProbeSpec.periodSeconds: int`
How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
### `spec.<group>.probes.livenessProbeSpec.timeoutSeconds: int`
Number of seconds after which the probe times out. Defaults to 2 seconds. Minimum value is 1.
### `spec.<group>.probes.livenessProbeSpec.failureThreshold: int`
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up.
Giving up means restarting the container. Defaults to 3. Minimum value is 1.
### `spec.<group>.probes.readinessProbeDisabled: bool`
If set to true, the operator does not generate a readiness probe for new pods belonging to this group.
### `spec.<group>.probes.readinessProbeSpec.initialDelaySeconds: int`
Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 2 seconds. Minimum value is 0.
### `spec.<group>.probes.readinessProbeSpec.periodSeconds: int`
How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
### `spec.<group>.probes.readinessProbeSpec.timeoutSeconds: int`
Number of seconds after which the probe times out. Defaults to 2 seconds. Minimum value is 1.
### `spec.<group>.probes.readinessProbeSpec.successThreshold: int`
Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Minimum value is 1.
### `spec.<group>.probes.readinessProbeSpec.failureThreshold: int`
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up.
Giving up means the Pod will be marked Unready. Defaults to 3. Minimum value is 1.
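A sketch of tuning the probes for a group (values are illustrative only):
```yaml
spec:
  dbservers:
    probes:
      livenessProbeDisabled: false
      livenessProbeSpec:
        initialDelaySeconds: 15
        periodSeconds: 10
        timeoutSeconds: 5
        failureThreshold: 5
      readinessProbeSpec:
        initialDelaySeconds: 15
        successThreshold: 1
        failureThreshold: 3
```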
### `spec.<group>.allowMemberRecreation: bool`
<small>Introduced in: v1.2.1 (kube-arangodb)</small>
This setting changes the member recreation logic based on group:
- For Sync Masters, Sync Workers, Coordinators and DB-Servers, it determines if a member can be recreated in case of failure (default `true`)
- For Agents and Single servers, this value is hardcoded to `false` and the value provided in the spec is ignored.
### `spec.<group>.tolerations: []Toleration`
This setting specifies the `tolerations` for the `Pod`s created
for each server of this group.
By default, suitable tolerations are set for the following keys with the `NoExecute` effect:
- `node.kubernetes.io/not-ready`
- `node.kubernetes.io/unreachable`
- `node.alpha.kubernetes.io/unreachable` (will be removed in a future version)
For more information on tolerations, consult the
[Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/).
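For example, to let members of a group run on tainted nodes (the taint key and value are hypothetical):
```yaml
spec:
  dbservers:
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "arangodb"
        effect: "NoSchedule"
```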
### `spec.<group>.nodeSelector: map[string]string`
This setting specifies a set of labels to be used as `nodeSelector` for Pods of this group.
For more information on node selectors, consult the
[Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/).
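A minimal sketch (the node label is hypothetical):
```yaml
spec:
  dbservers:
    nodeSelector:
      node-role.example.com/database: "true"
```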
### `spec.<group>.entrypoint: string`
Entrypoint overrides container executable.
### `spec.<group>.antiAffinity: PodAntiAffinity`
Specifies additional `antiAffinity` settings in ArangoDB Pod definitions.
For more information on `antiAffinity`, consult the
[Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/).
### `spec.<group>.affinity: PodAffinity`
Specifies additional `affinity` settings in ArangoDB Pod definitions.
For more information on `affinity`, consult the
[Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/).
### `spec.<group>.nodeAffinity: NodeAffinity`
Specifies additional `nodeAffinity` settings in ArangoDB Pod definitions.
For more information on `nodeAffinity`, consult the
[Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/).
### `spec.<group>.securityContext: ServerGroupSpecSecurityContext`
Specifies additional `securityContext` settings in ArangoDB Pod definitions.
This is similar (but not fully compatible) to k8s SecurityContext definition.
For more information on `securityContext`, consult the
[Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
### `spec.<group>.securityContext.addCapabilities: []Capability`
Adds new capabilities to containers.
### `spec.<group>.securityContext.allowPrivilegeEscalation: bool`
Controls whether a process can gain more privileges than its parent process.
### `spec.<group>.securityContext.privileged: bool`
Runs container in privileged mode. Processes in privileged containers are
essentially equivalent to root on the host.
### `spec.<group>.securityContext.readOnlyRootFilesystem: bool`
Mounts the container's root filesystem as read-only.
### `spec.<group>.securityContext.runAsNonRoot: bool`
Indicates that the container must run as a non-root user.
### `spec.<group>.securityContext.runAsUser: integer`
The UID to run the entrypoint of the container process.
### `spec.<group>.securityContext.runAsGroup: integer`
The GID to run the entrypoint of the container process.
### `spec.<group>.securityContext.supplementalGroups: []integer`
A list of groups applied to the first process run in each container, in addition to the container's primary GID,
the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process.
### `spec.<group>.securityContext.fsGroup: integer`
A special supplemental group that applies to all containers in a pod.
### `spec.<group>.securityContext.seccompProfile: SeccompProfile`
The seccomp options to use by the containers in this pod.
### `spec.<group>.securityContext.seLinuxOptions: SELinuxOptions`
The SELinux context to be applied to all containers.
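Putting some of these together, a hardened group could look like this sketch (the IDs are examples only):
```yaml
spec:
  dbservers:
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
      runAsGroup: 1000
      fsGroup: 1000
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
```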
## Image discovery group `spec.id` fields
The image discovery (`id`) group supports only the following subset of fields.
Refer to the corresponding field documentation in the `spec.<group>` descriptions.
- `spec.id.entrypoint: string`
- `spec.id.tolerations: []Toleration`
- `spec.id.nodeSelector: map[string]string`
- `spec.id.priorityClassName: string`
- `spec.id.antiAffinity: PodAntiAffinity`
- `spec.id.affinity: PodAffinity`
- `spec.id.nodeAffinity: NodeAffinity`
- `spec.id.serviceAccountName: string`
- `spec.id.securityContext: ServerGroupSpecSecurityContext`
- `spec.id.resources: ResourceRequirements`
## Deprecated Fields
### `spec.<group>.resources.requests.storage: storageUnit`
This setting specifies the amount of storage required for each server of this group.
The default value is `8Gi`.
This setting is not available for group `coordinators`, `syncmasters` & `syncworkers`
because servers in these groups do not need persistent storage.
Please use VolumeClaimTemplate from now on. This field is not considered if
VolumeClaimTemplate is set. Note however, that the information in requests
is completely handed over to the pod in this case.
### `spec.<group>.storageClassName: string`
This setting specifies the `storageClass` for the `PersistentVolume`s created
for each server of this group.
This setting is not available for group `coordinators`, `syncmasters` & `syncworkers`
because servers in these groups do not need persistent storage.
Please use VolumeClaimTemplate from now on. This field is not considered if
VolumeClaimTemplate is set. Note however, that the information in requests
is completely handed over to the pod in this case.


@@ -1,7 +1,8 @@
## How-to...
- [Set a license key](./set_license.md)
- [Pass additional params to operator](additional_configuration.md)
- [Set a root user password](./set_root_user_password.md)
- [Change architecture / enable ARM support](arch_change.md)
- [Configure timezone for cluster](configuring_tz.md)
- [Collect debug data for support case](debugging.md)


@@ -1,148 +0,0 @@
# Metrics collection
The operator provides metrics about its operations in a format supported by [Prometheus](https://prometheus.io/).
The metrics are exposed through HTTPS on port `8528` under path `/metrics`.
For a full list of available metrics, see [here](../generated/metrics/README.md).
#### Contents
- [Integration with standard Prometheus installation (no TLS)](#Integration-with-standard-Prometheus-installation-no-TLS)
- [Integration with standard Prometheus installation (TLS)](#Integration-with-standard-Prometheus-installation-TLS)
- [Integration with Prometheus Operator](#Integration-with-Prometheus-Operator)
- [Exposing ArangoDB metrics](#ArangoDB-metrics)
## Integration with standard Prometheus installation (no TLS)
After creating the operator deployment, you must configure Prometheus using a configuration file that instructs it
about which targets to scrape.
To do so, add a new scrape job to your prometheus.yaml config:
```yaml
scrape_configs:
  - job_name: 'arangodb-operator'
    scrape_interval: 10s # scrape every 10 seconds.
    scheme: 'https'
    tls_config:
      insecure_skip_verify: true
    static_configs:
      - targets:
          - "<operator-endpoint-ip>:8528"
```
## Integration with standard Prometheus installation (TLS)
By default, the operator uses a self-signed certificate for its server API.
To use your own certificate, you need to create a k8s secret containing the certificate and provide the secret name to the operator.
Create the k8s secret (in the same namespace where the operator is running):
```shell
kubectl create secret tls my-own-certificate --cert ./cert.crt --key ./cert.key
```
Then edit the operator deployment definition (`kubectl edit deployments.apps`) to use your secret for its server API:
```
spec:
  # ...
  containers:
    # ...
    args:
      - --server.tls-secret-name=my-own-certificate
      # ...
```
Wait for operator pods to restart.
Now update Prometheus config to use your certificate for operator scrape job:
```yaml
tls_config:
  # if you are using self-signed certificate, just specify CA certificate:
  ca_file: /etc/prometheus/rootCA.crt
  # otherwise, specify the generated client certificate and key:
  cert_file: /etc/prometheus/cert.crt
  key_file: /etc/prometheus/cert.key
```
## Integration with Prometheus Operator
Assuming that you have [Prometheus Operator](https://prometheus-operator.dev/) installed in your cluster (`monitoring` namespace),
and kube-arangodb installed in `default` namespace, you can easily configure the integration with ArangoDB operator.
The easiest way to do that is to create a new ServiceMonitor:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: arango-deployment-operator
  namespace: monitoring
  labels:
    prometheus: kube-prometheus
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-arangodb
  namespaceSelector:
    matchNames:
      - default
  endpoints:
    - port: server
      scheme: https
      tlsConfig:
        insecureSkipVerify: true
```
You can also find an example Grafana dashboard in the `examples/metrics` folder of this repo.
## ArangoDB metrics
The operator can run sidecar containers for ArangoDB deployments of type `Cluster` which expose metrics in Prometheus format.
Edit your `ArangoDeployment` resource, setting `spec.metrics.enabled` to true to enable ArangoDB metrics:
```yaml
spec:
  metrics:
    enabled: true
```
The operator will run a sidecar container for every cluster component.
In addition to the sidecar containers the operator will deploy a `Service` to access the exporter ports (from within the k8s cluster),
and a resource of type `ServiceMonitor`, provided the corresponding custom resource definition is deployed in the k8s cluster.
If you are running Prometheus in the same k8s cluster with the Prometheus operator, this will be the case.
The ServiceMonitor will have the following labels set:
```yaml
app: arangodb
arango_deployment: YOUR_DEPLOYMENT_NAME
context: metrics
metrics: prometheus
```
This makes it possible to configure your Prometheus deployment to automatically start monitoring the available Prometheus feeds.
To this end, you must configure the `serviceMonitorSelector` in the specs of your Prometheus deployment to match these labels. For example:
```yaml
serviceMonitorSelector:
  matchLabels:
    metrics: prometheus
```
would automatically select all pods of all ArangoDB cluster deployments which have metrics enabled.
By default, the sidecar metrics exporters use TLS for all connections. You can disable TLS by specifying:
```yaml
spec:
  metrics:
    enabled: true
    tls: false
```
You can fine-tune the monitored metrics by specifying `ArangoDeployment` annotations. Example:
```yaml
spec:
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9101'
    prometheus.io/scrape_interval: '5s'
```
See the [Metrics HTTP API documentation](https://docs.arangodb.com/stable/develop/http/monitoring/#metrics)
for the metrics exposed by ArangoDB deployments.


@@ -0,0 +1,14 @@
# Set root user password
1) Create a kubernetes [Secret](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/) with the root password:
```bash
kubectl create secret generic arango-root-pwd --from-literal=password=<paste_your_password_here>
```
2) Then specify the newly created secret in the ArangoDeploymentSpec:
```yaml
spec:
  bootstrap:
    passwordSecretNames:
      root: arango-root-pwd
```


@@ -28,6 +28,15 @@ import (
// AuthenticationSpec holds authentication specific configuration settings
type AuthenticationSpec struct {
// JWTSecretName setting specifies the name of a kubernetes `Secret` that contains
// the JWT token used for accessing all ArangoDB servers.
// When no name is specified, it defaults to `<deployment-name>-jwt`.
// To disable authentication, set this value to `None`.
// If you specify a name of a `Secret`, that secret must have the token
// in a data field named `token`.
// If you specify a name of a `Secret` that does not exist, a random token is created
// and stored in a `Secret` with given name.
// Changing a JWT token results in restarting of a whole cluster.
JWTSecretName *string `json:"jwtSecretName,omitempty"`
}


@@ -1,7 +1,7 @@
//
// DISCLAIMER
//
// Copyright 2016-2023 ArangoDB GmbH, Cologne, Germany
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -50,6 +50,15 @@ type PasswordSecretNameList map[string]PasswordSecretName
// BootstrapSpec contains information for cluster bootstrapping
type BootstrapSpec struct {
// PasswordSecretNames contains a map of username to password-secret-name
// This setting specifies a secret name for the credentials per specific users.
// When a deployment is created the operator will setup the user accounts
// according to the credentials given by the secret. If the secret doesn't exist
// the operator creates a secret with a random password.
// There are two magic values for the secret name:
// - `None` specifies no action. This disables root password randomization. This is the default value. (Thus the root password is empty - not recommended)
// - `Auto` specifies automatic name generation, which is `<deploymentname>-root-password`.
// +doc/type: map[string]string
// +doc/link: How to set root user password|/docs/how-to/set_root_user_password.md
PasswordSecretNames PasswordSecretNameList `json:"passwordSecretNames,omitempty"`
}


@@ -74,6 +74,10 @@ func (m *MetricsMode) Get() MetricsMode {
// MetricsSpec contains spec for arangodb exporter
type MetricsSpec struct {
// Enabled if this is set to `true`, the operator runs a sidecar container for
// every Agent, DB-Server, Coordinator and Single server.
// +doc/default: false
// +doc/link: Metrics collection|/docs/metrics.md
Enabled *bool `json:"enabled,omitempty"`
// deprecated
Image *string `json:"image,omitempty"`
@@ -84,6 +88,10 @@ type MetricsSpec struct {
Resources core.ResourceRequirements `json:"resources,omitempty"`
// deprecated
Mode *MetricsMode `json:"mode,omitempty"`
// TLS defines if TLS should be enabled on Metrics exporter endpoint.
// This option will enable TLS only if TLS is enabled on ArangoDeployment,
// otherwise `true` value will not take any effect.
// +doc/default: true
TLS *bool `json:"tls,omitempty"`
ServiceMonitor *MetricsServiceMonitorSpec `json:"serviceMonitor,omitempty"`


@@ -252,6 +252,13 @@ type DeploymentSpec struct {
// Architecture defines the list of supported architectures.
// First element on the list is marked as default architecture.
// Possible values are:
// - `amd64`: Use processors with the x86-64 architecture.
// - `arm64`: Use processors with the 64-bit ARM architecture.
// The setting expects a list of strings, but you should only specify a single
// list item for the architecture, except when you want to migrate from one
// architecture to the other. The first list item defines the new default
// architecture for the deployment that you want to migrate to.
// +doc/link: Architecture Change|/docs/how-to/arch_change.md
// +doc/type: []string
// +doc/default: ['amd64']


@@ -39,9 +39,12 @@ type ExternalAccessSpec struct {
Type *ExternalAccessType `json:"type,omitempty"`
// NodePort define optional port used in case of Auto or NodePort type.
// This setting is used when `spec.externalAccess.type` is set to `NodePort` or `Auto`.
// If you do not specify this setting, a random port will be chosen automatically.
NodePort *int `json:"nodePort,omitempty"`
// LoadBalancerIP define optional IP used to configure a load-balancer on, in case of Auto or LoadBalancer type.
// If you do not specify this setting, an IP will be chosen automatically by the load-balancer provisioner.
LoadBalancerIP *string `json:"loadBalancerIP,omitempty"`
// LoadBalancerSourceRanges define LoadBalancerSourceRanges used for LoadBalancer Service type


@@ -27,6 +27,9 @@ import (
// LicenseSpec holds the license related information
type LicenseSpec struct {
// SecretName setting specifies the name of a kubernetes `Secret` that contains
// the license key token used for enterprise images. This value is not used for
// the Community Edition.
SecretName *string `json:"secretName,omitempty"`
}


@@ -28,6 +28,12 @@ import (
// RocksDBEncryptionSpec holds rocksdb encryption at rest specific configuration settings
type RocksDBEncryptionSpec struct {
// KeySecretName setting specifies the name of a Kubernetes `Secret` that contains an encryption key used for encrypting all data stored by ArangoDB servers.
// When an encryption key is used, encryption of the data in the cluster is enabled, without it encryption is disabled.
// The default value is empty.
// This requires the Enterprise Edition.
// The encryption key cannot be changed after the cluster has been created.
// The secret specified by this setting, must have a data field named 'key' containing an encryption key that is exactly 32 bytes long.
KeySecretName *string `json:"keySecretName,omitempty"`
}


@@ -42,16 +42,27 @@ type ServerGroupSpecSecurityContext struct {
// Deprecated: This field is added for backward compatibility. Will be removed in 1.1.0.
DropAllCapabilities *bool `json:"dropAllCapabilities,omitempty"`
// AddCapabilities add new capabilities to containers
// +doc/type: []core.Capability
AddCapabilities []core.Capability `json:"addCapabilities,omitempty"`
// AllowPrivilegeEscalation Controls whether a process can gain more privileges than its parent process.
AllowPrivilegeEscalation *bool `json:"allowPrivilegeEscalation,omitempty"`
// Privileged If true, runs container in privileged mode. Processes in privileged containers are
// essentially equivalent to root on the host.
Privileged *bool `json:"privileged,omitempty"`
// ReadOnlyRootFilesystem if true, mounts the container's root filesystem as read-only.
ReadOnlyRootFilesystem *bool `json:"readOnlyRootFilesystem,omitempty"`
// RunAsNonRoot if true, indicates that the container must run as a non-root user.
RunAsNonRoot *bool `json:"runAsNonRoot,omitempty"`
// RunAsUser is the UID to run the entrypoint of the container process.
RunAsUser *int64 `json:"runAsUser,omitempty"`
// RunAsGroup is the GID to run the entrypoint of the container process.
RunAsGroup *int64 `json:"runAsGroup,omitempty"`
// SupplementalGroups is a list of groups applied to the first process run in each container, in addition to the container's primary GID,
// the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process.
SupplementalGroups []int64 `json:"supplementalGroups,omitempty"`
// FSGroup is a special supplemental group that applies to all containers in a pod.
FSGroup *int64 `json:"fsGroup,omitempty"`
// Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported


@@ -68,13 +68,20 @@ const (
type ServerGroupSpec struct {
group ServerGroup `json:"-"`
// Count setting specifies the number of servers to start for the given group.
// For the Agent group, this value must be a positive, odd number.
// The default value is `3` for all groups except `single` (there the default is `1`
// for `spec.mode: Single` and `2` for `spec.mode: ActiveFailover`).
// For the `syncworkers` group, it is highly recommended to use the same number
// as for the `dbservers` group.
Count *int `json:"count,omitempty"`
// MinCount specifies a minimum for the count of servers. If set, a specification is invalid if `count < minCount`.
MinCount *int `json:"minCount,omitempty"`
// MaxCount specifies a maximum for the count of servers. If set, a specification is invalid if `count > maxCount`.
MaxCount *int `json:"maxCount,omitempty"`
// Args setting specifies additional command-line arguments passed to all servers of this group.
// +doc/type: []string
// +doc/default: []
Args []string `json:"args,omitempty"`
// Entrypoint overrides container executable
Entrypoint *string `json:"entrypoint,omitempty"`
@@ -99,10 +106,16 @@ type ServerGroupSpec struct {
// +doc/link: Docs of the ArangoDB Envs|https://docs.arangodb.com/devel/components/arangodb-server/environment-variables/
OverrideDetectedNumberOfCores *bool `json:"overrideDetectedNumberOfCores,omitempty"`
// Tolerations specifies the tolerations added to Pods in this group.
// By default, suitable tolerations are set for the following keys with the `NoExecute` effect:
// - `node.kubernetes.io/not-ready`
// - `node.kubernetes.io/unreachable`
// - `node.alpha.kubernetes.io/unreachable` (will be removed in future version)
// For more information on tolerations, consult the https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
// +doc/type: []core.Toleration
// +doc/link: Documentation of core.Toleration|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#toleration-v1-core
Tolerations []core.Toleration `json:"tolerations,omitempty"`
// Annotations specified the annotations added to Pods in this group.
// Annotations are merged with `spec.annotations`.
Annotations map[string]string `json:"annotations,omitempty"`
// AnnotationsIgnoreList list regexp or plain definitions which annotations should be ignored
AnnotationsIgnoreList []string `json:"annotationsIgnoreList,omitempty"`
@@ -116,19 +129,38 @@ type ServerGroupSpec struct {
LabelsMode *LabelsMode `json:"labelsMode,omitempty"`
// Envs allow to specify additional envs in this group.
Envs ServerGroupEnvVars `json:"envs,omitempty"`
// ServiceAccountName setting specifies the `serviceAccountName` for the `Pods` created
// for each server of this group. If empty, it defaults to using the
// `default` service account.
// Using an alternative `ServiceAccount` is typically used to separate access rights.
// The ArangoDB deployments need some very minimal access rights. With the
// deployment of the operator, we grant the rights to 'get' all 'pod' resources.
// If you are using a different service account, please grant these rights
// to that service account.
ServiceAccountName *string `json:"serviceAccountName,omitempty"`
// NodeSelector setting specifies a set of labels to be used as `nodeSelector` for Pods of this group.
// +doc/type: map[string]string
// +doc/link: Kubernetes documentation|https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
NodeSelector map[string]string `json:"nodeSelector,omitempty"` NodeSelector map[string]string `json:"nodeSelector,omitempty"`
// Probes specifies additional behaviour for probes // Probes specifies additional behaviour for probes
Probes *ServerGroupProbesSpec `json:"probes,omitempty"` Probes *ServerGroupProbesSpec `json:"probes,omitempty"`
// PriorityClassName specifies a priority class name // PriorityClassName specifies a priority class name
// Will be forwarded to the pod spec.
// +doc/link: Kubernetes documentation|https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
PriorityClassName string `json:"priorityClassName,omitempty"` PriorityClassName string `json:"priorityClassName,omitempty"`
// VolumeClaimTemplate specifies a template for volume claims // VolumeClaimTemplate specifies a volumeClaimTemplate used by the operator to create volume claims for pods of this group.
// This setting is not available for group `coordinators`, `syncmasters` & `syncworkers`.
// The default value describes a volume with `8Gi` storage, `ReadWriteOnce` access mode and volume mode set to `PersistentVolumeFilesystem`.
// If this field is not set and `spec.<group>.resources.requests.storage` is set, then a default volume claim
// with size as specified by `spec.<group>.resources.requests.storage` will be created. In that case `storage`
// and `iops` are not forwarded to the pod's resource requirements.
// +doc/type: core.PersistentVolumeClaim // +doc/type: core.PersistentVolumeClaim
// +doc/link: Documentation of core.PersistentVolumeClaim|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#persistentvolumeclaim-v1-core // +doc/link: Documentation of core.PersistentVolumeClaim|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#persistentvolumeclaim-v1-core
VolumeClaimTemplate *core.PersistentVolumeClaim `json:"volumeClaimTemplate,omitempty"` VolumeClaimTemplate *core.PersistentVolumeClaim `json:"volumeClaimTemplate,omitempty"`
// VolumeResizeMode specified resize mode for pvc // VolumeResizeMode specifies the resize mode for PVCs and PVs
// +doc/enum: runtime|PVC will be resized in Pod runtime (EKS, GKE)
// +doc/enum: rotate|Pod will be shutdown and PVC will be resized (AKS)
// +doc/default: runtime
VolumeResizeMode *PVCResizeMode `json:"pvcResizeMode,omitempty"` VolumeResizeMode *PVCResizeMode `json:"pvcResizeMode,omitempty"`
// Deprecated: VolumeAllowShrink allows shrinking the volume // Deprecated: VolumeAllowShrink allows shrinking the volume
VolumeAllowShrink *bool `json:"volumeAllowShrink,omitempty"` VolumeAllowShrink *bool `json:"volumeAllowShrink,omitempty"`
@ -151,7 +183,9 @@ type ServerGroupSpec struct {
// +doc/type: []core.Container // +doc/type: []core.Container
// +doc/link: Documentation of core.Container|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#container-v1-core // +doc/link: Documentation of core.Container|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#container-v1-core
Sidecars []core.Container `json:"sidecars,omitempty"` Sidecars []core.Container `json:"sidecars,omitempty"`
// SecurityContext specifies security context for group // SecurityContext specifies additional `securityContext` settings in ArangoDB Pod definitions.
// This is similar (but not fully compatible) to k8s SecurityContext definition.
// +doc/link: Kubernetes documentation|https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
SecurityContext *ServerGroupSpecSecurityContext `json:"securityContext,omitempty"` SecurityContext *ServerGroupSpecSecurityContext `json:"securityContext,omitempty"`
// Volumes define list of volumes mounted to pod // Volumes define list of volumes mounted to pod
Volumes ServerGroupSpecVolumes `json:"volumes,omitempty"` Volumes ServerGroupSpecVolumes `json:"volumes,omitempty"`
@ -175,7 +209,10 @@ type ServerGroupSpec struct {
InternalPortProtocol *ServerGroupPortProtocol `json:"internalPortProtocol,omitempty"` InternalPortProtocol *ServerGroupPortProtocol `json:"internalPortProtocol,omitempty"`
// ExternalPortEnabled if the external port should be enabled. If set to false, ports need to be exposed via a sidecar. Only for ArangoD members // ExternalPortEnabled if the external port should be enabled. If set to false, ports need to be exposed via a sidecar. Only for ArangoD members
ExternalPortEnabled *bool `json:"externalPortEnabled,omitempty"` ExternalPortEnabled *bool `json:"externalPortEnabled,omitempty"`
// AllowMemberRecreation allows to recreate member. Value is used only for Coordinator and DBServer with default to True, for all other groups set to false. // AllowMemberRecreation allows recreating a member.
// This setting changes the member recreation logic based on group:
// - For Sync Masters, Sync Workers, Coordinator and DB-Servers it determines if a member can be recreated in case of failure (default `true`)
// - For Agents and Single this value is hardcoded to `false` and the value provided in spec is ignored.
AllowMemberRecreation *bool `json:"allowMemberRecreation,omitempty"` AllowMemberRecreation *bool `json:"allowMemberRecreation,omitempty"`
// TerminationGracePeriodSeconds override default TerminationGracePeriodSeconds for pods - via silent rotation // TerminationGracePeriodSeconds override default TerminationGracePeriodSeconds for pods - via silent rotation
TerminationGracePeriodSeconds *int64 `json:"terminationGracePeriodSeconds,omitempty"` TerminationGracePeriodSeconds *int64 `json:"terminationGracePeriodSeconds,omitempty"`
@ -197,7 +234,8 @@ type ServerGroupSpec struct {
// ServerGroupProbesSpec contains specification for probes for pods of the server group // ServerGroupProbesSpec contains specification for probes for pods of the server group
type ServerGroupProbesSpec struct { type ServerGroupProbesSpec struct {
// LivenessProbeDisabled if true livenessProbes are disabled // LivenessProbeDisabled if set to true, the operator does not generate a liveness probe for new pods belonging to this group
// +doc/default: false
LivenessProbeDisabled *bool `json:"livenessProbeDisabled,omitempty"` LivenessProbeDisabled *bool `json:"livenessProbeDisabled,omitempty"`
// LivenessProbeSpec override liveness probe configuration // LivenessProbeSpec override liveness probe configuration
LivenessProbeSpec *ServerGroupProbeSpec `json:"livenessProbeSpec,omitempty"` LivenessProbeSpec *ServerGroupProbeSpec `json:"livenessProbeSpec,omitempty"`
@ -228,10 +266,26 @@ func (s ServerGroupProbesSpec) GetReadinessProbeDisabled() *bool {
// ServerGroupProbeSpec // ServerGroupProbeSpec
type ServerGroupProbeSpec struct { type ServerGroupProbeSpec struct {
// InitialDelaySeconds specifies number of seconds after the container has started before liveness or readiness probes are initiated.
// Minimum value is 0.
// +doc/default: 2
InitialDelaySeconds *int32 `json:"initialDelaySeconds,omitempty"` InitialDelaySeconds *int32 `json:"initialDelaySeconds,omitempty"`
// PeriodSeconds How often (in seconds) to perform the probe.
// Minimum value is 1.
// +doc/default: 10
PeriodSeconds *int32 `json:"periodSeconds,omitempty"` PeriodSeconds *int32 `json:"periodSeconds,omitempty"`
// TimeoutSeconds specifies number of seconds after which the probe times out
// Minimum value is 1.
// +doc/default: 2
TimeoutSeconds *int32 `json:"timeoutSeconds,omitempty"` TimeoutSeconds *int32 `json:"timeoutSeconds,omitempty"`
// SuccessThreshold Minimum consecutive successes for the probe to be considered successful after having failed.
// Minimum value is 1.
// +doc/default: 1
SuccessThreshold *int32 `json:"successThreshold,omitempty"` SuccessThreshold *int32 `json:"successThreshold,omitempty"`
// FailureThreshold when a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up.
// Giving up means restarting the container.
// Minimum value is 1.
// +doc/default: 3
FailureThreshold *int32 `json:"failureThreshold,omitempty"` FailureThreshold *int32 `json:"failureThreshold,omitempty"`
} }
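The group options documented above map directly onto the `ArangoDeployment` manifest. The following is a minimal sketch, assuming the field paths generated from the JSON tags above; the `example-cluster`, `arango-dbserver` and `fast-ssd` names are placeholders.

```yaml
apiVersion: database.arangodb.com/v1
kind: ArangoDeployment
metadata:
  name: example-cluster                      # placeholder name
spec:
  mode: Cluster
  dbservers:
    count: 3
    serviceAccountName: arango-dbserver      # hypothetical ServiceAccount with rights to get pods
    tolerations:                             # added on top of the default NoExecute tolerations
      - key: dedicated
        operator: Equal
        value: arangodb
        effect: NoSchedule
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-ssd           # placeholder StorageClass
        resources:
          requests:
            storage: 8Gi
    pvcResizeMode: runtime
    probes:
      livenessProbeSpec:
        initialDelaySeconds: 2
        periodSeconds: 10
        failureThreshold: 3
```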

View file

@ -30,7 +30,7 @@ type ServerIDGroupSpec struct {
// +doc/type: []core.Toleration // +doc/type: []core.Toleration
// +doc/link: Documentation of core.Toleration|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#toleration-v1-core // +doc/link: Documentation of core.Toleration|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#toleration-v1-core
Tolerations []core.Toleration `json:"tolerations,omitempty"` Tolerations []core.Toleration `json:"tolerations,omitempty"`
// NodeSelector speficies a set of selectors for nodes // NodeSelector specifies a set of selectors for nodes
NodeSelector map[string]string `json:"nodeSelector,omitempty"` NodeSelector map[string]string `json:"nodeSelector,omitempty"`
// PriorityClassName specifies a priority class name // PriorityClassName specifies a priority class name
PriorityClassName string `json:"priorityClassName,omitempty"` PriorityClassName string `json:"priorityClassName,omitempty"`

View file

@ -28,8 +28,17 @@ import (
// SyncAuthenticationSpec holds dc2dc sync authentication specific configuration settings // SyncAuthenticationSpec holds dc2dc sync authentication specific configuration settings
type SyncAuthenticationSpec struct { type SyncAuthenticationSpec struct {
JWTSecretName *string `json:"jwtSecretName,omitempty"` // JWT secret for sync masters // JWTSecretName setting specifies the name of a kubernetes `Secret` that contains
ClientCASecretName *string `json:"clientCASecretName,omitempty"` // Secret containing client authentication CA // the JWT token used for accessing all ArangoSync master servers.
// When not specified, the `spec.auth.jwtSecretName` value is used.
// If you specify a name of a `Secret` that does not exist, a random token is created
// and stored in a `Secret` with given name.
JWTSecretName *string `json:"jwtSecretName,omitempty"`
// ClientCASecretName setting specifies the name of a kubernetes `Secret` that contains
// a PEM encoded CA certificate used for client certificate verification
// in all ArangoSync master servers.
// This is a required setting when `spec.sync.enabled` is `true`.
ClientCASecretName *string `json:"clientCASecretName,omitempty"`
} }
// GetJWTSecretName returns the value of jwtSecretName. // GetJWTSecretName returns the value of jwtSecretName.

View file

@ -32,7 +32,20 @@ import (
// SyncExternalAccessSpec holds configuration for the external access provided for the sync deployment. // SyncExternalAccessSpec holds configuration for the external access provided for the sync deployment.
type SyncExternalAccessSpec struct { type SyncExternalAccessSpec struct {
ExternalAccessSpec ExternalAccessSpec
// MasterEndpoint setting specifies the master endpoint(s) advertised by the ArangoSync SyncMasters.
// If not set, this setting defaults to:
// - If `spec.sync.externalAccess.loadBalancerIP` is set, it defaults to `https://<load-balancer-ip>:<8629>`.
// - Otherwise it defaults to `https://<sync-service-dns-name>:<8629>`.
// +doc/type: []string
MasterEndpoint []string `json:"masterEndpoint,omitempty"` MasterEndpoint []string `json:"masterEndpoint,omitempty"`
// AccessPackageSecretNames setting specifies the names of zero or more `Secrets` that will be created by the deployment
// operator containing "access packages". An access package contains those `Secrets` that are needed
// to access the SyncMasters of this `ArangoDeployment`.
// By removing a name from this setting, the corresponding `Secret` is also deleted.
// Note that to remove all access packages, leave an empty array in place (`[]`).
// Completely removing the setting results in not modifying the list.
// +doc/type: []string
// +doc/link: See the ArangoDeploymentReplication specification|deployment-replication-resource-reference.md
AccessPackageSecretNames []string `json:"accessPackageSecretNames,omitempty"` AccessPackageSecretNames []string `json:"accessPackageSecretNames,omitempty"`
} }
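A hedged example of the sync external access settings, assuming the `spec.sync.externalAccess` path referenced above; the endpoint and Secret names are placeholders.

```yaml
spec:
  sync:
    externalAccess:
      type: LoadBalancer
      loadBalancerIP: 198.51.100.10            # example address
      masterEndpoint:
        - https://dc1.example.com:8629
      accessPackageSecretNames:
        - dc1-to-dc2-access-package            # placeholder; use [] to delete all access packages
```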

View file

@ -28,6 +28,9 @@ import (
// MonitoringSpec holds monitoring specific configuration settings // MonitoringSpec holds monitoring specific configuration settings
type MonitoringSpec struct { type MonitoringSpec struct {
// TokenSecretName setting specifies the name of a kubernetes `Secret` that contains
// the bearer token used for accessing all monitoring endpoints of all arangod/arangosync servers.
// When not specified, no monitoring token is used.
TokenSecretName *string `json:"tokenSecretName,omitempty"` TokenSecretName *string `json:"tokenSecretName,omitempty"`
} }
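A small sketch, assuming this spec is exposed as `spec.sync.monitoring` in the generated reference; the Secret name is a placeholder.

```yaml
spec:
  sync:
    monitoring:
      tokenSecretName: example-monitoring-token   # Secret holding the bearer token
```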

View file

@ -27,6 +27,10 @@ import (
// SyncSpec holds dc2dc replication specific configuration settings // SyncSpec holds dc2dc replication specific configuration settings
type SyncSpec struct { type SyncSpec struct {
// Enabled setting enables/disables support for datacenter-to-datacenter
// replication in the cluster. When enabled, the cluster will contain
// a number of `syncmaster` & `syncworker` servers.
// +doc/default: false
Enabled *bool `json:"enabled,omitempty"` Enabled *bool `json:"enabled,omitempty"`
ExternalAccess SyncExternalAccessSpec `json:"externalAccess"` ExternalAccess SyncExternalAccessSpec `json:"externalAccess"`

View file

@ -54,8 +54,28 @@ const (
// TLSSpec holds TLS specific configuration settings // TLSSpec holds TLS specific configuration settings
type TLSSpec struct { type TLSSpec struct {
// CASecretName setting specifies the name of a kubernetes `Secret` that contains
// a standard CA certificate + private key used to sign certificates for individual
// ArangoDB servers.
// When no name is specified, it defaults to `<deployment-name>-ca`.
// To disable TLS, set this value to `None`.
// If you specify a name of a `Secret` that does not exist, a self-signed CA certificate + key is created
// and stored in a `Secret` with given name.
// The specified `Secret` must contain the following data fields:
// - `ca.crt` PEM encoded public key of the CA certificate
// - `ca.key` PEM encoded private key of the CA certificate
CASecretName *string `json:"caSecretName,omitempty"` CASecretName *string `json:"caSecretName,omitempty"`
// AltNames setting specifies a list of alternate names that will be added to all generated
// certificates. These names can be DNS names or email addresses.
// The default value is empty.
// +doc/type: []string
AltNames []string `json:"altNames,omitempty"` AltNames []string `json:"altNames,omitempty"`
// TTL setting specifies the time to live of all generated server certificates.
// When the server certificate is about to expire, it will be automatically replaced
// by a new one and the affected server will be restarted.
// Note: The time to live of the CA certificate (when created automatically)
// will be set to 10 years.
// +doc/default: "2160h" (about 3 months)
TTL *Duration `json:"ttl,omitempty"` TTL *Duration `json:"ttl,omitempty"`
SNI *TLSSNISpec `json:"sni,omitempty"` SNI *TLSSNISpec `json:"sni,omitempty"`
Mode *TLSRotateMode `json:"mode,omitempty"` Mode *TLSRotateMode `json:"mode,omitempty"`
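A minimal sketch of the TLS settings described above; the Secret name and alternate names are placeholders.

```yaml
spec:
  tls:
    caSecretName: example-ca       # must provide ca.crt and ca.key; created when missing
    altNames:
      - arangodb.example.com
      - admin@example.com
    ttl: 2160h
```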

View file

@ -28,6 +28,15 @@ import (
// AuthenticationSpec holds authentication specific configuration settings // AuthenticationSpec holds authentication specific configuration settings
type AuthenticationSpec struct { type AuthenticationSpec struct {
// JWTSecretName setting specifies the name of a kubernetes `Secret` that contains
// the JWT token used for accessing all ArangoDB servers.
// When no name is specified, it defaults to `<deployment-name>-jwt`.
// To disable authentication, set this value to `None`.
// If you specify a name of a `Secret`, that secret must have the token
// in a data field named `token`.
// If you specify a name of a `Secret` that does not exist, a random token is created
// and stored in a `Secret` with given name.
// Changing a JWT token results in restarting of a whole cluster.
JWTSecretName *string `json:"jwtSecretName,omitempty"` JWTSecretName *string `json:"jwtSecretName,omitempty"`
} }
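A sketch of the authentication setting together with a matching Secret, assuming the `token` data field described above; the names and token value are placeholders.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-jwt
type: Opaque
stringData:
  token: "<jwt-token-value>"       # placeholder; the data field must be named 'token'
---
apiVersion: database.arangodb.com/v1
kind: ArangoDeployment
metadata:
  name: example
spec:
  auth:
    jwtSecretName: example-jwt     # use None to disable authentication
```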

View file

@ -1,7 +1,7 @@
// //
// DISCLAIMER // DISCLAIMER
// //
// Copyright 2016-2022 ArangoDB GmbH, Cologne, Germany // Copyright 2016-2023 ArangoDB GmbH, Cologne, Germany
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.
@ -50,6 +50,15 @@ type PasswordSecretNameList map[string]PasswordSecretName
// BootstrapSpec contains information for cluster bootstrapping // BootstrapSpec contains information for cluster bootstrapping
type BootstrapSpec struct { type BootstrapSpec struct {
// PasswordSecretNames contains a map of username to password-secret-name // PasswordSecretNames contains a map of username to password-secret-name
// This setting specifies a secret name for the credentials per specific users.
// When a deployment is created, the operator will set up the user accounts
// according to the credentials given by the secret. If the secret doesn't exist,
// the operator creates a secret with a random password.
// There are two magic values for the secret name:
// - `None` specifies no action. This disables root password randomization. This is the default value. (Thus the root password is empty - not recommended)
// - `Auto` specifies automatic name generation, which is `<deploymentname>-root-password`.
// +doc/type: map[string]string
// +doc/link: How to set root user password|/docs/how-to/set_root_user_password.md
PasswordSecretNames PasswordSecretNameList `json:"passwordSecretNames,omitempty"` PasswordSecretNames PasswordSecretNameList `json:"passwordSecretNames,omitempty"`
} }

View file

@ -74,6 +74,10 @@ func (m *MetricsMode) Get() MetricsMode {
// MetricsSpec contains spec for arangodb exporter // MetricsSpec contains spec for arangodb exporter
type MetricsSpec struct { type MetricsSpec struct {
// Enabled if this is set to `true`, the operator runs a sidecar container for
// every Agent, DB-Server, Coordinator and Single server.
// +doc/default: false
// +doc/link: Metrics collection|/docs/metrics.md
Enabled *bool `json:"enabled,omitempty"` Enabled *bool `json:"enabled,omitempty"`
// deprecated // deprecated
Image *string `json:"image,omitempty"` Image *string `json:"image,omitempty"`
@ -84,6 +88,10 @@ type MetricsSpec struct {
Resources core.ResourceRequirements `json:"resources,omitempty"` Resources core.ResourceRequirements `json:"resources,omitempty"`
// deprecated // deprecated
Mode *MetricsMode `json:"mode,omitempty"` Mode *MetricsMode `json:"mode,omitempty"`
// TLS defines if TLS should be enabled on Metrics exporter endpoint.
// This option will enable TLS only if TLS is enabled on ArangoDeployment,
// otherwise the `true` value has no effect.
// +doc/default: true
TLS *bool `json:"tls,omitempty"` TLS *bool `json:"tls,omitempty"`
ServiceMonitor *MetricsServiceMonitorSpec `json:"serviceMonitor,omitempty"` ServiceMonitor *MetricsServiceMonitorSpec `json:"serviceMonitor,omitempty"`
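A small sketch of the metrics settings described above.

```yaml
spec:
  metrics:
    enabled: true
    tls: true     # only effective when TLS is enabled on the ArangoDeployment
```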

View file

@ -252,6 +252,13 @@ type DeploymentSpec struct {
// Architecture defines the list of supported architectures. // Architecture defines the list of supported architectures.
// First element on the list is marked as default architecture. // First element on the list is marked as default architecture.
// Possible values are:
// - `amd64`: Use processors with the x86-64 architecture.
// - `arm64`: Use processors with the 64-bit ARM architecture.
// The setting expects a list of strings, but you should only specify a single
// list item for the architecture, except when you want to migrate from one
// architecture to the other. The first list item defines the new default
// architecture for the deployment that you want to migrate to.
// +doc/link: Architecture Change|/docs/how-to/arch_change.md // +doc/link: Architecture Change|/docs/how-to/arch_change.md
// +doc/type: []string // +doc/type: []string
// +doc/default: ['amd64'] // +doc/default: ['amd64']
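A minimal sketch of the architecture setting; `arm64` here is just an example choice.

```yaml
spec:
  architecture:
    - arm64      # first entry is the default; add amd64 only while migrating between architectures
```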

View file

@ -39,9 +39,12 @@ type ExternalAccessSpec struct {
Type *ExternalAccessType `json:"type,omitempty"` Type *ExternalAccessType `json:"type,omitempty"`
// NodePort defines an optional port used in case of Auto or NodePort type. // NodePort defines an optional port used in case of Auto or NodePort type.
// This setting is used when `spec.externalAccess.type` is set to `NodePort` or `Auto`.
// If you do not specify this setting, a random port will be chosen automatically.
NodePort *int `json:"nodePort,omitempty"` NodePort *int `json:"nodePort,omitempty"`
// LoadBalancerIP defines an optional IP used to configure a load-balancer, in case of Auto or LoadBalancer type. // LoadBalancerIP defines an optional IP used to configure a load-balancer, in case of Auto or LoadBalancer type.
// If you do not specify this setting, an IP will be chosen automatically by the load-balancer provisioner.
LoadBalancerIP *string `json:"loadBalancerIP,omitempty"` LoadBalancerIP *string `json:"loadBalancerIP,omitempty"`
// LoadBalancerSourceRanges defines the source ranges used for the LoadBalancer Service type // LoadBalancerSourceRanges defines the source ranges used for the LoadBalancer Service type
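A hedged example of the external access settings, assuming the `spec.externalAccess` path referenced above; the IP and port values are placeholders.

```yaml
spec:
  externalAccess:
    type: LoadBalancer
    loadBalancerIP: 192.0.2.10     # example address; auto-assigned when omitted
    # For NodePort or Auto, nodePort can pin the port instead:
    # type: NodePort
    # nodePort: 30529
```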

View file

@ -27,6 +27,9 @@ import (
// LicenseSpec holds the license related information // LicenseSpec holds the license related information
type LicenseSpec struct { type LicenseSpec struct {
// SecretName setting specifies the name of a kubernetes `Secret` that contains
// the license key token used for enterprise images. This value is not used for
// the Community Edition.
SecretName *string `json:"secretName,omitempty"` SecretName *string `json:"secretName,omitempty"`
} }

View file

@ -28,6 +28,12 @@ import (
// RocksDBEncryptionSpec holds rocksdb encryption at rest specific configuration settings // RocksDBEncryptionSpec holds rocksdb encryption at rest specific configuration settings
type RocksDBEncryptionSpec struct { type RocksDBEncryptionSpec struct {
// KeySecretName setting specifies the name of a Kubernetes `Secret` that contains an encryption key used for encrypting all data stored by ArangoDB servers.
// When an encryption key is used, encryption of the data in the cluster is enabled, without it encryption is disabled.
// The default value is empty.
// This requires the Enterprise Edition.
// The encryption key cannot be changed after the cluster has been created.
// The secret specified by this setting must have a data field named 'key' containing an encryption key that is exactly 32 bytes long.
KeySecretName *string `json:"keySecretName,omitempty"` KeySecretName *string `json:"keySecretName,omitempty"`
} }

View file

@ -42,16 +42,27 @@ type ServerGroupSpecSecurityContext struct {
// Deprecated: This field is added for backward compatibility. Will be removed in 1.1.0. // Deprecated: This field is added for backward compatibility. Will be removed in 1.1.0.
DropAllCapabilities *bool `json:"dropAllCapabilities,omitempty"` DropAllCapabilities *bool `json:"dropAllCapabilities,omitempty"`
// AddCapabilities add new capabilities to containers // AddCapabilities add new capabilities to containers
// +doc/type: []core.Capability
AddCapabilities []core.Capability `json:"addCapabilities,omitempty"` AddCapabilities []core.Capability `json:"addCapabilities,omitempty"`
// AllowPrivilegeEscalation Controls whether a process can gain more privileges than its parent process.
AllowPrivilegeEscalation *bool `json:"allowPrivilegeEscalation,omitempty"` AllowPrivilegeEscalation *bool `json:"allowPrivilegeEscalation,omitempty"`
// Privileged If true, runs container in privileged mode. Processes in privileged containers are
// essentially equivalent to root on the host.
Privileged *bool `json:"privileged,omitempty"` Privileged *bool `json:"privileged,omitempty"`
// ReadOnlyRootFilesystem if true, mounts the container's root filesystem as read-only.
ReadOnlyRootFilesystem *bool `json:"readOnlyRootFilesystem,omitempty"` ReadOnlyRootFilesystem *bool `json:"readOnlyRootFilesystem,omitempty"`
// RunAsNonRoot if true, indicates that the container must run as a non-root user.
RunAsNonRoot *bool `json:"runAsNonRoot,omitempty"` RunAsNonRoot *bool `json:"runAsNonRoot,omitempty"`
// RunAsUser is the UID to run the entrypoint of the container process.
RunAsUser *int64 `json:"runAsUser,omitempty"` RunAsUser *int64 `json:"runAsUser,omitempty"`
// RunAsGroup is the GID to run the entrypoint of the container process.
RunAsGroup *int64 `json:"runAsGroup,omitempty"` RunAsGroup *int64 `json:"runAsGroup,omitempty"`
// SupplementalGroups is a list of groups applied to the first process run in each container, in addition to the container's primary GID,
// the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process.
SupplementalGroups []int64 `json:"supplementalGroups,omitempty"` SupplementalGroups []int64 `json:"supplementalGroups,omitempty"`
// FSGroup is a special supplemental group that applies to all containers in a pod.
FSGroup *int64 `json:"fsGroup,omitempty"` FSGroup *int64 `json:"fsGroup,omitempty"`
// Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported // Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported
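A hedged example of a group-level security context, assuming the `spec.<group>.securityContext` path implied by the JSON tags above; the numeric IDs and the capability are placeholders.

```yaml
spec:
  dbservers:
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
      runAsGroup: 1000
      fsGroup: 3000
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      addCapabilities:
        - SYS_TIME       # example capability
```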

View file

@ -68,13 +68,20 @@ const (
type ServerGroupSpec struct { type ServerGroupSpec struct {
group ServerGroup `json:"-"` group ServerGroup `json:"-"`
// Count holds the requested number of servers // Count setting specifies the number of servers to start for the given group.
// For the Agent group, this value must be a positive, odd number.
// The default value is `3` for all groups except `single` (there the default is `1`
// for `spec.mode: Single` and `2` for `spec.mode: ActiveFailover`).
// For the `syncworkers` group, it is highly recommended to use the same number
// as for the `dbservers` group.
Count *int `json:"count,omitempty"` Count *int `json:"count,omitempty"`
// MinCount specifies a lower limit for count // MinCount specifies a minimum for the count of servers. If set, a specification is invalid if `count < minCount`.
MinCount *int `json:"minCount,omitempty"` MinCount *int `json:"minCount,omitempty"`
// MaxCount specifies a upper limit for count // MaxCount specifies a maximum for the count of servers. If set, a specification is invalid if `count > maxCount`.
MaxCount *int `json:"maxCount,omitempty"` MaxCount *int `json:"maxCount,omitempty"`
// Args holds additional commandline arguments // Args setting specifies additional command-line arguments passed to all servers of this group.
// +doc/type: []string
// +doc/default: []
Args []string `json:"args,omitempty"` Args []string `json:"args,omitempty"`
// Entrypoint overrides container executable // Entrypoint overrides container executable
Entrypoint *string `json:"entrypoint,omitempty"` Entrypoint *string `json:"entrypoint,omitempty"`
@ -99,10 +106,16 @@ type ServerGroupSpec struct {
// +doc/link: Docs of the ArangoDB Envs|https://docs.arangodb.com/devel/components/arangodb-server/environment-variables/ // +doc/link: Docs of the ArangoDB Envs|https://docs.arangodb.com/devel/components/arangodb-server/environment-variables/
OverrideDetectedNumberOfCores *bool `json:"overrideDetectedNumberOfCores,omitempty"` OverrideDetectedNumberOfCores *bool `json:"overrideDetectedNumberOfCores,omitempty"`
// Tolerations specifies the tolerations added to Pods in this group. // Tolerations specifies the tolerations added to Pods in this group.
// By default, suitable tolerations are set for the following keys with the `NoExecute` effect:
// - `node.kubernetes.io/not-ready`
// - `node.kubernetes.io/unreachable`
// - `node.alpha.kubernetes.io/unreachable` (will be removed in future version)
// For more information on tolerations, consult the Kubernetes documentation: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
// +doc/type: []core.Toleration // +doc/type: []core.Toleration
// +doc/link: Documentation of core.Toleration|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#toleration-v1-core // +doc/link: Documentation of core.Toleration|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#toleration-v1-core
Tolerations []core.Toleration `json:"tolerations,omitempty"` Tolerations []core.Toleration `json:"tolerations,omitempty"`
// Annotations specifies the annotations added to Pods in this group. // Annotations specifies the annotations added to Pods in this group.
// Annotations are merged with `spec.annotations`.
Annotations map[string]string `json:"annotations,omitempty"` Annotations map[string]string `json:"annotations,omitempty"`
// AnnotationsIgnoreList lists regexp or plain definitions of which annotations should be ignored // AnnotationsIgnoreList lists regexp or plain definitions of which annotations should be ignored
AnnotationsIgnoreList []string `json:"annotationsIgnoreList,omitempty"` AnnotationsIgnoreList []string `json:"annotationsIgnoreList,omitempty"`
@ -116,19 +129,38 @@ type ServerGroupSpec struct {
LabelsMode *LabelsMode `json:"labelsMode,omitempty"` LabelsMode *LabelsMode `json:"labelsMode,omitempty"`
// Envs allows specifying additional environment variables for this group. // Envs allows specifying additional environment variables for this group.
Envs ServerGroupEnvVars `json:"envs,omitempty"` Envs ServerGroupEnvVars `json:"envs,omitempty"`
// ServiceAccountName specifies the name of the service account used for Pods in this group. // ServiceAccountName setting specifies the `serviceAccountName` for the `Pods` created
// for each server of this group. If empty, it defaults to using the
// `default` service account.
// An alternative `ServiceAccount` is typically used to separate access rights.
// The ArangoDB deployments need some very minimal access rights. With the
// deployment of the operator, we grant the rights to 'get' all 'pod' resources.
// If you are using a different service account, please grant these rights
// to that service account.
ServiceAccountName *string `json:"serviceAccountName,omitempty"` ServiceAccountName *string `json:"serviceAccountName,omitempty"`
// NodeSelector speficies a set of selectors for nodes // NodeSelector setting specifies a set of labels to be used as `nodeSelector` for Pods of this group.
// +doc/type: map[string]string
// +doc/link: Kubernetes documentation|https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
NodeSelector map[string]string `json:"nodeSelector,omitempty"` NodeSelector map[string]string `json:"nodeSelector,omitempty"`
// Probes specifies additional behaviour for probes // Probes specifies additional behaviour for probes
Probes *ServerGroupProbesSpec `json:"probes,omitempty"` Probes *ServerGroupProbesSpec `json:"probes,omitempty"`
// PriorityClassName specifies a priority class name // PriorityClassName specifies a priority class name
// Will be forwarded to the pod spec.
// +doc/link: Kubernetes documentation|https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
PriorityClassName string `json:"priorityClassName,omitempty"` PriorityClassName string `json:"priorityClassName,omitempty"`
// VolumeClaimTemplate specifies a template for volume claims // VolumeClaimTemplate specifies a volumeClaimTemplate used by the operator to create volume claims for pods of this group.
// This setting is not available for group `coordinators`, `syncmasters` & `syncworkers`.
// The default value describes a volume with `8Gi` storage, `ReadWriteOnce` access mode and volume mode set to `PersistentVolumeFilesystem`.
// If this field is not set and `spec.<group>.resources.requests.storage` is set, then a default volume claim
// with size as specified by `spec.<group>.resources.requests.storage` will be created. In that case `storage`
// and `iops` are not forwarded to the pod's resource requirements.
// +doc/type: core.PersistentVolumeClaim // +doc/type: core.PersistentVolumeClaim
// +doc/link: Documentation of core.PersistentVolumeClaim|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#persistentvolumeclaim-v1-core // +doc/link: Documentation of core.PersistentVolumeClaim|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#persistentvolumeclaim-v1-core
VolumeClaimTemplate *core.PersistentVolumeClaim `json:"volumeClaimTemplate,omitempty"` VolumeClaimTemplate *core.PersistentVolumeClaim `json:"volumeClaimTemplate,omitempty"`
// VolumeResizeMode specified resize mode for pvc // VolumeResizeMode specifies the resize mode for PVCs and PVs
// +doc/enum: runtime|PVC will be resized in Pod runtime (EKS, GKE)
// +doc/enum: rotate|Pod will be shutdown and PVC will be resized (AKS)
// +doc/default: runtime
VolumeResizeMode *PVCResizeMode `json:"pvcResizeMode,omitempty"` VolumeResizeMode *PVCResizeMode `json:"pvcResizeMode,omitempty"`
// Deprecated: VolumeAllowShrink allows shrinking the volume // Deprecated: VolumeAllowShrink allows shrinking the volume
VolumeAllowShrink *bool `json:"volumeAllowShrink,omitempty"` VolumeAllowShrink *bool `json:"volumeAllowShrink,omitempty"`
@ -151,7 +183,9 @@ type ServerGroupSpec struct {
// +doc/type: []core.Container // +doc/type: []core.Container
// +doc/link: Documentation of core.Container|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#container-v1-core // +doc/link: Documentation of core.Container|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#container-v1-core
Sidecars []core.Container `json:"sidecars,omitempty"` Sidecars []core.Container `json:"sidecars,omitempty"`
// SecurityContext specifies security context for group // SecurityContext specifies additional `securityContext` settings in ArangoDB Pod definitions.
// This is similar (but not fully compatible) to k8s SecurityContext definition.
// +doc/link: Kubernetes documentation|https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
SecurityContext *ServerGroupSpecSecurityContext `json:"securityContext,omitempty"` SecurityContext *ServerGroupSpecSecurityContext `json:"securityContext,omitempty"`
// Volumes define list of volumes mounted to pod // Volumes define list of volumes mounted to pod
Volumes ServerGroupSpecVolumes `json:"volumes,omitempty"` Volumes ServerGroupSpecVolumes `json:"volumes,omitempty"`
@ -175,7 +209,10 @@ type ServerGroupSpec struct {
InternalPortProtocol *ServerGroupPortProtocol `json:"internalPortProtocol,omitempty"` InternalPortProtocol *ServerGroupPortProtocol `json:"internalPortProtocol,omitempty"`
// ExternalPortEnabled if the external port should be enabled. If set to false, ports need to be exposed via a sidecar. Only for ArangoD members // ExternalPortEnabled if the external port should be enabled. If set to false, ports need to be exposed via a sidecar. Only for ArangoD members
ExternalPortEnabled *bool `json:"externalPortEnabled,omitempty"` ExternalPortEnabled *bool `json:"externalPortEnabled,omitempty"`
// AllowMemberRecreation allows to recreate member. Value is used only for Coordinator and DBServer with default to True, for all other groups set to false. // AllowMemberRecreation allows recreating a member.
// This setting changes the member recreation logic based on group:
// - For Sync Masters, Sync Workers, Coordinator and DB-Servers it determines if a member can be recreated in case of failure (default `true`)
// - For Agents and Single this value is hardcoded to `false` and the value provided in spec is ignored.
AllowMemberRecreation *bool `json:"allowMemberRecreation,omitempty"` AllowMemberRecreation *bool `json:"allowMemberRecreation,omitempty"`
// TerminationGracePeriodSeconds override default TerminationGracePeriodSeconds for pods - via silent rotation // TerminationGracePeriodSeconds override default TerminationGracePeriodSeconds for pods - via silent rotation
TerminationGracePeriodSeconds *int64 `json:"terminationGracePeriodSeconds,omitempty"` TerminationGracePeriodSeconds *int64 `json:"terminationGracePeriodSeconds,omitempty"`
@ -197,7 +234,8 @@ type ServerGroupSpec struct {
// ServerGroupProbesSpec contains specification for probes for pods of the server group // ServerGroupProbesSpec contains specification for probes for pods of the server group
type ServerGroupProbesSpec struct { type ServerGroupProbesSpec struct {
// LivenessProbeDisabled if true livenessProbes are disabled // LivenessProbeDisabled if set to true, the operator does not generate a liveness probe for new pods belonging to this group
// +doc/default: false
LivenessProbeDisabled *bool `json:"livenessProbeDisabled,omitempty"` LivenessProbeDisabled *bool `json:"livenessProbeDisabled,omitempty"`
// LivenessProbeSpec override liveness probe configuration // LivenessProbeSpec override liveness probe configuration
LivenessProbeSpec *ServerGroupProbeSpec `json:"livenessProbeSpec,omitempty"` LivenessProbeSpec *ServerGroupProbeSpec `json:"livenessProbeSpec,omitempty"`
@ -228,10 +266,26 @@ func (s ServerGroupProbesSpec) GetReadinessProbeDisabled() *bool {
// ServerGroupProbeSpec // ServerGroupProbeSpec
type ServerGroupProbeSpec struct { type ServerGroupProbeSpec struct {
// InitialDelaySeconds specifies number of seconds after the container has started before liveness or readiness probes are initiated.
// Minimum value is 0.
// +doc/default: 2
InitialDelaySeconds *int32 `json:"initialDelaySeconds,omitempty"` InitialDelaySeconds *int32 `json:"initialDelaySeconds,omitempty"`
// PeriodSeconds How often (in seconds) to perform the probe.
// Minimum value is 1.
// +doc/default: 10
PeriodSeconds *int32 `json:"periodSeconds,omitempty"` PeriodSeconds *int32 `json:"periodSeconds,omitempty"`
// TimeoutSeconds specifies number of seconds after which the probe times out
// Minimum value is 1.
// +doc/default: 2
TimeoutSeconds *int32 `json:"timeoutSeconds,omitempty"` TimeoutSeconds *int32 `json:"timeoutSeconds,omitempty"`
// SuccessThreshold Minimum consecutive successes for the probe to be considered successful after having failed.
// Minimum value is 1.
// +doc/default: 1
SuccessThreshold *int32 `json:"successThreshold,omitempty"` SuccessThreshold *int32 `json:"successThreshold,omitempty"`
// FailureThreshold when a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up.
// Giving up means restarting the container.
// Minimum value is 1.
// +doc/default: 3
FailureThreshold *int32 `json:"failureThreshold,omitempty"` FailureThreshold *int32 `json:"failureThreshold,omitempty"`
} }
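A minimal sketch of per-group sizing and arguments as documented above, assuming the usual group names (`agents`, `dbservers`, `coordinators`, `syncworkers`); the counts and the extra argument are placeholders.

```yaml
spec:
  mode: Cluster
  agents:
    count: 3              # must be a positive odd number
  dbservers:
    count: 5
    minCount: 3
    maxCount: 9
    args:
      - --log.level=INFO  # example extra command-line argument
  coordinators:
    count: 3
  syncworkers:
    count: 5              # recommended to match the dbservers count
```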

View file

@ -30,7 +30,7 @@ type ServerIDGroupSpec struct {
// +doc/type: []core.Toleration // +doc/type: []core.Toleration
// +doc/link: Documentation of core.Toleration|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#toleration-v1-core // +doc/link: Documentation of core.Toleration|https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#toleration-v1-core
Tolerations []core.Toleration `json:"tolerations,omitempty"` Tolerations []core.Toleration `json:"tolerations,omitempty"`
// NodeSelector speficies a set of selectors for nodes // NodeSelector specifies a set of selectors for nodes
NodeSelector map[string]string `json:"nodeSelector,omitempty"` NodeSelector map[string]string `json:"nodeSelector,omitempty"`
// PriorityClassName specifies a priority class name // PriorityClassName specifies a priority class name
PriorityClassName string `json:"priorityClassName,omitempty"` PriorityClassName string `json:"priorityClassName,omitempty"`

View file

@ -28,8 +28,17 @@ import (
// SyncAuthenticationSpec holds dc2dc sync authentication specific configuration settings // SyncAuthenticationSpec holds dc2dc sync authentication specific configuration settings
type SyncAuthenticationSpec struct { type SyncAuthenticationSpec struct {
JWTSecretName *string `json:"jwtSecretName,omitempty"` // JWT secret for sync masters // JWTSecretName setting specifies the name of a kubernetes `Secret` that contains
ClientCASecretName *string `json:"clientCASecretName,omitempty"` // Secret containing client authentication CA // the JWT token used for accessing all ArangoSync master servers.
// When not specified, the `spec.auth.jwtSecretName` value is used.
// If you specify a name of a `Secret` that does not exist, a random token is created
// and stored in a `Secret` with given name.
JWTSecretName *string `json:"jwtSecretName,omitempty"`
// ClientCASecretName setting specifies the name of a kubernetes `Secret` that contains
// a PEM encoded CA certificate used for client certificate verification
// in all ArangoSync master servers.
// This is a required setting when `spec.sync.enabled` is `true`.
ClientCASecretName *string `json:"clientCASecretName,omitempty"`
} }
// GetJWTSecretName returns the value of jwtSecretName. // GetJWTSecretName returns the value of jwtSecretName.

View file

@ -32,7 +32,20 @@ import (
// SyncExternalAccessSpec holds configuration for the external access provided for the sync deployment. // SyncExternalAccessSpec holds configuration for the external access provided for the sync deployment.
type SyncExternalAccessSpec struct { type SyncExternalAccessSpec struct {
ExternalAccessSpec ExternalAccessSpec
// MasterEndpoint setting specifies the master endpoint(s) advertised by the ArangoSync SyncMasters.
// If not set, this setting defaults to:
// - If `spec.sync.externalAccess.loadBalancerIP` is set, it defaults to `https://<load-balancer-ip>:<8629>`.
// - Otherwise it defaults to `https://<sync-service-dns-name>:<8629>`.
// +doc/type: []string
MasterEndpoint []string `json:"masterEndpoint,omitempty"` MasterEndpoint []string `json:"masterEndpoint,omitempty"`
// AccessPackageSecretNames setting specifies the names of zero or more `Secrets` that will be created by the deployment
// operator containing "access packages". An access package contains those `Secrets` that are needed
// to access the SyncMasters of this `ArangoDeployment`.
// By removing a name from this setting, the corresponding `Secret` is also deleted.
// Note that to remove all access packages, leave an empty array in place (`[]`).
// Completely removing the setting results in not modifying the list.
// +doc/type: []string
// +doc/link: See the ArangoDeploymentReplication specification|deployment-replication-resource-reference.md
AccessPackageSecretNames []string `json:"accessPackageSecretNames,omitempty"` AccessPackageSecretNames []string `json:"accessPackageSecretNames,omitempty"`
} }

View file

@ -28,6 +28,9 @@ import (
// MonitoringSpec holds monitoring specific configuration settings // MonitoringSpec holds monitoring specific configuration settings
type MonitoringSpec struct { type MonitoringSpec struct {
// TokenSecretName setting specifies the name of a kubernetes `Secret` that contains
// the bearer token used for accessing all monitoring endpoints of all arangod/arangosync servers.
// When not specified, no monitoring token is used.
TokenSecretName *string `json:"tokenSecretName,omitempty"` TokenSecretName *string `json:"tokenSecretName,omitempty"`
} }

View file

@ -27,6 +27,10 @@ import (
// SyncSpec holds dc2dc replication specific configuration settings // SyncSpec holds dc2dc replication specific configuration settings
type SyncSpec struct { type SyncSpec struct {
// Enabled setting enables/disables support for datacenter-to-datacenter
// replication in the cluster. When enabled, the cluster will contain
// a number of `syncmaster` & `syncworker` servers.
// +doc/default: false
Enabled *bool `json:"enabled,omitempty"` Enabled *bool `json:"enabled,omitempty"`
ExternalAccess SyncExternalAccessSpec `json:"externalAccess"` ExternalAccess SyncExternalAccessSpec `json:"externalAccess"`

View file

@ -54,8 +54,28 @@ const (
// TLSSpec holds TLS specific configuration settings // TLSSpec holds TLS specific configuration settings
type TLSSpec struct { type TLSSpec struct {
// CASecretName setting specifies the name of a kubernetes `Secret` that contains
// a standard CA certificate + private key used to sign certificates for individual
// ArangoDB servers.
// When no name is specified, it defaults to `<deployment-name>-ca`.
// To disable TLS, set this value to `None`.
// If you specify a name of a `Secret` that does not exist, a self-signed CA certificate + key is created
// and stored in a `Secret` with given name.
// The specified `Secret` must contain the following data fields:
// - `ca.crt` PEM encoded public key of the CA certificate
// - `ca.key` PEM encoded private key of the CA certificate
CASecretName *string `json:"caSecretName,omitempty"` CASecretName *string `json:"caSecretName,omitempty"`
// AltNames setting specifies a list of alternate names that will be added to all generated
// certificates. These names can be DNS names or email addresses.
// The default value is empty.
// +doc/type: []string
AltNames []string `json:"altNames,omitempty"` AltNames []string `json:"altNames,omitempty"`
// TTL setting specifies the time to live of all generated server certificates.
// When the server certificate is about to expire, it will be automatically replaced
// by a new one and the affected server will be restarted.
// Note: The time to live of the CA certificate (when created automatically)
// will be set to 10 years.
// +doc/default: "2160h" (about 3 months)
TTL *Duration `json:"ttl,omitempty"` TTL *Duration `json:"ttl,omitempty"`
SNI *TLSSNISpec `json:"sni,omitempty"` SNI *TLSSNISpec `json:"sni,omitempty"`
Mode *TLSRotateMode `json:"mode,omitempty"` Mode *TLSRotateMode `json:"mode,omitempty"`