
Elaborating

This commit is contained in:
Ewout Prangsma 2018-02-05 16:23:15 +01:00
parent 0c28864fad
commit 15df36608e
9 changed files with 296 additions and 6 deletions

@@ -0,0 +1,45 @@
# Configuration & secrets
An ArangoDB cluster has lots of configuration options.
Some will be supported directly in the ArangoDB operator,
others will have to be specified separately.
## Built-in options
All built-in options are passed to ArangoDB servers via commandline
arguments configured in the Pod-spec.
## Other configuration options
### Option 1
Use a `ConfigMap` per type of ArangoDB server.
The operator passes the options listed in the `ConfigMap`
as commandline options to the ArangoDB servers.
TODO Discuss format of ConfigMap content. Is it `arangod.conf` like?
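As a sketch of this option (the content format is still an open question; the `arangod.conf`-style key and the name below are illustrative assumptions):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-arangodb-dbserver-config   # hypothetical name, one ConfigMap per server type
data:
  arangod.conf: |
    [log]
    level = debug
```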
### Option 2
Add ArangoDB option sections to the custom resource.
## Secrets
The ArangoDB cluster needs several secrets such as JWT tokens,
TLS certificates, and so on.
All these secrets are stored as Kubernetes Secrets and passed to
the applicable Pods as files, mapped into the Pod's filesystem.
The name of the secret is specified in the custom resource.
For example:
```yaml
apiVersion: "cluster.arangodb.com/v1alpha"
kind: "Cluster"
metadata:
  name: "example-arangodb-cluster"
spec:
  mode: cluster
  jwtTokenSecretName: <name-of-JWT-token-secret>
```
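Such a secret could, for example, be created as a regular Kubernetes `Secret` (the name `example-jwt-token` and the key `token` are illustrative assumptions; the key layout the operator expects is not specified here):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-jwt-token   # referenced by jwtTokenSecretName in the cluster resource
type: Opaque
stringData:
  token: <your-JWT-signing-token>
```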

docs/constraints.md Normal file

@@ -0,0 +1,66 @@
# Constraints
The ArangoDB operator tries to honor various constraints to support high availability of
the ArangoDB cluster.
## Run agents on separate machines
It is essential for HA that agents are running on separate nodes.
To ensure this, the agent Pods are configured with pod-anti-affinity.
```yaml
kind: Pod
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      # required anti-affinity takes a list of pod affinity terms
      # (no weight; weights only apply to the preferred variant)
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - arangodb
          - key: arangodb_cluster_name
            operator: In
            values:
            - <cluster-name>
          - key: role
            operator: In
            values:
            - agent
        topologyKey: kubernetes.io/hostname   # at most one matching pod per node
```
## Run dbservers on separate machines
It is likewise essential for HA that dbservers are running on separate nodes.
To ensure this, the dbserver Pods are configured with pod-anti-affinity.
```yaml
kind: Pod
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      # required anti-affinity takes a list of pod affinity terms
      # (no weight; weights only apply to the preferred variant)
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - arangodb
          - key: arangodb_cluster_name
            operator: In
            values:
            - <cluster-name>
          - key: role
            operator: In
            values:
            - dbserver
        topologyKey: kubernetes.io/hostname   # at most one matching pod per node
```
Q: Do we want to allow multiple dbservers on a single node?
If so, we should use `preferredDuringSchedulingIgnoredDuringExecution`
anti-affinity.
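A sketch of that relaxed variant, which prefers but does not require spreading dbservers across nodes (the weight is illustrative):
```yaml
kind: Pod
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100   # soft preference, not a hard scheduling requirement
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: role
              operator: In
              values:
              - dbserver
          topologyKey: kubernetes.io/hostname
```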

docs/custom_resource.md Normal file

@@ -0,0 +1,45 @@
# Custom Resource
The ArangoDB operator creates and maintains ArangoDB clusters
in a Kubernetes cluster, given a cluster specification.
This cluster specification is a CustomResource following
a CustomResourceDefinition created by the operator.
Example minimal cluster definition:
```yaml
apiVersion: "cluster.arangodb.com/v1alpha"
kind: "Cluster"
metadata:
  name: "example-arangodb-cluster"
spec:
  mode: cluster
```
Example of a more elaborate cluster definition:
```yaml
apiVersion: "cluster.arangodb.com/v1alpha"
kind: "Cluster"
metadata:
  name: "example-arangodb-cluster"
spec:
  mode: cluster
  agents:
    servers: 3
    args:
    - --log.level=debug
    resources:
      requests:
        storage: 8Gi
    storageClassName: ssd
  dbservers:
    servers: 5
    resources:
      requests:
        storage: 80Gi
    storageClassName: ssd
  coordinators:
    servers: 3
  image: "arangodb/arangodb:3.3.3"
```

@@ -49,12 +49,6 @@ For a full cluster deployment, the following k8s resources are created:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: agent`
- Service for accessing the coordinator, named `<cluster-name>`.
The services will provide access to all coordinators from within the k8s cluster.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: coordinator`
- Pods running ArangoDB dbservers named `<cluster-name>_dbserver_<x>`.
- Labels:
@@ -86,3 +80,24 @@ For a full cluster deployment, the following k8s resources are created:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: coordinator`
## Full cluster with DC2DC
For a full cluster with datacenter replication deployment,
the same resources are created as for a Full cluster, with the following
additions:
- Pods running ArangoSync workers named `<cluster-name>_syncworker_<x>`.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: syncworker`
- Pods running ArangoSync masters named `<cluster-name>_syncmaster_<x>`.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: syncmaster`
- Service for accessing the sync masters & workers, named `<cluster-name>-sync`.
The service will provide access to all sync masters & workers from within the k8s cluster.

docs/storage.md Normal file

@@ -0,0 +1,9 @@
# Storage
An ArangoDB cluster relies heavily on fast persistent storage.
The ArangoDB operator uses PersistentVolumeClaims to deliver
this storage to the Pods that need it.
TODO how to specify storage class (and other parameters)
Q: Do we want volumes other than PersistentVolumeClaims?
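As a sketch, the operator could create a claim like the following for each dbserver (the name, size, and `ssd` storage class are assumptions mirroring the elaborate cluster example):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-arangodb-cluster-dbserver-0   # hypothetical naming scheme
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 80Gi
```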

docs/upgrading.md Normal file

@@ -0,0 +1,40 @@
# Upgrading
The ArangoDB operator supports upgrading an ArangoDB deployment from
one version to the next.
The ArangoDB operator itself should also support upgrades
of its code and the CustomResourceDefinitions.
TODO: Investigate k8s API change process.
## Upgrading ArangoDB single to another version
The process for upgrading an existing ArangoDB single server
to another version is as follows:
- Set CR state to `Upgrading`
- Remove the server Pod (keep persistent volume)
- Create a new server Pod with new version
- Wait until server is ready before continuing
- Set CR state to `Ready`
## Upgrading ArangoDB cluster to another version
The process for upgrading an existing ArangoDB cluster
to another version is as follows:
- Set CR state to `Upgrading`
- For each agent:
- Remove the agent Pod (keep persistent volume)
- Create new agent Pod with new version
- Wait until agent is ready before continuing
- For each dbserver:
- Remove the dbserver Pod (keep persistent volume)
- Create new dbserver Pod with new version
- Wait until dbserver is ready before continuing
- For each coordinator:
- Remove the coordinator Pod (keep persistent volume)
- Create new coordinator Pod with new version
- Wait until coordinator is ready before continuing
- Set CR state to `Ready`
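Assuming the rolling upgrade is triggered by updating the `image` field of the custom resource (an assumption; the exact trigger mechanism is not specified here), initiating an upgrade could look like:
```yaml
# Hypothetical edit to the cluster resource:
# bumping the image version triggers the upgrade process described above.
spec:
  image: "arangodb/arangodb:3.3.4"
```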

docs/usage.md Normal file

@@ -0,0 +1,41 @@
# Using the ArangoDB operator
## Installation
The ArangoDB operator needs to be installed in your Kubernetes
cluster first. To do so, clone this repository and run:
```bash
kubectl create -f examples/deployment.yaml
```
## Cluster creation
Once the operator is running, you can create your ArangoDB cluster
by creating a custom resource and deploying it.
For example:
```bash
kubectl create -f examples/simple-cluster.yaml
```
## Cluster removal
To remove an existing cluster, delete the custom
resource. The operator will then delete all created resources.
For example:
```bash
kubectl delete -f examples/simple-cluster.yaml
```
## Operator removal
To remove the entire ArangoDB operator, remove all
clusters first and then remove the operator by running:
```bash
kubectl delete -f examples/deployment.yaml
```

examples/deployment.yaml Normal file

@@ -0,0 +1,23 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: arangodb-operator
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: arangodb-operator
    spec:
      containers:
      - name: arangodb-operator
        image: arangodb/arangodb-operator:latest
        env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name

@@ -0,0 +1,6 @@
apiVersion: "cluster.arangodb.com/v1alpha"
kind: "Cluster"
metadata:
  name: "example-arangodb-cluster"
spec:
  mode: cluster