diff --git a/docs/config_and_secrets.md b/docs/config_and_secrets.md
new file mode 100644
index 000000000..f31514f8f
--- /dev/null
+++ b/docs/config_and_secrets.md
@@ -0,0 +1,45 @@
+# Configuration & secrets
+
+An ArangoDB cluster has many configuration options.
+Some will be supported directly by the ArangoDB operator,
+others will have to be specified separately.
+
+## Built-in options
+
+All built-in options are passed to the ArangoDB servers via command-line
+arguments configured in the Pod spec.
+
+## Other configuration options
+
+### Option 1
+
+Use a `ConfigMap` per type of ArangoDB server.
+The operator passes the options listed in the `ConfigMap`
+as command-line options to the ArangoDB servers.
+
+TODO: Discuss the format of the `ConfigMap` content. Is it `arangod.conf`-like?
+
+### Option 2
+
+Add ArangoDB option sections to the custom resource.
+
+## Secrets
+
+The ArangoDB cluster needs several secrets, such as JWT tokens,
+TLS certificates, and so on.
+
+All these secrets are stored as Kubernetes Secrets and passed to
+the applicable Pods as files, mapped into the Pod's filesystem.
+
+The name of each secret is specified in the custom resource.
+For example:
+
+```yaml
+apiVersion: "cluster.arangodb.com/v1alpha"
+kind: "Cluster"
+metadata:
+  name: "example-arangodb-cluster"
+spec:
+  mode: cluster
+  jwtTokenSecretName: <name-of-jwt-secret>
+```
diff --git a/docs/constraints.md b/docs/constraints.md
new file mode 100644
index 000000000..1cc78d822
--- /dev/null
+++ b/docs/constraints.md
@@ -0,0 +1,64 @@
+# Constraints
+
+The ArangoDB operator tries to honor various constraints to support high availability of
+the ArangoDB cluster.
+
+## Run agents on separate machines
+
+It is essential for HA that agents run on separate nodes.
+To ensure this, the agent Pods are configured with pod-anti-affinity.
+
+```yaml
+kind: Pod
+spec:
+  affinity:
+    podAntiAffinity:
+      requiredDuringSchedulingIgnoredDuringExecution:
+      - labelSelector:
+          matchExpressions:
+          - key: app
+            operator: In
+            values:
+            - arangodb
+          - key: arangodb_cluster_name
+            operator: In
+            values:
+            - <cluster-name>
+          - key: role
+            operator: In
+            values:
+            - agent
+        topologyKey: kubernetes.io/hostname
+```
+
+## Run dbservers on separate machines
+
+It is equally important for HA that dbservers run on separate nodes.
+To ensure this, the dbserver Pods are configured with pod-anti-affinity.
+
+```yaml
+kind: Pod
+spec:
+  affinity:
+    podAntiAffinity:
+      requiredDuringSchedulingIgnoredDuringExecution:
+      - labelSelector:
+          matchExpressions:
+          - key: app
+            operator: In
+            values:
+            - arangodb
+          - key: arangodb_cluster_name
+            operator: In
+            values:
+            - <cluster-name>
+          - key: role
+            operator: In
+            values:
+            - dbserver
+        topologyKey: kubernetes.io/hostname
+```
+
+Q: Do we want to allow multiple dbservers on a single node?
+   If so, we should use `preferredDuringSchedulingIgnoredDuringExecution`
+   anti-affinity.
diff --git a/docs/custom_resource.md b/docs/custom_resource.md
new file mode 100644
index 000000000..b4800dec6
--- /dev/null
+++ b/docs/custom_resource.md
@@ -0,0 +1,65 @@
+# Custom Resource
+
+The ArangoDB operator creates and maintains ArangoDB clusters
+in a Kubernetes cluster, given a cluster specification.
+This cluster specification is a CustomResource following
+a CustomResourceDefinition created by the operator.
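+
+As a sketch, the CustomResourceDefinition registered by the operator
+could look like the following. The group, version, and kind are taken
+from the examples below; the plural name is an assumption, and the
+exact definition is still open.
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  # The name must be <plural>.<group>.
+  name: clusters.cluster.arangodb.com
+spec:
+  group: cluster.arangodb.com
+  version: v1alpha
+  scope: Namespaced
+  names:
+    kind: Cluster
+    plural: clusters
+    singular: cluster
+```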
+
+A minimal cluster definition:
+
+```yaml
+apiVersion: "cluster.arangodb.com/v1alpha"
+kind: "Cluster"
+metadata:
+  name: "example-arangodb-cluster"
+spec:
+  mode: cluster
+```
+
+A more elaborate cluster definition:
+
+```yaml
+apiVersion: "cluster.arangodb.com/v1alpha"
+kind: "Cluster"
+metadata:
+  name: "example-arangodb-cluster"
+spec:
+  mode: cluster
+  agents:
+    servers: 3
+    args:
+      - --log.level=debug
+    resources:
+      requests:
+        storage: 8Gi
+    storageClassName: ssd
+  dbservers:
+    servers: 5
+    resources:
+      requests:
+        storage: 80Gi
+    storageClassName: ssd
+  coordinators:
+    servers: 3
+  image: "arangodb/arangodb:3.3.3"
+```
diff --git a/docs/resource_and_labels.md b/docs/resource_and_labels.md
index 5734ed7d3..ab1103762 100644
--- a/docs/resource_and_labels.md
+++ b/docs/resource_and_labels.md
@@ -49,12 +49,6 @@ For a full cluster deployment, the following k8s resources are created:
     - `app=arangodb`
     - `arangodb_cluster_name: <cluster-name>`
     - `role: agent`
-- Service for accessing the coordinator, named `<cluster-name>`.
-  The services will provide access to all coordinators from within the k8s cluster.
-  - Labels:
-    - `app=arangodb`
-    - `arangodb_cluster_name: <cluster-name>`
-    - `role: coordinator`
 
 - Pods running ArangoDB dbservers named `<cluster-name>_dbserver_<x>`.
   - Labels:
@@ -86,3 +80,24 @@ For a full cluster deployment, the following k8s resources are created:
     - `app=arangodb`
     - `arangodb_cluster_name: <cluster-name>`
     - `role: coordinator`
+
+## Full cluster with DC2DC
+
+For a full cluster with datacenter replication deployment,
+the same resources are created as for a full cluster, with the following
+additions:
+
+- Pods running ArangoSync workers named `<cluster-name>_syncworker_<x>`.
+  - Labels:
+    - `app=arangodb`
+    - `arangodb_cluster_name: <cluster-name>`
+    - `role: syncworker`
+
+- Pods running ArangoSync masters named `<cluster-name>_syncmaster_<x>`.
+  - Labels:
+    - `app=arangodb`
+    - `arangodb_cluster_name: <cluster-name>`
+    - `role: syncmaster`
+
+- Service for accessing the sync masters & workers, named `<cluster-name>-sync`.
+  The service will provide access to all syncmasters & workers from within the k8s cluster.
diff --git a/docs/storage.md b/docs/storage.md
new file mode 100644
index 000000000..e51ae0059
--- /dev/null
+++ b/docs/storage.md
@@ -0,0 +1,9 @@
+# Storage
+
+An ArangoDB cluster relies heavily on fast persistent storage.
+The ArangoDB operator uses PersistentVolumeClaims to deliver
+this storage to the Pods that need it.
+
+TODO: How to specify the storage class (and other parameters)?
+
+Q: Do we want volumes other than PersistentVolumeClaims?
diff --git a/docs/upgrading.md b/docs/upgrading.md
new file mode 100644
index 000000000..5e7c2ff7a
--- /dev/null
+++ b/docs/upgrading.md
@@ -0,0 +1,56 @@
+# Upgrading
+
+The ArangoDB operator supports upgrading an ArangoDB deployment from
+one version to the next.
+
+The ArangoDB operator itself should also support upgrades
+of its own code and of the CustomResourceDefinitions.
+
+TODO: Investigate the k8s API change process.
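+
+As a sketch of how a user could trigger such an upgrade, assuming the
+`image` field of the custom resource is what selects the version,
+bumping that field would start the process:
+
+```yaml
+apiVersion: "cluster.arangodb.com/v1alpha"
+kind: "Cluster"
+metadata:
+  name: "example-arangodb-cluster"
+spec:
+  mode: cluster
+  # Hypothetical next version; the operator notices the change and
+  # performs the rolling upgrade described below.
+  image: "arangodb/arangodb:3.3.4"
+```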
+
+## Upgrading an ArangoDB single server to another version
+
+The process for upgrading an existing ArangoDB single server
+to another version is as follows:
+
+- Set the CR state to `Upgrading`
+- Remove the server Pod (keep the persistent volume)
+- Create a new server Pod with the new version
+- Wait until the server is ready before continuing
+- Set the CR state to `Ready`
+
+## Upgrading an ArangoDB cluster to another version
+
+The process for upgrading an existing ArangoDB cluster
+to another version is as follows:
+
+- Set the CR state to `Upgrading`
+- For each agent:
+  - Remove the agent Pod (keep the persistent volume)
+  - Create a new agent Pod with the new version
+  - Wait until the agent is ready before continuing
+- For each dbserver:
+  - Remove the dbserver Pod (keep the persistent volume)
+  - Create a new dbserver Pod with the new version
+  - Wait until the dbserver is ready before continuing
+- For each coordinator:
+  - Remove the coordinator Pod (keep the persistent volume)
+  - Create a new coordinator Pod with the new version
+  - Wait until the coordinator is ready before continuing
+- Set the CR state to `Ready`
diff --git a/docs/usage.md b/docs/usage.md
new file mode 100644
index 000000000..94563cdb5
--- /dev/null
+++ b/docs/usage.md
@@ -0,0 +1,41 @@
+# Using the ArangoDB operator
+
+## Installation
+
+The ArangoDB operator needs to be installed in your Kubernetes
+cluster first. To do so, clone this repository and run:
+
+```bash
+kubectl create -f examples/deployment.yaml
+```
+
+## Cluster creation
+
+Once the operator is running, you can create your ArangoDB cluster
+by creating a custom resource and deploying it.
+
+For example:
+
+```bash
+kubectl create -f examples/simple-cluster.yaml
+```
+
+## Cluster removal
+
+To remove an existing cluster, delete the custom
+resource. The operator will then delete all created resources.
+
+For example:
+
+```bash
+kubectl delete -f examples/simple-cluster.yaml
+```
+
+## Operator removal
+
+To remove the entire ArangoDB operator, remove all
+clusters first and then remove the operator by running:
+
+```bash
+kubectl delete -f examples/deployment.yaml
+```
diff --git a/examples/deployment.yaml b/examples/deployment.yaml
new file mode 100644
index 000000000..cbf22c22d
--- /dev/null
+++ b/examples/deployment.yaml
@@ -0,0 +1,23 @@
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: arangodb-operator
+spec:
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        name: arangodb-operator
+    spec:
+      containers:
+      - name: arangodb-operator
+        image: arangodb/arangodb-operator:latest
+        env:
+        - name: MY_POD_NAMESPACE
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.namespace
+        - name: MY_POD_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
\ No newline at end of file
diff --git a/examples/simple-cluster.yaml b/examples/simple-cluster.yaml
new file mode 100644
index 000000000..6368d33c4
--- /dev/null
+++ b/examples/simple-cluster.yaml
@@ -0,0 +1,6 @@
+apiVersion: "cluster.arangodb.com/v1alpha"
+kind: "Cluster"
+metadata:
+  name: "example-arangodb-cluster"
+spec:
+  mode: cluster