# Storage

An ArangoDB cluster relies heavily on fast persistent storage. The ArangoDB operator uses `PersistentVolumeClaims` to deliver the storage to Pods that need them.

## Storage configuration

In the cluster resource, one can specify the type of storage used by groups of servers using the `spec.<group>.storageClassName` setting.

The amount of storage needed is configured using the `spec.<group>.resources.requests.storage` setting.

Note that storage is configured per group of servers. It is not possible to configure storage per individual server.
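As a minimal sketch of how these settings fit together in a cluster resource (the `apiVersion`/`kind` values, group name `dbservers`, and storage class name `local-ssd` below are illustrative and must match your actual deployment):

```yaml
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "example-cluster"
spec:
  mode: Cluster
  dbservers:
    # Storage class used for the PersistentVolumeClaims of this group.
    storageClassName: local-ssd
    resources:
      requests:
        # Amount of storage requested for each server in this group.
        storage: 100Gi
```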

## Local storage

For optimal performance, ArangoDB should be configured with locally attached SSD storage.

To accomplish this, one must create `PersistentVolumes` for all servers that need persistent storage (single servers, agents & dbservers). For example, for a cluster with 3 agents and 5 dbservers, you must create 8 volumes.

Note that each volume must have a capacity that is equal to or higher than the capacity needed for each server.

To select the correct node, add a required node-affinity annotation as shown in the example below.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume-agent-1
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          { "matchExpressions": [
            { "key": "kubernetes.io/hostname",
              "operator": "In",
              "values": ["node-1"]
            }
          ]}
        ]}
      }'
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-ssd
  local:
    path: /mnt/disks/ssd1
```

For Kubernetes 1.9 and up, you should create a `StorageClass` which is configured to bind volumes on their first use, as shown in the example below. This ensures that the Kubernetes scheduler takes all constraints on a Pod into consideration before binding the volume to a claim.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-ssd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```