mirror of https://github.com/arangodb/kube-arangodb.git
Merge branch 'master' into test-framework
commit b66f5812fc
2 changed files with 59 additions and 3 deletions
@@ -122,9 +122,8 @@ The encryption key cannot be changed after the cluster has been created.

This setting specifies the name of a Kubernetes `Secret` that contains
the JWT token used for accessing all ArangoDB servers.
When a JWT token is used, authentication of the cluster is enabled; without it,
authentication is disabled.
The default value is empty.
When no name is specified, it defaults to `<deployment-name>-jwt`.
To disable authentication, set this value to `-`.

If you specify the name of a `Secret` that does not exist, a random token is created
and stored in a `Secret` with the given name.
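
For illustration, a minimal sketch of such a `Secret` together with a deployment
that references it. The `spec.auth.jwtSecretName` field, the `token` data key,
and the API version shown here are assumptions for this sketch, not taken from
this change:

```yaml
# Hypothetical sketch: the "token" data key and spec.auth.jwtSecretName
# are assumed names, not confirmed by this change.
apiVersion: v1
kind: Secret
metadata:
  name: example-jwt
type: Opaque
stringData:
  token: "replace-with-a-randomly-generated-token"
---
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: example-cluster
spec:
  mode: Cluster
  auth:
    jwtSecretName: example-jwt  # set to "-" to disable authentication
```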

@@ -16,3 +16,60 @@ The amount of storage needed is configured using the

Note that configuring storage is done per group of servers.
It is not possible to configure storage per individual
server.
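
For example, per-group storage could be requested as in the sketch below; the
per-group `resources.requests.storage` path and the API version are assumptions
for this sketch:

```yaml
# Hypothetical sketch: the per-group resources.requests.storage path
# is an assumed setting, not confirmed by this change.
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: example-cluster
spec:
  mode: Cluster
  agents:
    resources:
      requests:
        storage: 8Gi     # every agent gets an 8Gi volume
  dbservers:
    resources:
      requests:
        storage: 100Gi   # every dbserver gets a 100Gi volume
```
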
## Local storage

For optimal performance, ArangoDB should be configured with locally attached
SSD storage.

To accomplish this, you must create `PersistentVolumes` for all servers that
need persistent storage (single, agents & dbservers).
E.g. for a `cluster` with 3 agents and 5 dbservers, you must create 8 volumes.

Note that each volume must have a capacity that is equal to or higher than the
capacity needed for each server.

To select the correct node, add a required node-affinity annotation as shown
in the example below.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume-agent-1
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          { "matchExpressions": [
            { "key": "kubernetes.io/hostname",
              "operator": "In",
              "values": ["node-1"]
            }
          ]}
        ]}
      }'
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-ssd
  local:
    path: /mnt/disks/ssd1
```

For Kubernetes 1.9 and up, you should create a `StorageClass` which is configured
to bind volumes on their first use, as shown in the example below.
This ensures that the Kubernetes scheduler takes all constraints on a `Pod`
into consideration before binding the volume to a claim.

```yaml
|
||||
kind: StorageClass
|
||||
apiVersion: storage.k8s.io/v1
|
||||
metadata:
|
||||
name: local-ssd
|
||||
provisioner: kubernetes.io/no-provisioner
|
||||
volumeBindingMode: WaitForFirstConsumer
|
||||
```
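
To illustrate how such volumes are eventually bound, a sketch of a claim that
would match the `local-ssd` class above; the claim name and size are
illustrative only:

```yaml
# Illustrative only: with volumeBindingMode WaitForFirstConsumer, this
# claim stays pending until a Pod that uses it is scheduled, and only
# then binds to one of the local-ssd PersistentVolumes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-agent-claim
spec:
  storageClassName: local-ssd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```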