
docs: document trade-offs in memory configuration

Problem: memory requests and limits have been set for the `master`
process in PR #1631. This does not follow best practices for setting
those values, but the intention was to provide default values for a
wide variety of clusters, including small ones.

Solution: provide solid documentation about the problems that might
happen in production environments when
`resource.memory.requests << resource.memory.limits`. Add a link to
relevant external sources, including the advice from Tim Hockin:
> Always set memory limit == request

Signed-off-by: cmontemuino <1761056+cmontemuino@users.noreply.github.com>
cmontemuino 2024-04-02 19:01:50 +02:00
parent 7938e81c33
commit 54b01a2576
2 changed files with 7 additions and 1 deletion

@@ -100,6 +100,12 @@ master:
memory: 4Gi
requests:
cpu: 100m
# You may want to use the same value for `requests.memory` and `limits.memory`. The “requests” value affects scheduling to accommodate pods on nodes.
# If there is a large difference between “requests” and “limits” and nodes experience memory pressure, the kernel may invoke
# the OOM Killer, even if the memory does not exceed the “limits” threshold. This can cause unexpected pod evictions. Memory
# cannot be compressed and once allocated to a pod, it can only be reclaimed by killing the pod. There is a great article by
# Robusta that discusses this issue.
# https://home.robusta.dev/blog/kubernetes-memory-limit
memory: 128Mi
nodeSelector: {}

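The added comment boils down to one concrete adjustment: make `requests.memory` match `limits.memory`, so the scheduler reserves all the memory the pod may actually use. Below is a minimal sketch of a Helm values override that reuses the chart defaults documented in the table further down; the sizes are illustrative placeholders, not a recommendation for any particular cluster.

```yaml
# values-override.yaml -- sketch only; adjust the sizes to your cluster.
master:
  resources:
    limits:
      cpu: 300m
      memory: 4Gi
    requests:
      cpu: 100m
      # Match limits.memory: a large requests/limits gap is what makes the pod
      # a likely OOM-kill victim when the node comes under memory pressure.
      memory: 4Gi
```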

@@ -132,7 +132,7 @@ API's you need to install the prometheus operator in your cluster.
| `master.service.type` | string | ClusterIP | NFD master service type. **NOTE**: this parameter is related to the deprecated gRPC API and will be removed with it in a future release |
| `master.service.port` | integer | 8080 | NFD master service port. **NOTE**: this parameter is related to the deprecated gRPC API and will be removed with it in a future release |
| `master.resources.limits` | dict | {cpu: 300m, memory: 4Gi} | NFD master pod [resources limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) |
| `master.resources.requests`| dict | {cpu: 100m, memory: 128Mi} | NFD master pod [resources requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) |
| `master.resources.requests`| dict | {cpu: 100m, memory: 128Mi} | NFD master pod [resources requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits). You may want to use the same value for `requests.memory` and `limits.memory`. The “requests” value affects scheduling to accommodate pods on nodes. If there is a large difference between “requests” and “limits” and nodes experience memory pressure, the kernel may invoke the OOM Killer, even if the memory does not exceed the “limits” threshold. This can cause unexpected pod evictions. Memory cannot be compressed and once allocated to a pod, it can only be reclaimed by killing the pod. There is a great article by [Robusta](https://home.robusta.dev/blog/kubernetes-memory-limit) that discusses this issue.|
| `master.tolerations` | dict | _Scheduling to master node is disabled_ | NFD master pod [tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
| `master.annotations` | dict | {} | NFD master pod [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) |
| `master.affinity` | dict | | NFD master pod required [node affinity](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) |