We had a place in tools/packaging/generate_debian_package.sh that relied on the existence of build-opt;
moreover, if that directory did not exist, the script deadlocked.
1. Added more logging.
2. Removed the loop.
3. Removed the script's unnecessary dependency on the build-dir name.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: fix our release pipeline
Also remove the alpine prod.wip file, which has not been used and is unlikely to be needed for prod.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Should allow tracking caches where Dragonfly is not responsive to I/O
due to big CPU tasks. Also, update the local Grafana dashboard.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The dashboard used the `dragonfly_up` metric to bootstrap itself,
but this metric does not exist anymore. I replaced it with `dragonfly_version`.
In addition, the exported format changed slightly because I used a
recent Grafana version to export.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fixes #1936
Eviction Implementation
This patch provides a very simple eviction implementation for the interface mentioned above. In my opinion, the eviction algorithm approximates an LRU policy, given that normal buckets always store the most recently accessed data while stash buckets hold less active data.
The algorithm first selects a small set of segments as eviction targets. Starting from the last slot of the last stash bucket in each of these segments, we walk backward and evict the key-value pairs stored in each visited slot. Eviction stops either when a target memory-release goal is met or when the maximum number of evicted key-value pairs is reached. Therefore, we can upper-bound the eviction time through these two parameters, which can be set when DF starts. Note that both parameters can also be retrieved and changed by the user through the CONFIG GET and CONFIG SET commands.
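To make the walk concrete, here is a minimal Python sketch; the data model and names (`Segment`, `Bucket`, `Slot`, `evict`) are invented for this illustration and do not mirror the actual DF dashtable code, and the two limits correspond to the configurable parameters mentioned above.
```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Illustrative data model only; the real DF dashtable is C++ and differs in detail.
@dataclass
class Slot:
    key: Optional[str] = None
    value: Optional[bytes] = None

    def size(self) -> int:
        return len(self.key or "") + len(self.value or b"")

@dataclass
class Bucket:
    slots: List[Slot] = field(default_factory=list)

@dataclass
class Segment:
    normal_buckets: List[Bucket] = field(default_factory=list)  # most recently accessed data
    stash_buckets: List[Bucket] = field(default_factory=list)   # less active data

def evict(targets: List[Segment], memory_goal: int, max_evicted: int) -> Tuple[int, int]:
    """Walk the stash buckets of the target segments backward, evicting entries
    until either the memory-release goal or the eviction-count cap is reached."""
    freed, evicted = 0, 0
    for segment in targets:                              # small set of segments chosen as targets
        for bucket in reversed(segment.stash_buckets):   # last stash bucket first
            for slot in reversed(bucket.slots):          # last slot first, walking backward
                if slot.key is None:
                    continue
                freed += slot.size()
                slot.key, slot.value = None, None        # evict this key-value pair
                evicted += 1
                if freed >= memory_goal or evicted >= max_evicted:
                    return freed, evicted                # both caps upper-bound eviction time
    return freed, evicted
```
Because only stash buckets are visited, entries sitting in the normal buckets, i.e. the most recently accessed ones, are never touched, which is what gives the policy its LRU-like behavior.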
---------
Signed-off-by: Yue Li <61070669+theyueli@users.noreply.github.com>
The new logrotate settings assume that dragonfly closes a log file
once it grows too large. Logrotate never rotates a file that is currently open for writing.
Specifically, logrotate:
1. rotates only log files
2. skips those that are currently open by a process
3. compresses using zstd, which is more CPU-efficient than gzip
4. does not truncate/create old files as 0-sized blobs - it just renames them
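For illustration, such a policy could look roughly like the following logrotate snippet; the log path pattern, rotation count, and the `fuser`-based open-file check are assumptions made for this sketch rather than the exact packaged config:
```
# Illustration only: the path pattern, rotation count and the open-file
# check below are assumptions for this sketch, not the packaged config.
# (1) only log files are matched by the pattern
/var/log/dragonfly/*.log.* {
    daily
    missingok
    rotate 30
    # (4) do not recreate rotated files as 0-sized blobs; rotation just renames
    nocreate
    # (3) compress with zstd, which is more CPU-efficient than gzip
    compress
    compresscmd /usr/bin/zstd
    compressext .zst
    prerotate
        # (2) skip any file that a running process still holds open;
        # a failing prerotate makes logrotate skip that log
        ! fuser -s "$1"
    endscript
}
```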
Fixes #1935
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. Move docker build files to a separate dir from docker script files
so that they won't be part of the build context. Update .dockerignore as well.
2. Fix lib dependencies for alpine.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Specifying an exact boost version is not robust.
Also, we do not depend on fibers anymore, and boost-context is enough.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Now this management script can:
* Create a cluster (before this PR)
* Print an existing cluster configuration
* Shutdown an existing cluster
* Move slots between cluster nodes
To support connecting to a cluster (for all new functions), I had to
change the way admin ports are defined. Instead of having the user
(optionally) specify the first port, they are hard-coded to be the
regular port + 10,000. This is done because we can't detect the admin
port for an existing cluster (e.g., via `CLUSTER SHARDS`).
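Purely as an illustration of that convention (the helper name below is hypothetical, not the script's actual code):
```python
def admin_port(port: int) -> int:
    # Admin ports are no longer user-supplied; they are derived from the
    # regular port with a fixed offset, as described above.
    return port + 10_000
```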
This script allows easily setting up a local cluster.
Example invocation:
```
killall dragonfly; ./cluster_mgr.py --num_masters=3 --with_replicas
Setting up a Dragonfly cluster:
- Master nodes: 3
- Ports: 7001...7003
- Admin ports: 8001...8003
- Replicas? True
Starting nodes...
- Log file for node 7001: /tmp/dfly.cluster.node.7001.log
- Log file for node 7002: /tmp/dfly.cluster.node.7002.log
- Log file for node 7003: /tmp/dfly.cluster.node.7003.log
- Log file for node 7004: /tmp/dfly.cluster.node.7004.log
- Log file for node 7005: /tmp/dfly.cluster.node.7005.log
- Log file for node 7006: /tmp/dfly.cluster.node.7006.log
Configuring replication...
- Response for 7004: OK
- Response for 7005: OK
- Response for 7006: OK
Getting IDs...
- ID for 7001: acefdc2da5d397cfcb99239b3c29cbe6ff10d75a
- ID for 7002: a8cc67dfa42e91a94bd7c0903df35d60a39508bd
- ID for 7003: 1ad91af7bd96c89a8da877164b2ebb4cf458cab8
- ID for 7004: d209c3603343e25a18c78bd68304b6d883973bd3
- ID for 7005: bd2b25e95aaf7fdd2b955e50a00093a8272954bf
- ID for 7006: beb5cb07b75c33e3ff938d07725f2688d9bc91e0
Pushing config...
- Push into 7001: OK
- Push into 7002: OK
- Push into 7003: OK
- Push into 7004: OK
- Push into 7005: OK
- Push into 7006: OK
```
Currently deployed packages have the version in the filename, which makes it much harder to fetch them
using scripts.
This change fixes the filename and also removes some redundant code.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. Tune some security directives.
2. Fix the flags file that mistakenly configured dragonfly to store its dump files into /run (tmpfs).
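As an illustration of point 2, assuming the dump directory is controlled by Dragonfly's `--dir` flag and that the flags file lives at a conventional location such as /etc/dragonfly/dragonfly.conf (both assumptions for this sketch), the fix amounts to pointing the flag at persistent storage:
```
# illustrative flags-file excerpt; file path and target directory are assumptions
# dump files should live on persistent storage, not on a tmpfs such as /run
--dir=/var/lib/dragonfly
```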
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Alpine images don't have bash installed by default, so we need to use
`/bin/sh` instead. This follows the *same existing convention that
we follow in the `entrypoint.sh` script*.
Both ubuntu and alpine images have been tested to work with this change
(i.e., healthchecks pass).
1. Align the checked version with the format provided by the endpoint.
2. Improve error reporting, and install CA certificates in the Dockerfile.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(tools): cache log player batching all the way optimization
Signed-off-by: ashotland <ashotland@gmail.com>
* feat(tools): cache log player: add one last stats print after completion
Signed-off-by: ashotland <ashotland@gmail.com>
- make use of docker buildx caching when possible (helpful with local docker builds)
- introduce a reusable container workflow which is triggered by docker-release and docker-weekly workflows
- add an alpine-dev Dockerfile
- split release.sh contents into different Makefile targets
- make use of job matrix to build alpine + ubuntu in parallel
- make alpine build optional by checking for Dockerfile presence
-- as the pre-built binaries don't work with alpine, because of glibc <-> musl incompatibilities
Signed-off-by: Philipp Born <git@pborn.eu>