This command forces the memory manager to decommit memory pages back to the OS.
In addition, fixed some positional bugs in "memory malloc-stats".
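As a rough illustration of the mechanism (a minimal sketch assuming mimalloc is the backing allocator, as in Dragonfly; the real command plumbing iterates the shard threads):

```cpp
#include <mimalloc.h>

// Ask mimalloc to aggressively free cached segments and decommit the
// underlying pages back to the OS instead of keeping them for reuse.
// In a sharded server this would run on every shard thread.
void DecommitFreeMemory() {
  mi_collect(/*force=*/true);
}
```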
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Up until now we did not have a cached RSS metric in the process.
This PR consolidates the caching of all such values inside the EngineShard periodic fiber
code. Also, we now expose rss_mem_current, which can be used internally for identifying
periods of memory pressure during the process run.
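A minimal sketch of the caching idea, using a plain thread and /proc parsing in place of the EngineShard fiber (the names below are illustrative, not the actual Dragonfly symbols):

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

#include <unistd.h>

// Cached RSS in bytes, refreshed periodically so that hot paths can read it
// without touching /proc on every call.
std::atomic<size_t> rss_mem_current{0};

size_t ReadRssBytes() {
  // In /proc/self/statm the second field is the resident set size in pages.
  FILE* f = fopen("/proc/self/statm", "r");
  if (!f)
    return 0;
  size_t total = 0, resident = 0;
  if (fscanf(f, "%zu %zu", &total, &resident) != 2)
    resident = 0;
  fclose(f);
  return resident * sysconf(_SC_PAGESIZE);
}

void StartRssUpdater() {
  std::thread([] {
    for (;;) {
      rss_mem_current.store(ReadRssBytes(), std::memory_order_relaxed);
      std::this_thread::sleep_for(std::chrono::seconds(1));
    }
  }).detach();
}
```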
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
chore: run dragonfly_test with epoll under gdb
Also, update helio, which now provides a stacktrace under musl libc (Alpine Linux).
This version of helio updates the absl version as well.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* add a python script to print the most recent log
* if CI times out, print the most recent log
* replace the global timeout with the `timeout` command
* upload all logs on failure()
* print uid + port + the log files for each df instance
This is enabled by default, but can be disabled via `--migrate_connections=false`.
Measuring with the BullMQ benchmark I see a gain of almost 10% in
throughput. I haven't measured it, but this change is expected to reduce
latency as well.
This will allow some use cases with a few busy keys to distribute load
more evenly between threads.
Idea by @dranikpg.
To calculate how many entries are needed in the table, I used the
following quick-n-dirty code to reach a <2.5% collision rate with 100 keys:
```cpp
#include <cstdlib>
#include <iostream>
#include <vector>

using namespace std;

// Throws `balls` balls into `bins` bins uniformly at random and reports
// whether any bin received two or more balls (i.e. a collision occurred).
bool Distribute(int balls = 100, int bins = 100) {
  vector<int> v(bins);
  for (int i = 0; i < balls; ++i) {
    v[rand() % v.size()]++;
  }
  for (int count : v) {
    if (count >= 2) {
      return true;
    }
  }
  return false;
}

int main(int argc, char** argv) {
  int has_2_balls = 0;
  constexpr int kRounds = 1'000'000;
  for (int i = 0; i < kRounds; ++i) {
    has_2_balls += Distribute(100, 100'000);
  }
  cout << has_2_balls << " rounds had 2+ balls in a single bin out of " << kRounds << endl;
  return 0;
}
```
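As a cross-check on the simulation, the standard birthday-problem approximation gives the collision probability in closed form (general math, not part of the original change). For $n$ keys hashed into a table with $m$ entries:

$$P(\text{at least one collision}) \approx 1 - e^{-n(n-1)/(2m)},$$

which lets one solve for the smallest $m$ that keeps the probability under a chosen threshold.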
Also, a few additional changes that do not affect functionality:
1. make sure arguments passed to DispatchCommand are `\0`-delimited
during pipelining (see the sketch after this list).
2. extend the Lua malloc hook to call precise functions, to help with CPU profiling.
3. reuse the arguments buffer (saving allocations) when calling Dragonfly commands from Lua scripts.
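For the first item, a sketch of the invariant being enforced (illustrative only; the real DispatchCommand signature lives in the Dragonfly codebase):

```cpp
#include <string>
#include <string_view>
#include <vector>

// Copy pipelined arguments into one contiguous buffer so that every argument
// is followed by a '\0'. Code further down the dispatch path that treats an
// argument as a C-string then stays within the buffer. `storage` must outlive
// the returned views.
std::vector<std::string_view> CopyNullDelimited(
    const std::vector<std::string_view>& args, std::string* storage) {
  storage->clear();
  std::vector<size_t> offsets;
  for (std::string_view a : args) {
    offsets.push_back(storage->size());
    storage->append(a);
    storage->push_back('\0');  // the delimiter the dispatch path relies on
  }
  // Build the views only after all appends, since the buffer may reallocate.
  std::vector<std::string_view> out;
  for (size_t i = 0; i < args.size(); ++i)
    out.emplace_back(storage->data() + offsets[i], args[i].size());
  return out;
}
```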
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The new logrotate settings assume that dragonfly closes a log file
once it grows too large. They never rotate a file that is currently open for writing.
Specifically, logrotate:
1. rotates only log files
2. skips those that are currently open by a process
3. compresses using zstd, which is more CPU-efficient than gzip
4. does not truncate/create old files as 0-sized blobs - just renames them (a sketch of such a config follows the list)
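A hedged sketch of a configuration along these lines (the path and schedule are placeholders; skipping open files relies on dragonfly itself closing a file before it is eligible for rotation):

```
# Rotate only dragonfly's own, already-closed log files.
/var/log/dragonfly/*.log {
    daily
    missingok
    notifempty
    # rename only - never recreate or truncate rotated files as 0-sized blobs
    nocreate
    nocopytruncate
    # zstd is cheaper on CPU than the default gzip
    compress
    compresscmd /usr/bin/zstd
    compressext .zst
}
```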
Fixes #1935
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* requirepass also updates the ACL default user password (example session below)
* update config set requirepass to include the new behaviour
* add tests
* fix non-existent default user when loading empty files
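A hypothetical session illustrating the new behaviour (the password value is a placeholder):

```
127.0.0.1:6379> CONFIG SET requirepass s3cret
OK
127.0.0.1:6379> AUTH default s3cret
OK
```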