* chore: allow limiting pipelining queue by length
We already allow limiting the pipelining queue by memory usage, but it also makes sense to limit it by depth,
so that in extreme cases we provide backpressure to client connections. Otherwise, if we read and parse everything eagerly,
clients have no sense of how loaded the connection is on the server side.
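As a rough illustration of the intended behavior (the names and limits below are made up for this sketch, not the actual Dragonfly flags or classes), the dispatch queue is bounded both by accumulated bytes and by the number of queued commands, and the connection stops reading from the socket while the queue is saturated:

```cpp
#include <cstddef>
#include <deque>
#include <string>

struct PipelineLimits {
  size_t max_bytes = 8 * 1024 * 1024;  // existing memory-based bound
  size_t max_len = 256;                // new depth-based bound
};

class DispatchQueue {
 public:
  explicit DispatchQueue(PipelineLimits limits) : limits_(limits) {}

  // Returns false when the queue is saturated; the caller should stop reading
  // from the socket until Pop() has drained enough entries, which is how the
  // backpressure reaches the client.
  bool Push(std::string cmd) {
    if (bytes_ + cmd.size() > limits_.max_bytes || queue_.size() >= limits_.max_len)
      return false;
    bytes_ += cmd.size();
    queue_.push_back(std::move(cmd));
    return true;
  }

  void Pop() {
    bytes_ -= queue_.front().size();
    queue_.pop_front();
  }

 private:
  PipelineLimits limits_;
  std::deque<std::string> queue_;
  size_t bytes_ = 0;
};
```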
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. Fix a bug where the weight index was computed incorrectly in OpInter.
2. Remove code duplication by factoring the parsing code out of OpInter and OpUnion into PrepareWeightedSets (see the sketch after this list).
3. Implement the TODO and support union of zsets and sets, which was already implemented for ZINTER.
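A minimal sketch of what such a shared helper could look like (the signature and types are illustrative, not the actual Dragonfly code). Pairing every key with its weight up front removes the chance of looking a weight up by the wrong position:

```cpp
#include <string>
#include <vector>

struct WeightedKey {
  std::string key;
  double weight = 1.0;  // ZUNIONSTORE/ZINTERSTORE default weight
};

// 'keys' are the source keys of the command, 'weights' is the optional WEIGHTS
// clause (empty if absent). Returns the keys zipped with their weights.
std::vector<WeightedKey> PrepareWeightedSets(const std::vector<std::string>& keys,
                                             const std::vector<double>& weights) {
  std::vector<WeightedKey> result;
  result.reserve(keys.size());
  for (size_t i = 0; i < keys.size(); ++i) {
    double w = i < weights.size() ? weights[i] : 1.0;
    result.push_back({keys[i], w});
  }
  return result;
}
```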
Fixes #3553
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Fixes #2917
The problem is described in this "working as intended" issue: https://github.com/moby/moby/issues/3124.
The advised approach of using the "USER dfly" directive does not really work, because it requires
the host to also define a 'dfly' user with the same id, which is an unrealistic expectation.
Therefore, we revert the fix done in #1775 and follow the Valkey approach:
https://github.com/valkey-io/valkey-container/blob/mainline/docker-entrypoint.sh#L12
1. We run the entrypoint in the container as root; it later spawns the dragonfly process.
2. If we run as root:
a. we chown files under /data to dfly.
b. we use setpriv to exec ourselves as dfly.
3. If we do not run as root, we execute the docker command directly.
So even though the process starts as root, the server runs as dfly, and only the bootstrap
part runs with elevated permissions, which are used to fix the volume access.
While we are at it, we also switched to setpriv, following the change in https://github.com/valkey-io/valkey-container/pull/24/files.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix: disable ThreadLocalMutex when big value serialization is off
* refactor: address comments
---------
Co-authored-by: Ubuntu <ubuntu@ip-172-31-28-29.ec2.internal>
Co-authored-by: Borys <borys@dragonflydb.io>
* feat(cluster): Allow appending RDB to existing store
The goal of this PR is to support loading multiple RDB files into a single server, such as when migrating from a Valkey cluster to Dragonfly with a different number of nodes.
It makes the following changes:
* Removes `DEBUG LOAD`, as we already have `DFLY LOAD`
* Adds an `APPEND` option to `DFLY LOAD` (i.e. `DFLY LOAD <filename> APPEND`) that loads an RDB without first flushing the data store, overwriting existing keys
* Does not load keys belonging to unowned slots when in cluster mode (see the sketch below)
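A minimal sketch of the slot check from the last bullet (the helper names are hypothetical; the CRC16-mod-16384 slot mapping itself is the standard cluster scheme, with {hash tags} omitted here for brevity):

```cpp
#include <cstdint>
#include <set>
#include <string_view>

// CRC16-CCITT (XMODEM), the function used for cluster slot hashing.
uint16_t Crc16(std::string_view data) {
  uint16_t crc = 0;
  for (unsigned char c : data) {
    crc ^= static_cast<uint16_t>(c) << 8;
    for (int bit = 0; bit < 8; ++bit)
      crc = (crc & 0x8000) ? static_cast<uint16_t>((crc << 1) ^ 0x1021)
                           : static_cast<uint16_t>(crc << 1);
  }
  return crc;
}

constexpr unsigned kClusterSlots = 16384;

unsigned KeySlot(std::string_view key) {
  return Crc16(key) % kClusterSlots;
}

// During DFLY LOAD in cluster mode, keys that hash into slots this node does
// not own are simply skipped instead of being inserted.
bool ShouldLoadKey(std::string_view key, const std::set<unsigned>& owned_slots,
                   bool cluster_mode) {
  return !cluster_mode || owned_slots.count(KeySlot(key)) > 0;
}
```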
Fixes #2840
Background
We tried to be compatible with Valkey's support of Lua flags, but we generally failed:
* We are not really compatible with Valkey, because our flags are different (we reject unknown flags, and Valkey's flags are unknown to us).
* The #!lua syntax doesn't work with Lua itself (# is not a comment in Lua), so scripts written for older versions of Redis can't be used with Dragonfly (i.e. they can't add Dragonfly flags and remain compatible with older Redis versions).
Changes
Instead of the previous syntax:
#!lua flags=allow-undeclared-keys,disable-atomicity
We now use this syntax:
--!df flags=allow-undeclared-keys,disable-atomicity
It is not backwards compatible (with older versions of Dragonfly), but it should be very easy to adapt to, and doesn't suffer from the disadvantages above.
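For illustration, the directive could be recognized along these lines (assumed parsing logic, not the actual implementation): the first line of the script starts with `--!df`, which is a valid Lua comment, and the flags value is a comma-separated list:

```cpp
#include <sstream>
#include <string>
#include <string_view>
#include <vector>

std::vector<std::string> ParseScriptFlags(std::string_view script) {
  constexpr std::string_view kPrefix = "--!df";
  std::vector<std::string> flags;
  if (script.substr(0, kPrefix.size()) != kPrefix)
    return flags;  // no directive: plain script, default behavior

  // Only the first line carries the directive.
  std::string first_line(script.substr(0, script.find('\n')));
  std::istringstream tokens(first_line.substr(kPrefix.size()));
  std::string token;
  while (tokens >> token) {
    constexpr std::string_view kFlags = "flags=";
    if (token.compare(0, kFlags.size(), kFlags) == 0) {
      std::istringstream values(token.substr(kFlags.size()));
      std::string flag;
      while (std::getline(values, flag, ','))
        flags.push_back(flag);  // e.g. allow-undeclared-keys, disable-atomicity
    }
  }
  return flags;
}
```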
Related to #3512
1. Before: when the 15% margin of the total size became larger than 256MB, we grew the file by another 256MB
even though we still had 256MB unused. Now this is fixed (see the sketch after this list).
2. We attempted to grow the file even after it had reached its limit, and then returned an error.
It is wrong to handle the "reached the limit" state as an error - it is just a logical constraint,
so now we do not attempt to grow and handle this case quietly.
3. There was a segfault bug in the extent tree when an existing segment and the new one needed to be merged.
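A sketch of the corrected growth decision covering points 1 and 2 (the constants and field names are assumptions for illustration, not the actual tiered-storage code): keep roughly a 15% free margin capped at 256MB, grow only when the unused space falls below that margin, and treat "already at the size limit" as a normal condition rather than an error.

```cpp
#include <algorithm>
#include <cstdint>

constexpr int64_t kGrowStep = 256LL << 20;  // 256MB

struct BackingFile {
  int64_t size = 0;      // current file size
  int64_t used = 0;      // bytes actually occupied
  int64_t max_size = 0;  // configured limit
};

// Returns the number of bytes to grow by, or 0 if no growth is needed/possible.
int64_t GrowthNeeded(const BackingFile& f) {
  int64_t margin = std::min<int64_t>(f.size * 15 / 100, kGrowStep);
  int64_t unused = f.size - f.used;
  if (unused >= margin)      // point 1: enough slack already, do not grow
    return 0;
  if (f.size >= f.max_size)  // point 2: at the limit - not an error, just skip
    return 0;
  return std::min(kGrowStep, f.max_size - f.size);
}
```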
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: reduce pipelining latency by reusing existing shard fibers
To prove the benefits, run `./dfly_bench --pipeline=50 -n 20000 --ratio 0:1 --qps=0 --key_maximum=1`
Before: the average pipelining latency was 10ms
After: the average pipelining latency is 5ms.
Avg latency: pipelined_latency_usec / total_pipelined_squashed_commands
Also improved the counting of squashed commands, so that only commands that were actually squashed are counted.
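A very rough conceptual sketch of the reuse idea (std::thread stands in for a shard fiber and all names are made up): instead of spinning up a new execution context for every squashed batch, the batch is queued to a long-lived per-shard worker that already exists.

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

class ShardWorker {
 public:
  ShardWorker() : worker_([this] { Run(); }) {}

  ~ShardWorker() {
    {
      std::lock_guard lk(mu_);
      stop_ = true;
    }
    cv_.notify_one();
    worker_.join();
  }

  // Reuse the existing worker instead of creating a new execution context
  // for every squashed pipeline batch.
  void Enqueue(std::function<void()> batch) {
    {
      std::lock_guard lk(mu_);
      tasks_.push_back(std::move(batch));
    }
    cv_.notify_one();
  }

 private:
  void Run() {
    std::unique_lock lk(mu_);
    while (true) {
      cv_.wait(lk, [this] { return stop_ || !tasks_.empty(); });
      if (stop_ && tasks_.empty())
        return;
      auto task = std::move(tasks_.front());
      tasks_.pop_front();
      lk.unlock();
      task();  // execute the squashed commands on the shard's own worker
      lk.lock();
    }
  }

  std::mutex mu_;
  std::condition_variable cv_;
  std::deque<std::function<void()>> tasks_;
  bool stop_ = false;
  std::thread worker_;
};
```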
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix truncation of the timeout red dots on CI failures
* fix the deprecated use of `with timeout` warnings
* remove @pytest.mark.dbg_only as it doesn't exist
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
* chore(generic_family): Fix bad data format error in the RESTORE command
Signed-off-by: Stepan Bagritsevich <bagr.stepan@gmail.com>
* chore(generic_family): Remove the error reply for OpStatus::WRONG_TYPE in the RESTORE command, since it is no longer in use
Signed-off-by: Stepan Bagritsevich <bagr.stepan@gmail.com>
---------
Signed-off-by: Stepan Bagritsevich <bagr.stepan@gmail.com>
There are some problematic flows. First, we did not handle deletions, so all sorts of consistency issues could arise while calling DbSlice::Traverse() and DbSlice::Del(). Second, we did not handle FlushAll (same as before: Traverse() preempts and FlushAll() kicks in). Third, we did not handle expirations.
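To illustrate the kind of defense this requires (a conceptual sketch with made-up types, not the actual DbSlice API): state captured before a preemption point has to be re-validated after it, e.g. a flush-epoch check for FlushAll and a re-lookup of keys that may have been deleted or expired in the meantime.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical stand-ins for the real cursor/DbSlice types.
struct Cursor {
  uint64_t token = 0;  // 0 means "start" and also "end of traversal"
};

struct Db {
  uint64_t flush_epoch = 0;  // bumped by FlushAll
  // Stub for the real traversal, which may preempt (yield the fiber) inside.
  Cursor Traverse(Cursor /*c*/, size_t /*count*/, std::vector<std::string>* /*out*/) {
    return Cursor{};  // this stub reports "done" immediately
  }
};

void ScanAll(Db* db) {
  Cursor cursor;
  uint64_t epoch = db->flush_epoch;
  std::vector<std::string> batch;
  while (true) {
    batch.clear();
    Cursor next = db->Traverse(cursor, 64, &batch);  // may yield to other fibers
    if (db->flush_epoch != epoch) {
      // FlushAll ran while we were preempted: the old cursor is meaningless.
      cursor = Cursor{};
      epoch = db->flush_epoch;
      continue;
    }
    // Keys in `batch` may have been deleted or expired meanwhile, so consumers
    // must look them up again rather than hold raw entry pointers.
    if (next.token == 0)
      break;
    cursor = next;
  }
}
```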
---------
Signed-off-by: kostas <kostas@dragonflydb.io>