This PR changes list_family.
zset_family has not been changed yet - the code is just moved to the anonymous namespace.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The interface around DenseLinkKey is confusing: SetObject
works only for non-link objects.
Added assert to catch these issues in the future.
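For illustration only, a minimal sketch of the invariant the assert protects, using made-up names rather than the real DensePtr/DenseLinkKey API:

```cpp
#include <cassert>
#include <cstdint>

// Toy tagged pointer: the low bit marks whether the slot holds a link
// (a chained DenseLinkKey-style entry) or a plain object.
struct SlotPtr {
  uintptr_t bits = 0;

  bool IsLink() const { return bits & 1u; }

  void SetObject(void* obj) {
    // SetObject is only valid for non-link slots; overwriting a link would
    // silently drop the chained entry, which is what the new assert catches.
    assert(!IsLink());
    bits = reinterpret_cast<uintptr_t>(obj);
  }
};
```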
Fixes #3973
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: get rid of MutableSlice
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: comments
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The test is flaky because it relies on the producer (the pytest) sending a bunch of commands fast enough, before they get dispatched synchronously, so I increased the load.
It appears that newer versions of the GitHub runner require more memory. Some cases of the test test_rss_used_mem_gap allocate 6.5-7 GB of memory, leaving barely 0.5 GB to the runner (7.5 GB available in total), which sometimes causes the instance to run out of memory.
For some of our commands we need to inject another transaction and another SinkReplyBuilder.
This results in error-prone injections of temporary objects into ConnectionContext.
Most commands just need a Transaction and a SinkReplyBuilder, so let's pass them explicitly.
The final goal is to remove the Transaction and reply_builder fields from ConnectionContext.
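A sketch of the direction, with illustrative declarations rather than the exact Dragonfly signatures:

```cpp
struct CmdArgList;
struct ConnectionContext;
struct Transaction;
struct SinkReplyBuilder;

// Before: the handler reached into ConnectionContext for its transaction and
// reply builder, so callers had to inject temporary objects into the context.
void Get(CmdArgList args, ConnectionContext* cntx);

// After: the two dependencies most commands need are passed explicitly, and
// nothing has to be injected into ConnectionContext.
void Get(CmdArgList args, Transaction* tx, SinkReplyBuilder* builder);
```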
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
This PR introduces the "DEBUG RECVSIZE ENABLE|DISABLE|tid"
command, which allows tracking of request sizes.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: remove ToUpper calls in main_service
Also, test for IsPaused() first to avoid doing more checks in the common case.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
ConnectionContext.reply_builder can be injected and replaced by the service logic.
Before, dragonfly_connection accessed it via cc_->reply_builder in some places,
which led it to access the injected object. Moreover, EVAL commands can be offloaded
to another thread, and that thread could inject the object, making access to cc_->reply_builder_
not thread-safe.
Now dragonfly_connection copies aside the reply_builder_ pointer and uses only this pointer for communicating with the client.
Also, remove redundant arguments.
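A simplified sketch of the change, with stand-in types rather than the real facade classes:

```cpp
#include <string_view>

struct SinkReplyBuilder {
  void SendSimpleString(std::string_view s) { /* write to the socket */ }
};

struct ConnectionContext {
  SinkReplyBuilder* reply_builder;  // may be swapped by the service logic
};

// The connection captures the builder pointer once and replies only through
// that local copy, so a later injection into cc->reply_builder from another
// thread (e.g. an offloaded EVAL) cannot affect this connection's writes.
void HandleRequests(ConnectionContext* cc) {
  SinkReplyBuilder* builder = cc->reply_builder;  // copied aside once
  // ... parse and dispatch requests ...
  builder->SendSimpleString("OK");  // never re-read cc->reply_builder here
}
```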
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix: Fix `test_flushall_in_full_sync`
This test failed in CI many times. The issue was that we reach stable
sync too quickly, and miss the full sync stage.
I changed the seeder to add 100k keys (instead of 30k) so that the stage
takes longer.
* StaticSeeder
Until now, we only tested Dragonfly against Redis 6.2. It appears that
something has changed in the way Redis sends stable sync commands, and
now they also forward `MULTI` and `EXEC` as part of their replication.
Since we do not allow all commands to run under `MULTI`/`EXEC`,
specifically `SELECT`, a Dragonfly replica of such servers failed these
commands and became inconsistent with the data on the master.
The proposed fix is to simply ignore (i.e. not execute) `MULTI`/`EXEC`
coming from a Redis/Valkey master, and run the commands within those
transactions individually, like we do for other transactions.
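A minimal sketch of the replica-side filtering (a hypothetical helper, not the actual replication code):

```cpp
#include <string_view>

// Commands streamed from a Redis/Valkey master are executed one by one;
// the MULTI/EXEC wrappers themselves are simply dropped.
bool ShouldExecuteFromMaster(std::string_view cmd) {
  return cmd != "MULTI" && cmd != "EXEC";
}
```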
To test this, we randomly choose one of 3 available installed Redis/Valkey
binaries and run the test against it.
* chore: Add `--allocator_tracker` for default tracking
Before, in order to use the allocation tracker, one had to issue a `MEMORY
TRACK` command. This flag is identical to that, but allows starting
Dragonfly with certain ranges without issuing a command.
While here, fix a bug. Apparently, `absl::InlinedVector<>` has a bug in
the implementation of `max_size()`, and so in practice we did not limit
the number of trackers. I switched to using `capacity()` instead, which I
tested and which works well.
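A sketch of the corrected limit check, assuming an InlinedVector-backed list of tracked ranges (the names and the constant are illustrative, not the real allocation_tracker code):

```cpp
#include <cstddef>

#include "absl/container/inlined_vector.h"

struct TrackedRange {
  size_t lower = 0, upper = 0;
};

constexpr size_t kMaxTrackers = 4;
absl::InlinedVector<TrackedRange, kMaxTrackers> tracked_ranges;

// Checking against capacity() keeps the vector within its inline storage and
// enforces the intended cap; per the commit, the earlier max_size() check
// never actually limited the number of trackers.
bool AddTrackedRange(const TrackedRange& r) {
  if (tracked_ranges.size() >= tracked_ranges.capacity())
    return false;
  tracked_ranges.push_back(r);
  return true;
}
```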
Notes:
1. Currently the flag always adds 100% "sampling"; we can extend that in
the future if need be
2. I added the flag in `dfly_main.cc` with custom initialization,
because it's low level, and I couldn't get it reasonably working with
changes only to `allocation_tracker.cc`
* fixes
The problem: we used file writes in non-direct mode when writing snapshots in epoll mode.
As a result, lots of data was cached in OS memory. Then, during the rename operation,
when we rename "xxx.dfs.tmp" to "xxx.dfs", the OS flushes the file caches and the thread
is stuck in the rename system call for a long time.
The fix: use DIRECT mode and avoid caching the data in OS caches at all.
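For illustration, a sketch of opening the snapshot file with O_DIRECT (not the actual Dragonfly I/O layer, which goes through its own file abstraction):

```cpp
#define _GNU_SOURCE  // for O_DIRECT on Linux/glibc
#include <fcntl.h>

// Writes through this descriptor bypass the page cache, so the later rename
// of "xxx.dfs.tmp" to "xxx.dfs" has no gigabytes of dirty pages to flush.
// Note: O_DIRECT requires the buffer address, offset and length to be aligned,
// typically to the filesystem block size (often 512 or 4096 bytes).
int OpenSnapshotFile(const char* path) {
  return open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);
}
```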
Fixes #3895
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
BITPOS returns 0 for non-existent keys according to Redis's
implementation.
BITPOS allows only 0 and 1 as the bit mode argument.
Signed-off-by: Denis K <kalantaevskii@gmail.com>
Use an intrusive queue that allows batching of scheduling calls instead of handling each call separately.
This optimization improves latency and throughput by 3-5%.
In addition, we expose batching statistics in the INFO transaction section.
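An illustrative sketch of the batching idea (not Dragonfly's actual queue): the intrusive hook lets producers enqueue without per-call allocation, and the consumer drains a whole batch of pending scheduling calls with a single atomic exchange:

```cpp
#include <atomic>

struct PendingTx {
  PendingTx* next = nullptr;  // intrusive hook, no extra allocation per call
};

struct ScheduleQueue {
  std::atomic<PendingTx*> head{nullptr};

  void Push(PendingTx* t) {  // producer side, one call per scheduling request
    t->next = head.load(std::memory_order_relaxed);
    while (!head.compare_exchange_weak(t->next, t, std::memory_order_release,
                                       std::memory_order_relaxed)) {
    }
  }

  // The consumer takes the whole batch at once and processes the entries
  // together (the list is LIFO; order can be restored by reversing it).
  PendingTx* DrainAll() {
    return head.exchange(nullptr, std::memory_order_acquire);
  }
};
```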
Signed-off-by: Roman Gershman <roman@dragonflydb.io>