* chore(regression): test bptree on regression pytests
1. stop passing the use_zset_tree flag, as it is true by default
2. fix the CI config to run the replication tests
3. change the replication test seeder to sometimes add more than 128 values
to a zset, to exercise the B+ tree impl
Signed-off-by: adi_holden <adi@dragonflydb.io>
entries_read and lag have been added to the output of XINFO GROUPS since Redis 7.0. This patch adds support for both to Dragonfly. It also fixes a bug that set an incorrect initial value for entries_read when a consumer group is created.
Fixes #1948
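Conceptually, lag is derived from the stream's entries_added counter and the group's entries_read counter. A minimal Python sketch of the relationship, with illustrative names that are not Dragonfly's actual internals:

```python
# Illustrative model of the XINFO GROUPS fields (not Dragonfly's real code).
# Per the Redis 7.0 docs: lag = stream.entries_added - group.entries_read,
# when that difference can be computed exactly.

class StreamGroup:
    def __init__(self, entries_read=0):
        self.entries_read = entries_read

class Stream:
    def __init__(self):
        self.entries_added = 0
        self.groups = {}

    def add(self, n=1):
        self.entries_added += n

    def create_group(self, name):
        # The kind of bug fixed here: when a group is created at the end of
        # the stream ('$'), its initial entries_read must account for the
        # entries already added, otherwise lag starts out wrong.
        self.groups[name] = StreamGroup(entries_read=self.entries_added)

    def lag(self, name):
        return self.entries_added - self.groups[name].entries_read

s = Stream()
s.add(5)
s.create_group("g1")   # created at '$': nothing pending, lag is 0
s.add(3)               # 3 new entries the group has not consumed yet
print(s.lag("g1"))     # -> 3
```

With a correct initial entries_read, a freshly created group reports lag 0 rather than the full stream length.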
chore: reduce double encoding for listpacks
Following the memory improvements in Redis 7, use the double-conversion library to
represent double values with less space in listpacks.
The change replaces plain sprintf inside zzlInsertAt with the double-conversion
library. This requires moving zzlInsertAt into the Dragonfly codebase.
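The space saving comes from emitting the shortest string that still round-trips the double, instead of a fixed-precision sprintf format. Python's repr already implements shortest round-trip formatting, so the effect can be illustrated without the C++ library:

```python
# Illustration of why shortest round-trip formatting saves listpack bytes.
# '%.17g' (sprintf-style) always round-trips but often wastes characters;
# a double-conversion-style shortest representation does not.
values = [3.14, 0.1, 2.5]
for v in values:
    fixed = "%.17g" % v        # e.g. 3.14 -> '3.1400000000000001'
    shortest = repr(v)         # e.g. 3.14 -> '3.14'
    # Both strings parse back to the exact same double...
    assert float(fixed) == v and float(shortest) == v
    # ...but the shortest form is never longer.
    print(v, len(fixed), len(shortest))
```

Since listpack entries store the textual representation inline, shorter strings translate directly into smaller listpacks.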
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Also, bring back the default max listpack entries count for zset to 128.
The reason: I've added some optimizations that improved listpack
performance, and I would like to write an article about it, which requires
comparing Dragonfly to Redis 7, where this setting is 128 by default.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. ExpireIfNeeded was unjustifiably high in the profile's top-k table.
Let's move the initial condition into an inline function to reduce its impact.
2. Reserve space for hmap/zset if multiple items are added.
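The pattern behind (1), keeping the cheap common-case check inline and calling the expensive path only when needed, can be sketched as follows (hypothetical names, not Dragonfly's actual code):

```python
import time

def expire_if_needed(entry, now):
    # Inline fast path: most keys have no expiry set, so this cheap check
    # filters them out before any heavy work is done.
    if entry.get("expire_at") is None:
        return False
    return _expire_slow_path(entry, now)

def _expire_slow_path(entry, now):
    # The rarely taken branch: compare timestamps, remove the key, etc.
    return entry["expire_at"] <= now

now = time.time()
print(expire_if_needed({"value": 1}, now))                        # -> False
print(expire_if_needed({"value": 1, "expire_at": now - 5}, now))  # -> True
```

In C++ the same split lets the compiler inline the trivial check at every call site while keeping the cold path out of line.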
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The issue was that Heartbeat would run every 1ms, while the for-loop took less than 1ms to finish. Therefore, the memory pool would not adjust, and items would not be evicted from the store. By doubling the number of elements created in the for-loop, we give the first heartbeat enough time to run and adjust the available memory, which causes the evictions to happen.
* chore: Avoid allocating unique_members arrays when we have 1 or 2 members
* fix: pr fixes
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Integrate a wonderful library, fast_float, by Daniel Lemire.
It achieves a 2x improvement on x86_64:
BM_ParseFastFloat 663 ns 663 ns 1049085
BM_ParseDoubleAbsl 1358 ns 1358 ns 523853
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
A regular DenseSet insertion first checks for uniqueness and then inserts a new element.
Sometimes we know that the new element is new and we can insert it without checking for
uniqueness first.
Also, pass the hashcode into the internal functions so that we can save some hash computations.
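The idea can be sketched with a toy hash set (hypothetical names, not DenseSet's real API): a regular insert first probes for the element, while a caller that already knows the element is new can skip the probe, and passing the precomputed hash avoids re-hashing inside the helpers.

```python
class TinySet:
    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def add(self, item):
        h = hash(item)                     # computed once, reused below
        if self._find(item, h) is not None:
            return False                   # uniqueness check: a full probe
        self._insert_unchecked(item, h)
        return True

    def add_unique(self, item, h=None):
        # Caller guarantees `item` is not present: skip the probe entirely.
        if h is None:
            h = hash(item)
        self._insert_unchecked(item, h)

    def _find(self, item, h):
        bucket = self.buckets[h % len(self.buckets)]
        return item if item in bucket else None

    def _insert_unchecked(self, item, h):
        self.buckets[h % len(self.buckets)].append(item)

s = TinySet()
s.add("a")                     # probe, then insert
s.add_unique("b", hash("b"))   # no duplicate probe, hash passed through
```

Bulk-loading paths (e.g. replication or RDB load) are the typical callers that can guarantee uniqueness up front.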
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* bug(server): global command stalls on server load with pipeline mode
Fixes #1797
the bug: a global command is unable to schedule into the txq under a high load of pipelined commands; only after the load finishes does the global transaction get scheduled. The reason is that when we start a global transaction, we take the shard lock and all other transactions start entering the txq. They compete with the global tx on the order in which they are inserted into the queue, to preserve transaction atomicity. Because the global tx needs to be inserted into all shard queues, its chance of scheduling in order with all the other transactions is low.
the solution: lock the global transaction inside the schedule-in-shard step. Locking closer to scheduling decreases the number of transactions in the queue, so correct ordering now has a higher chance of success.
Signed-off-by: adi_holden <adi@dragonflydb.io>
* bug(server): zadd wrong insert with non-unique members
the bug: when calling ZADD with non-unique members, the server would
update the zset with the highest score instead of the last score
given.
e.g.
zadd myzset 2 a 1 a
will result in member a having score 2, while the expected score is 1.
the fix: if the members in the ZADD command are not unique, then we do not
sort the members. As a result, in this case we do not get the listpack performance optimization.
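The expected last-writer-wins semantics for duplicate members can be modeled with a plain dict built from the score/member pairs:

```python
# ZADD semantics for duplicate members: the last score wins.
# Pairs below correspond to: zadd myzset 2 a 1 a
args = [("a", 2.0), ("a", 1.0)]
scores = {}
for member, score in args:
    scores[member] = score        # later pairs overwrite earlier ones
print(scores["a"])                # -> 1.0
```

Sorting the pairs by member before applying them would lose this ordering guarantee, which is exactly why the fix skips the sort when duplicates are present.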
Signed-off-by: adi_holden <adi@dragonflydb.io>