* Migrate might fail if dispatch_fiber is active. In that case, do not crash but return false to indicate that the migration was not successful.
* After we migrate, we might find ourselves with the socket closed (because of the shutdown signal process/flow). We need to check that the socket is open; if it's not, it means that it was shut down by the signal flow (because after we finish with DispatchTracker, we start iterating over all of the connections in all of the shards and shut them down).
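A minimal sketch of the resulting flow; `dispatch_fiber_active_`, `MigrateTo` and `IsOpen` are illustrative stand-ins for the actual connection/socket members:

```cpp
// Hedged sketch - member names are illustrative, not the real API.
bool Connection::MigrateTo(ProactorBase* dest) {
  if (dispatch_fiber_active_)  // migration is unsafe while dispatch_fiber runs
    return false;              // report failure instead of crashing

  socket_->MigrateTo(dest);

  // The shutdown signal flow may have closed the socket concurrently:
  // after DispatchTracker completes, connections on all shards are shut
  // down. Verify the socket is still open before touching it again.
  if (!socket_->IsOpen())
    return false;

  return true;
}
```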
The number of keys in an _incoming_ migration indicates how many keys
were received, while for an _outgoing_ migration it shows the total number.
Combining the two lets the control plane compute a progress percentage.
This slightly modifies the format of the response.
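For illustration, a control plane could combine the two counters like this (hypothetical helper, not part of this PR):

```cpp
#include <cstdint>

// Hypothetical helper: progress = received / total * 100.
double MigrationProgressPct(uint64_t incoming_keys_received,
                            uint64_t outgoing_keys_total) {
  if (outgoing_keys_total == 0)
    return 100.0;  // nothing to migrate
  return 100.0 * static_cast<double>(incoming_keys_received) /
         static_cast<double>(outgoing_keys_total);
}
```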
Fixes #2756
fix: authorize the http connection to call DF commands
The assumption is that basic-auth already covers the authentication part.
And thanks to @sunneydev for finding the bug and providing the tests.
The tests actually uncovered another bug where we might parse partial HTTP requests.
This one is handled by https://github.com/romange/helio/pull/243
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
There are a few use cases that temporarily block connections:
* `CLIENT PAUSE` command
* replica takeover
* cluster config / migration
Before this PR, these commands interfered with replication / migration
connections, which could cause delays and even deadlocks.
We do not want such internal connections to ever be blocked, and it's ok
to assume they won't issue regular "data" commands. As such, this PR
disables blocking for any command issued via the admin port, and once
merged we'll recommend issuing replication and cluster migration commands
via the admin port.
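A minimal sketch of the resulting check; the names below are illustrative, not the actual pause implementation:

```cpp
// Illustrative only: admin-port connections bypass all pause states.
bool ShouldPause(const Connection& conn, bool pause_in_effect) {
  if (conn.IsPrivileged())  // established via the admin port
    return false;           // never block replication/migration connections
  return pause_in_effect;   // regular connections honor CLIENT PAUSE etc.
}
```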
Fixes #2703
* chore: implement path mutation for JsonFlat
This is done by converting the flat blob into a mutable C++ JSON object
and then building a new flat blob.
Also, add JsonFlat encoding to CompactObject class.
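Roughly, the mutation path follows the sketch below; `FromFlat`/`ToFlat` are assumed names for the conversion routines, and jsoncons' JSON Pointer stands in for the real path engine:

```cpp
#include <jsoncons/json.hpp>
#include <jsoncons_ext/jsonpointer/jsonpointer.hpp>

// FromFlat/ToFlat are assumed names for the flat-blob conversion routines.
std::string SetPath(std::string_view flat_blob, const std::string& path,
                    const jsoncons::json& value) {
  jsoncons::json doc = FromFlat(flat_blob);          // 1. decode the flat blob
  jsoncons::jsonpointer::replace(doc, path, value);  // 2. mutate in place
  return ToFlat(doc);                                // 3. build a new flat blob
}
```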
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
This commit generalizes the mechanism of running transaction callbacks during scheduling, removing the need for the specialized ScheduleUniqueShard/RunQuickie paths. Instead, transactions can now be run during ScheduleInShard - called "immediate" runs - if the transaction is concluding and either only a single shard is active or the operation can be safely repeated if scheduling fails (idempotent commands, like MGET).
Updates transaction stats to mirror the new changes more closely.
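In pseudologic, the eligibility check reads roughly as follows (method names are illustrative):

```cpp
// Illustrative condition for an "immediate" run inside ScheduleInShard.
bool CanRunImmediately(const Transaction& tx) {
  if (!tx.IsConcluding())
    return false;
  // Safe if only one shard is active, or if the operation can be repeated
  // after a failed scheduling attempt (idempotent commands, e.g. MGET).
  return tx.GetUniqueShardCnt() == 1 || tx.IsIdempotent();
}
```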
---------
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
Send the journal LSN to the replica and compare it against the number of records received on the replica side.
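Conceptually, the replica-side verification boils down to something like the following (names are illustrative):

```cpp
#include <glog/logging.h>

// Illustrative check: the LSN shipped by the master must match the number
// of journal records the replica has executed so far.
void CheckJournalLsn(uint64_t master_lsn, uint64_t records_executed) {
  LOG_IF(ERROR, master_lsn != records_executed)
      << "Journal mismatch: master lsn=" << master_lsn
      << " replica executed=" << records_executed;
}
```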
Signed-off-by: kostas <kostas@dragonflydb.io>
Co-authored-by: adi_holden <adi@dragonflydb.io>
* chore: refactor StringFamily::Set to use CmdArgParser
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: address comments
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix(flushslots): Don't miss updates in `FLUSHSLOTS`
This PR registers for PreUpdate() from inside the `FLUSHSLOTS` fiber so
that any attempt to update a to-be-deleted key will work as expected
(first delete, then apply the change).
This fixes several issues:
* Any attempt to touch bucket B (like inserting a key), where another key
  in B should be removed, caused us to _not_ remove the latter key
* Commands which use an existing value but do not completely override it,
  like `APPEND` and `LPUSH`, did not treat the key as removed but instead
  used the original value
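The shape of the hook, with hypothetical names for the slot set and the deletion call:

```cpp
// Hypothetical sketch of the PreUpdate hook registered by the FLUSHSLOTS
// fiber: delete the stale key first, then let the caller apply its change.
void OnPreUpdate(DbSlice* slice, std::string_view key) {
  if (slots_to_flush.Contains(KeySlot(key)))
    slice->Del(key);  // first delete ...
  // ... then the pending update proceeds against a clean entry.
}
```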
Fixes #2771
* fix flushslots syntax in test
* EXPECT_EQ(key:0, xxxx)
* dbsize
* chore: add SBF data structure
Based on https://gsd.di.uminho.pt/members/cbm/ps/dbloom.pdf
The data-structure itself is a growing list of bloom filters,
where the next filter has exponentially larger capacity with exponentially tighter error bound.
Exist() goes over all the filters, and it's enough that at least one of them returns a positive result.
For Add(), we ensure that none of the existing filters contain the element, and that the last
filter being filled does not cross its maximum designated capacity.
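A condensed sketch of the two operations (class layout and helper names are illustrative):

```cpp
#include <string_view>
#include <vector>

// Illustrative layout: a growing list of bloom filters where each new layer
// has exponentially larger capacity and an exponentially tighter error bound.
class SBF {
  std::vector<Bloom> filters_;  // Bloom is the underlying filter class

 public:
  bool Exist(std::string_view item) const {
    for (const auto& f : filters_)  // one positive layer suffices
      if (f.Exist(item))
        return true;
    return false;
  }

  bool Add(std::string_view item) {
    if (Exist(item))                    // no existing filter may contain it
      return false;
    if (filters_.back().IsFull())       // respect the designated capacity
      filters_.push_back(NextLayer());  // grow: larger, tighter error bound
    return filters_.back().Add(item);
  }
};
```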
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: add bloom filter class
Based on https://github.com/jvirkki/libbloom implementation.
Unlike the original, our implementation uses the XXH3 hash function to seed bit-index generation.
In addition, it assumes mi_malloc interface for dynamic allocation.
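For illustration, the probe positions can be derived from two XXH3 digests in the classic double-hashing style (member names and the seeding scheme shown here are illustrative):

```cpp
#include <xxhash.h>

// Illustrative double hashing: two XXH3 digests generate all k bit indexes.
bool Bloom::Exist(std::string_view item) const {
  uint64_t h1 = XXH3_64bits(item.data(), item.size());
  uint64_t h2 = XXH3_64bits_withSeed(item.data(), item.size(), h1);
  for (unsigned i = 0; i < hash_cnt_; ++i) {
    uint64_t pos = (h1 + i * h2) % bit_len_;  // i-th probe position
    if (!IsSet(pos))
      return false;  // a single unset bit rules the element out
  }
  return true;
}
```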
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>