* chore: clean up REPLTAKEOVER flow
1. Factor out the catchup function.
2. Simplify the flow and make the second parameter an integer.
3. Return OK if the server is already a master (and do nothing underneath).
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* Going to rewrite hllSparseSet
* Add support for sparse HLL
* Add support for PFAdd Sparse
* Add support for Sparse HLL PFADD
---------
Signed-off-by: azuredream <zhaozixuan67@gmail.com>
* chore: LockTable tracks fingerprints of keys
It's a first step that should simplify dependencies in the many places where
we currently need to keep key strings around just for locking. A second step will be
to reduce the CPU load of multi-key operations like MSET by precomputing the fingerprints once.
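A minimal sketch of the direction (hypothetical types and helpers, not the actual Dragonfly LockTable API):
```cpp
#include <cstdint>
#include <functional>
#include <string_view>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: lock ownership is tracked per 64-bit key fingerprint
// instead of per key string, so callers no longer need to keep the strings.
using Fingerprint = uint64_t;

inline Fingerprint KeyFingerprint(std::string_view key) {
  return std::hash<std::string_view>{}(key);  // any stable 64-bit hash works
}

class LockTable {
 public:
  // Returns true if the fingerprint was not locked before.
  bool Acquire(Fingerprint fp) { return ++counts_[fp] == 1; }

  void Release(Fingerprint fp) {
    auto it = counts_.find(fp);
    if (it != counts_.end() && --it->second == 0) counts_.erase(it);
  }

 private:
  std::unordered_map<Fingerprint, unsigned> counts_;
};

// For multi-key commands like MSET, the fingerprints could be computed once
// and reused for both locking and unlocking (the "second step" mentioned above).
inline std::vector<Fingerprint> PrecomputeFingerprints(
    const std::vector<std::string_view>& keys) {
  std::vector<Fingerprint> fps;
  fps.reserve(keys.size());
  for (std::string_view k : keys) fps.push_back(KeyFingerprint(k));
  return fps;
}
```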
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
A self-laundering iterator will enable us to, eventually, yield from fibers while holding an iterator. For example:
```cpp
auto it1 = db_slice.Find(...);
Yield(); // Until now - this could have invalidated `it1`
auto it2 = db_slice.Find(...);
```
Why is this a good idea? Because it will enable yielding inside PreUpdate(), which will allow writing huge entries to disk/network in small chunks, eliminating the need to allocate huge blocks of memory just for serialization.
It will probably unlock future developments as well, since yielding can be useful in other contexts.
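A minimal sketch of how such an iterator could stay valid across yields (toy types, not the actual DbSlice API): instead of caching a raw pointer, it caches the key and re-finds ("launders") the entry on every access.
```cpp
#include <string>
#include <unordered_map>
#include <utility>

struct Entry {
  std::string value;
};

// Toy stand-in for the real table; only Find() matters for the sketch.
struct Table {
  std::unordered_map<std::string, Entry> map;

  Entry* Find(const std::string& key) {
    auto it = map.find(key);
    return it == map.end() ? nullptr : &it->second;
  }
};

// "Self-laundering" iterator: it never holds a raw pointer across a yield.
// Each dereference re-finds the entry, so rehashing or eviction that happened
// while the fiber was suspended cannot leave it dangling.
class LaunderingIterator {
 public:
  LaunderingIterator(Table* table, std::string key)
      : table_(table), key_(std::move(key)) {}

  Entry* operator->() const { return table_->Find(key_); }
  explicit operator bool() const { return table_->Find(key_) != nullptr; }

 private:
  Table* table_;
  std::string key_;
};
```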
* Migrate might fail if dispatch_fiber is active. In that case, do not crash but return false to indicate that the migration was not successful.
* After we migrate, we might find ourselves with the socket closed (because of the shutdown signal process/flow). We need to check that the socket is open. If it's not, it means that it was shut down by the signal flow (because after we finish with DispatchTracker, we start iterating over all of the connections in all of the shards and shut them down).
The number of keys in an _incoming_ migration indicates how many keys
were received, while for _outgoing_ it shows the total number. Combining
the two can provide the control plane with a progress percentage.
This slightly modifies the format of the response.
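For example, a control plane could derive the progress as follows (hypothetical field names; the response only specifies keys received on the incoming side and the total on the outgoing side):
```cpp
#include <cstdint>

// Hypothetical helper: progress of a slot migration as a percentage.
double MigrationProgressPct(uint64_t incoming_keys_received,
                            uint64_t outgoing_keys_total) {
  if (outgoing_keys_total == 0)
    return 100.0;  // nothing to migrate
  return 100.0 * static_cast<double>(incoming_keys_received) /
         static_cast<double>(outgoing_keys_total);
}
```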
Fixes #2756
fix: authorize the http connection to call DF commands
The assumption is that basic-auth already covers the authentication part.
And thanks to @sunneydev for finding the bug and providing the tests.
The tests actually uncovered another bug where we may parse partial http requests.
This one is handled by https://github.com/romange/helio/pull/243
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
There are a few use cases which cause a temporary block of connections:
* `CLIENT PAUSE` command
* replica takeover
* cluster config / migration
Before this PR, these commands interfered with replication / migration
connections, which could cause delays and even deadlocks.
We do not want such internal connections to ever be blocked, and it's ok
to assume they won't issue regular "data" commands. As such, this PR
disables blocking of commands issued via an admin port, and once merged
we'll recommend issuing replication and cluster migration via the admin
port.
Fixes #2703
* chore: implement path mutation for JsonFlat
This is done by converting the flat blob into a mutable C++ json object
and then building a new flat blob.
Also, add JsonFlat encoding to CompactObject class.
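A minimal sketch of the read-modify-rebuild pattern, with a serialized JSON string and nlohmann::json standing in for the flat blob and the mutable C++ json (not the actual Dragonfly types):
```cpp
#include <string>

#include <nlohmann/json.hpp>  // stand-in JSON library for this sketch

// Decode the "flat" form into a mutable document, mutate the path in place,
// then rebuild a fresh "flat" form from scratch.
std::string MutatePath(const std::string& flat_blob, const std::string& pointer,
                       const nlohmann::json& new_value) {
  nlohmann::json doc = nlohmann::json::parse(flat_blob);     // flat -> mutable
  doc[nlohmann::json::json_pointer(pointer)] = new_value;    // apply mutation
  return doc.dump();                                         // mutable -> new flat blob
}
```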
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
This commit generalizes the mechanism of running transaction callbacks during scheduling, removing the need for the specialized ScheduleUniqueShard/RunQuickie. Instead, transactions can now be run during ScheduleInShard - called "immediate" runs - if the transaction is concluding and either only a single shard is active or the operation can be safely repeated if scheduling fails (idempotent commands, like MGET).
Updates transaction stats to mirror the new changes more closely.
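A minimal sketch of the decision described above (illustrative parameters, not the actual Transaction members):
```cpp
// An "immediate" run during ScheduleInShard is allowed only when the
// transaction is concluding, and only when it is safe to repeat the operation
// if scheduling fails: either a single shard is active, or the command is
// idempotent (e.g. MGET).
bool CanRunImmediately(bool is_concluding, unsigned active_shards, bool is_idempotent) {
  return is_concluding && (active_shards == 1 || is_idempotent);
}
```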
---------
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
Send the journal LSN to the replica and compare it against the number of records received on the replica side.
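A minimal sketch of the replica-side check (illustrative names; the commit only states that the LSN is compared against the number of records received):
```cpp
#include <cstdint>

// The LSN reported by the master should match the number of journal records
// the replica has received so far; a mismatch indicates lost or extra records.
bool JournalInSync(uint64_t master_lsn, uint64_t replica_records_received) {
  return master_lsn == replica_records_received;
}
```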
Signed-off-by: kostas <kostas@dragonflydb.io>
Co-authored-by: adi_holden <adi@dragonflydb.io>