* fix(cluster-migration): Support cancelling migration right after starting it
This fixes a few small places, but most importantly it no longer allows a
migration to start before both the outgoing and incoming sides have received
the updated config. This solves a few edge cases.
Fixes #2968
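A minimal sketch of the kind of guard this implies; the names below are illustrative, not Dragonfly's actual internals:

```cpp
#include <atomic>

// Hypothetical handshake state for one slot migration.
struct MigrationHandshake {
  std::atomic<bool> outgoing_config_applied{false};
  std::atomic<bool> incoming_config_applied{false};

  // Data transfer may begin only once both nodes have
  // acknowledged the updated cluster config.
  bool CanStart() const {
    return outgoing_config_applied.load() && incoming_config_applied.load();
  }
};
```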
* add TODO
* fix test
* gh comments and fixes
* add comment
The number of keys in an _incoming_ migration indicates how many keys
have been received so far, while for an _outgoing_ migration it shows the
total number of keys to move. Combining the two gives the control plane a
progress percentage.
This slightly changes the format of the response.
Fixes #2756
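For example, a control plane could combine the two counters like this (a sketch; the parameter names are illustrative):

```cpp
#include <cstdint>

// incoming_keys: keys received so far, reported by the incoming side.
// outgoing_total: total keys to move, reported by the outgoing side.
double MigrationProgressPercent(uint64_t incoming_keys, uint64_t outgoing_total) {
  if (outgoing_total == 0)
    return 100.0;  // nothing to migrate
  return 100.0 * static_cast<double>(incoming_keys) /
         static_cast<double>(outgoing_total);
}
```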
* feat(cluster): Add `--cluster_id` flag
This flag sets the unique ID of a node in a cluster.
It is undefined behavior (and bad) to assign the same ID to multiple nodes
in the same cluster.
If unset (the default), the `master_replid` (previously known as `master_id`) is used.
Fixes #2643
Related to #2636
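A minimal sketch of the fallback, assuming Abseil-style flags; the helper function is hypothetical:

```cpp
#include <string>

#include "absl/flags/flag.h"

ABSL_FLAG(std::string, cluster_id, "", "Unique ID of this node within the cluster");

// Hypothetical helper: prefer the flag, fall back to master_replid.
std::string NodeId(const std::string& master_replid) {
  std::string id = absl::GetFlag(FLAGS_cluster_id);
  return id.empty() ? master_replid : id;
}
```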
* gh comments
* oops - revert line removal
* fix
* replica
* disallow cluster_node_id in emulated mode
* fix replica test
* feat(cluster): add tx execution in cluster_shard_migration
refactor(replication): move code that is common to cluster and replica
replication into a separate file; add a full-sync-cut command
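A rough sketch of how a full-sync-cut marker could be consumed on the receiving side; the phase names and the command spelling are illustrative:

```cpp
#include <string>

// Hypothetical phases of a shard migration stream.
enum class Phase { kFullSync, kStableSync };

// A full-sync-cut marker tells the receiving shard that the snapshot is
// complete and that journal changes follow.
Phase HandleStreamCommand(Phase phase, const std::string& cmd) {
  if (phase == Phase::kFullSync && cmd == "FULL-SYNC-CUT")
    return Phase::kStableSync;
  return phase;
}
```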
* feat: add SLOT-MIGRATION-STATUS cmd for source node
implements #2232
add the ability to use SLOT-MIGRATION-STATUS without args
to print info about all migration processes on the current node
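A self-contained sketch of the with/without-args behavior; the `MigrationInfo` record is hypothetical:

```cpp
#include <optional>
#include <string>
#include <vector>

// Illustrative per-migration record; the real structure is internal.
struct MigrationInfo {
  std::string node_id;  // peer node
  std::string state;    // e.g. "FULL_SYNC" or "STABLE_SYNC"
};

// With an argument, report the migration involving that node; without one,
// report every migration the current node participates in.
std::vector<std::string> SlotMigrationStatus(
    const std::vector<MigrationInfo>& migrations,
    const std::optional<std::string>& node_id) {
  std::vector<std::string> lines;
  for (const auto& m : migrations) {
    if (node_id && m.node_id != *node_id)
      continue;
    lines.push_back(m.node_id + " " + m.state);
  }
  return lines;
}
```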
* feat(cluster): add command flow for slot migration process
Fixes #2295
The DFLYMIGRATE FLOW command was added to establish
a connection for every shard replication process.
The slow serialization step is a separate issue, so
for now only an eof_token is sent in reply to the
DFLYMIGRATE FLOW command.
The expected state for START-SLOT-MIGRATION is now FULL_SYNC.
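A rough sketch of the per-shard flow setup; the types are hypothetical, and only the eof_token is carried for now:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// One FLOW connection per source shard, so each shard's migration stream
// gets a dedicated channel.
struct FlowConnection {
  uint32_t shard_id;
  std::string eof_token;  // for now the reply carries only this token
};

std::vector<FlowConnection> EstablishFlows(uint32_t shard_count,
                                           const std::string& eof_token) {
  std::vector<FlowConnection> flows;
  flows.reserve(shard_count);
  for (uint32_t i = 0; i < shard_count; ++i)
    flows.push_back({i, eof_token});  // one DFLYMIGRATE FLOW call per shard
  return flows;
}
```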
The issue was that, sometimes, the ID generated for one of the nodes
contained the slot ID used in the test (either 5259 or 5260).
This caused the test to replace the "slot" part of the ID, which in turn
caused the node to think that it no longer owned any slots.
The client in this case is `redis.RedisCluster`.
To be doubly sure, I also looked at the actual packets and saw that the
client asks for `CLUSTER SLOTS`, and then, after the redistribution of
slots and a few `MOVED` replies, it asks for the new slots again.
This allows masters to send data for non-owned keys to their replicas,
which is useful when:
1. The config temporarily differs between master and replica
2. Preparing to take ownership of currently non-owned slots (in the upcoming migration feature)
Fixes #1319
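A minimal sketch of the ownership check this relaxes; the names are hypothetical:

```cpp
#include <bitset>
#include <cstddef>
#include <cstdint>

constexpr std::size_t kSlotCount = 16384;

// Client writes to a non-owned slot are rejected, but the master->replica
// replication link bypasses the check, so a replica accepts whatever its
// master streams.
bool AllowKeyedWrite(bool from_replication_link, uint16_t slot,
                     const std::bitset<kSlotCount>& my_slots) {
  if (from_replication_link)
    return true;  // the replica trusts the master's stream
  return my_slots.test(slot);
}
```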