**Background**
In v1.21.0 we introduced support for `--announce_ip`, which lets replicas
announce their public IP addresses.
Like Valkey, this uses `REPLCONF IP-ADDRESS` to pass the announced address
to the master.
**The issue**
Older Dragonfly releases (<1.21) did not support this feature: the master
side simply returned an error for such `REPLCONF` attempts. However, the
replica treated that error as fatal and aborted replication, making the
two versions incompatible with each other.
**The fix**
The fix is simple: if the master does not respect `REPLCONF IP-ADDRESS`,
just log an error and continue. We can make this mandatory again in the
future.
In addition, I added a regression test to make sure we stay backwards
compatible with v1.19.2. We'll bump this version up every once in a
while.
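The replica-side change lives in the C++ replication code; a minimal Python sketch of the idea, using a redis-py connection to the master (the function name and flow are illustrative, not the actual implementation):

```python
import logging
import redis

def announce_ip_best_effort(master: redis.Redis, ip: str) -> None:
    """Send REPLCONF IP-ADDRESS, but tolerate masters that predate the feature."""
    try:
        master.execute_command("REPLCONF", "IP-ADDRESS", ip)
    except redis.ResponseError as err:
        # Older masters (<1.21) reject the subcommand; log instead of failing replication.
        logging.error("Master did not accept REPLCONF IP-ADDRESS: %s", err)
```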
* chore: add timeout for replication sockets
Master will stop the replication flow if writes cannot make progress for more than K milliseconds.
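The actual mechanism is implemented on the master's replication sockets in C++; a rough Python sketch of the same idea, a write deadline after which the flow is dropped (the constant and names are illustrative):

```python
import socket

WRITE_TIMEOUT_MS = 100  # illustrative stand-in for "K millis"

def send_or_drop(sock: socket.socket, payload: bytes) -> bool:
    """Push replication data, but give up on a stalled peer instead of blocking forever."""
    sock.settimeout(WRITE_TIMEOUT_MS / 1000)
    try:
        sock.sendall(payload)
        return True
    except socket.timeout:
        sock.close()  # stop the replication flow for this replica
        return False
```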
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Signed-off-by: Roman Gershman <romange@gmail.com>
Co-authored-by: Shahar Mike <chakaz@users.noreply.github.com>
`os.remove(LAST_LOGS)` might throw an exception if the file does not exist, which we do not handle. Wrap it in a try/except block.
* wrap os.remove in try/except
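The fix amounts to something like the following (LAST_LOGS is the path constant already used by the test helpers):

```python
import os

try:
    os.remove(LAST_LOGS)
except FileNotFoundError:
    pass  # the file was never created, so there is nothing to clean up
```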
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
The env variables exported when regression tests time out are not working properly, so the `if` statement on the `Print last log on timeout` action step fails to read and upload the files listed in /tmp/last_log_file.txt. Another problem is the job-level timeout argument, which kills the whole job/matrix before the upload-logs step has a chance to run. For that, we need manual timeouts in the workflow, similar to what we do in the regression tests action.
* remove print last log on timeout action step
* copy the logs on timeout directly within the timeout step
* replace the global timeout on the CI workflow with a per-step `timeout` command (sketched below)
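The actual change is in the GitHub Actions workflow; as a conceptual Python analogue, a per-step timeout that still leaves the job alive to collect logs looks roughly like this (paths and the destination file are illustrative):

```python
import shutil
import subprocess

def run_step(cmd: list[str], timeout_s: int) -> None:
    """Run one CI step with its own timeout; on expiry, save logs before failing."""
    try:
        subprocess.run(cmd, check=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        # Only this step timed out, so the job can still upload artifacts.
        shutil.copy("/tmp/last_log_file.txt", "/tmp/timed_out_step_logs.txt")
        raise
```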
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
* chore: improve replication locks
Allow non-exclusive, read-only access to the Dfly::ReplicaInfo structure.
The most important change is in DflyCmd::CancelReplication, which previously
locked the ReplicaInfo mutex and only then locked the global mutex.
This is dangerous because most operations lock them in the opposite order, which can deadlock.
Also rename the ambiguous GetReplicaInfo accessors to clearer names.
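A conceptual Python sketch of why the ordering matters: if one path takes the per-replica lock before the global one while every other path does the opposite, the two can deadlock; the mutexes and names below are illustrative stand-ins for the C++ ones:

```python
import threading

global_mu = threading.Lock()   # stands in for the global DflyCmd mutex
replica_mu = threading.Lock()  # stands in for a per-replica ReplicaInfo mutex

def cancel_replication_old():
    # Dangerous: per-replica lock first, then global -- opposite of everyone else.
    with replica_mu:
        with global_mu:
            ...

def cancel_replication_fixed():
    # Same order as the other operations: global first, then per-replica.
    with global_mu:
        with replica_mu:
            ...
```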
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: comments
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat: Support `replica-announce-ip`/`port`
Before this PR, we only supported `cluster_announce_ip`.
It's basically the same feature, but used for cluster announcements
instead of replication.
This PR adds support for `replica-announce-ip` and
`replica-announce-port`, which can be set via new flags `--announce_ip=`
and `--announce_port=`. These flags apply to both cluster and replica
announcements.
Tested by running Sentinel and making sure it can connect to the announced
ip+port, and that it cannot connect when a false / unavailable ip+port is
announced.
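Another hedged way to check this from a test, assuming standard Redis announce semantics and a redis-py client: the master's `INFO replication` should report the replica under the announced address rather than the address of the incoming connection.

```python
import redis

def announced_address(master: redis.Redis) -> tuple[str, int]:
    """Return the ip/port the master reports for its first replica."""
    slave = master.info("replication")["slave0"]  # parsed into a dict by redis-py
    return slave["ip"], int(slave["port"])

# e.g. after starting a replica with --announce_ip=10.0.0.5 --announce_port=7000:
# assert announced_address(master) == ("10.0.0.5", 7000)
```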
Note: this PR deprecates `--cluster_announce_ip`, but continues to
support it. We will remove it in a future version.
Fixes #3380
* fix failing test
* destructure
1. Add background offloading stats
2. remove direct_fd override - helio is already updated with default=false, so it's not needed anymore.
3. remove redundant tiered_storage_memory_margin flag
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
DashTable::Traverse is error prone when the passed callback preempts, because the segment might change underneath it. This is problematic, and we need atomicity while traversing segments with preemption. The fix is to add Traverse to DbSlice and protect the traversal via ThreadLocalMutex (see the conceptual sketch after the list below).
* add ConditionFlag to DbSlice
* add Traverse in DbSlice and protect it with the ConditionFlag
* remove condition flag from snapshot
* remove condition flag from streamer
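The real fix is in the C++ DbSlice; as a conceptual Python analogue, traversal and mutation take the same lock so a preempting callback cannot observe a half-changed segment (the class and names are illustrative):

```python
import threading

class SliceLike:
    """Toy model of a table whose traversal must not interleave with mutations."""

    def __init__(self):
        self._mu = threading.Lock()  # plays the role of the ThreadLocalMutex
        self._table = {}

    def traverse(self, cb):
        with self._mu:  # mutators acquire the same lock, so the table stays stable
            for key, value in list(self._table.items()):
                cb(key, value)

    def insert(self, key, value):
        with self._mu:
            self._table[key] = value
```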
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
Update the flag for extreme testing. We should remove this before the release.
* set serialization_max_chunk_size to 1 byte
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
The problem is that the test test_big_value_serialization_memory_limit shuts down dragonfly at the end with a timeout of 15 seconds. During shutdown Dragonfly takes a snapshot, which might take more than 15 seconds, so the test fails.
* call flushall before we exit the test
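A minimal sketch of the test-side change, assuming an async redis client as used in the pytest suite:

```python
import redis.asyncio as redis

async def empty_before_shutdown(client: redis.Redis) -> None:
    # Drop all keys so the snapshot taken on shutdown is trivial and
    # comfortably fits in the 15-second shutdown timeout.
    await client.flushall()
```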
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
* fix: Fix `test_take_over_seeder`
There are a few issues with the test:
1. Not using the admin port, which could cause pause to deadlock
2. Not waiting for some of the `task`s (although that won't cause a
failure)
But also in the product code:
1. We used to `std::move()` the same pointer multiple times
2. We assigned to the same status object from multiple threads
Hopefully this fixes the test. It used to fail every ~100 attempts on my
machine; now it has passed >1,000 attempts in a row.
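For the test-side part of issue 2 (not waiting for some of the tasks), the usual shape of the fix in asyncio-based pytests looks like this; the helper names are illustrative, not the actual test code:

```python
import asyncio

async def run_seeders_and_takeover(seeders, do_takeover):
    tasks = [asyncio.create_task(s.run()) for s in seeders]
    await do_takeover()
    # Await every task we spawned; otherwise exceptions and slow teardown go unnoticed.
    await asyncio.gather(*tasks)
```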
* add comments
* remove shard_ptr param
* default serialization_max_chunk_size to 10 MB
* add test for big values
* small rename of enum to conform style guide
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
* chore: fix test_parser_memory_stats flakiness
1. Added a robust assert_eventually decorator for pytests (sketched below)
2. Improved the assertion condition in TieredStorageTest.BackgroundOffloading
3. Added a total_uploaded stat for tiering that tells how many times offloaded values
were promoted back to RAM.
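A minimal sketch of such a decorator for async pytests; the retry count and sleep interval are illustrative defaults:

```python
import asyncio
import functools

def assert_eventually(times=100, delay=0.1):
    """Retry an assertion-based check until it passes or the attempts run out."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            for attempt in range(times):
                try:
                    return await func(*args, **kwargs)
                except AssertionError:
                    if attempt == times - 1:
                        raise
                    await asyncio.sleep(delay)
        return wrapper
    return decorator
```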
* chore: skip test_cluster_fuzzymigration
Leave only connection memory usage in memory stats.
We should think about how to move it to /metrics as well.
In addition, added a test verifying that redis parser memory
usage is tracked.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix replication test flag name for big values
* fix a bug that triggers UB when RegisterOnChange is called on flows that iterate over the callbacks and preempt (sketched conceptually below)
* add a stress test for big value serialization
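The bug itself is in the C++ flow code; in Python terms, the hazard and the usual remedy look roughly like this (a conceptual analogue, not the actual fix):

```python
import threading

class ChangeNotifier:
    def __init__(self):
        self._mu = threading.Lock()
        self._callbacks = []

    def register_on_change(self, cb):
        with self._mu:
            self._callbacks.append(cb)

    def notify(self, event):
        # Iterate over a snapshot: callbacks registered while a notification
        # is in flight cannot invalidate the list being walked.
        with self._mu:
            snapshot = list(self._callbacks)
        for cb in snapshot:
            cb(event)
```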
Signed-off-by: kostas <kostas@dragonflydb.io>
* serialize big slots in chunks (see the sketch after this list)
* allow preemption on large slots
* disable big entries serialization for RDB files
* add test
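The serializer itself is C++; a conceptual Python sketch of the chunking idea, yielding control between chunks so one huge slot does not block everything else (the chunk size, sink, and entry types are illustrative):

```python
import asyncio

CHUNK_ENTRIES = 4096  # illustrative; the real limit is serialization_max_chunk_size

async def serialize_slot(entries, sink):
    """Write a big slot in chunks instead of one huge blob."""
    buf = []
    for entry in entries:  # each entry is assumed to be already-encoded bytes
        buf.append(entry)
        if len(buf) >= CHUNK_ENTRIES:
            await sink.write(b"".join(buf))
            buf.clear()
            await asyncio.sleep(0)  # preemption point between chunks
    if buf:
        await sink.write(b"".join(buf))
```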
Signed-off-by: kostas <kostas@dragonflydb.io>
* feat(namespaces): Initial support for multi-tenant #3050
This PR introduces a way to create multiple, separate and isolated
namespaces in Dragonfly. Each user can be associated with a single
namespace, and will not be able to interact with other namespaces.
This is still experimental, and lacks some important features, such as:
* Replication and RDB saving completely ignore non-default namespaces
* Defrag and statistics either use the default namespace or all
namespaces without separation
To associate a user with a namespace, use the `ACL` command with the
`TENANT:<namespace>` flag:
```
ACL SETUSER user TENANT:namespace1 ON >user_pass +@all ~*
```
For more examples and up-to-date info, check
`tests/dragonfly/acl_family_test.py`, specifically the
`test_namespaces` function.
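A hedged usage sketch with redis-py: after creating the user above, authenticating as that user should land it in `namespace1`, isolated from the default namespace (the port and assertions are illustrative):

```python
import redis

admin = redis.Redis(port=6379)                                           # default namespace
tenant = redis.Redis(port=6379, username="user", password="user_pass")   # namespace1

admin.set("shared_key", "admin")
tenant.set("shared_key", "tenant")

# Same key name, different namespaces -- neither client sees the other's value.
assert admin.get("shared_key") == b"admin"
assert tenant.get("shared_key") == b"tenant"
```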
* fix: properly clean tiered state upon flush
The bug was around io-pending entries that were not properly cleaned during flush.
This PR simplifies the logic around tiered storage handling during flush: the
cleanup is now always performed in the synchronous part of the command.
In addition, this PR improves error logging in tests if the dragonfly process exits with an error.
Finally, a test is added that makes sure pending tiered items are cleaned up during the flush call.
Fixes #3252
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>