Send the journal LSN to the replica and compare it against the number of records received on the replica side.
Signed-off-by: kostas <kostas@dragonflydb.io>
Co-authored-by: adi_holden <adi@dragonflydb.io>
* chore: preparation for basic http api
The goal is to provide very basic support for simple commands;
fancy stuff like pipelining and blocking commands won't work.
1. Added optional registration for /api handler.
2. Implemented parsing of post body.
3. Added basic formatting routine for the response. It does not cover all the commands but should suffice for
basic usage.
The API is a POST method and the body of the request should contain the command arguments formatted as a JSON array.
For example, `'["set", "foo", "bar", "ex", "100"]'`.
The response is a JSON object with either a `result` field holding the response of the command or
an `error` field containing the error message sent by the server.
See `test_http` test in tests/dragonfly/connection_test.py for more details.
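A minimal sketch of calling the endpoint with Python `requests`; the host, port and absence of auth here are assumptions for illustration:

```python
import requests

# POST the command as a JSON array to /api; the server replies with a JSON
# object carrying either a "result" or an "error" field.
resp = requests.post("http://localhost:6379/api",
                     json=["set", "foo", "bar", "ex", "100"])
print(resp.json())  # e.g. {"result": "OK"} on success
```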
* chore: cover iouring with enable_direct_fd
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(replication): Do not auto replicate different master
Until now, replicas would re-connect to and re-replicate a master after the
master restarts. This is problematic in case the master loses its
data, which would cause the replica to flush all and lose its data as
well.
This is a breaking change though, in that whoever controls the replica
now has to explicitly issue a `REPLICAOF X Y` in order to re-establish
a connection to a new master. This is true even if the master loaded an
up to date RDB file.
This is not necessary if the replica merely lost its connection to a master
that stayed alive the whole time and the connection is re-established.
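For illustration, a minimal redis-py sketch of the explicit re-attach that the replica's controller now has to perform (host names are placeholders):

```python
import redis

replica = redis.Redis(host="replica-host", port=6379)
# After the master restarts, the replica stays detached; re-attach explicitly:
replica.execute_command("REPLICAOF", "master-host", "6379")
```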
Fixes #2636
* fix test
* fixes
* proxy proxy java java
* better comment
* fix comments
* replica_reconnect_on_master_restart
* proxy.close()
* feat(cluster): Add `--cluster_id` flag
This flag sets the unique ID of a node in a cluster.
It is UB (and bad) to set the same ID on multiple nodes in the same
cluster.
If unset (default), the `master_replid` (previously known as `master_id`) is used.
Fixes #2643
Related to #2636
* gh comments
* oops - revert line removal
* fix
* replica
* disallow cluster_node_id in emulated mode
* fix replica test
1. Add back the search files to the macOS build (linker errors are fixed now).
2. Add default maxmemory argument (if not present already) when launching dragonfly process in regression tests.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: Del and NUMINCRBY use json::Path
Also, fix various protocol bugs where we sent simple strings
instead of bulk strings.
Fixed a typo in path.cc that led to a data race bug.
Finally, flip the flag in regression tests to start covering json::Path code
and add test coverage for the data race bug.
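For reference, the two commands this touches, exercised via redis-py (key and values are illustrative):

```python
import redis

r = redis.Redis()
r.execute_command("JSON.SET", "doc", "$", '{"counter": 10, "tmp": 1}')
# With a $-style path, NUMINCRBY replies with a JSON-encoded array of new values.
print(r.execute_command("JSON.NUMINCRBY", "doc", "$.counter", 5))  # b'[15]'
r.execute_command("JSON.DEL", "doc", "$.tmp")
```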
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(connection): Support pipelining with Memcached
Adds support for pipelining to Memcached, enhances Memcached pytests
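A rough sketch of what the pytests exercise: several memcached text-protocol commands written before any reply is read. The memcached port here is an assumption (Dragonfly exposes it only when configured):

```python
import socket

# Pipeline two commands: write both, then read the combined replies.
s = socket.create_connection(("localhost", 11211))
s.sendall(b"set key1 0 0 3\r\nv-1\r\n"   # set <key> <flags> <exptime> <bytes>
          b"get key1\r\n")
print(s.recv(4096))  # b'STORED\r\nVALUE key1 0 3\r\nv-1\r\nEND\r\n' (may span several recvs)
s.close()
```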
---------
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
* upload only failed test logs
* remove printing log names for passed tests
* print slow tests with --duration
* separate regression and unit logs for CI workflow
* feat(pytest): More types for seeder
Add more types to the seeder and refactor replication test
---------
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
The bug: a crash when starting a replica while saving.
The problem: the wrong allocator was accessed on snapshot class destruction because it was destructed outside the shard's thread.
The fix: call the snapshot destructor on the correct thread when we finish the snapshot.
Signed-off-by: adi_holden <adi@dragonflydb.io>
* fix: do not migrate during connection close
Fixes #2569
Before the change we had a corner case where Dragonfly would call
OnPreMigrateThread but would not call CancelOnErrorCb because OnBreakCb had already been called
(it resets break_cb_engaged_).
On the other hand, in OnPostMigrateThread we called RegisterOnErrorCb if breaker_cb_ was set, which resulted in double registration.
This change simplifies the logic by removing the break_cb_engaged_ flag, since CancelOnErrorCb is safe to call if nothing is registered.
Moreover, we now skip the Migrate flow if a socket is being closed.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* test(memory): Test memory accounting for all types
* slightly faster
* WIP
* working
* Document
* Update test to use DEBUG POPULATE
* Nothing much
* Working
* fix
* yaml
* explicit capture
* fix ci?
* stub tx
* feat(cluster): add tx execution in cluster_shard_migration
refactor(replication): move code that is common for cluster and
replica into a separate file, add full-sync-cut cmd
* fix(replication): Correctly replicate commands even when OOM
Before this change, OOM in shard callbacks could have led to data
inconsistency between the master and the replica. For example, commands
which mutated data on 1 shard but failed on another, like `LMOVE`.
After this change, callbacks that result in an OOM will correctly
replicate their work (none, partial or complete) to replicas.
Note that `MSET` and `MSETNX` required special handling, in that they are
the only commands that can _create_ multiple keys, and so some of them
can fail.
Fixes #2381
* fixes
* test fix
* RecordJournal
* UNDO idiotnessness
* 2 shards
* fix pytest
* feat(server): Implement `CLIENT KILL`
Currently, it supports the following syntax:
* `CLIENT KILL <addr>:<port>`
* `CLIENT KILL ID <id>`
* `CLIENT KILL ADDR <addr>:<port>`
* `CLIENT KILL LADDR <addr>:<port>`
It will not allow killing an admin connection from a non-admin port.
There are a few parameters of `CLIENT KILL` that Redis supports but this
PR does not yet add. Let's add them as needed (see the sketch below).
Fixes #1614
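A minimal redis-py sketch of the `ID` form; two connections, where the second kills the first:

```python
import redis

victim = redis.Redis()
victim_id = victim.client_id()   # connection ID of the victim
admin = redis.Redis()            # a second connection issues the kill
admin.execute_command("CLIENT", "KILL", "ID", victim_id)
```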
* Add tests
* fixes
Fixes #2296.
Added a regression test that covers both policy-based eviction and heartbeat eviction.
---------
Signed-off-by: Yue Li <61070669+theyueli@users.noreply.github.com>
* feat: add SLOT-MIGRATION-STATUS cmd for source node
implements #2232
Adds the ability to use SLOT-MIGRATION-STATUS without args
to print info about all migration processes for the current node.
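A hypothetical invocation sketch; the exact command surface (its nesting under DFLYCLUSTER and the reply shape) is an assumption here:

```python
import redis

r = redis.Redis()
# Without args: info about all migration processes on the current node.
print(r.execute_command("DFLYCLUSTER", "SLOT-MIGRATION-STATUS"))
```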
fix #2337
The bug:
replicaof was not rejected while loading a snapshot
The fix:
replicaof is allowed while the server is in loading state, to permit replicaof while replication is in full-sync mode.
I now reject replicaof if the server is in loading state and it is a master.
Another bug fix:
allow cron snapshot if the --replicaof flag was set
Signed-off-by: adi_holden <adi@dragonflydb.io>
* refactor(server): Privatize `PreUpdate()` and `PostUpdate()`
While at it:
* Make `PreUpdate()` not decrease object size
* Remove redundant leftover call to `PreUpdate()` outside `DbSlice`
* Add pytest
* Test delete leads to 0 counters
* Improve test
* fixes
* comments
1. How many transactions we processed by type
2. How many transactions we processed by width (number of unique shards).
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(cluster): add command flow for slot migration process
fixes #2295
The DFLYMIGRATE FLOW command was added to establish
connections for every shard replication process.
The slow serialization step is a separate issue, so
for now only an eof_token is sent in reply to the
DFLYMIGRATE FLOW command.
The expected state for START-SLOT-MIGRATION is now FULL_SYNC.
* feat: DispatchTracker
Use a DispatchTracker to track ongoing dispatches for commands that change global state
---------
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
fix: eliminate the redundant string copy in SendMGetResponse
Also, allow selectively creating a DflyInstance in pytests that attaches to
an existing dragonfly port created outside of the tests.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The DF version is unparseable by Memcached::getVersion(), which expects an n.n.n string.
Change the version to emulate the old memcached server.
The DF version can still be fetched via the Memcached::getStats() function.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: add more states to client connections
* fix: clear pipelined messages before close
* fix: skip same thread on backpressure
---------
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
Co-authored-by: Roman Gershman <roman@dragonflydb.io>
* fix(server): client pause fix on pipeline squash
allow squashing commands on pause
move the await on client pause inside InvokeCommand; this way all command-invocation flows will read the pause state
Signed-off-by: adi_holden <adi@dragonflydb.io>
This PR introduces a test case for TLS with `ca_dir`. First, we
did not have any tests for this case. Second, using `ca_dir` requires
calling `c_rehash` on the directory before it is loaded by DF. We
did not have this use case anywhere, so when we used `ca_dir` we thought
there was a bug, only to find out that we need to call `c_rehash` on the
directory before we load the certificates. Now both a test and a use case
are properly documented (see the sketch after the list below).
* add missing test for ca_dir
* use rehash to properly show how to load ca directories instead of
files
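The same constraint applies to any OpenSSL-based client: a CA directory is only usable after `c_rehash` (or `openssl rehash`) has created the hashed symlinks. A minimal Python sketch, with the path as a placeholder:

```python
import ssl

ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# capath lookups rely on OpenSSL's hashed file names, so the directory
# must have been processed with `c_rehash` beforehand.
ctx.load_verify_locations(capath="/path/to/ca_dir")
```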
Regression test sometimes fails because for a short period of time after `wait_available_async()` returns, the result of `ROLE` could still be different from `stable_sync`
[Failure example](https://github.com/dragonflydb/dragonfly/actions/runs/6726461923/job/18282759612#step:6:1863)
We change our state from `LOADING` to `ACTIVE` [here](d08d7f13b4/src/server/replica.cc (L426)), but then we change the sync state 2 times:
1. `!R_SYNCING` [here](d08d7f13b4/src/server/replica.cc (L427C28-L427C37))
2. And only later to `R_SYNC_OK` (meaning `stable_sync`) [here](d08d7f13b4/src/server/replica.cc (L221))
This is easy to reproduce by adding a sleep right after the set of state to `ACTIVE`, either before or after the flipping of `R_SYNCING` (with different returned states).
BTW without that added sleep I was not able to reproduce, having tried 1000s of times in various configurations.
We could change the order of things such that we first change `state_mask_` and only then switch state from `LOADING` to `ACTIVE` (which is probably the right thing to do), but that would require a subtle refactor, as we change these in a couple of places.
But we should keep in mind that this has no effect on users. So a simple sleep on the test side should fix this fairly well.
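The test-side fix then amounts to a short poll instead of a single assertion; a rough sketch (the ROLE reply layout assumed here is what the pytests check):

```python
import time
import redis

def wait_for_stable_sync(replica: redis.Redis, timeout: float = 5.0) -> None:
    # ROLE may briefly report a pre-stable state right after
    # wait_available_async() returns, so poll instead of asserting once.
    deadline = time.time() + timeout
    while time.time() < deadline:
        role = replica.execute_command("ROLE")
        if role[0] == b"replica" and role[3] == b"stable_sync":
            return
        time.sleep(0.1)
    raise AssertionError("replica did not reach stable_sync in time")
```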
* chore: help users to fix a common mistake of setting quotes in the flagfile
Specifically, the confusion is often around the cron expression.
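For example, with a hypothetical flagfile: flagfile lines are not shell-parsed, so quotes become part of the flag value and break cron parsing:

```
# dragonfly.flags -- values are taken verbatim; do not quote them:
--snapshot_cron=*/30 * * * *
# Wrong: the quotes become part of the cron expression:
# --snapshot_cron="*/30 * * * *"
```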
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* add a python script to print the most recent log
* if CI times out, print the most recent log
* replace global timeout with timeout command
* upload all logs on failure()
* print uid + port + the log files for each df instance
* requirepass also updates the ACL default user password (see the sketch after this list)
* update config set requirepass to include the new behaviour
* add tests
* fix non-existent default user when loading empty files
* Update HTTP auth to use the username default instead of user
* skip auth for /metrics page
* add/improve tests
* fix a bug where the admin port required auth on HTTP even if nopass was set
* update helio ref
* update listener class to contain its respective Role
* fix http init to only include admin and main listener
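A minimal redis-py sketch of the new behaviour (the password is a placeholder):

```python
import redis

r = redis.Redis()
r.config_set("requirepass", "s3cret")
# The ACL default user now carries the same password:
r2 = redis.Redis(username="default", password="s3cret")
r2.ping()
```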
* The issue was similar to the one in test `network_disconnect_small_buffers`, but this time the debug build could be slow enough for the replication not to finish. As a consequence, by the time the test reached the assertion, the log did not contain the expected output.
* chore(regression): test bptree on regression pytests
1. stop passing the flag use_zset_tree as it is true by default
2. fix ci test to run replication tests
3. change the replication tests seeder to sometimes add more than 128 values
to zset to test the bptree impl
Signed-off-by: adi_holden <adi@dragonflydb.io>
* chore: Add a context manager to DflyInstance so we don't forget to close
them.
* Update tests/dragonfly/config_test.py
Co-authored-by: Roman Gershman <roman@dragonflydb.io>
Signed-off-by: Roy Jacobson <roi.jacobson1@gmail.com>
---------
Signed-off-by: Roy Jacobson <roi.jacobson1@gmail.com>
Co-authored-by: Roman Gershman <roman@dragonflydb.io>
* flags from env variables
* querying environment vars
* remove includes
* refactor
* exit for unknown flag with DFLY_ prefix
* reflecting change in the test
* better tests
* refactor + new test case
* refactor test with inner class
* refactor
* revert back test flags as it might affect ci/cd
* fixing flags
* refactor
* remove includes
* refactor
* move error handling tests from regression to unit
* move ACL LIST regression to unit test
* move AUTH regression to unit test
* move ACL WHOAMI regression to unit test
* add unit tests for SETUSER/DELUSER (so they run on every PR)
* add unit tests for all ACL categories
There was a bug in updates of the ACL categories when squashing was used. Basically, the parent context could be accessed in parallel by the "stub" contexts, causing a dreaded data race on the update.
This is fixed by adding a new AclUpdateMessage at the front of the dispatch queue of the connection.
1. No logic was changed during refactoring.
2. Flipped the flag to run regression tests from now on with zset_tree=on
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix DispatchCommand error reporting when the memcached protocol is used (one example is when we use the SET command on the replica -- previously we crashed, now we properly report an error)
* SendError(ErrorReply) moved to SinkReplyBuilder from RedisReplyBuilder
* SendError(OpStatus) moved to SinkReplyBuilder from RedisReplyBuilder
* added tests for SendError(ErrorReply) in RedisReplyBuilder
* feat: implement CONFIG GET command
The command returns all the matched arguments and their current values.
In addition, this PR adds mutability semantics to each config - whether it can be
changed at runtime.
Fixes #1700
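For example, a short redis-py sketch (parameter names are illustrative):

```python
import redis

r = redis.Redis()
# Glob-style patterns return every matching parameter with its current value.
print(r.config_get("max*"))  # e.g. {'maxmemory': '...', ...}
```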
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(server): Support limiting the number of open connections.
* Update helio after the small fix was merged to master
* Don't limit admin connections (and add a test case)
* Resolve CR comments
1. If the first request sent to the connection was large (2KB or more),
Dragonfly closed the connection.
2. Changed server-side error reporting according to the memcache protocol:
https://github.com/memcached/memcached/blob/master/doc/protocol.txt#L172
3. Fixed the wrong casting in DispatchCommand.
4. Remove practically unused code that translated opstatus to strings.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* introduce `--replicaof` flag
Closes#1381.
The behaviour of `--replicaof` is similar to `REPLICAOF`. On startup, the instance continuously attempts to connect to the master. To stop replication, use the normal `REPLICAOF NO ONE` command.
The flag expects format `<IPv4/host>:<port>` or `[<IPv6>]:<port>`.
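For example (binary name and addresses are placeholders):

```
dragonfly --replicaof localhost:6379    # IPv4 / hostname
dragonfly --replicaof "[::1]:6379"      # IPv6
```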
---------
Signed-off-by: talbii <ido@dragonflydb.io>
Signed-off-by: talbii <41526934+talbii@users.noreply.github.com>
Requested by #1590.
Introducing a new flag --snapshot_cron, enabling users to use cronjob expressions to time snapshot saves.
Cronjob expressions are parsed using the third-party library croncpp.
This PR continues #1599, updating cron expressions to crontab style,
with up-to-minutes resolution instead of seconds.
Signed-off-by: Dor Avrahami <da19965@gmail.com>
Introducing a new flag `--snapshot_cron`, which enables users
to use cron expressions to time snapshot saves.
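For example, a standard five-field crontab expression that saves every 30 minutes (the value is illustrative):

```
dragonfly --snapshot_cron "*/30 * * * *"
```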
Signed-off-by: Dor Avrahami <da19965@gmail.com>
* Fix(regression test): fix test_flushall_in_full_sync
The bug: the test checks the replication using the role command on the replica.
The replica updates its status to full sync when starting the full sync
flow, but at that point the master has not actually started snapshotting yet.
The fix: check the status using the role command on the master, because the master
updates the status only after snapshotting has started.
Signed-off-by: adi_holden <adi@dragonflydb.io>
* sec: Adjust flag checks when using TLS.
* Trust default certificates if no specific roots are given
* Add regression tests for the different scenarios
* Validate that client connections work as well
The test fails sometimes when starting the master after killing it.
The reason for this is that the OS did not release the port until we started
the master again.
The fix: add a sleep after the kill.
Once pytest uses randomly selected ports, we can remove this
sleep.
Signed-off-by: adi_holden <adi@dragonflydb.io>
* fix(regression_test): fix in shutdown and replication pytests
- skip test_gracefull_shutdown test
- fix test_take_over_seeder test:
bug: the dbfilename was not unique, therefore between different runs the server reloaded
the snapshot of the previous test run, which failed the test.
fix: use a random dbfilename
- fix test_take_over_timeout test:
bug: REPLTAKEOVER timeout was not small enough for opt dfly build
fix: decrease timeout
Signed-off-by: adi_holden <adi@dragonflydb.io>
1. add tls-ca-cert-file flag
2. add tls-ca-cert-dir flag
3. enables redis-cli to connect over TLS without the --insecure flag by properly validating the certificate with the CA (example below)
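For example (port and file names are placeholders; `--cacert` points at the CA that signed the server certificate):

```
redis-cli --tls --cacert ca-cert.pem -p 6380 PING
```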
The issue was that, sometimes, the ID generated for one of the nodes
contained the slot ID that was used in the test (either 5259 or 5260).
This caused the test to replace the "slot" part of the id, which in turn
caused the node to think that it no longer owns any slot.
* fix(server): Initialize ServerFamily with all listeners.
- Add a test for CLIENT LIST which is the visible result of this.
* use std move
* feat: Implement replica takeover (see the sketch after this list)
* Basic test
* Address CR comments
* Write a better test. Sadly it fails
* chore: Expose AwaitDispatches for reuse in takeover
* Ensure that no commands can execute during or after a takeover
* CR progress
* Actually disable the expiration
* Improve tests coverage
* Fix the dispatch waiting code
* Improve testing coverage and fix a shutdown snapshot bug
* don't replicate a replica
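A minimal redis-py sketch of triggering the takeover from the replica side; the port and timeout are placeholders, and the command name follows the REPLTAKEOVER mentioned elsewhere in this log:

```python
import redis

replica = redis.Redis(port=6380)
# Promote this replica to master, waiting up to 5 seconds to catch up.
replica.execute_command("REPLTAKEOVER", 5)
```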
Enables execution of global Lua scripts inside MULTI/EXEC transactions if the default script config enables global execution for scripts. This change is only a fix and does not provide any safeguards against other execution scenarios (namely, enabling globality with script flags). In the future, the proper execution mode should be determined more carefully by inspecting the scripts to be executed.
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
Co-authored-by: Kostas Kyrimis <kostaskyrim@gmail.com>
The test case for checking is_loading == 1 is inherently racy because
the client can connect at any time before or after the dragonfly
instance loads the snapshot.
This PR is a temporary solution for clients that are not properly
removed from the connection pool, triggering an active-client assertion
during dragonfly instance shutdown.
fix: remove a bad check-fail in the transaction code.
Fixes #1421.
The failure reproduces for dragonfly running with a single thread, where all the
arguments are grouped within the same ShardData.
Also, we improve verbosity levels inside reply_builder.cc.
For that we extend SinkReplyBuilder to support protocol error reporting
and we remove the ad-hoc code for this from dragonfly_connection.
This is required to track errors easily with `--vmodule=reply_builder=1`.
Finally, a pytest is added to cover the issue.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
In this case, `redis.RedisCluster`.
To be double sure I also looked at the actual packets and saw that the
client asks for `CLUSTER SLOTS`, and then, after the redistribution of
slots, following a few `MOVED` replies, it asks for the new slots again.
This allows masters to send data of non-owned keys to their replicas,
which is useful when:
1. Config is temporarily different between master and replica
2. Preparing for taking ownership of currently not-owned slots (in the upcoming migration feature)
Fixed #1319