* fix: assign threadlocal data structures during connection migration
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix: assign threadlocal data structures during connection migration
Co-authored-by: Shahar Mike <chakaz@users.noreply.github.com>
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Co-authored-by: Shahar Mike <chakaz@users.noreply.github.com>
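A minimal sketch of the idea in the title, assuming stand-in names
(QueueBackpressure, OnPostMigrateThread, and queue_backpressure_ are
illustrative, not necessarily Dragonfly's actual identifiers):

```cpp
#include <cstddef>

// Stand-in for a per-thread structure the connection caches a pointer to.
struct QueueBackpressure {
  size_t pending_bytes = 0;
};

thread_local QueueBackpressure tl_queue_backpressure;

class Connection {
 public:
  // Hypothetical hook that runs on the destination thread after migration.
  void OnPostMigrateThread() {
    // Re-bind to the new thread's instance; keeping the stale pointer
    // would let this connection mutate another thread's data without
    // synchronization.
    queue_backpressure_ = &tl_queue_backpressure;
  }

 private:
  QueueBackpressure* queue_backpressure_ = nullptr;
};
```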
The DF version cannot be parsed by Memcached::getVersion(), which expects an n.n.n string.
Change the version to emulate the old memcached server.
The DF version can still be fetched via the Memcached::getStats() function.
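A minimal sketch of the emulation; the handler name and the emulated
version constant are assumptions, not the values DF actually uses:

```cpp
#include <cstdio>
#include <string>

// Reply to the memcached "version" command with a plain n.n.n string that
// clients such as PHP's Memcached::getVersion() can parse. The constant
// below is an assumption - the value DF actually reports may differ.
std::string MemcachedVersionReply() {
  constexpr char kEmulatedVersion[] = "1.6.0";
  char buf[64];
  int n = std::snprintf(buf, sizeof(buf), "VERSION %s\r\n", kEmulatedVersion);
  return std::string(buf, n);
}
```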
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Also,
1. rebase helio dependency
2. get rid of varz counters that are superseded by
commands_total/commands_duration_seconds_total metrics
Resolves #2213
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Specifically, allocate only a single blob when returning multiple entries from a shard.
In addition, refactor and unify MGetResponse between string family code and ReplyBuilder code.
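A minimal sketch of the single-blob pattern; the real MGetResponse
definition differs, this only shows the allocation scheme:

```cpp
#include <cstring>
#include <memory>
#include <string_view>
#include <vector>

// Illustrative shape of the response: one allocation backs all values.
struct MGetResponse {
  std::unique_ptr<char[]> storage;        // the single blob
  std::vector<std::string_view> entries;  // views into `storage`
};

MGetResponse BuildMGetResponse(const std::vector<std::string_view>& values) {
  size_t total = 0;
  for (std::string_view v : values)
    total += v.size();

  MGetResponse resp;
  resp.storage = std::make_unique<char[]>(total);
  resp.entries.reserve(values.size());

  // Pack every value back-to-back instead of allocating per entry.
  char* next = resp.storage.get();
  for (std::string_view v : values) {
    if (!v.empty())
      std::memcpy(next, v.data(), v.size());
    resp.entries.emplace_back(next, v.size());
    next += v.size();
  }
  return resp;
}
```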
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The bug: one connection calls replica Start while a second calls replica
Stop. In this flow, Stop first resets the state mask via
state_mask_.store(0); Start then sets it via state_mask_.store(R_ENABLED),
continues to greet, and creates the main replication fiber; only
afterwards does Stop run cntx_.Cancel(), which is later reset inside the
main replication fiber. As a result, the main replication fiber never
cancels, and the connection calling Stop deadlocks waiting to join it.
The fix: run cntx_.Cancel() and state_mask_.store(0) in the replica thread (sketched below).
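A sketch of the fix, assuming simplified stand-in types for the proactor,
context, and replica (the real helio/Dragonfly classes differ):

```cpp
#include <atomic>
#include <functional>

// Stand-in types so the sketch is self-contained; the real Replica,
// Context, and proactor classes in Dragonfly differ.
struct Context {
  std::atomic_bool cancelled{false};
  void Cancel() { cancelled.store(true); }
};

struct Proactor {
  // Placeholder for helio's "run this closure on the proactor thread and
  // wait for it to finish" primitive.
  void Await(std::function<void()> f) { f(); }
};

struct Replica {
  Proactor* proactor_ = nullptr;
  std::atomic<unsigned> state_mask_{0};
  Context cntx_;

  void Stop() {
    // The fix: both the reset and the cancel run on the replica thread,
    // so they cannot interleave with a concurrent Start().
    proactor_->Await([this] {
      state_mask_.store(0);
      cntx_.Cancel();
    });
    // Only now join the main replication fiber (elided) - the fiber is
    // guaranteed to observe the cancellation.
  }
};
```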
Signed-off-by: adi_holden <adi@dragonflydb.io>
* chore: add more states to client connections
* fix: clear pipelined messages before close
* fix: skip same thread on backpressure (see the sketch after this list)
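An illustrative sketch of the same-thread check; every name below is an
assumption about the shape of the code, not Dragonfly's actual API:

```cpp
#include <thread>

// Stand-in for a subscriber connection; the real type differs.
struct Subscriber {
  std::thread::id home_thread;
  void BlockUntilBelowWatermark() { /* wait for its queue to drain */ }
  void SendAsync() { /* enqueue the message (payload elided) */ }
};

void DispatchWithBackpressure(Subscriber* sub) {
  // Blocking on a subscriber that lives on the current thread would stop
  // the very thread that must drain the queue, so skip the wait for it.
  if (sub->home_thread != std::this_thread::get_id())
    sub->BlockUntilBelowWatermark();
  sub->SendAsync();
}
```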
---------
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
Co-authored-by: Roman Gershman <roman@dragonflydb.io>
Fixes the case where a client library expects SCAN to return more quickly
(context: Ruby's Rails.cache.delete_matched("prefix_*") times out)
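An illustrative sketch of a per-call budget that makes SCAN return
promptly even when a sparse pattern matches little; the data layout and
the constant are assumptions:

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// `buckets` stands in for the hash table, `matches` for glob matching.
// The key point is the per-call budget: even if "prefix_*" matches almost
// nothing, the call returns after kBudget buckets with an updated cursor.
uint64_t ScanStep(const std::vector<std::vector<std::string>>& buckets,
                  uint64_t cursor,
                  const std::function<bool(const std::string&)>& matches,
                  std::vector<std::string>* out) {
  constexpr uint64_t kBudget = 100;  // assumed per-call bucket budget
  uint64_t scanned = 0;
  while (cursor < buckets.size() && scanned < kBudget) {
    for (const std::string& key : buckets[cursor])
      if (matches(key))
        out->push_back(key);
    ++cursor;
    ++scanned;
  }
  return cursor == buckets.size() ? 0 : cursor;  // 0 means iteration done
}
```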
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix(server): client pause fix on pipeline squash
allow squashing commands during client pause
move the await on client pause inside InvokeCommand - this way all command
invocation flows read the pause state (sketched below)
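A sketch of the control flow, with std::mutex/condition_variable standing
in for Dragonfly's fiber primitives and a simplified InvokeCommand
signature:

```cpp
#include <condition_variable>
#include <mutex>

// Stand-in pause state; Dragonfly uses fiber primitives instead.
struct PauseState {
  std::mutex mu;
  std::condition_variable cv;
  bool paused = false;

  void WaitWhilePaused() {
    std::unique_lock<std::mutex> lk(mu);
    cv.wait(lk, [this] { return !paused; });
  }
};

PauseState g_pause;

void InvokeCommand(/* command context and args elided */) {
  // Awaiting here means every invocation flow - direct dispatch and
  // pipeline squashing alike - observes CLIENT PAUSE.
  g_pause.WaitWhilePaused();
  // ... dispatch to the command handler ...
}
```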
Signed-off-by: adi_holden <adi@dragonflydb.io>
* fix(stats): Do not crash upon issuing `mem stats`
The reason for the crash is that we can't take a mutex while iterating
connections: the mutex uses a non-Fiber `Await()`, and it also has a
fiber atomic guard. Instead, use the common trick of allocating
per-thread data and aggregating it afterwards (sketched below).
* Use pool size
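A minimal, self-contained sketch of the trick, with std::thread standing
in for the proactor pool and a single counter standing in for the real
per-connection stats:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

struct ThreadStats {
  size_t conn_bytes = 0;
};

size_t CollectStats(size_t pool_size) {
  // One slot per pool thread ("use pool size"), so no mutex is needed.
  std::vector<ThreadStats> per_thread(pool_size);

  std::vector<std::thread> workers;
  for (size_t i = 0; i < pool_size; ++i) {
    workers.emplace_back([&per_thread, i] {
      // Each thread writes only to its own slot.
      per_thread[i].conn_bytes = 0;  // placeholder for real per-conn sums
    });
  }
  for (std::thread& w : workers)
    w.join();

  // Aggregate on the calling thread after all workers are done.
  size_t total = 0;
  for (const ThreadStats& ts : per_thread)
    total += ts.conn_bytes;
  return total;
}
```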
This PR introduces a test case for TLS with `ca_dir`. First, we
did not have any tests for this case. Second, using `ca_dir` requires
calling `c_rehash` on the directory before it is loaded by DF. We
did not exercise this use case anywhere, so when we used `ca_dir` we
thought we had hit a bug, only to find out that the directory must be
rehashed with `c_rehash` before the certificates are loaded. Now both
a test and a use case are properly documented.
* add missing test for ca_dir
* use rehash to properly show how to load ca directories instead of
files
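For reference, a hedged sketch of why the rehash is needed, using plain
OpenSSL calls (DF's actual TLS setup is more involved):

```cpp
#include <openssl/ssl.h>

#include <cstdio>

// Load a CA directory the way DF's `ca_dir` flag does conceptually:
// nullptr CAfile, ca_dir as CApath. OpenSSL then looks certificates up
// by hashed file names (e.g. b4d2f7a1.0), which is exactly what
// `c_rehash <dir>` (or `openssl rehash <dir>`) creates.
bool LoadCaDir(SSL_CTX* ctx, const char* ca_dir) {
  if (SSL_CTX_load_verify_locations(ctx, nullptr, ca_dir) != 1) {
    std::fprintf(stderr, "failed to load CA directory %s\n", ca_dir);
    return false;
  }
  return true;
}
```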