* chore: add oom stats to /metrics
Expose oom/cmd errors when we reject executing a command because we have reached the OOM state (controlled by the oom_deny_ratio flag).
Expose oom/insert errors when we do not insert a new key or do not grow a dashtable (controlled by the table_growth_margin flag).
Move the OOM command check to a place that covers all transaction types, including multi and squashing transactions.
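As a quick illustration, a minimal sketch of how one might poke at the new counters; the flag names come from the text above, while the metric names grepped for and the port serving /metrics are assumptions and may differ:

```bash
# Start dragonfly with a tight memory budget so the OOM path is easy to hit.
./dragonfly --maxmemory=256mb --oom_deny_ratio=0.8 &

# Scrape the HTTP /metrics endpoint (assumed to be served on the main port)
# and look for the new oom counters; the metric names are hypothetical.
curl -s localhost:6379/metrics | grep -i oom
```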
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. Add back the search files to the macOS build (the linker errors are fixed now).
2. Add a default maxmemory argument (if not already present) when launching the dragonfly process in regression tests.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Redis, due to its old Lua engine and a bunch of historical reasons, returns floats as integers from Lua scripts.
This means `eval "return 42.9" 0` returns 42 as a long integer.
Dragonfly supports both integers and floats in its Lua engine, returning a precise "42.9" in the same scenario.
RESP2 does not support a float type, so "42.9" is returned as a bulk string on RESP2 connections. For RESP3, Dragonfly
returns 42.9 as a native RESP3 double primitive.
This PR introduces an optional legacy behavior for Dragonfly, only for the RESP2 protocol. When the `--lua_resp2_legacy_float` flag is passed,
Dragonfly rounds the double value down to the nearest integer and returns it as a native RESP2 long integer.
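A rough sketch of the difference on a RESP2 connection (replies paraphrased, not exact server output):

```bash
# Default behavior: the precise value is returned as a bulk string.
redis-cli eval "return 42.9" 0        # -> "42.9"

# With the legacy flag, RESP2 clients get the value rounded down to a long
# integer, matching Redis behavior.
./dragonfly --lua_resp2_legacy_float &
redis-cli eval "return 42.9" 0        # -> (integer) 42
```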
Fixes #2664
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* WIP: `cluster_mgr.py` to work with remote targets
* Documentation
* No admin port
* Support move/migrate across different hostnames
* Fix migrate bug
* Fix typo in --help
* Fix test
* self.update_id()
It's only a code move, without functional changes.
This is in preparation for implementing the same path functionality
for flexbuffers objects.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: Del and NUMINCRBY use json::Path
Also, fix various protocol bugs where we sent a simple string
instead of a bulk string.
Fixed a typo in path.cc that led to a data race bug.
Finally, flip the flag in regression tests to start covering the json::Path code,
and add test coverage for the data race bug.
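A hypothetical session showing the reply-type fix (values are illustrative only):

```bash
redis-cli JSON.SET k $ '{"a": 1}'
# NUMINCRBY now replies with a bulk string, e.g. "3", instead of a simple string.
redis-cli JSON.NUMINCRBY k $.a 2
```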
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: switch to self-built flatbuffers
This should solve the linker errors in the macOS build.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(connection): Support pipelining with Memcached
Adds support for pipelining to the Memcached protocol and enhances the Memcached pytests.
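For illustration, a minimal sketch of a pipelined exchange over the Memcached text protocol, where both commands are written on one connection before any reply is read (the port flag and nc options are assumptions):

```bash
# Assumes dragonfly was started with --memcached_port=11211.
# "set" stores a 2-byte value, "get" reads it back; both are sent together.
printf 'set k1 0 0 2\r\nv1\r\nget k1\r\n' | nc -w 1 localhost 11211
```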
---------
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
feat: a test with flat buffers
Also, add an experimental flag `--experimental_flat_json` that allows writing JSON objects as flat strings using
flexbuffers.
The experiment shows that `debug populate 100000 a 10 type json elements 30`
uses almost 3 times less memory than with native jsoncons objects.
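A minimal sketch of how one might reproduce the comparison, assuming a standard INFO MEMORY output:

```bash
# With flat JSON objects (flexbuffers encoding).
./dragonfly --experimental_flat_json &
redis-cli debug populate 100000 a 10 type json elements 30
redis-cli info memory | grep used_memory_human

# Repeat without the flag to compare against native jsoncons objects.
```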
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Also update the GitHub Actions versions to Node 20.
This change allows Dragonfly to be built on macOS.
However, we still have multiple failing tests on macOS.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* upload only failed test logs
* remove printing log names for passed tests
* print slow tests with --duration
* separate regression and unit logs for CI workflow
* feat(pytest): More types for seeder
Add more types to the seeder and refactor replication test
---------
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
Example usage:
```bash
# Create a cluster with 2 masters, 1 replica each
./cluster_mgr.py --action=create --replicas_per_master=1 --num_master=2
# Move (ownership only, no data migration) the second half of the slots to the first node, so it owns all slots
./cluster_mgr.py --action=move --target_port=7001 --slot_start=8192 --slot_end=16383
# Fill with data, e.g. by running memtier
# Migrate all slots to the second node. One could measure how long this step takes.
./cluster_mgr.py --action=migrate --target_port=7002 --slot_start=0 --slot_end=16383
```
The bug: a crash when starting a replica while saving.
The problem: the snapshot class accessed the wrong allocator on destruction, because it was destructed outside the shard's thread.
The fix: destroy the snapshot on the correct (shard) thread when the snapshot finishes.
Signed-off-by: adi_holden <adi@dragonflydb.io>