* fix bitop creating the dst key if the result is empty
* fix replicating dst with the wrong type
* make bitop a blind update (similar to set command)
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
* chore: Forbid replicating a replica
We do not support connecting a replica to a replica, but before this PR
we allowed doing so. This PR disables that behavior.
Fixes #3679
* `replicaof_mu_`
* chore: some renames + fix a typo in RETURN_ON_BAD_STATUS
Renames in transaction.h - no functional changes.
Fix a typo in error.h following #3758
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
fix 1: in the multi-command squasher, the error message was not set, so it was printed to the log only on EXEC rather than on the relevant command. Fixed by setting the last error in CapturingReplyBuilder::SendError.
fix 2: cached error replies were not cleared before the command was invoked.
---------
Signed-off-by: adi_holden <adi@dragonflydb.io>
Co-authored-by: kostas <kostas@dragonflydb.io>
* fix: improve BreakStalledFlowsInShard heuristic
Before this change, we wrote whatever record chunks we pulled from the channel in a single call.
This can be problematic for 1GB chunks, for example, which might take 10 seconds to write.
We recently added a replication breaker on the master side that breaks the full sync after
a predefined threshold has passed, 10 seconds by default. To improve the robustness of this
breaker, we now write chunks of up to 1MB and update last_write_time_ns_ more frequently.
Also, we added more logs to make replication delays on both sides more visible,
including logs for when the master breaks the replication.
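A rough sketch of the chunked write path (the function, sink, and clock names are illustrative stand-ins, not the actual members):
```cpp
#include <algorithm>
#include <cstdint>
#include <string_view>

// Hypothetical stand-ins for the real sink and clock:
void Write(std::string_view chunk);   // blocking socket write
int64_t CurrentTimeNanos();           // monotonic clock
int64_t last_write_time_ns_ = 0;      // read by the stall detector

constexpr size_t kFlushChunkBytes = 1 << 20;  // 1MB per write call

void FlushRecord(std::string_view record) {
  while (!record.empty()) {
    size_t n = std::min(record.size(), kFlushChunkBytes);
    Write(record.substr(0, n));                // blocks for ~10ms, not ~10s
    last_write_time_ns_ = CurrentTimeNanos();  // progress visible per chunk
    record.remove_prefix(n);
  }
}
```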
Unfortunately, this did not suffice to make BreakStalledFlowsInShard more robust, because the
problem moved to the replica side, which may take 20+ seconds to parse huge values.
Therefore, I increased the threshold for breaking the replication to 30s.
Finally, instrument the GetMetrics call, as it sometimes takes more than 1 second.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: introduce a Clone function for the dense set
We use a state machine to prefetch data in batches.
After this change, the hot spots are predominantly inside ObjectClone and
Hash methods.
All in all, benchmarks show a ~45% CPU reduction:
```
BM_Clone/elements:32000 1517322 ns 1517338 ns 2772
BM_Fill/elements:32000 841087 ns 841097 ns 4900
```
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Before this change, whenever Dragonfly was paused (either by a user or by
internal processes like takeover or slot migration finalization),
migrations and replications were also paused.
This could cause timing issues, which sometimes result in migration
failures. Specifically, when 2 nodes have migrations from one to the
other **in parallel** (A->B and B->A), the `Pause()` that happens on A
(which happens because it's a source node) will stop it from processing
incoming traffic from B (incoming because it is also a target node).
If timed correctly, it will be locked until it times out, and so the
migration will fail.
The fix is to prevent replications and migrations from adhering to
`Pause()`s, which I think should not have happened in the first place
because they should use the admin port anyway.
Fixes #3319
In some cases, this map can grow indefinitely.
This change makes it less detailed by making sure that the number of possible keys is bounded.
It can still provide a good summary of the nature of EXEC transactions.
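A minimal sketch of the bounding idea (the key type, cap, and helper name are illustrative):
```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Once the map holds kMaxKeys distinct shapes, new shapes fold into a
// catch-all bucket, so the map size stays bounded while totals stay exact.
constexpr size_t kMaxKeys = 1024;  // illustrative cap

void RecordExecShape(std::unordered_map<std::string, uint64_t>& stats,
                     std::string shape) {
  if (stats.size() >= kMaxKeys && !stats.count(shape))
    shape = "other";  // unseen shapes aggregate here
  ++stats[shape];
}
```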
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
chore: deprecate RecordsPopper and serialize channel records during push
The records channel is redundant for DFS/replication because we have a single producer/consumer
scenario, with both running on the same thread. Unfortunately, we still need it for RDB snapshotting.
For non-RDB cases we could just pass an io sink to the snapshot producer,
so that it would use it directly instead of the StringFile inside FlushChannelRecord.
This would reduce memory usage, eliminate yet another memory copy, and generally make everything simpler.
For that to work, we must serialize the order of FlushChannelRecord calls, and that is what this
PR implements. Also fixes #3658.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: cosmetic changes around Snapshot functions
Some renames and added comments. Refactored StartIncremental into a separate function
without any functional changes.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: fix comments
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Also, remove the dependency on the bloated absl::TimeZone monstrosity, which was required by
the absl::FormatTime API even though we do not actually format a timezone.
When absl::LocalTimeZone is accessed, it allocates hundreds of thousands of bytes
for each shard thread (maybe due to a lack of thread safety during lazy initialization).
In the end, strftime does a great job without any shenanigans.
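For reference, a minimal strftime-based formatter (the format string is illustrative):
```cpp
#include <ctime>

// Formats a unix timestamp with plain libc: no absl::TimeZone object and no
// lazily-initialized per-thread allocations. localtime_r is the thread-safe
// variant.
void FormatUnixTime(time_t ts, char* buf, size_t buf_len) {
  struct tm tm_buf;
  localtime_r(&ts, &tm_buf);
  strftime(buf, buf_len, "%Y-%m-%dT%H:%M:%S", &tm_buf);
}
```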
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Fixes #3636
The problem was that instead of implementing the GT operator, we implemented
!LT, which is the GE operator. As a result, the iterators inside the sort algorithm reached
out of bounds.
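A minimal illustration of why this is undefined behavior rather than just a wrong order:
```cpp
#include <algorithm>
#include <vector>

// std::sort requires a strict weak ordering: comp(a, a) must be false.
// !(a < b) is >=, which returns true for equal elements, so the internal
// partition loop can walk past the end of the range.
int main() {
  std::vector<int> v(64, 7);  // many equal keys
  auto ge = [](int a, int b) { return !(a < b); };  // buggy: this is >=
  auto gt = [](int a, int b) { return a > b; };     // correct: strict GT
  // std::sort(v.begin(), v.end(), ge);  // undefined behavior, may crash
  std::sort(v.begin(), v.end(), gt);     // well-defined
  (void)ge;
  return 0;
}
```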
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix: crash when passing empty arguments
Fix the case where we pass an empty argument, which is then parsed as an
empty string view with a null pointer. The null pointer is not handled correctly
by our low-level C code, hence switch to using `""sv` for that.
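An illustration of the difference:
```cpp
#include <cassert>
#include <string_view>

using namespace std::string_view_literals;

int main() {
  std::string_view from_null{};          // size() == 0, data() == nullptr
  std::string_view from_literal = ""sv;  // size() == 0, data() != nullptr
  assert(from_null.data() == nullptr);
  assert(from_literal.data() != nullptr);  // safe to hand to C code
  return 0;
}
```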
* chore: add more list asserts + improve test_hypothesis
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. The offset value can be negative; in that case we should return an empty array.
2. Fix the edge cases of inf*0 and -inf + inf, so that they result in 0 rather than NaN (similarly to Valkey).
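A sketch of the special-casing (the helpers are illustrative, not the actual Dragonfly functions; the real logic lives in the score aggregation):
```cpp
#include <cmath>

// IEEE-754 yields NaN for inf*0 and for -inf + inf; the aggregation
// special-cases both so scores stay well-defined, as Valkey does.
double MulWeight(double score, double weight) {
  if (std::isinf(score) && weight == 0) return 0;  // inf * 0 -> 0
  return score * weight;
}

double AddScores(double a, double b) {
  if (std::isinf(a) && std::isinf(b) && std::signbit(a) != std::signbit(b))
    return 0;  // -inf + inf -> 0 instead of NaN
  return a + b;
}
```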
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
For legacy mode:
1. For mutate commands, an empty result should throw an error.
2. For read commands, nil is returned if the path was not found; but if it matched
only elements of the wrong type, an error is thrown.
For non-legacy mode, OBJLEN should throw an error for a non-existing key.
Simplified JsonCallbackResult a bit and made sure more fakeredis tests are passing.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat: add slave_repl_offset to the replication section.
In Valkey, slave_repl_offset denotes the replication offset on the replica side during the stable sync phase.
During the full sync phase it appears with the value 0.
In Dragonfly this field appears only after full sync has completed, thus it allows
checking whether Dragonfly has reached the stable sync phase. The value of this field describes the cumulative progress
of all the replication flows and does not directly correspond to master-side metrics.
In addition, this PR fixes a bug in the wait_available_async() function in our replication tests.
This function is intended to wait until a replica reaches a stable state, and it did so by sending PINGs until they no longer
respond with a LOADING error, the assumption being that the replica is then already in the full sync state.
However, it can happen that master_link_status is "up" but the replica has not reached the full sync state yet, and the PING succeeds
just because wait_available_async() was called before full sync started. The whole approach of polling the state is fragile.
Now we use `slave_repl_offset` explicitly to see whether the replica has reached the stable state.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: simplify wait_available_async
* chore: comments
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: enable experimental_new_io by default.
It has been running for weeks with the flag on, so we enable it for the community edition as well.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
Co-authored-by: Vladislav Oleshko <vlad@dragonflydb.io>
* fix: xreadgroup replies as a map for RESP3
Moreover, it returns data for all the streams, irrespective of whether they have results or not
(unlike XREAD).
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix: properly handle xpending with 0 results
Also reject ENTRIESREAD instead of silently accepting it.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix: JSON.STRAPPEND
JSON.STRAPPEND was completely broken.
First, it accepts exactly 3 arguments, i.e. a single value to append.
Secondly, the value must be a JSON string, not a regular string, meaning it must be in double quotes.
So, before we parsed: `JSON.STRAPPEND key $.field bar` and now we parse:
`JSON.STRAPPEND key $.field "bar"`
In addition, fixed the behavior of JSON.STRLEN to return a "no such key" error in case the
JSON key does not exist and a path is specified.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Stop supporting DflyVersion::VER0, which is more than a year old.
In addition, rename Metrics fields to make them clearer,
apply general improvements, and fix the reconnect metric.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: improve compatibility of set and ping commands
SMISMEMBER should return an array of longs and not an array of strings.
PING in subscribe mode returns an array for RESP2.
Also, fix double rounding for legacy float mode.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. Fix corner cases around non-existing keys.
2. Fix the matching logic for the `*` glob, as well as the `''` glob.
3. Improve SORT option parsing.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. STRLEN should return 0 for non-existing keys.
2. Reject SET commands that specify both EX and PX options.
3. Prevent overflow of an expiry time that is too large.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: allow limiting pipelining queue by length
We already allow limiting the queue by memory usage, but it also makes sense to limit it by depth,
so that in extreme cases we provide backpressure to client connections. Otherwise, if we read and parse everything,
clients have no sense of how loaded the connection is on the server side.
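A sketch of the dual-limit check (the struct and names are illustrative, not the actual flags):
```cpp
#include <cstddef>

// Stop pulling requests from the socket while the dispatch queue is too
// deep or too heavy; the unread bytes then exert TCP backpressure on the
// client.
struct QueueLimits {
  size_t max_len;    // new: limit by number of queued commands
  size_t max_bytes;  // existing: limit by memory usage
};

bool ShouldPauseReading(size_t queue_len, size_t queue_bytes,
                        const QueueLimits& lim) {
  return queue_len >= lim.max_len || queue_bytes >= lim.max_bytes;
}
```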
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. Fix a bug of incorrectly computing the weight index in OpInter.
2. Remove code duplication by factoring the parsing code out of OpInter and OpUnion into PrepareWeightedSets.
3. Implement a TODO and support a union of zsets and sets, which was already implemented for ZINTER.
Fixes #3553
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix: disable ThreadLocalMutex when big value serialization is off
* refactor: address comments
---------
Co-authored-by: Ubuntu <ubuntu@ip-172-31-28-29.ec2.internal>
Co-authored-by: Borys <borys@dragonflydb.io>
* feat(cluster): Allow appending RDB to existing store
The goal of this PR is to support loading multiple RDB files into a single server, for example when migrating from a Valkey cluster to Dragonfly with a different number of nodes.
It makes the following changes:
* Removes `DEBUG LOAD`, as we already have `DFLY LOAD`
* Adds `APPEND` option to `DFLY LOAD` (i.e. `DFLY LOAD <filename> APPEND`) that loads an RDB without first flushing the data store, overriding existing keys
* Does not load keys belonging to unowned slots, if in cluster mode
Fixes #2840
Background
We tried to be compatible with Valkey in their support of Lua flags, but we generally failed:
* We are not really compatible with Valkey, because our flags are different (we reject unknown flags, and Valkey's flags are unknown to us).
* The `#!lua` syntax doesn't work with Lua (`#` is not a comment character), so scripts written for older versions of Redis can't be used with Dragonfly (i.e. they can't add Dragonfly flags and remain compatible with older Redis versions).
Changes
Instead of the previous syntax:
#!lua flags=allow-undeclared-keys,disable-atomicity
We now use this syntax:
--!df flags=allow-undeclared-keys,disable-atomicity
It is not backwards compatible (with older versions of Dragonfly), but it should be very easy to adapt to, and doesn't suffer from the disadvantages above.
Related to #3512
1. Before: when the 15% margin of the total size became larger than 256MB, we grew the file by another 256MB even though
we still had 256MB unused. Now this is fixed.
2. We attempted to grow the file even after it reached its size limit, and then returned an error.
It is wrong to treat the "reached the limit" state as an error; it is just a logical constraint,
so now we do not attempt to grow and handle this state quietly.
3. There was a segfault bug in the extent tree for the case where an existing segment and the new one need to be united.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: reduce pipelining latency by reusing existing shard fibers
To prove the benefits, run `./dfly_bench --pipeline=50 -n 20000 --ratio 0:1 --qps=0 --key_maximum=1`
Before: the average pipelining latency was 10ms.
After: the average pipelining latency is 5ms.
Average latency is computed as pipelined_latency_usec / total_pipelined_squashed_commands.
Also, improved the counting of squashed commands to count only the ones actually squashed.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore(generic_family): Fix bad data format error in the RESTORE command
Signed-off-by: Stepan Bagritsevich <bagr.stepan@gmail.com>
* chore(generic_family): Remove the error reply for OpStatus::WRONG_TYPE in the RESTORE command, since it is no longer in use
Signed-off-by: Stepan Bagritsevich <bagr.stepan@gmail.com>
---------
Signed-off-by: Stepan Bagritsevich <bagr.stepan@gmail.com>
There are some problematic flows. First, we did not handle deletions, so all sorts of consistency issues could arise while calling DbSlice::Traverse() and DbSlice::Del(). Second, we did not handle FlushAll (same as before: Traverse() preempts and FlushAll() kicks in). Third, we did not handle expirations.
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
**Background**
In v1.21.0 we introduced support for `--announce_ip` for replicas to
announce their public IP addresses.
Like Valkey, this uses `REPLCONF IP-ADDRESS` to announce their IP
address.
**The issue**
Older Dragonfly releases (<1.21) did not support this feature. The
master side simply returned an error for such `REPLCONF` attempts;
however, the replica code then failed the replication, making the
versions incompatible.
**The fix**
The fix is simple: just log an error if the master did not respect
`REPLCONF IP-ADDRESS`. We can make this non-optional again in the
future.
However, in addition, I added a regression test to make sure we are
backwards compatible with v1.19.2. We'll bump this up every once in a
while.
* feat: Allow pre-declaring Lua SHAs to run with undeclared keys
By using `--lua_undeclared_keys_shas=SHA,SHA,SHA` users can now specify
which scripts should run globally (undeclared keys) without explicit
support from the scripts themselves.
Fixes #2442
* chore: add timeout for replication sockets
The master will stop the replication flow if writes cannot progress for more than K milliseconds.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Signed-off-by: Roman Gershman <romange@gmail.com>
Co-authored-by: Shahar Mike <chakaz@users.noreply.github.com>
* chore: change how we track memory_budget during evictions
We compared memory_budget against 0 before inserting a new item in DbSlice,
and retired cool pages if we were low on memory.
The problem: when we decide whether to allow growing a table, we estimate the possible object size increase due to the future table growth,
and the memory check described above was not consistent with the actual logic that rejected the insertion.
Moreover, the interaction of memory_budget tracking with EvictionPolicy was over-complicated: we passed the memory_budget counter to the evp object
and then read it back, even though evp did not track the memory impact of deletions during object evictions.
Now we remove the responsibility of updating memory_budget_ from evp, so it is solely updated by DbSlice.
We also update memory_budget_ during deletions, and when we pass it to evp, we add the cool memory size as a potential memory resource to avoid
rejections in case we have lots of cool memory.
Fixes #3456
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The quicklist.* files are mostly equal to the versions at commit 0fc43edc6.
Also, removed some unused code.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Before this change, PAUSE paused the reconnection reconciler flow;
now it also stops the ongoing full sync replication if one exists.
In addition, this PR applies some clean-ups and removes redundant code.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: simplify master replication cancelation interface
Before this change, CancelReplication did too many things; moreover,
we had StopReplication, which did the same.
This PR moves CancelReplication under the ReplicaInfo struct
and reduces code duplication around this change.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* Update src/server/dflycmd.cc
Co-authored-by: Shahar Mike <chakaz@users.noreply.github.com>
Signed-off-by: Roman Gershman <romange@gmail.com>
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Signed-off-by: Roman Gershman <romange@gmail.com>
Co-authored-by: Shahar Mike <chakaz@users.noreply.github.com>
* chore: reorganize EngineShard::Heartbeat
1. Simplify CacheStats by using accessors provided directly by DbSlice.
2. Separate the eviction logic from tiering, as tiering can run on a replica.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: improve replication locks
Allow non-exclusive, read-only access to the Dfly::ReplicaInfo structure.
The most important change is in DflyCmd::CancelReplication, which previously
locked the ReplicaInfo mutex and then continued to lock the global mutex.
This is dangerous because most operations lock them in the opposite order.
Also, rename the ambiguous GetReplicaInfo accessors to clearer names.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: comments
* chore: comments
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat: Support `replica-announce-ip`/`port`
Before this PR, we only supported `cluster_announce_ip`.
It's basically the same feature, but used for cluster announcements
instead of replication.
This PR adds support for `replica-announce-ip` and
`replica-announce-port`, which can be set via new flags `--announce_ip=`
and `--announce_port=`. These flags apply to both cluster and replica
announcements.
Tested by running Sentinel and making sure that it is able to connect to the
announced ip+port, while it cannot connect to a false or unavailable
announced ip+port.
Note: this PR deprecates `--cluster_announce_ip`, but continues to
support it. We will remove it in a future version.
Fixes #3380
* fix failing test
* destructure
Now unit tests run the same Heartbeat fiber as in production.
The whole feature was redundant; with just a few explicit settings of maxmemory_limit
I succeeded in making all unit tests pass.
In addition, this change allows passing a global handler that is called by the heartbeat from a single thread.
This is not used yet; it is preparation for the next PR, which will break hung replication connections on a master.
Finally, this change includes some non-functional clean-ups and warning fixes to improve code quality.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Before: sending 200K items required more than 12K send calls.
Now it requires fewer than 2K calls. Latency also went down, though not by 6x.
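The gist, as an illustrative sketch (the send interface and threshold are hypothetical):
```cpp
#include <string>
#include <string_view>

// Coalesce small replies into one buffer and flush with a single send once
// enough accumulates, instead of one send call per item.
void Send(std::string_view buf);  // hypothetical blocking socket send

constexpr size_t kBatchBytes = 8192;  // illustrative flush threshold

void QueueItem(std::string& pending, std::string_view item) {
  pending.append(item);
  if (pending.size() >= kBatchBytes) {
    Send(pending);  // one syscall covers many items
    pending.clear();
  }
}
```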
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: expose metric that shows how many task submitters are blocked
This should help us in identifying deadlocks quickly.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat: Support non-root paths for json.merge
Pass the path argument and rewrite the JSON.MERGE code similarly to OpToggle
and other mutating functions. Currently this works only with --experimental_flat_json=false.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: comments
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. Add background offloading stats.
2. Remove the direct_fd override; helio has already been updated with default=false, so it is not needed anymore.
3. Remove the redundant tiered_storage_memory_margin flag.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
JSON.MERGE would throw an exception when the JSON object did not contain the element to replace, because the RecursiveMerge functions used &dest->at(k_v.key()), which threw the exception. Remove RecursiveMerge completely and use the implementation from the jsoncons library.
* add test
* replace RecursiveMerge with mergepatch::apply_merge_patch
* add exception handling for that flow
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
DashTable::Traverse is error-prone when the passed callback preempts, because the segment might change. This is problematic, as we need atomicity while traversing segments with preemption. The fix is to add Traverse in DbSlice and protect the traversal via ThreadLocalMutex.
* add ConditionFlag to DbSlice
* add Traverse in DbSlice and protect it with the ConditionFlag
* remove condition flag from snapshot
* remove condition flag from streamer
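A sketch of the guarded traversal (the signature and types are hypothetical, not the exact DbSlice code):
```cpp
#include <mutex>
#include <utility>

// Serializes preemptive traversals: a fiber that yields mid-segment while
// serializing a big value cannot interleave with another traversal or a
// FlushAll mutating the same table.
template <typename Table, typename Mutex, typename Cb>
auto GuardedTraverse(Table& table, Mutex& local_mu,
                     typename Table::Cursor cursor, Cb&& cb) {
  std::unique_lock lk(local_mu);  // ThreadLocalMutex: safe to hold across preemption
  return table.Traverse(cursor, std::forward<Cb>(cb));
}
```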
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
* feat: stabilize non-coordinated omission mode
1. Our latency/RPS computations were off because we started measuring before the drivers
started running. Now the Run/Start phases are separated, so the start time is measured more precisely
(after the start phase).
2. Introduced per-connection progress. One of my discoveries is that driver
connections progress at different paces when running in coordinated omission mode;
the difference can reach 5x. Now we measure and output the fastest/slowest progress.
3. Coordinated omission is great when the server under test is able to sustain the required RPS,
but if the actual RPS is lower than the requested one, the final latencies become infinitely high.
We fix this by introducing a self-adjusting sleep interval: if the actual RPS is lower,
we increase the interval to be closer to the actual RPS (see the sketch below).
Show p99 latency and maximum pending requests per connection.
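An illustrative sketch of the self-adjustment from item 3 (the names are not the real dfly_bench ones):
```cpp
#include <algorithm>
#include <cstdint>

// Compare the interval implied by actual progress with the requested one,
// and drift toward the actual rate when the server falls behind, so latency
// is not inflated by requests that could never have been sent on schedule.
int64_t AdjustedIntervalUs(int64_t requested_interval_us, uint64_t requests_sent,
                           int64_t elapsed_us) {
  int64_t actual_interval =
      elapsed_us / int64_t(std::max<uint64_t>(requests_sent, 1));
  // Never sleep less than the pace the server actually sustains.
  return std::max(requested_interval_us, actual_interval);
}
```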
Co-authored-by: Shahar Mike <chakaz@users.noreply.github.com>
Signed-off-by: Roman Gershman <romange@gmail.com>
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Signed-off-by: Roman Gershman <romange@gmail.com>
Co-authored-by: Shahar Mike <chakaz@users.noreply.github.com>
* chore: Support setting the value of `replica-priority`
This PR adds a small refactor to the way we set and get config names
which have dashes (`-`) and underscores (`_`).
Until now, words were separated by underscores because this is how our
flags library (absl) works. However, this is incompatible with Valkey,
which uses dashes as a word separator.
Once merged, we will support both underscores and dashes in config
names, but will only return the name with dashes. **This is a behavior
change**.
We're doing this in order to be compatible with `replica-priority` and
possibly other config names that Valkey uses.
* Flag restore
* normalize to '_'
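A sketch of the normalization (the helper names are illustrative):
```cpp
#include <algorithm>
#include <string>

// Accept both separators on input, store with '_' (the absl flag name),
// and report with '-' (the Valkey-style name).
std::string NormalizeConfigName(std::string name) {  // lookup key
  std::replace(name.begin(), name.end(), '-', '_');
  return name;
}

std::string DisplayConfigName(std::string name) {  // what CONFIG GET shows
  std::replace(name.begin(), name.end(), '_', '-');
  return name;
}
```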
* fix: reenable macos builds
Also, add a debug function that prints local state if deadlocks occur.
* fix: free cold memory for non-cache mode as well
* chore: disable FreeMemWithEvictionStep again
Because it heavily affects performance when performing evictions.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. Fully support tiered_experimental_cooling for all operations.
2. Offset cool storage usage when computing memory pressure situations in Heartbeat.
3. Introduce real-time entry counting per db_slice and provide a DCHECK to verify it against the old approach.
Later we will switch to real-time entry and free-memory computations when computing bytes per object,
and remove the old approach in CacheStats().
4. Show the hit rate during a dfly_bench load test run.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
None does not exist in Valkey, and its entry was missing from the indexes we use to map categories to commands, leading to an out-of-bounds access and causing a segfault.
* remove none from acl categories
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
1. Use intrusive::list for the CoolQueue.
2. Make sure that we ignore cool memory usage when computing the average object size, to
prevent evictions during dashtable growth attempts.
3. Remove items from the cool storage before evicting them from the dash table.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Inline transactions do not acquire any locks and therefore they should not preempt. This is no longer true when db_slice has registered callbacks.
* disable inline transactions when db_slice has registered callbacks
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
For big value serialization, we must support preemption when db_slice::RegisterOnChange is called, to avoid UB when a code path iterating over change_cb_ preempts because it serializes a big value. As this is problematic and can lead to data inconsistencies, I replaced the std::vector with std::list and bounded the iteration over change_cb_ on paths that preempt.
* replace std::vector with std::list for change_cb_
* bound iteration of change_cb_ on paths that preempt
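A sketch of the bounded loop (the types are simplified stand-ins, not the actual DbSlice members):
```cpp
#include <cstdint>
#include <functional>
#include <iterator>
#include <list>
#include <utility>

// std::list iterators stay valid when new callbacks are appended, and fixing
// the tail before the loop bounds it to callbacks registered before the
// serialization started.
using ChangeCallback = std::function<void(uint64_t /*version*/)>;

void RunChangeCallbacks(std::list<std::pair<uint64_t, ChangeCallback>>& change_cb,
                        uint64_t version) {
  if (change_cb.empty()) return;
  auto last = std::prev(change_cb.end());  // bound captured before any preemption
  for (auto it = change_cb.begin();; ++it) {
    it->second(version);                   // may preempt while serializing a big value
    if (it == last) break;
  }
}
```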
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
We might preempt when we serialize a big value, and the code in the journal was protected by an atomic guard, which triggered a failed check.
* remove fiber guard from non atomic section
* move LocalBlockingCounter to common
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
* chore: reenable evictions upon insertion to avoid OOM rejections
Before: when running Dragonfly with --cache_mode we could get OOM rejections
even though the eviction policy allowed evicting items to free memory.
Ideally, Dragonfly in cache mode should not respond with an OOM error.
This PR reuses the same eviction step we have in Heartbeat and conditionally applies it
during insertion. In my test the OOM errors went from 500K to 0 and the server
still respected the memory limit.
Also, remove an old heuristic that has never been used.
Test:
./dfly_bench --key_prefix=bar: -d 1024 --ratio=1:0 --qps=200 -n 3000
./dragonfly --dbfilename= --proactor_threads=2 --maxmemory=600M --cache_mode
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix: Fix `test_take_over_seeder`
There are a few issues with the test:
1. Not using the admin port, which could cause pause to deadlock
2. Not waiting for some of the `task`s (although that won't cause a
failure)
But also in the product code:
1. We used to `std::move()` the same pointer multiple times
2. We assigned to the same status object from multiple threads
Hopefully this fixes the test. It used to fail every ~100 attempts on my
machine; now it has passed more than 1,000 runs.
* add comments
* remove shard_ptr param
* chore: introduce a cool queue that gradually retires cool items
This PR introduces a new state in which the offloaded value is not freed from memory but instead stays
in the cool queue.
Upon a read, we convert the cool value back into the hot table and delete it from storage.
When we are low on memory, we retire the oldest cool values until we are back above the threshold.
The PR does not fully finish the feature, but it is workable enough to start (load)testing.
Missing:
a) Handle Modify operations
b) Retire cool items in more cases where we are low on memory. Specifically, refrain from evictions as long as cool items exist.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* default serialization_max_chunk_size to 10 MB
* add a test for big values
* small rename of an enum to conform to the style guide
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
* chore: simplify computation of used_mem_current
Before, each thread updated its own variable, and then
the global used_mem_current was updated by summing the used memory from each thread.
Now each thread updates used_mem_current directly. The code is simpler and also provides more precise
results more frequently.
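The scheme boils down to a single shared atomic (the variable name is from this change; the helper is illustrative):
```cpp
#include <atomic>
#include <cstdint>

// One global atomic that every shard thread adjusts directly with a relaxed
// delta, instead of per-thread counters summed by a separate pass.
std::atomic<int64_t> used_mem_current{0};

void AccountMemory(int64_t delta_bytes) {  // illustrative helper name
  used_mem_current.fetch_add(delta_bytes, std::memory_order_relaxed);
}
```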
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Also, add the corresponding API to the compact object.
Now external objects can be in two states: Cool and Offloaded.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>