* chore: Add `--allocator_tracker` for default tracking
Before, in order to use the allocation tracker, one had to issue a `MEMORY
TRACK` command. This flag is equivalent to that command, but allows starting
Dragonfly with certain ranges already tracked, without issuing a command.
While here, fix a bug: `absl::InlinedVector<>` appears to have a bug in
the implementation of `max_size()`, so in practice we did not limit
the number of trackers. I switched to using `capacity()` instead, which I
tested and which works as expected.
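For illustration, a minimal sketch of the workaround, with hypothetical names
(the real tracker code differs): checking `size()` against `capacity()` before
`push_back` keeps the vector at its inlined size, whereas `max_size()` reported
an effectively unbounded limit.

```
#include <cstddef>

#include <absl/container/inlined_vector.h>

struct TrackingRange {
  size_t start, end;
};

// Cap the number of trackers by the vector's current capacity() rather than
// max_size(). Refusing to push past capacity() keeps the vector at its
// inlined storage (4 slots here), so the limit actually holds.
bool AddTrackingRange(absl::InlinedVector<TrackingRange, 4>* trackers,
                      TrackingRange range) {
  if (trackers->size() >= trackers->capacity())
    return false;  // tracker limit reached
  trackers->push_back(range);
  return true;
}
```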
Notes:
1. Currently the flag always adds 100% "sampling"; we can extend that in
the future if need be
2. I added the flag in `dfly_main.cc` with custom initialization,
because it's low level and I couldn't get it working reasonably with
changes only to `allocation_tracker.cc` (a sketch of the flag definition
follows)
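A hedged sketch of what defining such a flag with Abseil looks like; the help
text is hypothetical, and the actual custom initialization in `dfly_main.cc`
is more involved:

```
#include <string>

#include "absl/flags/flag.h"

// Hypothetical help text; the real flag wires the parsed ranges into the
// allocation tracker during startup, before regular command processing.
ABSL_FLAG(std::string, allocator_tracker, "",
          "Memory ranges to track on startup, equivalent to issuing "
          "MEMORY TRACK with 100% sampling");
```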
* fixes
The problem: we used file writes in non-direct mode when writing snapshots in epoll mode.
As a result, lots of data was cached in OS memory. Then, during the rename operation,
when we rename "xxx.dfs.tmp" into "xxx.dfs", the OS flushes the file caches and the thread
is stuck in the rename system call for a long time.
The fix: use DIRECT mode and avoid caching the data in OS caches at all.
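A minimal sketch of the idea, not Dragonfly's actual I/O layer: opening with
`O_DIRECT` bypasses the page cache, at the cost of requiring the buffer,
offset, and length to be aligned (typically to 4096 bytes).

```
#include <fcntl.h>
#include <unistd.h>

#include <cstdlib>
#include <cstring>

// Writes `data` through O_DIRECT so no page-cache copy accumulates that a
// later rename()/flush would have to drain.
int WriteSnapshotDirect(const char* path, const char* data, size_t len) {
  int fd = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);
  if (fd < 0) return -1;

  constexpr size_t kAlign = 4096;
  size_t padded = (len + kAlign - 1) / kAlign * kAlign;
  void* buf = nullptr;
  if (posix_memalign(&buf, kAlign, padded) != 0) {
    close(fd);
    return -1;
  }
  memset(buf, 0, padded);
  memcpy(buf, data, len);

  ssize_t written = write(fd, buf, padded);  // length must be block-aligned
  ftruncate(fd, len);  // trim the zero padding back to the logical length
  free(buf);
  close(fd);
  return written == static_cast<ssize_t>(padded) ? 0 : -1;
}
```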
Fixes #3895
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
BITPOS returns 0 for non-existent keys, in accordance with Redis's
implementation.
BITPOS allows only 0 and 1 as the bit mode argument.
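A small sketch of the second rule, with a hypothetical helper (Dragonfly's
actual argument parsing differs):

```
#include <charconv>
#include <optional>
#include <string_view>

// Accept only "0" or "1" as the bit argument; anything else is an error.
std::optional<int> ParseBitArg(std::string_view arg) {
  int bit = -1;
  auto [ptr, ec] = std::from_chars(arg.data(), arg.data() + arg.size(), bit);
  if (ec != std::errc() || ptr != arg.data() + arg.size())
    return std::nullopt;  // not an integer
  if (bit != 0 && bit != 1)
    return std::nullopt;  // BITPOS accepts only 0 and 1
  return bit;
}
```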
Signed-off-by: Denis K <kalantaevskii@gmail.com>
Use an intrusive queue that allows batching of scheduling calls instead of handling each call separately.
This optimization improves latency and throughput by 3-5%.
In addition, we expose batching statistics in the INFO transaction block.
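A minimal sketch of the batching idea, with hypothetical names (the real queue
lives in the transaction scheduling code): producers enqueue intrusive nodes,
and the consumer drains the whole batch with a single wakeup.

```
#include <mutex>

// An intrusive node: the queue links callers' own objects, so enqueueing
// allocates nothing.
struct SchedNode {
  SchedNode* next = nullptr;
  // per-call payload would live here
};

class SchedQueue {
 public:
  // Returns true if the queue was empty, i.e. the consumer needs a wakeup;
  // subsequent pushes ride along with the pending batch.
  bool Push(SchedNode* n) {
    std::lock_guard<std::mutex> lk(mu_);
    n->next = nullptr;
    bool was_empty = (head_ == nullptr);
    if (was_empty)
      head_ = n;
    else
      tail_->next = n;
    tail_ = n;
    return was_empty;
  }

  // Detaches the whole batch in O(1); the consumer walks it without the lock.
  SchedNode* PopAll() {
    std::lock_guard<std::mutex> lk(mu_);
    SchedNode* batch = head_;
    head_ = tail_ = nullptr;
    return batch;
  }

 private:
  std::mutex mu_;
  SchedNode* head_ = nullptr;
  SchedNode* tail_ = nullptr;
};
```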
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Today, some of the failures to load an RDB file passed via
`--dbfilename` cause Dragonfly to terminate with an error code. This is
ok and works as expected.
The problem is that the same code path is used for `DFLY LOAD`, which
means that if there's an error loading the file (such as a corrupted
file), Dragonfly will exit instead of returning an error code to the
client.
This change fixes that by exiting only in the code path that loads on
init.
Note to reviewer: apparently we can't call `Future::Get()` more than
once, as the first call resets the state of the future and drops the
previously saved value, so we use a Fiber here instead.
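A sketch of the resulting control flow, with hypothetical names: the shared
loading routine returns an error, and only the startup path escalates to exit.

```
#include <cstdlib>
#include <iostream>
#include <optional>
#include <string>

// Stub standing in for the shared RDB loading routine; returns an error
// message on failure, std::nullopt on success.
std::optional<std::string> LoadRdbFile(const std::string& path) {
  (void)path;
  return std::nullopt;
}

// Startup path (--dbfilename): a bad file is fatal.
void LoadOnInit(const std::string& path) {
  if (auto err = LoadRdbFile(path)) {
    std::cerr << "Failed to load " << path << ": " << *err << "\n";
    std::exit(1);
  }
}

// DFLY LOAD path: report the error to the client and keep serving.
std::string DflyLoad(const std::string& path) {
  if (auto err = LoadRdbFile(path))
    return "-ERR " + *err;
  return "+OK";
}
```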
* fix: Do not publish to connections without context
This is a rare case where a closed connection is kept alive while the
handling fiber yields, therefore leaving `cc_` (the connection context)
pointing to null for other fibers to see.
As far as I can see, this can only happen during server shutdown, but
there could be other cases that I have missed.
The test on its own does _not_ reproduce the crash; however, with added
`ThisFiber::SleepFor()`s I could reproduce it:
* Right before `DispatchBrief()`
[here](e3214cb603/src/server/channel_store.cc (L154))
* Right after connection context `reset()`
[here](2ab480e160/src/facade/dragonfly_connection.cc (L750))
In any case, calling `SendPubMessageAsync()` to a connection where `cc_`
is null is a bug, and we fix that here.
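A minimal sketch of the guard, with simplified types (the real `Connection`
and context classes are far richer): connections whose context was already
reset are skipped.

```
struct PubMessage { /* channel and payload */ };

class Connection {
 public:
  bool HasContext() const { return cc_ != nullptr; }
  void SendPubMessageAsync(const PubMessage& msg) { (void)msg; /* enqueue */ }

 private:
  void* cc_ = nullptr;  // connection context; reset() during shutdown
};

// Publishing to a connection whose `cc_` is already null would dereference
// a null context, so such connections are skipped.
void PublishTo(Connection* conn, const PubMessage& msg) {
  if (!conn->HasContext())
    return;  // connection is being torn down
  conn->SendPubMessageAsync(msg);
}
```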
* rewording
We do not acquire any locks for transactions that are executing optimistically. However, this is problematic for callbacks that need to preempt (e.g. because a journal is active).
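A one-line sketch of the resulting rule, under assumed names (the real
decision involves more conditions): take the locking path whenever the
callback may preempt.

```
// Hypothetical predicate: optimistic (lock-free) execution is safe only if
// the transaction's callback cannot preempt mid-run, e.g. to write a journal.
bool CanRunOptimistically(bool single_shard, bool journal_active) {
  return single_shard && !journal_active;
}
```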
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
We currently support RDB files up to version 11. This is a blocker for people who want to migrate to Dragonfly with newer versions of the format. As of now, there is only v12, and it only adds RDB_OPCODE_SLOT_INFO.
* adds support for loading RDB files up to version 12
* reads and discards, with a warning, the contents of RDB_OPCODE_SLOT_INFO if found in the RDB file (sketched below)
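A hedged sketch of the discard logic; the reader API is hypothetical and the
payload handling is illustrative, while the opcode value is taken from Redis's
rdb.h:

```
#include <cstdint>
#include <iostream>

// Opcode value as defined in Redis 7.4's rdb.h.
constexpr uint8_t RDB_OPCODE_SLOT_INFO = 244;

struct RdbReader {
  // Reads a length-encoded integer (hypothetical stand-in API).
  uint64_t LoadLen() { return 0; }
};

// Returns true if the opcode was consumed. The length fields are read only
// to advance past the entry; their contents are discarded with a warning.
bool ConsumeSlotInfo(uint8_t opcode, RdbReader* r) {
  if (opcode != RDB_OPCODE_SLOT_INFO)
    return false;
  uint64_t slot_id = r->LoadLen();
  uint64_t slot_size = r->LoadLen();
  uint64_t expires_size = r->LoadLen();
  std::cerr << "Ignoring RDB_OPCODE_SLOT_INFO for slot " << slot_id << " ("
            << slot_size << " keys, " << expires_size << " expires)\n";
  return true;  // discard and continue loading
}
```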
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
Today there's a cost to enabling AllocationTracker, even for rarely used
memory bands.
This PR slightly optimizes the "happy path" (i.e. allocations outside
the tracked range), as well as the case where we track 100% of the
allocations.
Also, add a unit test for this class.
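A hedged sketch of both optimizations, with a hypothetical structure: a cheap
global-bounds pre-check rejects untracked allocations in two comparisons, and
the RNG draw is skipped entirely at 100% sampling.

```
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

struct TrackedRange {
  size_t lo, hi;
  double sample_odds;  // 1.0 means track every allocation in range
};

class TrackerSketch {
 public:
  void Track(TrackedRange r) {
    ranges_.push_back(r);
    global_lo_ = std::min(global_lo_, r.lo);
    global_hi_ = std::max(global_hi_, r.hi);
  }

  bool ShouldTrack(size_t size) {
    if (size < global_lo_ || size > global_hi_)
      return false;  // happy path: outside every tracked band
    for (const auto& r : ranges_) {
      if (size < r.lo || size > r.hi)
        continue;
      if (r.sample_odds >= 1.0)
        return true;  // 100% sampling: no RNG draw needed
      return dist_(rng_) < r.sample_odds;
    }
    return false;
  }

 private:
  std::vector<TrackedRange> ranges_;
  size_t global_lo_ = SIZE_MAX, global_hi_ = 0;
  std::mt19937 rng_{std::random_device{}()};
  std::uniform_real_distribution<double> dist_{0.0, 1.0};
};
```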
We would like to stop passing MutableSlice as an argument type, and removing ToUpper
is the first step toward that.
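For illustration, one way to avoid in-place uppercasing, assuming Abseil is
available (not necessarily the exact approach taken in the code): comparing
case-insensitively lets the parser accept read-only views.

```
#include <string_view>

#include "absl/strings/match.h"

// Compare case-insensitively instead of uppercasing the argument in place,
// so the parser can take a read-only std::string_view rather than a
// mutable buffer.
bool IsToken(std::string_view arg, std::string_view token) {
  return absl::EqualsIgnoreCase(arg, token);
}
```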
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Sometimes, for large values, we allocate a lot of extra memory during snapshot loading/saving. Because of that, we might need to manually run memory decommit for mimalloc to release memory pages back to the OS. This PR addresses that by manually running memory decommit after each shard finishes loading or saving a snapshot.
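A minimal sketch of the decommit step, assuming mimalloc's public `mi_collect`
API (the function name here is real; the wrapper is hypothetical):

```
#include <mimalloc.h>

// Force mimalloc to collect retired pages and return freed memory to the
// OS, instead of keeping it committed for later reuse.
void DecommitAfterSnapshot() {
  mi_collect(/*force=*/true);
}
```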
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
* chore: ClearInternal now can clear partially
Intended for future use - to deallocate large objects gradually.
Currently nothing is changed in the functionality besides some cleanups.
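A hedged sketch of the shape of such an API, with hypothetical names: the
caller passes a budget per call and resumes until the container reports it is
empty.

```
#include <algorithm>
#include <cstddef>
#include <deque>

// Clears at most `limit` elements per call; returns true once the container
// is fully cleared, so a huge object can be freed over several small steps
// without stalling the thread.
template <typename T>
bool ClearStep(std::deque<T>* items, size_t limit) {
  size_t n = std::min(limit, items->size());
  items->erase(items->begin(), items->begin() + n);
  return items->empty();
}
```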
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: Implement AddMany method
1. Fix a performance bug in Find2 that performed redundant comparisons
2. Provide a method to StringSet that adds several items in a batch
3. Use AddMany inside set_family
Before:
```
BM_Add 4253939 ns 4253713 ns 991
```
After:
```
BM_Add 3482177 ns 3482050 ns 1206
BM_AddMany 3101622 ns 3101507 ns 1360
```
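A hedged sketch of the batching idea behind `AddMany` (the real `StringSet`
code works on its internal layout; `Reserve`, `Size`, and `Add` here are
assumed stand-ins): grow the table once for the whole batch, then insert
items back-to-back.

```
#include <iterator>

// Reserving up front avoids re-checking growth and rehashing on every
// individual Add(); returns how many items were newly inserted.
template <typename Set, typename It>
unsigned AddMany(Set* set, It first, It last) {
  set->Reserve(set->Size() + std::distance(first, last));
  unsigned added = 0;
  for (It it = first; it != last; ++it)
    added += set->Add(*it) ? 1 : 0;
  return added;
}
```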
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: fixes
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Clean up the interface a bit. AddOrFindDense does not make much sense as a single function
because it does not provide any performance benefits - we still must perform a lookup
before inserting. AddSds should have been removed a long time ago.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat: add DenseSet::IteratorBase::SetExpiryTime
This commit is in preparation for adding FIELDEXPIRE and HEXPIRE.
* fix: 0 is a valid input for MakeSetSds
* feat(rdb_load): add support for loading huge hmaps
* feat(rdb_load): add support for loading huge zsets
* feat(rdb_load): log DFATAL when append fails
A common case is that we need to clean up a connection via the .close() method before we exit a test. This is needed because otherwise the connection will raise a warning that it was left unclosed. However, remembering to call .close() on each connection at the end of the test is cumbersome! Luckily, fixtures in Python can be marked as async, which allows us to:
* cache all clients created by DflyInstance.client()
* clean them all at the end of the fixture in one go
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
We do not allow notify_keyspace_events to be set at runtime via the CONFIG SET command. This change:
* allow notify_keyspace_events in config set command
* add tests
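A hedged sketch of the runtime path, with hypothetical names (Dragonfly's
actual config registry differs): the config key gets a setter that CONFIG SET
looks up and invokes.

```
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical parser/applier for flag characters like "K", "E", "g", "$".
bool ParseAndApplyKeyspaceEvents(const std::string& value) {
  (void)value;
  return true;  // stub: accept and apply the new value
}

// Registry of config keys that may be changed at runtime; CONFIG SET looks
// up the key and invokes its setter, rejecting unknown or immutable keys.
std::unordered_map<std::string, std::function<bool(const std::string&)>>
    mutable_configs;

void RegisterNotifyKeyspaceEvents() {
  mutable_configs["notify_keyspace_events"] = ParseAndApplyKeyspaceEvents;
}
```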
---------
Signed-off-by: kostas <kostas@dragonflydb.io>