The bug:
Calling `lua_error` does not return; instead it unwinds the Lua call stack until an error handler is found or the script exits. This led to memory leaks on objects that should release memory in their destructors.
A specific example is `absl::FixedArray<string_view, 4> args(argc);`, which allocates on the heap if argc > 4. The free was never called, leading to a memory leak.
The fix:
Add a scope to the function so that the destructor is called before the error is raised.
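A minimal sketch of the pattern, assuming a hypothetical `RunCommand` helper; the extra scope ends the lifetime of `args` before `lua_error` performs its longjmp:
```
#include <lua.hpp>
#include <string_view>

#include "absl/container/fixed_array.h"

bool RunCommand(const absl::FixedArray<std::string_view, 4>& args);  // hypothetical

int EvalDispatch(lua_State* lua) {
  int argc = lua_gettop(lua);
  bool ok;
  {
    // The scope guarantees ~FixedArray() runs here. lua_error longjmps out
    // of the function, skipping C++ destructors, so a heap-backed FixedArray
    // (argc > 4) would otherwise leak its allocation.
    absl::FixedArray<std::string_view, 4> args(argc);
    // ... populate args from the Lua stack ...
    ok = RunCommand(args);
  }
  if (!ok) {
    lua_pushstring(lua, "command failed");
    return lua_error(lua);  // unwinds the Lua stack; never returns
  }
  return 1;
}
```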
Signed-off-by: adi_holden <adi@dragonflydb.io>
This PR syncs some of the stream improvements introduced in Redis 7.2.3 OSS.
1. Verify XSETID against the max deleted ID in the stream (see the sketch after this list).
2. Implement precise memory measurement of streams for the "memory usage" command.
3. Always compact nodes in stream listpacks after creating new nodes.
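A minimal sketch of the XSETID check from item 1, with illustrative names; the new last-id may not go below the max deleted ID or the largest entry still present:
```
#include <cstdint>
#include <tuple>

struct StreamID {
  uint64_t ms, seq;
  bool operator<(const StreamID& o) const {
    return std::tie(ms, seq) < std::tie(o.ms, o.seq);
  }
};

// Moving last-id below max_deleted would effectively "undelete" IDs, and
// moving it below the largest stored entry would allow duplicate IDs.
bool ValidXSetId(const StreamID& new_id, const StreamID& max_deleted,
                 const StreamID& max_entry) {
  return !(new_id < max_deleted) && !(new_id < max_entry);
}
```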
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Specifically:
* `INFO REPLICATION` does not list the replicas, but does still show
`connected_slaves`
* `INFO SERVER` does not show `thread_count` and `os`
Fixes #4173
There are actually a few failures fixed in this PR, only one of which is a test bug:
* `db_slice_->Traverse()` can yield, causing `fiber_cancelled_`'s value to change
* When a migration is cancelled, it may never finish `WaitForInflightToComplete()` because it has `in_flight_bytes_` that will never reach destination due to the cancellation
* `IterateMap()` with numeric keys/values overwrote the key's buffer with the value's buffer; see the sketch below
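A minimal sketch of the buffer fix, assuming the listpack API from Redis (`lpGet` decodes integer entries into a caller-provided scratch buffer); the bug pattern was passing the same buffer for both entries:
```
#include <cstdint>
#include <functional>
#include <string_view>

extern "C" {
#include "listpack.h"  // lpFirst, lpNext, lpGet
}

using KVCallback =
    std::function<void(std::string_view key, std::string_view val)>;

void IterateMap(unsigned char* lp, const KVCallback& cb) {
  unsigned char kbuf[32], vbuf[32];  // two distinct scratch buffers
  unsigned char* ptr = lpFirst(lp);
  while (ptr) {  // assumes a well-formed map with an even entry count
    int64_t klen, vlen;
    // For an integer entry lpGet writes its string form into the scratch
    // buffer; with a single shared buffer, decoding the value would have
    // clobbered a numeric key before the callback saw it.
    unsigned char* key = lpGet(ptr, &klen, kbuf);
    ptr = lpNext(lp, ptr);
    unsigned char* val = lpGet(ptr, &vlen, vbuf);
    cb({reinterpret_cast<char*>(key), size_t(klen)},
       {reinterpret_cast<char*>(val), size_t(vlen)});
    ptr = lpNext(lp, ptr);
  }
}
```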
Fixes #4207
* fix: bugs in stream code
1. Memory leak in streamGetEdgeID
2. Addresses CVE-2022-31144
3. Fixes XAUTOCLAIM bugs and adds tests.
4. Limits the count argument in the XAUTOCLAIM command to 2^18 (CVE-2022-35951); see the sketch after this list.
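A minimal sketch of the COUNT clamp from item 4, with illustrative names; bounding the count keeps the internal per-entry bookkeeping allocation from overflowing:
```
#include <cstdint>

constexpr int64_t kMaxXAutoClaimCount = int64_t{1} << 18;  // 262144

// Reject non-positive counts and counts large enough that the
// per-entry bookkeeping could overflow (the CVE-2022-35951 vector).
bool ValidateXAutoClaimCount(int64_t count) {
  return count > 0 && count <= kMaxXAutoClaimCount;
}
```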
Also fixes #3830
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Signed-off-by: Roman Gershman <romange@gmail.com>
Co-authored-by: Shahar Mike <chakaz@users.noreply.github.com>
1. Use transaction time in streams code, similarly to how we do it in other commands.
Stop using mstime() and delete unused redis code.
2. Check for sequence overflow when huge sequence IDs are passed; see the sketch below.
Add a test.
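A minimal sketch of the overflow check, mirroring the logic of Redis's `streamIncrID` (names here are illustrative):
```
#include <cstdint>
#include <limits>

struct StreamID {
  uint64_t ms, seq;
};

// Incrementing the maximal sequence must carry into ms, and the maximal
// possible ID cannot be incremented at all; without the check, seq would
// silently wrap to 0.
bool IncrementId(StreamID* id) {
  constexpr uint64_t kMax = std::numeric_limits<uint64_t>::max();
  if (id->seq < kMax) {
    id->seq++;
    return true;
  }
  if (id->ms == kMax)
    return false;  // already the maximum possible ID; report an error
  id->ms++;
  id->seq = 0;
  return true;
}
```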
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Also fix "debug objhist" so that its value histogram will show effective malloc
used distributions for all types.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
After running `debug POPULATE 100 list 100 rand type list elements 10000`
with `--list_experimental_v2=false`:
```
type_used_memory_list:16512800
used_memory:105573120
```
When running with `--list_experimental_v2=true`:
```
used_memory:105573120
type_used_memory_list:103601700
```
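The jump in `type_used_memory_list` comes from accounting for what each list node actually allocates rather than just the top-level container. A minimal sketch of the idea, with hypothetical field names:
```
#include <cstddef>

struct QNode {
  QNode* next;
  size_t alloc_bytes;  // bytes actually malloc'ed for this node's payload
};

// Walk the quicklist-style nodes and sum node headers plus their
// allocated payload buffers.
size_t ListMallocUsed(const QNode* head) {
  size_t total = 0;
  for (const QNode* node = head; node != nullptr; node = node->next)
    total += sizeof(QNode) + node->alloc_bytes;
  return total;
}
```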
TODO: compressed entries are not yet handled correctly, but we do not enable compression by default.
Fixes #3800
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: Add more qlist tests
Also fix a typo bug in NodeAllowMerge.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: fix build
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
fix(search_family): Fix crash when no SEPARATOR is specified in the FT.CREATE command
Signed-off-by: Stepan Bagritsevich <stefan@dragonflydb.io>
* feat(contrib/helm): evaluate the provided passwordSecretName value as a template
Useful for reusing variables or helpers defined elsewhere to compute the value
(from an external chart that depends on this one, for instance).
Example:
'{{ include "something.defined.elsewhere" $ }}-secrets'
Signed-off-by: Raphael Glon <oOraph@users.noreply.github.com>
* update golden tests
---------
Signed-off-by: Raphael Glon <oOraph@users.noreply.github.com>
Co-authored-by: Raphael Glon <oOraph@users.noreply.github.com>
Co-authored-by: Tarun Pothulapati <tarun@dragonflydb.io>
* chore: go back on the decision to put a hard limit on the command interface
Limiting commands to only Transaction* and SinkReplyBuilder does not hold.
We sometimes need to access context fields for a multitude of reasons.
But I do not want to pass the huge ConnectionContext object, because it then
becomes hard to track unusual access patterns.
The compromise: introduce a CommandContext that currently has tx, rb and extended fields.
It will be relatively easy to identify irregular access patterns by tracking the extended field.
This commit is the first in a series of probably 10-15 commits. No functional changes here.
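A rough sketch of the shape described above; only the names tx, rb and extended come from this commit, the types and defaults are guesses:
```
class Transaction;
class SinkReplyBuilder;
class ConnectionContext;

struct CommandContext {
  Transaction* tx = nullptr;       // transaction the command executes in
  SinkReplyBuilder* rb = nullptr;  // reply sink
  // Rarely-needed escape hatch; grepping for uses of `extended` makes
  // irregular access patterns easy to spot.
  ConnectionContext* extended = nullptr;
};
```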
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Should work only for self-hosted runners.
The core files will be kept in /var/crash/.
We also automatically copy the dragonfly binary into /var/crash so it can be debugged later.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Dragonfly responds to ASCII-based requests on the TLS port with:
`-ERR Bad TLS header, double check if you enabled TLS for your client.`
Therefore, it is now possible to test both the TLS and non-TLS ports with a plain-text PING.
Fixes #4171
Also, blacklist the bloom-filter test that Dragonfly does not support yet.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix: deduplicate mget response
In case of duplicate MGET keys, skip fetching the same key twice.
The optimization is straightforward: we just copy the response from the original key.
Since the response is a shallow object, we potentially save lots of memory with this
deduplication. Always deduplicate inside OpMGet; a sketch follows.
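A minimal sketch of the idea, assuming a hypothetical shallow `GetResp` and a `FetchKey` single-key lookup:
```
#include <optional>
#include <string_view>
#include <vector>

#include "absl/container/flat_hash_map.h"

struct GetResp {};  // shallow handle into the serialized value storage

std::optional<GetResp> FetchKey(std::string_view key);  // hypothetical

std::vector<std::optional<GetResp>> OpMGet(
    const std::vector<std::string_view>& keys) {
  std::vector<std::optional<GetResp>> results(keys.size());
  absl::flat_hash_map<std::string_view, size_t> first_index;
  for (size_t i = 0; i < keys.size(); ++i) {
    auto [it, inserted] = first_index.try_emplace(keys[i], i);
    if (inserted)
      results[i] = FetchKey(keys[i]);    // first occurrence: real fetch
    else
      results[i] = results[it->second];  // duplicate: cheap shallow copy
  }
  return results;
}
```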
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat: Huge values breakdown in cluster migration
Before this PR we used `RESTORE` commands for transferring data between
source and target nodes in cluster slots migration.
While this _works_, it has a side effect of consuming 2x memory for huge
values (i.e. if a single key's value takes 10gb, serializing it will
take 20gb or even 30gb).
With this PR we break down huge keys into multiple commands (`RPUSH`,
`HSET`, etc), respecting the existing `--serialization_max_chunk_size`
flag.
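A minimal sketch of the breakdown for the list case, where `SendCommand` is a hypothetical stand-in for the real serialization sink:
```
#include <string>
#include <string_view>
#include <vector>

void SendCommand(std::string_view cmd, std::string_view key,
                 const std::vector<std::string_view>& args);  // hypothetical

// Emit RPUSH commands whose payload stays under the chunk budget instead
// of a single RESTORE carrying the whole serialized value.
void SerializeList(std::string_view key, const std::vector<std::string>& elems,
                   size_t max_chunk_size) {
  std::vector<std::string_view> chunk;
  size_t chunk_bytes = 0;
  for (const std::string& e : elems) {
    chunk.push_back(e);
    chunk_bytes += e.size();
    if (chunk_bytes >= max_chunk_size) {
      SendCommand("RPUSH", key, chunk);
      chunk.clear();
      chunk_bytes = 0;
    }
  }
  if (!chunk.empty())
    SendCommand("RPUSH", key, chunk);
}
```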
Part of #4100