
chore: remove support for save_schedule flag (#2327)

Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Roman Gershman 2023-12-22 11:17:18 +02:00 committed by GitHub
parent 34986fc52e
commit 365cb439cf
8 changed files with 12 additions and 250 deletions


@@ -106,7 +106,6 @@ There are also some Dragonfly-specific arguments:
* `dbnum`: the maximum number of databases supported by `select`.
* `cache_mode`: see the [novel cache design](#斬新なキャッシュデザイン) section below.
* `hz`: key expiry evaluation frequency (`default: 100`). A lower frequency uses less CPU when idle, at the expense of a slower eviction rate for stale keys.
* `save_schedule`: glob spec for the UTC time to save a snapshot, in HH:MM (24h) format (`default: ""`).
* `primary_port_http_enabled`: if `true` (`default: true`), allows accessing the HTTP console on the main TCP port.
* `admin_port`: enables admin access to the console on the assigned port (`default: disabled`). Supports both HTTP and RESP protocols.
* `admin_bind`: binds the admin console TCP connection to the given address (`default: any`). Supports both HTTP and RESP protocols.
@@ -117,7 +116,7 @@ There are also some Dragonfly-specific arguments:
### Example start script with popular options:
```bash
./dragonfly-x86_64 --logtostderr --requirepass=youshallnotpass --cache_mode=true -dbnum 1 --bind localhost --port 6379 --save_schedule "*:30" --maxmemory=12gb --keys_output_limit=12288 --dbfilename dump.rdb
./dragonfly-x86_64 --logtostderr --requirepass=youshallnotpass --cache_mode=true -dbnum 1 --bind localhost --port 6379 --maxmemory=12gb --keys_output_limit=12288 --dbfilename dump.rdb
```
Arguments can also be provided from a configuration file by running `dragonfly --flagfile <filename>`. The file should list one flag per line, using an equals sign instead of a space for key-value flags.
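As an illustration, a minimal sketch of such a flagfile; the file name `dragonfly.conf` is arbitrary and the flags are reused from the start script example above:
```bash
# Hypothetical flagfile: one flag per line, '=' instead of a space for key-value flags.
cat > dragonfly.conf <<'EOF'
--logtostderr
--requirepass=youshallnotpass
--maxmemory=12gb
--dbfilename=dump.rdb
EOF
./dragonfly-x86_64 --flagfile dragonfly.conf
```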


@@ -104,7 +104,6 @@ Dragonfly currently supports the following Redis arguments:
* `dbnum`: the maximum number of databases supported for the `select` command.
* `cache_mode`: please refer to the [novel cache design](#novel-cache-design) section below.
* `hz`: how frequently keys are evaluated for expiry (`default: 100`). A lower frequency reduces CPU usage when idle, at the cost of slower key eviction.
* `save_schedule`: glob spec for the UTC time to save a snapshot, in HH:MM (24h) format (`default: ""`).
* `primary_port_http_enabled`: if `true`, allows accessing the HTTP console on the main TCP port (`default: true`).
* `admin_port`: enables admin console access on the assigned port (`default: disabled`). Supports both HTTP and RESP protocols.
* `admin_bind`: binds the admin console TCP connection to the given address (`default: any`). Supports both HTTP and RESP protocols.
@@ -116,7 +115,7 @@ Dragonfly currently supports the following Redis arguments:
### Example start script with popular options:
```bash
./dragonfly-x86_64 --logtostderr --requirepass=youshallnotpass --cache_mode=true -dbnum 1 --bind localhost --port 6379 --save_schedule "*:30" --maxmemory=12gb --keys_output_limit=12288 --dbfilename dump.rdb
./dragonfly-x86_64 --logtostderr --requirepass=youshallnotpass --cache_mode=true -dbnum 1 --bind localhost --port 6379 --maxmemory=12gb --keys_output_limit=12288 --dbfilename dump.rdb
```
Arguments can also be passed via a configuration file by running `dragonfly --flagfile <filename>`. The file should list one flag per line, using an equals sign instead of a space for key-value flags.


@@ -158,7 +158,6 @@ There are also some Dragonfly-specific arguments:
| `0 0 * * *` | At 00:00 (midnight) every day |
| `0 6 * * 1-5` | At 06:00 (dawn) from Monday through Friday |
* `save_schedule`: Glob spec for the UTC time at which to save a snapshot, in HH:MM (24h) format (`default: ""`). This argument is deprecated, and `snapshot_cron` should be preferred when available; see the sketch after the start script example below.
* `primary_port_http_enabled`: Allows accessing HTTP console on main TCP port if `true` (`default: true`).
* `admin_port`: To enable admin access to the console on the assigned port (`default: disabled`). Supports both HTTP and RESP protocols.
* `admin_bind`: To bind the admin console TCP connection to a given address (`default: any`). Supports both HTTP and RESP protocols.
@@ -169,7 +168,7 @@ There are also some Dragonfly-specific arguments:
### Example start script with popular options:
```bash
./dragonfly-x86_64 --logtostderr --requirepass=youshallnotpass --cache_mode=true -dbnum 1 --bind localhost --port 6379 --save_schedule "*:30" --maxmemory=12gb --keys_output_limit=12288 --dbfilename dump.rdb
./dragonfly-x86_64 --logtostderr --requirepass=youshallnotpass --cache_mode=true -dbnum 1 --bind localhost --port 6379 --maxmemory=12gb --keys_output_limit=12288 --dbfilename dump.rdb
```
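Since this commit removes `save_schedule`, the schedule previously written as `--save_schedule "*:30"` (minute 30 of every hour) has to be expressed through `snapshot_cron`. A minimal sketch, assuming the standard 5-field cron syntax shown in the table above:
```bash
# Hedged sketch: the old --save_schedule "*:30" (every hour at minute 30)
# expressed as a snapshot_cron expression instead.
./dragonfly-x86_64 --logtostderr --requirepass=youshallnotpass --cache_mode=true -dbnum 1 --bind localhost --port 6379 --snapshot_cron "30 * * * *" --maxmemory=12gb --keys_output_limit=12288 --dbfilename dump.rdb
```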
Arguments can also be provided via:


@@ -123,9 +123,6 @@ Dragonfly supports common Redis arguments.
| `5 */2 * * *` | At minute 5 past every 2nd hour |
| `0 0 * * *` | At 00:00 (midnight) every day |
| `0 6 * * 1-5` | At 06:00 (dawn) from Monday through Friday |
* `save_schedule`: glob spec for the UTC time to save a snapshot, in HH:MM (24h) format. Defaults to empty (`""`). This argument is marked as deprecated; newer versions recommend using `snapshot_cron` instead.
* `primary_port_http_enabled`: if true, allows accessing the HTTP console on the main TCP port. Defaults to `true`.
* `admin_port`: if set, enables admin access to the console on the specified port. Supports both HTTP and RESP protocols. Disabled by default.
@@ -141,7 +138,7 @@ Dragonfly supports common Redis arguments.
### Example start script with popular options:
```bash
./dragonfly-x86_64 --logtostderr --requirepass=youshallnotpass --cache_mode=true -dbnum 1 --bind localhost --port 6379 --save_schedule "*:30" --maxmemory=12gb --keys_output_limit=12288 --dbfilename dump.rdb
./dragonfly-x86_64 --logtostderr --requirepass=youshallnotpass --cache_mode=true -dbnum 1 --bind localhost --port 6379 --maxmemory=12gb --keys_output_limit=12288 --dbfilename dump.rdb
```
Arguments can also be read from a configuration file by running `dragonfly --flagfile <filename>`. The file should list one argument per line, replacing the space with an equals sign for key-value arguments.


@@ -80,7 +80,6 @@ cxx_test(rdb_test dfly_test_lib DATA testdata/empty.rdb testdata/redis6_small.rdb
         testdata/redis6_stream.rdb testdata/hll.rdb testdata/redis7_small.rdb LABELS DFLY)
cxx_test(zset_family_test dfly_test_lib LABELS DFLY)
cxx_test(blocking_controller_test dfly_test_lib LABELS DFLY)
cxx_test(snapshot_test dragonfly_lib LABELS DFLY)
cxx_test(json_family_test dfly_test_lib LABELS DFLY)
cxx_test(journal/journal_test dfly_test_lib LABELS DFLY)
cxx_test(tiered_storage_test dfly_test_lib LABELS DFLY)
@@ -97,6 +96,6 @@ cxx_test(engine_shard_set_test dfly_test_lib LABELS DFLY)
add_custom_target(check_dfly WORKING_DIRECTORY .. COMMAND ctest -L DFLY)
add_dependencies(check_dfly dragonfly_test json_family_test list_family_test
                 generic_family_test memcache_parser_test rdb_test journal_test
                 redis_parser_test snapshot_test stream_family_test string_family_test
                 redis_parser_test stream_family_test string_family_test
                 bitops_family_test set_family_test zset_family_test hll_family_test
                 cluster_config_test cluster_family_test user_registry_test acl_family_test)
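As a usage note, `check_dfly` above simply wraps `ctest -L DFLY`, so the remaining labelled tests (now without `snapshot_test`) can be run roughly as follows; a sketch assuming an already configured CMake build directory:
```bash
# Build the dependencies and run every test labelled DFLY via the custom target...
cmake --build . --target check_dfly
# ...or invoke the label filter directly.
ctest -L DFLY
```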


@@ -313,31 +313,6 @@ bool IsCloudPath(string_view path) {
  return absl::StartsWith(path, kS3Prefix);
}

bool IsValidSaveScheduleNibble(string_view time, unsigned int max) {
  /*
   * a nibble is valid iff there exists one time that matches the pattern
   * and that time is <= max. For any wildcard the minimum value is 0.
   * Therefore the minimum time the pattern can match is the time with
   * all *s replaced with 0s. If this time is > max all other times that
   * match the pattern are > max and the pattern is invalid. Otherwise
   * there exists at least one valid nibble specified by this pattern
   *
   * Note the edge case of "*" is equivalent to "**". While using this
   * approach "*" and "**" both map to 0.
   */
  unsigned int min_match = 0;
  for (size_t i = 0; i < time.size(); ++i) {
    // check for valid characters
    if (time[i] != '*' && (time[i] < '0' || time[i] > '9')) {
      return false;
    }
    min_match *= 10;
    min_match += time[i] == '*' ? 0 : time[i] - '0';
  }

  return min_match <= max;
}

// Check that if TLS is used at least one form of client authentication is
// enabled. That means either using a password or giving a root
// certificate for authenticating client certificates which will

@@ -379,87 +354,24 @@ void SetMasterFlagOnAllThreads(bool is_master) {
  shard_set->pool()->DispatchBrief(cb);
}

} // namespace
std::optional<SnapshotSpec> ParseSaveSchedule(string_view time) {
  if (time.length() < 3 || time.length() > 5) {
    return std::nullopt;
  }

  size_t separator_idx = time.find(':');
  // the time cannot start with ':' and it must be present in the first 3 characters of any time
  if (separator_idx == 0 || separator_idx >= 3) {
    return std::nullopt;
  }

  SnapshotSpec spec{string(time.substr(0, separator_idx)), string(time.substr(separator_idx + 1))};
  // a minute should be 2 digits as it is zero padded, unless it is a '*' in which case this
  // greedily can make up both digits
  if (spec.minute_spec != "*" && spec.minute_spec.length() != 2) {
    return std::nullopt;
  }

  return IsValidSaveScheduleNibble(spec.hour_spec, 23) &&
                 IsValidSaveScheduleNibble(spec.minute_spec, 59)
             ? std::optional<SnapshotSpec>(spec)
             : std::nullopt;
}

bool DoesTimeNibbleMatchSpecifier(string_view time_spec, unsigned int current_time) {
  // single greedy wildcard matches everything
  if (time_spec == "*") {
    return true;
  }

  for (int i = time_spec.length() - 1; i >= 0; --i) {
    // if the current digit is not a wildcard and it does not match the digit in the current time it
    // does not match
    if (time_spec[i] != '*' && int(current_time % 10) != (time_spec[i] - '0')) {
      return false;
    }
    current_time /= 10;
  }

  return current_time == 0;
}

bool DoesTimeMatchSpecifier(const SnapshotSpec& spec, time_t now) {
  unsigned hour = (now / 3600) % 24;
  unsigned min = (now / 60) % 60;
  return DoesTimeNibbleMatchSpecifier(spec.hour_spec, hour) &&
         DoesTimeNibbleMatchSpecifier(spec.minute_spec, min);
}
std::optional<cron::cronexpr> InferSnapshotCronExpr() {
  string save_time = GetFlag(FLAGS_save_schedule);
  auto cron_expr = GetFlag(FLAGS_snapshot_cron);

  if (!save_time.empty()) {
    LOG(ERROR) << "save_schedule flag is deprecated, please use snapshot_cron instead";
    exit(1);
  }

  if (cron_expr.cron_expr) {
    if (!save_time.empty()) {
      LOG(ERROR) << "snapshot_cron and save_schedule flags should not be set simultaneously";
      exit(1);
    }
    return std::move(cron_expr.cron_expr);
  }

  if (!save_time.empty()) {
    if (std::optional<SnapshotSpec> spec = ParseSaveSchedule(save_time); spec) {
      // Setting snapshot to HH:mm everyday, as specified by `save_schedule` flag
      string raw_cron_expr = absl::StrCat(CronExprFlag::kCronPrefix, spec.value().minute_spec, " ",
                                          spec.value().hour_spec, " * * *");
      try {
        VLOG(1) << "creating cron from: `" << raw_cron_expr << "`";
        return cron::make_cron(raw_cron_expr);
      } catch (const cron::bad_cronexpr& ex) {
        LOG(WARNING) << "Invalid cron expression: " << raw_cron_expr;
      }
    } else {
      LOG(WARNING) << "Invalid snapshot time specifier " << save_time;
    }
  }

  return std::nullopt;
}

} // namespace

ServerFamily::ServerFamily(Service* service) : service_(*service) {
  start_time_ = time(NULL);
  last_save_info_ = make_shared<LastSaveInfo>();
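Reading the error path above (the stripped +/- markers make the exact split between added and removed lines ambiguous), the practical effect appears to be that `--save_schedule` is no longer converted into a cron expression but aborts startup instead. A hedged sketch of what that would look like:
```bash
# Presumed behavior after this change: a non-empty value logs
# "save_schedule flag is deprecated, please use snapshot_cron instead"
# and the process exits instead of starting.
./dragonfly --save_schedule "*:30"
echo $?   # expected: 1
```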


@@ -1,122 +0,0 @@
// Copyright 2022, DragonflyDB authors. All rights reserved.
// See LICENSE for licensing terms.
//

#include <gmock/gmock.h>
#include <gtest/gtest.h>

#include <chrono>

#include "base/gtest.h"
#include "server/test_utils.h"

using namespace testing;
using namespace std;
using namespace util;
using namespace facade;
using absl::StrCat;

namespace dfly {

class SnapshotTest : public Test {
 protected:
};

std::optional<SnapshotSpec> ParseSaveSchedule(string_view time);
bool DoesTimeMatchSpecifier(const SnapshotSpec&, time_t);

bool DoesTimeMatchSpecifier(string_view time_spec, unsigned int hour, unsigned int min) {
  auto spec = ParseSaveSchedule(time_spec);
  if (!spec) {
    return false;
  }

  time_t now = ((hour * 60) + min) * 60;
  return DoesTimeMatchSpecifier(spec.value(), now);
}
TEST_F(SnapshotTest, InvalidTimes) {
  EXPECT_FALSE(ParseSaveSchedule("24:00"));
  EXPECT_FALSE(ParseSaveSchedule("00:60"));
  EXPECT_FALSE(ParseSaveSchedule("100:00"));
  EXPECT_FALSE(ParseSaveSchedule("00:100"));

  // invalid times with regex
  EXPECT_FALSE(ParseSaveSchedule("23:6*"));

  // Minutes must be zero padded
  EXPECT_FALSE(ParseSaveSchedule("00:9"));

  // No separators or start with separator
  EXPECT_FALSE(ParseSaveSchedule(":12"));
  EXPECT_FALSE(ParseSaveSchedule("1234"));
  EXPECT_FALSE(ParseSaveSchedule("1"));

  // Negative numbers / non numeric characters
  EXPECT_FALSE(ParseSaveSchedule("-1:-2"));
  EXPECT_FALSE(ParseSaveSchedule("12:34b"));
  EXPECT_FALSE(ParseSaveSchedule("0;:1="));

  // Wildcards for full times
  EXPECT_FALSE(ParseSaveSchedule("12*:09"));
  EXPECT_FALSE(ParseSaveSchedule("23:45*"));
}

TEST_F(SnapshotTest, ValidTimes) {
  // Test endpoints
  EXPECT_TRUE(ParseSaveSchedule("23:59"));
  EXPECT_TRUE(ParseSaveSchedule("00:00"));
  // hours don't need to be zero padded
  EXPECT_TRUE(ParseSaveSchedule("0:00"));

  // wildcard checks
  EXPECT_TRUE(ParseSaveSchedule("1*:09"));
  EXPECT_TRUE(ParseSaveSchedule("*9:23"));
  EXPECT_TRUE(ParseSaveSchedule("23:*1"));
  EXPECT_TRUE(ParseSaveSchedule("18:1*"));

  // Greedy wildcards
  EXPECT_TRUE(ParseSaveSchedule("*:12"));
  EXPECT_TRUE(ParseSaveSchedule("9:*"));
  EXPECT_TRUE(ParseSaveSchedule("09:*"));
  EXPECT_TRUE(ParseSaveSchedule("*:*"));
}
TEST_F(SnapshotTest, TimeMatches) {
  EXPECT_TRUE(DoesTimeMatchSpecifier("12:34", 12, 34));
  EXPECT_TRUE(DoesTimeMatchSpecifier("2:34", 2, 34));
  EXPECT_TRUE(DoesTimeMatchSpecifier("2:04", 2, 4));

  EXPECT_FALSE(DoesTimeMatchSpecifier("12:34", 2, 4));
  EXPECT_FALSE(DoesTimeMatchSpecifier("12:34", 2, 34));
  EXPECT_FALSE(DoesTimeMatchSpecifier("2:34", 12, 34));
  EXPECT_FALSE(DoesTimeMatchSpecifier("2:34", 3, 34));
  EXPECT_FALSE(DoesTimeMatchSpecifier("2:04", 3, 5));

  // Check wildcard for one slot
  for (int i = 0; i < 9; ++i)
    EXPECT_TRUE(DoesTimeMatchSpecifier("1*:34", 10 + i, 34));

  EXPECT_TRUE(DoesTimeMatchSpecifier("*3:04", 13, 4));
  EXPECT_TRUE(DoesTimeMatchSpecifier("*3:04", 23, 4));

  // do the same checks for the minutes
  for (int i = 0; i < 9; ++i)
    EXPECT_TRUE(DoesTimeMatchSpecifier("10:3*", 10, 30 + i));

  for (int i = 0; i < 6; ++i)
    EXPECT_TRUE(DoesTimeMatchSpecifier("13:*4", 13, (10 * i) + 4));

  // check greedy wildcards
  for (int i = 0; i < 24; ++i)
    EXPECT_TRUE(DoesTimeMatchSpecifier("*:12", i, 12));

  for (int i = 0; i < 60; ++i)
    EXPECT_TRUE(DoesTimeMatchSpecifier("3:*", 3, i));

  for (int i = 0; i < 24; ++i)
    for (int j = 0; j < 60; ++j)
      EXPECT_TRUE(DoesTimeMatchSpecifier("*:*", i, j));
}

} // namespace dfly


@@ -151,27 +151,6 @@ class TestDflyAutoLoadSnapshot(SnapshotTestBase):
        assert response == str(hash(dbfilename))


@dfly_args({**BASIC_ARGS, "dbfilename": "test-periodic", "save_schedule": "*:*"})
class TestPeriodicSnapshot(SnapshotTestBase):
    """Test periodic snapshotting"""

    @pytest.fixture(autouse=True)
    def setup(self, tmp_dir: Path):
        super().setup(tmp_dir)

    @pytest.mark.asyncio
    @pytest.mark.slow
    async def test_snapshot(self, df_seeder_factory, df_server):
        seeder = df_seeder_factory.create(
            port=df_server.port, keys=10, multi_transaction_probability=0
        )
        await seeder.run(target_deviation=0.5)

        await super().wait_for_save("test-periodic-summary.dfs")

        assert super().get_main_file("test-periodic-summary.dfs")


# save every 1 minute
@dfly_args({**BASIC_ARGS, "dbfilename": "test-cron", "snapshot_cron": "* * * * *"})
class TestCronPeriodicSnapshot(SnapshotTestBase):