Merge branch 'develop' into madlittlemods/17368-bust-_membership_stream_cache

Commit f40e64940a by Eric Eastwood, 2024-11-27 13:11:39 -06:00
66 changed files with 947 additions and 441 deletions

@@ -1,3 +1,59 @@
# Synapse 1.120.0 (2024-11-26)
### Bugfixes
- Fix a bug introduced in Synapse v1.120.0rc1 which would cause the newly-introduced `delete_old_otks` job to fail in worker-mode deployments. ([\#17960](https://github.com/element-hq/synapse/issues/17960))
# Synapse 1.120.0rc1 (2024-11-20)
This release enables the enforcement of authenticated media by default, with exemptions for media that is already present in the
homeserver's media store.
Most homeservers operating in the public federation will not be impacted by this change, given that
the large homeserver `matrix.org` enabled this in September 2024 and therefore most clients and servers
will already have updated as a result.
Some server administrators may still wish to disable this enforcement for the time being, in the interest of compatibility with older clients
and older federated homeservers.
See the [upgrade notes](https://element-hq.github.io/synapse/v1.120/upgrade.html#authenticated-media-is-now-enforced-by-default) for more information.
### Features
- Enforce authenticated media by default. Administrators can revert this by configuring `enable_authenticated_media` to `false`. In a future release of Synapse, this option will be removed and become always-on. ([\#17889](https://github.com/element-hq/synapse/issues/17889))
- Add a one-off task to delete old One-Time Keys, to guard against us having old OTKs in the database that the client has long forgotten about. ([\#17934](https://github.com/element-hq/synapse/issues/17934))
### Improved Documentation
- Clarify the semantics of the `enable_authenticated_media` configuration option. ([\#17913](https://github.com/element-hq/synapse/issues/17913))
- Add documentation about backing up Synapse. ([\#17931](https://github.com/element-hq/synapse/issues/17931))
### Deprecations and Removals
- Remove support for [MSC3886: Simple client rendezvous capability](https://github.com/matrix-org/matrix-spec-proposals/pull/3886), which has been superseded by [MSC4108](https://github.com/matrix-org/matrix-spec-proposals/pull/4108) and therefore closed. ([\#17638](https://github.com/element-hq/synapse/issues/17638))
### Internal Changes
- Addressed some typos in docs and returned error message for unknown MXC ID. ([\#17865](https://github.com/element-hq/synapse/issues/17865))
- Unpin the upload release GHA action. ([\#17923](https://github.com/element-hq/synapse/issues/17923))
- Bump macOS version used to build wheels during release, as current version used is end-of-life. ([\#17924](https://github.com/element-hq/synapse/issues/17924))
- Move server event filtering logic to Rust. ([\#17928](https://github.com/element-hq/synapse/issues/17928))
- Support new package name of PyPI package `python-multipart` 0.0.13 so that distro packagers do not need to work around name conflict with PyPI package `multipart`. ([\#17932](https://github.com/element-hq/synapse/issues/17932))
- Speed up slow initial sliding syncs on large servers. ([\#17946](https://github.com/element-hq/synapse/issues/17946))
### Updates to locked dependencies
* Bump anyhow from 1.0.92 to 1.0.93. ([\#17920](https://github.com/element-hq/synapse/issues/17920))
* Bump bleach from 6.1.0 to 6.2.0. ([\#17918](https://github.com/element-hq/synapse/issues/17918))
* Bump immutabledict from 4.2.0 to 4.2.1. ([\#17941](https://github.com/element-hq/synapse/issues/17941))
* Bump packaging from 24.1 to 24.2. ([\#17940](https://github.com/element-hq/synapse/issues/17940))
* Bump phonenumbers from 8.13.49 to 8.13.50. ([\#17942](https://github.com/element-hq/synapse/issues/17942))
* Bump pygithub from 2.4.0 to 2.5.0. ([\#17917](https://github.com/element-hq/synapse/issues/17917))
* Bump ruff from 0.7.2 to 0.7.3. ([\#17919](https://github.com/element-hq/synapse/issues/17919))
* Bump serde from 1.0.214 to 1.0.215. ([\#17938](https://github.com/element-hq/synapse/issues/17938))
# Synapse 1.119.0 (2024-11-13)

No significant changes since 1.119.0rc2.

Cargo.lock (generated)

@@ -35,12 +35,6 @@ version = "0.21.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9d297deb1925b89f2ccc13d7635fa0714f12c87adce1c75356b39ca9b7178567"
-[[package]]
-name = "bitflags"
-version = "2.5.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "cf4b9d6a944f767f8e5e0db018570623c85f3d925ac718db4e06d0187adb21c1"
[[package]]
name = "blake2"
version = "0.10.6"

@@ -162,9 +156,9 @@ dependencies = [
[[package]]
name = "heck"
-version = "0.4.1"
+version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8"
+checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
[[package]]
name = "hex"

@@ -222,16 +216,6 @@ version = "0.2.154"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ae743338b92ff9146ce83992f766a31066a91a8c84a45e0e9f21e7cf6de6d346"
-[[package]]
-name = "lock_api"
-version = "0.4.12"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "07af8b9cdd281b7915f413fa73f29ebd5d55d0d3f0155584dade1ff18cea1b17"
-dependencies = [
- "autocfg",
- "scopeguard",
-]
[[package]]
name = "log"
version = "0.4.22"

@@ -265,29 +249,6 @@ version = "1.19.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3fdb12b2476b595f9358c5161aa467c2438859caa136dec86c26fdd2efe17b92"
-[[package]]
-name = "parking_lot"
-version = "0.12.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7e4af0ca4f6caed20e900d564c242b8e5d4903fdacf31d3daf527b66fe6f42fb"
-dependencies = [
- "lock_api",
- "parking_lot_core",
-]
-[[package]]
-name = "parking_lot_core"
-version = "0.9.10"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1e401f977ab385c9e4e3ab30627d6f26d00e2c73eef317493c4ec6d468726cf8"
-dependencies = [
- "cfg-if",
- "libc",
- "redox_syscall",
- "smallvec",
- "windows-targets",
-]
[[package]]
name = "portable-atomic"
version = "1.6.0"

@@ -311,16 +272,16 @@ dependencies = [
[[package]]
name = "pyo3"
-version = "0.21.2"
+version = "0.23.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a5e00b96a521718e08e03b1a622f01c8a8deb50719335de3f60b3b3950f069d8"
+checksum = "f54b3d09cbdd1f8c20650b28e7b09e338881482f4aa908a5f61a00c98fba2690"
dependencies = [
 "anyhow",
 "cfg-if",
 "indoc",
 "libc",
 "memoffset",
- "parking_lot",
+ "once_cell",
 "portable-atomic",
 "pyo3-build-config",
 "pyo3-ffi",

@@ -330,9 +291,9 @@ dependencies = [
[[package]]
name = "pyo3-build-config"
-version = "0.21.2"
+version = "0.23.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7883df5835fafdad87c0d888b266c8ec0f4c9ca48a5bed6bbb592e8dedee1b50"
+checksum = "3015cf985888fe66cfb63ce0e321c603706cd541b7aec7ddd35c281390af45d8"
dependencies = [
 "once_cell",
 "target-lexicon",

@@ -340,9 +301,9 @@ dependencies = [
[[package]]
name = "pyo3-ffi"
-version = "0.21.2"
+version = "0.23.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "01be5843dc60b916ab4dad1dca6d20b9b4e6ddc8e15f50c47fe6d85f1fb97403"
+checksum = "6fca7cd8fd809b5ac4eefb89c1f98f7a7651d3739dfb341ca6980090f554c270"
dependencies = [
 "libc",
 "pyo3-build-config",

@@ -350,9 +311,9 @@ dependencies = [
[[package]]
name = "pyo3-log"
-version = "0.10.0"
+version = "0.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2af49834b8d2ecd555177e63b273b708dea75150abc6f5341d0a6e1a9623976c"
+checksum = "3eb421dc86d38d08e04b927b02424db480be71b777fa3a56f32e2f2a3a1a3b08"
dependencies = [
 "arc-swap",
 "log",

@@ -361,9 +322,9 @@ dependencies = [
[[package]]
name = "pyo3-macros"
-version = "0.21.2"
+version = "0.23.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "77b34069fc0682e11b31dbd10321cbf94808394c56fd996796ce45217dfac53c"
+checksum = "34e657fa5379a79151b6ff5328d9216a84f55dc93b17b08e7c3609a969b73aa0"
dependencies = [
 "proc-macro2",
 "pyo3-macros-backend",

@@ -373,9 +334,9 @@ dependencies = [
[[package]]
name = "pyo3-macros-backend"
-version = "0.21.2"
+version = "0.23.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "08260721f32db5e1a5beae69a55553f56b99bd0e1c3e6e0a5e8851a9d0f5a85c"
+checksum = "295548d5ffd95fd1981d2d3cf4458831b21d60af046b729b6fd143b0ba7aee2f"
dependencies = [
 "heck",
 "proc-macro2",

@@ -386,9 +347,9 @@ dependencies = [
[[package]]
name = "pythonize"
-version = "0.21.1"
+version = "0.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9d0664248812c38cc55a4ed07f88e4df516ce82604b93b1ffdc041aa77a6cb3c"
+checksum = "91a6ee7a084f913f98d70cdc3ebec07e852b735ae3059a1500db2661265da9ff"
dependencies = [
 "pyo3",
 "serde",

@@ -433,15 +394,6 @@ dependencies = [
 "getrandom",
]
-[[package]]
-name = "redox_syscall"
-version = "0.5.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "469052894dcb553421e483e4209ee581a45100d31b4018de03e5a7ad86374a7e"
-dependencies = [
- "bitflags",
-]
[[package]]
name = "regex"
version = "1.11.1"

@@ -477,12 +429,6 @@ version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f3cb5ba0dc43242ce17de99c180e96db90b235b8a9fdc9543c96d2209116bd9f"
-[[package]]
-name = "scopeguard"
-version = "1.2.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
[[package]]
name = "serde"
version = "1.0.215"

@@ -505,9 +451,9 @@ dependencies = [
[[package]]
name = "serde_json"
-version = "1.0.132"
+version = "1.0.133"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d726bfaff4b320266d395898905d0eba0345aae23b54aee3a737e260fd46db03"
+checksum = "c7fceb2473b9166b2294ef05efcb65a3db80803f0b03ef86a5fc88a2b85ee377"
dependencies = [
 "itoa",
 "memchr",

@@ -537,12 +483,6 @@ dependencies = [
 "digest",
]
-[[package]]
-name = "smallvec"
-version = "1.13.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3c5e1a9a646d36c3599cd173a41282daf47c44583ad367b8e6837255952e5c67"
[[package]]
name = "subtle"
version = "2.5.0"

@@ -694,67 +634,3 @@ dependencies = [
 "js-sys",
 "wasm-bindgen",
]
-[[package]]
-name = "windows-targets"
-version = "0.52.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6f0713a46559409d202e70e28227288446bf7841d3211583a4b53e3f6d96e7eb"
-dependencies = [
- "windows_aarch64_gnullvm",
- "windows_aarch64_msvc",
- "windows_i686_gnu",
- "windows_i686_gnullvm",
- "windows_i686_msvc",
- "windows_x86_64_gnu",
- "windows_x86_64_gnullvm",
- "windows_x86_64_msvc",
-]
-[[package]]
-name = "windows_aarch64_gnullvm"
-version = "0.52.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7088eed71e8b8dda258ecc8bac5fb1153c5cffaf2578fc8ff5d61e23578d3263"
-[[package]]
-name = "windows_aarch64_msvc"
-version = "0.52.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9985fd1504e250c615ca5f281c3f7a6da76213ebd5ccc9561496568a2752afb6"
-[[package]]
-name = "windows_i686_gnu"
-version = "0.52.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "88ba073cf16d5372720ec942a8ccbf61626074c6d4dd2e745299726ce8b89670"
-[[package]]
-name = "windows_i686_gnullvm"
-version = "0.52.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "87f4261229030a858f36b459e748ae97545d6f1ec60e5e0d6a3d32e0dc232ee9"
-[[package]]
-name = "windows_i686_msvc"
-version = "0.52.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "db3c2bf3d13d5b658be73463284eaf12830ac9a26a90c717b7f771dfe97487bf"
-[[package]]
-name = "windows_x86_64_gnu"
-version = "0.52.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4e4246f76bdeff09eb48875a0fd3e2af6aada79d409d33011886d3e1581517d9"
-[[package]]
-name = "windows_x86_64_gnullvm"
-version = "0.52.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "852298e482cd67c356ddd9570386e2862b5673c85bd5f88df9ab6802b334c596"
-[[package]]
-name = "windows_x86_64_msvc"
-version = "0.52.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bec47e5bfd1bff0eeaf6d8b485cc1074891a197ab4225d504cb7a1ab88b02bf0"

@@ -1,8 +1,10 @@
# A build script for poetry that adds the rust extension.
+import itertools
import os
from typing import Any, Dict

+from packaging.specifiers import SpecifierSet
from setuptools_rust import Binding, RustExtension

@@ -14,6 +16,8 @@ def build(setup_kwargs: Dict[str, Any]) -> None:
        target="synapse.synapse_rust",
        path=cargo_toml_path,
        binding=Binding.PyO3,
+        # This flag is a no-op in the latest versions. Instead, we need to
+        # specify this in the `bdist_wheel` config below.
        py_limited_api=True,
        # We force always building in release mode, as we can't tell the
        # difference between using `poetry` in development vs production.

@@ -21,3 +25,18 @@ def build(setup_kwargs: Dict[str, Any]) -> None:
    )
    setup_kwargs.setdefault("rust_extensions", []).append(extension)
    setup_kwargs["zip_safe"] = False
+
+    # We lookup the minimum supported python version by looking at
+    # `python_requires` (e.g. ">=3.9.0,<4.0.0") and finding the first python
+    # version that matches. We then convert that into the `py_limited_api` form,
+    # e.g. cp39 for python 3.9.
+    py_limited_api: str
+    python_bounds = SpecifierSet(setup_kwargs["python_requires"])
+    for minor_version in itertools.count(start=8):
+        if f"3.{minor_version}.0" in python_bounds:
+            py_limited_api = f"cp3{minor_version}"
+            break
+
+    setup_kwargs.setdefault("options", {}).setdefault("bdist_wheel", {})[
+        "py_limited_api"
+    ] = py_limited_api
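The lookup above is plain `packaging` machinery and can be sanity-checked in isolation. A minimal standalone sketch — the `">=3.9.0,<4.0.0"` bounds below are an example value, not necessarily Synapse's real `python_requires`:

```python
# Standalone sketch of the version lookup above: find the lowest Python 3
# minor version allowed by the bounds and derive the py_limited_api tag.
import itertools

from packaging.specifiers import SpecifierSet

python_bounds = SpecifierSet(">=3.9.0,<4.0.0")  # example bounds

for minor_version in itertools.count(start=8):
    # "3.8.0" is excluded by ">=3.9.0", so the first match here is "3.9.0".
    if f"3.{minor_version}.0" in python_bounds:
        print(f"cp3{minor_version}")  # -> cp39
        break
```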

changelog.d/17253.misc (new file)

@@ -0,0 +1 @@
[MSC4108](https://github.com/matrix-org/matrix-spec-proposals/pull/4108): Add a `Content-Type` header on the `PUT` response to work around a faulty behavior in some caching reverse proxies.

@@ -1 +0,0 @@
Remove support for closed [MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886).

@@ -1 +0,0 @@
Addressed some typos in docs and returned error message for unknown MXC ID.

changelog.d/17872.doc (new file)

@@ -0,0 +1 @@
Add OIDC example configuration for Forgejo (fork of Gitea).

@@ -1 +0,0 @@
Clarify the semantics of the `enable_authenticated_media` configuration option.

@@ -1 +0,0 @@
Unpin the upload release GHA action.

@@ -1 +0,0 @@
Bump macos version used to build wheels during release, as current version used is end-of-life.

@@ -1 +0,0 @@
Move server event filtering logic to rust.

@@ -1 +0,0 @@
Add documentation about backing up Synapse.

changelog.d/17933.bugfix (new file)

@@ -0,0 +1 @@
Fix long-standing bug where read receipts could get overly delayed being sent over federation.

changelog.d/17936.misc (new file)

@@ -0,0 +1 @@
Fix incorrect comment in new schema delta.

changelog.d/17944.misc (new file)

@@ -0,0 +1 @@
Raise setuptools_rust version cap to 1.10.2.

changelog.d/17945.misc (new file)

@@ -0,0 +1 @@
Enable encrypted appservice related experimental features in the complement docker image.

changelog.d/17952.misc (new file)

@@ -0,0 +1 @@
Return whether the user is suspended when querying the user account in the Admin API.

changelog.d/17953.doc (new file)

@@ -0,0 +1 @@
Link to element-docker-demo from contrib/docker*.

changelog.d/17966.misc (new file)

@@ -0,0 +1 @@
Bump pyo3 and dependencies to v0.23.2.

changelog.d/17969.misc (new file)

@@ -0,0 +1 @@
Update setuptools-rust and fix building abi3 wheels in latest version.

@@ -30,3 +30,6 @@ docker-compose up -d
### More information

For more information on required environment variables and mounts, see the main docker documentation at [/docker/README.md](../../docker/README.md)
**For a more comprehensive Docker Compose example showcasing a full Matrix 2.0 stack, please see
https://github.com/element-hq/element-docker-demo**

@@ -8,6 +8,9 @@ All examples and snippets assume that your Synapse service is called `synapse` i
An example Docker Compose file can be found [here](docker-compose.yaml).

**For a more comprehensive Docker Compose example, showcasing a full Matrix 2.0 stack (originally based on this
docker-compose.yaml), please see https://github.com/element-hq/element-docker-demo**

## Worker Service Examples in Docker Compose

In order to start the Synapse container as a worker, you must specify an `entrypoint` that loads both the `homeserver.yaml` and the configuration for the worker (`synapse-generic-worker-1.yaml` in the example below). You must also include the worker type in the environment variable `SYNAPSE_WORKER`, or alternatively pass `-m synapse.app.generic_worker` as part of the `entrypoint` after `"/start.py", "run"`.

debian/changelog (vendored)

@@ -1,3 +1,15 @@
matrix-synapse-py3 (1.120.0) stable; urgency=medium

  * New synapse release 1.120.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 26 Nov 2024 13:10:23 +0000

matrix-synapse-py3 (1.120.0~rc1) stable; urgency=medium

  * New Synapse release 1.120.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Wed, 20 Nov 2024 15:02:21 +0000

matrix-synapse-py3 (1.119.0) stable; urgency=medium

  * New Synapse release 1.119.0.

@@ -104,6 +104,16 @@ experimental_features:
  msc3967_enabled: true
  # Expose a room summary for public rooms
  msc3266_enabled: true
+  # Send to-device messages to application services
+  msc2409_to_device_messages_enabled: true
+  # Allow application services to masquerade devices
+  msc3202_device_masquerading: true
+  # Sending device list changes, one-time key counts and fallback key usage to application services
+  msc3202_transaction_extensions: true
+  # Proxy OTK claim requests to exclusive ASes
+  msc3983_appservice_otk_claims: true
+  # Proxy key queries to exclusive ASes
+  msc3984_appservice_key_query: true

server_notices:
  system_mxid_localpart: _server

@@ -55,7 +55,8 @@ It returns a JSON body like the following:
        }
    ],
    "user_type": null,
-    "locked": false
+    "locked": false,
+    "suspended": false
}
``` ```
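A minimal sketch of reading the new field via the query-user-account Admin API — the server URL, user ID, and token below are placeholders, not real values:

```python
# Query a user account via the Admin API and read the new "suspended" field.
import requests

resp = requests.get(
    "https://example.com/_synapse/admin/v2/users/@alice:example.com",
    headers={"Authorization": "Bearer <admin_access_token>"},
)
resp.raise_for_status()
account = resp.json()
print(account["locked"], account["suspended"])  # e.g. False False
```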

@@ -336,6 +336,36 @@ but it has a `response_types_supported` which excludes "code" (which we rely on,
is even mentioned in their [documentation](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow#login)),
so we have to disable discovery and configure the URIs manually.
### Forgejo
Forgejo is a fork of Gitea that can act as an OAuth2 provider.
The implementation of OAuth2 is improved compared to Gitea, as it provides a correctly defined `subject_claim` and `scopes`.
Synapse config:
```yaml
oidc_providers:
- idp_id: forgejo
idp_name: Forgejo
discover: false
issuer: "https://your-forgejo.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
client_auth_method: client_secret_post
scopes: ["openid", "profile", "email", "groups"]
authorization_endpoint: "https://your-forgejo.com/login/oauth/authorize"
token_endpoint: "https://your-forgejo.com/login/oauth/access_token"
userinfo_endpoint: "https://your-forgejo.com/api/v1/user"
user_mapping_provider:
config:
subject_claim: "sub"
picture_claim: "picture"
localpart_template: "{{ user.preferred_username }}"
display_name_template: "{{ user.name }}"
email_template: "{{ user.email }}"
```
### GitHub

[GitHub][github-idp] is a bit special as it is not an OpenID Connect compliant provider, but

@@ -128,6 +128,29 @@ removing the experimental support for it in this release.
The `experimental_features.msc3886_endpoint` configuration option has
been removed.
## Authenticated media is now enforced by default
The [`enable_authenticated_media`] configuration option now defaults to true.
This means that clients and remote (federated) homeservers now need to use
the authenticated media endpoints in order to download media from your
homeserver.
As an exception, existing media that was stored on the server prior to
this option changing to `true` will still be accessible over the
unauthenticated endpoints.
The matrix.org homeserver has already been running with this option enabled
since September 2024, so most common clients and homeservers should already
be compatible.
With that said, administrators who wish to disable this feature for broader
compatibility can still do so by manually configuring
`enable_authenticated_media: False`.
[`enable_authenticated_media`]: usage/configuration/config_documentation.md#enable_authenticated_media
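As a quick way to observe the new default from the outside, the two endpoint families can be probed directly. A sketch with placeholder server, media ID, and token values, based only on the endpoint behaviour described in the configuration documentation:

```python
# Probe legacy vs. authenticated media endpoints after the upgrade.
# SERVER, MEDIA, and the access token are hypothetical placeholders.
import requests

SERVER = "https://example.com"
MEDIA = "example.com/abcDEF123"  # <server_name>/<media_id>

# Legacy unauthenticated endpoint: media uploaded after the change 404s here.
r = requests.get(f"{SERVER}/_matrix/media/v3/download/{MEDIA}")
print(r.status_code)  # expect 404 for newly uploaded media

# Authenticated endpoint: succeeds with a valid access token.
r = requests.get(
    f"{SERVER}/_matrix/client/v1/media/download/{MEDIA}",
    headers={"Authorization": "Bearer <access_token>"},
)
print(r.status_code)  # expect 200
```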
# Upgrading to v1.119.0

## Minimum supported Python version

@@ -1887,8 +1887,7 @@ Config options related to Synapse's media store.
When set to true, all subsequent media uploads will be marked as authenticated, and will not be available over legacy
unauthenticated media endpoints (`/_matrix/media/(r0|v3|v1)/download` and `/_matrix/media/(r0|v3|v1)/thumbnail`) - requests for authenticated media over these endpoints will result in a 404. All media, including authenticated media, will be available over the authenticated media endpoints `_matrix/client/v1/media/download` and `_matrix/client/v1/media/thumbnail`. Media uploaded prior to setting this option to true will still be available over the legacy endpoints. Note if the setting is switched to false
-after enabling, media marked as authenticated will be available over legacy endpoints. Defaults to false, but
-this will change to true in a future Synapse release.
+after enabling, media marked as authenticated will be available over legacy endpoints. Defaults to true (previously false). In a future release of Synapse, this option will be removed and become always-on.

In all cases, authenticated requests to download media will succeed, but for unauthenticated requests, this
case-by-case breakdown describes whether media downloads are permitted:

@@ -1910,9 +1909,11 @@ will perpetually be available over the legacy, unauthenticated endpoint, even af
This is for backwards compatibility with older clients and homeservers that do not yet support requesting authenticated media;
those older clients or homeservers will not be cut off from media they can already see.

+_Changed in Synapse 1.120:_ This option now defaults to `True` when not set, whereas before this version it defaulted to `False`.

Example configuration:
```yaml
-enable_authenticated_media: true
+enable_authenticated_media: false
```
---
### `enable_media_repo`

@@ -56,24 +56,6 @@
      "type": "github"
    }
  },
-  "flake-utils_2": {
-    "inputs": {
-      "systems": "systems_2"
-    },
-    "locked": {
-      "lastModified": 1681202837,
-      "narHash": "sha256-H+Rh19JDwRtpVPAWp64F+rlEtxUWBAQW28eAi3SRSzg=",
-      "owner": "numtide",
-      "repo": "flake-utils",
-      "rev": "cfacdce06f30d2b68473a46042957675eebb3401",
-      "type": "github"
-    },
-    "original": {
-      "owner": "numtide",
-      "repo": "flake-utils",
-      "type": "github"
-    }
-  },
  "gitignore": {
    "inputs": {
      "nixpkgs": [

@@ -202,11 +184,11 @@
  },
  "nixpkgs_3": {
    "locked": {
-      "lastModified": 1681358109,
-      "narHash": "sha256-eKyxW4OohHQx9Urxi7TQlFBTDWII+F+x2hklDOQPB50=",
+      "lastModified": 1728538411,
+      "narHash": "sha256-f0SBJz1eZ2yOuKUr5CA9BHULGXVSn6miBuUWdTyhUhU=",
      "owner": "NixOS",
      "repo": "nixpkgs",
-      "rev": "96ba1c52e54e74c3197f4d43026b3f3d92e83ff9",
+      "rev": "b69de56fac8c2b6f8fd27f2eca01dcda8e0a4221",
      "type": "github"
    },
    "original": {

@@ -249,20 +231,19 @@
    "devenv": "devenv",
    "nixpkgs": "nixpkgs_2",
    "rust-overlay": "rust-overlay",
-    "systems": "systems_3"
+    "systems": "systems_2"
    }
  },
  "rust-overlay": {
    "inputs": {
-      "flake-utils": "flake-utils_2",
      "nixpkgs": "nixpkgs_3"
    },
    "locked": {
-      "lastModified": 1693966243,
-      "narHash": "sha256-a2CA1aMIPE67JWSVIGoGtD3EGlFdK9+OlJQs0FOWCKY=",
+      "lastModified": 1731897198,
+      "narHash": "sha256-Ou7vLETSKwmE/HRQz4cImXXJBr/k9gp4J4z/PF8LzTE=",
      "owner": "oxalica",
      "repo": "rust-overlay",
-      "rev": "a8b4bb4cbb744baaabc3e69099f352f99164e2c1",
+      "rev": "0be641045af6d8666c11c2c40e45ffc9667839b5",
      "type": "github"
    },
    "original": {

@@ -300,21 +281,6 @@
      "repo": "default",
      "type": "github"
    }
-  },
-  "systems_3": {
-    "locked": {
-      "lastModified": 1681028828,
-      "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
-      "owner": "nix-systems",
-      "repo": "default",
-      "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
-      "type": "github"
-    },
-    "original": {
-      "owner": "nix-systems",
-      "repo": "default",
-      "type": "github"
-    }
  }
},
"root": "root",

@@ -82,7 +82,7 @@
#
# NOTE: We currently need to set the Rust version unnecessarily high
# in order to work around https://github.com/matrix-org/synapse/issues/15939
-(rust-bin.stable."1.71.1".default.override {
+(rust-bin.stable."1.82.0".default.override {
  # Additionally install the "rust-src" extension to allow diving into the
  # Rust source code in an IDE (rust-analyzer will also make use of it).
  extensions = [ "rust-src" ];

@@ -205,7 +205,7 @@
# corresponding Nix packages on https://search.nixos.org/packages.
#
# This was done until `./install-deps.pl --dryrun` produced no output.
-env.PERL5LIB = "${with pkgs.perl536Packages; makePerlPath [
+env.PERL5LIB = "${with pkgs.perl538Packages; makePerlPath [
  DBI
  ClassMethodModifiers
  CryptEd25519

poetry.lock (generated)

@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.
+# This file is automatically @generated by Poetry 1.8.4 and should not be changed by hand.

[[package]]
name = "annotated-types"

@@ -2405,19 +2405,18 @@ test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "importlib-metadata
[[package]]
name = "setuptools-rust"
-version = "1.8.1"
+version = "1.10.2"
description = "Setuptools Rust extension plugin"
optional = false
python-versions = ">=3.8"
files = [
-    {file = "setuptools-rust-1.8.1.tar.gz", hash = "sha256:94b1dd5d5308b3138d5b933c3a2b55e6d6927d1a22632e509fcea9ddd0f7e486"},
-    {file = "setuptools_rust-1.8.1-py3-none-any.whl", hash = "sha256:b5324493949ccd6aa0c03890c5f6b5f02de4512e3ac1697d02e9a6c02b18aa8e"},
+    {file = "setuptools_rust-1.10.2-py3-none-any.whl", hash = "sha256:4b39c435ae9670315d522ed08fa0e8cb29f2a6048033966b6be2571a90ce4f1c"},
+    {file = "setuptools_rust-1.10.2.tar.gz", hash = "sha256:5d73e7eee5f87a6417285b617c97088a7c20d1a70fcea60e3bdc94ff567c29dc"},
]

[package.dependencies]
semantic-version = ">=2.8.2,<3"
setuptools = ">=62.4"
-tomli = {version = ">=1.2.1", markers = "python_version < \"3.11\""}

[[package]]
name = "signedjson"

@@ -2515,33 +2514,33 @@ twisted = ["twisted"]
[[package]]
name = "tomli"
-version = "2.0.2"
+version = "2.1.0"
description = "A lil' TOML parser"
optional = false
python-versions = ">=3.8"
files = [
-    {file = "tomli-2.0.2-py3-none-any.whl", hash = "sha256:2ebe24485c53d303f690b0ec092806a085f07af5a5aa1464f3931eec36caaa38"},
-    {file = "tomli-2.0.2.tar.gz", hash = "sha256:d46d457a85337051c36524bc5349dd91b1877838e2979ac5ced3e710ed8a60ed"},
+    {file = "tomli-2.1.0-py3-none-any.whl", hash = "sha256:a5c57c3d1c56f5ccdf89f6523458f60ef716e210fc47c4cfb188c5ba473e0391"},
+    {file = "tomli-2.1.0.tar.gz", hash = "sha256:3f646cae2aec94e17d04973e4249548320197cfabdf130015d023de4b74d8ab8"},
]

[[package]]
name = "tornado"
-version = "6.4.1"
+version = "6.4.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
optional = true
python-versions = ">=3.8"
files = [
-    {file = "tornado-6.4.1-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:163b0aafc8e23d8cdc3c9dfb24c5368af84a81e3364745ccb4427669bf84aec8"},
-    {file = "tornado-6.4.1-cp38-abi3-macosx_10_9_x86_64.whl", hash = "sha256:6d5ce3437e18a2b66fbadb183c1d3364fb03f2be71299e7d10dbeeb69f4b2a14"},
-    {file = "tornado-6.4.1-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e2e20b9113cd7293f164dc46fffb13535266e713cdb87bd2d15ddb336e96cfc4"},
-    {file = "tornado-6.4.1-cp38-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ae50a504a740365267b2a8d1a90c9fbc86b780a39170feca9bcc1787ff80842"},
-    {file = "tornado-6.4.1-cp38-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:613bf4ddf5c7a95509218b149b555621497a6cc0d46ac341b30bd9ec19eac7f3"},
-    {file = "tornado-6.4.1-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:25486eb223babe3eed4b8aecbac33b37e3dd6d776bc730ca14e1bf93888b979f"},
-    {file = "tornado-6.4.1-cp38-abi3-musllinux_1_2_i686.whl", hash = "sha256:454db8a7ecfcf2ff6042dde58404164d969b6f5d58b926da15e6b23817950fc4"},
-    {file = "tornado-6.4.1-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:a02a08cc7a9314b006f653ce40483b9b3c12cda222d6a46d4ac63bb6c9057698"},
-    {file = "tornado-6.4.1-cp38-abi3-win32.whl", hash = "sha256:d9a566c40b89757c9aa8e6f032bcdb8ca8795d7c1a9762910c722b1635c9de4d"},
-    {file = "tornado-6.4.1-cp38-abi3-win_amd64.whl", hash = "sha256:b24b8982ed444378d7f21d563f4180a2de31ced9d8d84443907a0a64da2072e7"},
-    {file = "tornado-6.4.1.tar.gz", hash = "sha256:92d3ab53183d8c50f8204a51e6f91d18a15d5ef261e84d452800d4ff6fc504e9"},
+    {file = "tornado-6.4.2-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e828cce1123e9e44ae2a50a9de3055497ab1d0aeb440c5ac23064d9e44880da1"},
+    {file = "tornado-6.4.2-cp38-abi3-macosx_10_9_x86_64.whl", hash = "sha256:072ce12ada169c5b00b7d92a99ba089447ccc993ea2143c9ede887e0937aa803"},
+    {file = "tornado-6.4.2-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1a017d239bd1bb0919f72af256a970624241f070496635784d9bf0db640d3fec"},
+    {file = "tornado-6.4.2-cp38-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c36e62ce8f63409301537222faffcef7dfc5284f27eec227389f2ad11b09d946"},
+    {file = "tornado-6.4.2-cp38-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bca9eb02196e789c9cb5c3c7c0f04fb447dc2adffd95265b2c7223a8a615ccbf"},
+    {file = "tornado-6.4.2-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:304463bd0772442ff4d0f5149c6f1c2135a1fae045adf070821c6cdc76980634"},
+    {file = "tornado-6.4.2-cp38-abi3-musllinux_1_2_i686.whl", hash = "sha256:c82c46813ba483a385ab2a99caeaedf92585a1f90defb5693351fa7e4ea0bf73"},
+    {file = "tornado-6.4.2-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:932d195ca9015956fa502c6b56af9eb06106140d844a335590c1ec7f5277d10c"},
+    {file = "tornado-6.4.2-cp38-abi3-win32.whl", hash = "sha256:2876cef82e6c5978fde1e0d5b1f919d756968d5b4282418f3146b79b58556482"},
+    {file = "tornado-6.4.2-cp38-abi3-win_amd64.whl", hash = "sha256:908b71bf3ff37d81073356a5fadcc660eb10c1476ee6e2725588626ce7e5ca38"},
+    {file = "tornado-6.4.2.tar.gz", hash = "sha256:92bad5b4746e9879fd7bf1eb21dce4e3fc5128d71601f80005afa39237ad620b"},
]

[[package]]

@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
-version = "1.119.0"
+version = "1.120.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"

@@ -370,7 +370,7 @@ tomli = ">=1.2.3"
# runtime errors caused by build system changes.
# We are happy to raise these upper bounds upon request,
# provided we check that it's safe to do so (i.e. that CI passes).
-requires = ["poetry-core>=1.1.0,<=1.9.1", "setuptools_rust>=1.3,<=1.8.1"]
+requires = ["poetry-core>=1.1.0,<=1.9.1", "setuptools_rust>=1.3,<=1.10.2"]
build-backend = "poetry.core.masonry.api"

@@ -30,14 +30,14 @@ http = "1.1.0"
lazy_static = "1.4.0"
log = "0.4.17"
mime = "0.3.17"
-pyo3 = { version = "0.21.0", features = [
+pyo3 = { version = "0.23.2", features = [
    "macros",
    "anyhow",
    "abi3",
    "abi3-py38",
] }
-pyo3-log = "0.10.0"
-pythonize = "0.21.0"
+pyo3-log = "0.12.0"
+pythonize = "0.23.0"
regex = "1.6.0"
sha2 = "0.10.8"
serde = { version = "1.0.144", features = ["derive"] }

@@ -32,14 +32,14 @@ use crate::push::utils::{glob_to_regex, GlobMatchType};
/// Called when registering modules with python.
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
-    let child_module = PyModule::new_bound(py, "acl")?;
+    let child_module = PyModule::new(py, "acl")?;
    child_module.add_class::<ServerAclEvaluator>()?;
    m.add_submodule(&child_module)?;

    // We need to manually add the module to sys.modules to make `from
    // synapse.synapse_rust import acl` work.
-    py.import_bound("sys")?
+    py.import("sys")?
        .getattr("modules")?
        .set_item("synapse.synapse_rust.acl", child_module)?;

@@ -41,9 +41,11 @@ use pyo3::{
    pybacked::PyBackedStr,
    pyclass, pymethods,
    types::{PyAnyMethods, PyDict, PyDictMethods, PyString},
-    Bound, IntoPy, PyAny, PyObject, PyResult, Python,
+    Bound, IntoPyObject, PyAny, PyObject, PyResult, Python,
};

+use crate::UnwrapInfallible;
+
/// Definitions of the various fields of the internal metadata.
#[derive(Clone)]
enum EventInternalMetadataData {

@@ -60,31 +62,59 @@ enum EventInternalMetadataData {
impl EventInternalMetadataData {
    /// Convert the field to its name and python object.
-    fn to_python_pair<'a>(&self, py: Python<'a>) -> (&'a Bound<'a, PyString>, PyObject) {
+    fn to_python_pair<'a>(&self, py: Python<'a>) -> (&'a Bound<'a, PyString>, Bound<'a, PyAny>) {
        match self {
-            EventInternalMetadataData::OutOfBandMembership(o) => {
-                (pyo3::intern!(py, "out_of_band_membership"), o.into_py(py))
-            }
-            EventInternalMetadataData::SendOnBehalfOf(o) => {
-                (pyo3::intern!(py, "send_on_behalf_of"), o.into_py(py))
-            }
-            EventInternalMetadataData::RecheckRedaction(o) => {
-                (pyo3::intern!(py, "recheck_redaction"), o.into_py(py))
-            }
-            EventInternalMetadataData::SoftFailed(o) => {
-                (pyo3::intern!(py, "soft_failed"), o.into_py(py))
-            }
-            EventInternalMetadataData::ProactivelySend(o) => {
-                (pyo3::intern!(py, "proactively_send"), o.into_py(py))
-            }
-            EventInternalMetadataData::Redacted(o) => {
-                (pyo3::intern!(py, "redacted"), o.into_py(py))
-            }
-            EventInternalMetadataData::TxnId(o) => (pyo3::intern!(py, "txn_id"), o.into_py(py)),
-            EventInternalMetadataData::TokenId(o) => (pyo3::intern!(py, "token_id"), o.into_py(py)),
-            EventInternalMetadataData::DeviceId(o) => {
-                (pyo3::intern!(py, "device_id"), o.into_py(py))
-            }
+            EventInternalMetadataData::OutOfBandMembership(o) => (
+                pyo3::intern!(py, "out_of_band_membership"),
+                o.into_pyobject(py)
+                    .unwrap_infallible()
+                    .to_owned()
+                    .into_any(),
+            ),
+            EventInternalMetadataData::SendOnBehalfOf(o) => (
+                pyo3::intern!(py, "send_on_behalf_of"),
+                o.into_pyobject(py).unwrap_infallible().into_any(),
+            ),
+            EventInternalMetadataData::RecheckRedaction(o) => (
+                pyo3::intern!(py, "recheck_redaction"),
+                o.into_pyobject(py)
+                    .unwrap_infallible()
+                    .to_owned()
+                    .into_any(),
+            ),
+            EventInternalMetadataData::SoftFailed(o) => (
+                pyo3::intern!(py, "soft_failed"),
+                o.into_pyobject(py)
+                    .unwrap_infallible()
+                    .to_owned()
+                    .into_any(),
+            ),
+            EventInternalMetadataData::ProactivelySend(o) => (
+                pyo3::intern!(py, "proactively_send"),
+                o.into_pyobject(py)
+                    .unwrap_infallible()
+                    .to_owned()
+                    .into_any(),
+            ),
+            EventInternalMetadataData::Redacted(o) => (
+                pyo3::intern!(py, "redacted"),
+                o.into_pyobject(py)
+                    .unwrap_infallible()
+                    .to_owned()
+                    .into_any(),
+            ),
+            EventInternalMetadataData::TxnId(o) => (
+                pyo3::intern!(py, "txn_id"),
+                o.into_pyobject(py).unwrap_infallible().into_any(),
+            ),
+            EventInternalMetadataData::TokenId(o) => (
+                pyo3::intern!(py, "token_id"),
+                o.into_pyobject(py).unwrap_infallible().into_any(),
+            ),
+            EventInternalMetadataData::DeviceId(o) => (
+                pyo3::intern!(py, "device_id"),
+                o.into_pyobject(py).unwrap_infallible().into_any(),
+            ),
        }
    }

@@ -247,7 +277,7 @@ impl EventInternalMetadata {
    ///
    /// Note that `outlier` and `stream_ordering` are stored in separate columns so are not returned here.
    fn get_dict(&self, py: Python<'_>) -> PyResult<PyObject> {
-        let dict = PyDict::new_bound(py);
+        let dict = PyDict::new(py);

        for entry in &self.data {
            let (key, value) = entry.to_python_pair(py);

@@ -30,7 +30,7 @@ mod internal_metadata;
/// Called when registering modules with python.
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
-    let child_module = PyModule::new_bound(py, "events")?;
+    let child_module = PyModule::new(py, "events")?;
    child_module.add_class::<internal_metadata::EventInternalMetadata>()?;
    child_module.add_function(wrap_pyfunction!(filter::event_visible_to_server_py, m)?)?;

@@ -38,7 +38,7 @@ pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()>
    // We need to manually add the module to sys.modules to make `from
    // synapse.synapse_rust import events` work.
-    py.import_bound("sys")?
+    py.import("sys")?
        .getattr("modules")?
        .set_item("synapse.synapse_rust.events", child_module)?;

@@ -70,7 +70,7 @@ pub fn http_request_from_twisted(request: &Bound<'_, PyAny>) -> PyResult<Request
    let headers_iter = request
        .getattr("requestHeaders")?
        .call_method0("getAllRawHeaders")?
-        .iter()?;
+        .try_iter()?;

    for header in headers_iter {
        let header = header?;

@@ -1,3 +1,5 @@
+use std::convert::Infallible;
+
use lazy_static::lazy_static;
use pyo3::prelude::*;
use pyo3_log::ResetHandle;

@@ -52,3 +54,16 @@ fn synapse_rust(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
    Ok(())
}

+pub trait UnwrapInfallible<T> {
+    fn unwrap_infallible(self) -> T;
+}
+
+impl<T> UnwrapInfallible<T> for Result<T, Infallible> {
+    fn unwrap_infallible(self) -> T {
+        match self {
+            Ok(val) => val,
+            Err(never) => match never {},
+        }
+    }
+}

@@ -167,6 +167,7 @@ impl PushRuleEvaluator {
    ///
    /// Returns the set of actions, if any, that match (filtering out any
    /// `dont_notify` and `coalesce` actions).
+    #[pyo3(signature = (push_rules, user_id=None, display_name=None))]
    pub fn run(
        &self,
        push_rules: &FilteredPushRules,

@@ -236,6 +237,7 @@
    }

    /// Check if the given condition matches.
+    #[pyo3(signature = (condition, user_id=None, display_name=None))]
    fn matches(
        &self,
        condition: Condition,

@@ -65,8 +65,8 @@ use anyhow::{Context, Error};
use log::warn;
use pyo3::exceptions::PyTypeError;
use pyo3::prelude::*;
-use pyo3::types::{PyBool, PyList, PyLong, PyString};
-use pythonize::{depythonize_bound, pythonize};
+use pyo3::types::{PyBool, PyInt, PyList, PyString};
+use pythonize::{depythonize, pythonize, PythonizeError};
use serde::de::Error as _;
use serde::{Deserialize, Serialize};
use serde_json::Value;

@@ -79,7 +79,7 @@ pub mod utils;
/// Called when registering modules with python.
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
-    let child_module = PyModule::new_bound(py, "push")?;
+    let child_module = PyModule::new(py, "push")?;
    child_module.add_class::<PushRule>()?;
    child_module.add_class::<PushRules>()?;
    child_module.add_class::<FilteredPushRules>()?;

@@ -90,7 +90,7 @@ pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()>
    // We need to manually add the module to sys.modules to make `from
    // synapse.synapse_rust import push` work.
-    py.import_bound("sys")?
+    py.import("sys")?
        .getattr("modules")?
        .set_item("synapse.synapse_rust.push", child_module)?;

@@ -182,12 +182,16 @@ pub enum Action {
    Unknown(Value),
}

-impl IntoPy<PyObject> for Action {
-    fn into_py(self, py: Python<'_>) -> PyObject {
+impl<'py> IntoPyObject<'py> for Action {
+    type Target = PyAny;
+    type Output = Bound<'py, Self::Target>;
+    type Error = PythonizeError;
+
+    fn into_pyobject(self, py: Python<'py>) -> Result<Self::Output, Self::Error> {
        // When we pass the `Action` struct to Python we want it to be converted
        // to a dict. We use `pythonize`, which converts the struct using the
        // `serde` serialization.
-        pythonize(py, &self).expect("valid action")
+        pythonize(py, &self)
    }
}

@@ -270,13 +274,13 @@ pub enum SimpleJsonValue {
}

impl<'source> FromPyObject<'source> for SimpleJsonValue {
-    fn extract(ob: &'source PyAny) -> PyResult<Self> {
+    fn extract_bound(ob: &Bound<'source, PyAny>) -> PyResult<Self> {
        if let Ok(s) = ob.downcast::<PyString>() {
            Ok(SimpleJsonValue::Str(Cow::Owned(s.to_string())))
        // A bool *is* an int, ensure we try bool first.
        } else if let Ok(b) = ob.downcast::<PyBool>() {
            Ok(SimpleJsonValue::Bool(b.extract()?))
-        } else if let Ok(i) = ob.downcast::<PyLong>() {
+        } else if let Ok(i) = ob.downcast::<PyInt>() {
            Ok(SimpleJsonValue::Int(i.extract()?))
        } else if ob.is_none() {
            Ok(SimpleJsonValue::Null)

@@ -298,15 +302,19 @@ pub enum JsonValue {
}

impl<'source> FromPyObject<'source> for JsonValue {
-    fn extract(ob: &'source PyAny) -> PyResult<Self> {
+    fn extract_bound(ob: &Bound<'source, PyAny>) -> PyResult<Self> {
        if let Ok(l) = ob.downcast::<PyList>() {
-            match l.iter().map(SimpleJsonValue::extract).collect() {
+            match l
+                .iter()
+                .map(|it| SimpleJsonValue::extract_bound(&it))
+                .collect()
+            {
                Ok(a) => Ok(JsonValue::Array(a)),
                Err(e) => Err(PyTypeError::new_err(format!(
                    "Can't convert to JsonValue::Array: {e}"
                ))),
            }
-        } else if let Ok(v) = SimpleJsonValue::extract(ob) {
+        } else if let Ok(v) = SimpleJsonValue::extract_bound(ob) {
            Ok(JsonValue::Value(v))
        } else {
            Err(PyTypeError::new_err(format!(

@@ -363,15 +371,19 @@ pub enum KnownCondition {
    },
}

-impl IntoPy<PyObject> for Condition {
-    fn into_py(self, py: Python<'_>) -> PyObject {
-        pythonize(py, &self).expect("valid condition")
+impl<'source> IntoPyObject<'source> for Condition {
+    type Target = PyAny;
+    type Output = Bound<'source, Self::Target>;
+    type Error = PythonizeError;
+
+    fn into_pyobject(self, py: Python<'source>) -> Result<Self::Output, Self::Error> {
+        pythonize(py, &self)
    }
}

impl<'source> FromPyObject<'source> for Condition {
    fn extract_bound(ob: &Bound<'source, PyAny>) -> PyResult<Self> {
-        Ok(depythonize_bound(ob.clone())?)
+        Ok(depythonize(ob)?)
    }
}

@@ -29,7 +29,7 @@ use pyo3::{
    exceptions::PyValueError,
    pyclass, pymethods,
    types::{PyAnyMethods, PyModule, PyModuleMethods},
-    Bound, Py, PyAny, PyObject, PyResult, Python, ToPyObject,
+    Bound, IntoPyObject, Py, PyAny, PyObject, PyResult, Python,
};

use ulid::Ulid;

@@ -37,6 +37,7 @@ use self::session::Session;
use crate::{
    errors::{NotFoundError, SynapseError},
    http::{http_request_from_twisted, http_response_to_twisted, HeaderMapPyExt},
+    UnwrapInfallible,
};

mod session;

@@ -125,7 +126,11 @@ impl RendezvousHandler {
        let base = Uri::try_from(format!("{base}_synapse/client/rendezvous"))
            .map_err(|_| PyValueError::new_err("Invalid base URI"))?;

-        let clock = homeserver.call_method0("get_clock")?.to_object(py);
+        let clock = homeserver
+            .call_method0("get_clock")?
+            .into_pyobject(py)
+            .unwrap_infallible()
+            .unbind();

        // Construct a Python object so that we can get a reference to the
        // evict method and schedule it to run.

@@ -288,6 +293,13 @@ impl RendezvousHandler {
        let mut response = Response::new(Bytes::new());
        *response.status_mut() = StatusCode::ACCEPTED;
        prepare_headers(response.headers_mut(), session);
+
+        // Even though this isn't mandated by the MSC, we set a Content-Type on the response. It
+        // doesn't do any harm as the body is empty, but this helps escape a bug in some reverse
+        // proxy/cache setup which strips the ETag header if there is no Content-Type set.
+        // Specifically, we noticed this behaviour when placing Synapse behind Cloudflare.
+        response.headers_mut().typed_insert(ContentType::text());
+
        http_response_to_twisted(twisted_request, response)?;

        Ok(())

@@ -311,7 +323,7 @@
    }
}

pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
-    let child_module = PyModule::new_bound(py, "rendezvous")?;
+    let child_module = PyModule::new(py, "rendezvous")?;
    child_module.add_class::<RendezvousHandler>()?;

@@ -319,7 +331,7 @@ pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()>
    // We need to manually add the module to sys.modules to make `from
    // synapse.synapse_rust import rendezvous` work.
-    py.import_bound("sys")?
+    py.import("sys")?
        .getattr("modules")?
        .set_item("synapse.synapse_rust.rendezvous", child_module)?;

@@ -88,6 +88,7 @@ from synapse.storage.databases.main.relations import RelationsWorkerStore
from synapse.storage.databases.main.room import RoomBackgroundUpdateStore
from synapse.storage.databases.main.roommember import RoomMemberBackgroundUpdateStore
from synapse.storage.databases.main.search import SearchBackgroundUpdateStore
+from synapse.storage.databases.main.sliding_sync import SlidingSyncStore
from synapse.storage.databases.main.state import MainStateBackgroundUpdateStore
from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.user_directory import (

@@ -255,6 +256,7 @@ class Store(
    ReceiptsBackgroundUpdateStore,
    RelationsWorkerStore,
    EventFederationWorkerStore,
+    SlidingSyncStore,
):
    def execute(self, f: Callable[..., R], *args: Any, **kwargs: Any) -> Awaitable[R]:
        return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)
@@ -272,9 +272,7 @@ class ContentRepositoryConfig(Config):
             remote_media_lifetime
         )

-        self.enable_authenticated_media = config.get(
-            "enable_authenticated_media", False
-        )
+        self.enable_authenticated_media = config.get("enable_authenticated_media", True)

     def generate_config_section(self, data_dir_path: str, **kwargs: Any) -> str:
         assert data_dir_path is not None
@@ -140,7 +140,6 @@ from typing import (
     Iterable,
     List,
     Optional,
-    Set,
     Tuple,
 )
@@ -170,7 +169,13 @@ from synapse.metrics.background_process_metrics import (
     run_as_background_process,
     wrap_as_background_process,
 )
-from synapse.types import JsonDict, ReadReceipt, RoomStreamToken, StrCollection
+from synapse.types import (
+    JsonDict,
+    ReadReceipt,
+    RoomStreamToken,
+    StrCollection,
+    get_domain_from_id,
+)
 from synapse.util import Clock
 from synapse.util.metrics import Measure
 from synapse.util.retryutils import filter_destinations_by_retry_limiter
@@ -297,12 +302,10 @@ class _DestinationWakeupQueue:
     # being woken up.
     _MAX_TIME_IN_QUEUE = 30.0

-    # The maximum duration in seconds between waking up consecutive destination
-    # queues.
-    _MAX_DELAY = 0.1

     sender: "FederationSender" = attr.ib()
     clock: Clock = attr.ib()
+    max_delay_s: int = attr.ib()

     queue: "OrderedDict[str, Literal[None]]" = attr.ib(factory=OrderedDict)
     processing: bool = attr.ib(default=False)
@@ -332,7 +335,7 @@ class _DestinationWakeupQueue:
         # We also add an upper bound to the delay, to gracefully handle the
         # case where the queue only has a few entries in it.
         current_sleep_seconds = min(
-            self._MAX_DELAY, self._MAX_TIME_IN_QUEUE / len(self.queue)
+            self.max_delay_s, self._MAX_TIME_IN_QUEUE / len(self.queue)
         )

         while self.queue:
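To make the pacing concrete, here is a minimal sketch of the delay computation in the hunk above. `max_delay_s` is now supplied by the caller (the per-room receipt rate limit, `1.0 / federation_rr_transactions_per_room_per_second`, per the constructor change below); the rate-limit value of 50/s used in the example is an illustrative assumption, not something stated in this diff.

```python
# Sketch of the wakeup-delay calculation in _DestinationWakeupQueue.
MAX_TIME_IN_QUEUE = 30.0  # aim to drain the whole queue within 30s


def sleep_between_wakeups(max_delay_s: float, queue_len: int) -> float:
    # Spread wakeups evenly across the queue, but never wait longer than
    # max_delay_s between consecutive destinations.
    return min(max_delay_s, MAX_TIME_IN_QUEUE / queue_len)


# Assuming a limit of 50 receipt transactions per room per second,
# max_delay_s = 1.0 / 50 = 0.02s:
print(sleep_between_wakeups(1.0 / 50, 3))     # 0.02 (capped by max_delay_s)
print(sleep_between_wakeups(1.0 / 50, 3000))  # 0.01 (30s / 3000 entries)
```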
@@ -416,19 +419,14 @@ class FederationSender(AbstractFederationSender):
         self._is_processing = False
         self._last_poked_id = -1

-        # map from room_id to a set of PerDestinationQueues which we believe are
-        # awaiting a call to flush_read_receipts_for_room. The presence of an entry
-        # here for a given room means that we are rate-limiting RR flushes to that room,
-        # and that there is a pending call to _flush_rrs_for_room in the system.
-        self._queues_awaiting_rr_flush_by_room: Dict[str, Set[PerDestinationQueue]] = {}
-        self._rr_txn_interval_per_room_ms = (
-            1000.0
-            / hs.config.ratelimiting.federation_rr_transactions_per_room_per_second
-        )

         self._external_cache = hs.get_external_cache()
-        self._destination_wakeup_queue = _DestinationWakeupQueue(self, self.clock)
+
+        rr_txn_interval_per_room_s = (
+            1.0 / hs.config.ratelimiting.federation_rr_transactions_per_room_per_second
+        )
+        self._destination_wakeup_queue = _DestinationWakeupQueue(
+            self, self.clock, max_delay_s=rr_txn_interval_per_room_s
+        )

         # Regularly wake up destinations that have outstanding PDUs to be caught up
         self.clock.looping_call_now(
@@ -745,37 +743,48 @@ class FederationSender(AbstractFederationSender):
         # Some background on the rate-limiting going on here.
         #
-        # It turns out that if we attempt to send out RRs as soon as we get them from
-        # a client, then we end up trying to do several hundred Hz of federation
-        # transactions. (The number of transactions scales as O(N^2) on the size of a
-        # room, since in a large room we have both more RRs coming in, and more servers
-        # to send them to.)
+        # It turns out that if we attempt to send out RRs as soon as we get them
+        # from a client, then we end up trying to do several hundred Hz of
+        # federation transactions. (The number of transactions scales as O(N^2)
+        # on the size of a room, since in a large room we have both more RRs
+        # coming in, and more servers to send them to.)
         #
-        # This leads to a lot of CPU load, and we end up getting behind. The solution
-        # currently adopted is as follows:
+        # This leads to a lot of CPU load, and we end up getting behind. The
+        # solution currently adopted is to differentiate between receipts and
+        # destinations we should immediately send to, and those we can trickle
+        # the receipts to.
         #
-        # The first receipt in a given room is sent out immediately, at time T0. Any
-        # further receipts are, in theory, batched up for N seconds, where N is calculated
-        # based on the number of servers in the room to achieve a transaction frequency
-        # of around 50Hz. So, for example, if there were 100 servers in the room, then
-        # N would be 100 / 50Hz = 2 seconds.
+        # The current logic is to send receipts out immediately if:
+        #   - the room is "small", i.e. there's only N servers to send receipts
+        #     to, and so sending out the receipts immediately doesn't cause too
+        #     much load; or
+        #   - the receipt is for an event that happened recently, as users
+        #     notice if receipts are delayed when they know other users are
+        #     currently reading the room; or
+        #   - the receipt is being sent to the server that sent the event, so
+        #     that users see receipts for their own messages quickly.
         #
-        # Then, after T+N, we flush out any receipts that have accumulated, and restart
-        # the timer to flush out more receipts at T+2N, etc. If no receipts accumulate,
-        # we stop the cycle and go back to the start.
+        # For destinations that we should delay sending the receipt to, we queue
+        # the receipts up to be sent in the next transaction, but don't trigger
+        # a new transaction to be sent. We then add the destination to the
+        # `DestinationWakeupQueue`, which will slowly iterate over each
+        # destination and trigger a new transaction to be sent.
         #
-        # However, in practice, it is often possible to flush out receipts earlier: in
-        # particular, if we are sending a transaction to a given server anyway (for
-        # example, because we have a PDU or a RR in another room to send), then we may
-        # as well send out all of the pending RRs for that server. So it may be that
-        # by the time we get to T+N, we don't actually have any RRs left to send out.
-        # Nevertheless we continue to buffer up RRs for the room in question until we
-        # reach the point that no RRs arrive between timer ticks.
+        # However, in practice, it is often possible to send out delayed
+        # receipts earlier: in particular, if we are sending a transaction to a
+        # given server anyway (for example, because we have a PDU or a RR in
+        # another room to send), then we may as well send out all of the pending
+        # RRs for that server. So it may be that by the time we get to waking up
+        # the destination, we don't actually have any RRs left to send out.
         #
-        # For even more background, see https://github.com/matrix-org/synapse/issues/4730.
+        # For even more background, see
+        # https://github.com/matrix-org/synapse/issues/4730.

         room_id = receipt.room_id

+        # Local read receipts always have 1 event ID.
+        event_id = receipt.event_ids[0]
+
         # Work out which remote servers should be poked and poke them.
         domains_set = await self._storage_controllers.state.get_current_hosts_in_room_or_partial_state_approximation(
             room_id
@@ -797,49 +806,51 @@ class FederationSender(AbstractFederationSender):
         if not domains:
             return

-        queues_pending_flush = self._queues_awaiting_rr_flush_by_room.get(room_id)
-
-        # if there is no flush yet scheduled, we will send out these receipts with
-        # immediate flushes, and schedule the next flush for this room.
-        if queues_pending_flush is not None:
-            logger.debug("Queuing receipt for: %r", domains)
-        else:
-            logger.debug("Sending receipt to: %r", domains)
-            self._schedule_rr_flush_for_room(room_id, len(domains))
-
-        for domain in domains:
-            queue = self._get_per_destination_queue(domain)
-            queue.queue_read_receipt(receipt)
-
-            # if there is already a RR flush pending for this room, then make sure this
-            # destination is registered for the flush
-            if queues_pending_flush is not None:
-                queues_pending_flush.add(queue)
-            else:
-                queue.flush_read_receipts_for_room(room_id)
-
-    def _schedule_rr_flush_for_room(self, room_id: str, n_domains: int) -> None:
-        # that is going to cause approximately len(domains) transactions, so now back
-        # off for that multiplied by RR_TXN_INTERVAL_PER_ROOM
-        backoff_ms = self._rr_txn_interval_per_room_ms * n_domains
-
-        logger.debug("Scheduling RR flush in %s in %d ms", room_id, backoff_ms)
-        self.clock.call_later(backoff_ms, self._flush_rrs_for_room, room_id)
-        self._queues_awaiting_rr_flush_by_room[room_id] = set()
-
-    def _flush_rrs_for_room(self, room_id: str) -> None:
-        queues = self._queues_awaiting_rr_flush_by_room.pop(room_id)
-        logger.debug("Flushing RRs in %s to %s", room_id, queues)
-
-        if not queues:
-            # no more RRs arrived for this room; we are done.
-            return
-
-        # schedule the next flush
-        self._schedule_rr_flush_for_room(room_id, len(queues))
-
-        for queue in queues:
-            queue.flush_read_receipts_for_room(room_id)
+        # We now split which domains we want to wake up immediately vs which we
+        # want to delay waking up.
+        immediate_domains: StrCollection
+        delay_domains: StrCollection
+
+        if len(domains) < 10:
+            # For "small" rooms send to all domains immediately
+            immediate_domains = domains
+            delay_domains = ()
+        else:
+            metadata = await self.store.get_metadata_for_event(
+                receipt.room_id, event_id
+            )
+            assert metadata is not None
+
+            sender_domain = get_domain_from_id(metadata.sender)
+
+            if self.clock.time_msec() - metadata.received_ts < 60_000:
+                # We always send receipts for recent messages immediately
+                immediate_domains = domains
+                delay_domains = ()
+            else:
+                # Otherwise, we delay waking up all destinations except for the
+                # sender's domain.
+                immediate_domains = []
+                delay_domains = []
+
+                for domain in domains:
+                    if domain == sender_domain:
+                        immediate_domains.append(domain)
+                    else:
+                        delay_domains.append(domain)
+
+        for domain in immediate_domains:
+            # Add to destination queue and wake the destination up
+            queue = self._get_per_destination_queue(domain)
+            queue.queue_read_receipt(receipt)
+            queue.attempt_new_transaction()
+
+        for domain in delay_domains:
+            # Add to destination queue...
+            queue = self._get_per_destination_queue(domain)
+            queue.queue_read_receipt(receipt)
+
+            # ... and schedule the destination to be woken up.
+            self._destination_wakeup_queue.add_to_queue(domain)

     async def send_presence_to_destinations(
         self, states: Iterable[UserPresenceState], destinations: Iterable[str]
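The hunk above is the heart of the change. As a reading aid, here is a standalone sketch of the routing decision, with the thresholds (10 servers, 60 seconds) taken from the diff; the `EventMeta` dataclass and the `now_ms` parameter are simplified stand-ins for Synapse's internals, not its actual API.

```python
from dataclasses import dataclass
from typing import Collection, List, Tuple


@dataclass
class EventMeta:
    sender_domain: str  # domain of the event's sender
    received_ts: int    # ms since the epoch


def split_receipt_destinations(
    domains: Collection[str], meta: EventMeta, now_ms: int
) -> Tuple[List[str], List[str]]:
    """Return (immediate, delayed) destination lists for a read receipt."""
    if len(domains) < 10:
        # Small room: fanning out immediately is cheap.
        return list(domains), []
    if now_ms - meta.received_ts < 60_000:
        # Recent event: users are actively reading, so send promptly.
        return list(domains), []
    # Otherwise only the sender's server hears about the receipt promptly;
    # everyone else is trickled via the DestinationWakeupQueue.
    immediate = [d for d in domains if d == meta.sender_domain]
    delayed = [d for d in domains if d != meta.sender_domain]
    return immediate, delayed


# e.g. a 20-server room and a 5-minute-old event sent from host1:
imm, dly = split_receipt_destinations(
    {f"host{i}" for i in range(20)},
    EventMeta(sender_domain="host1", received_ts=0),
    now_ms=5 * 60_000,
)
assert imm == ["host1"] and len(dly) == 19
```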
@@ -156,7 +156,6 @@ class PerDestinationQueue:
         # Each receipt can only have a single receipt per
         # (room ID, receipt type, user ID, thread ID) tuple.
         self._pending_receipt_edus: List[Dict[str, Dict[str, Dict[str, dict]]]] = []
-        self._rrs_pending_flush = False

         # stream_id of last successfully sent to-device message.
         # NB: may be a long or an int.
@@ -258,15 +257,7 @@ class PerDestinationQueue:
             }
         )

-    def flush_read_receipts_for_room(self, room_id: str) -> None:
-        # If there are any pending receipts for this room then force-flush them
-        # in a new transaction.
-        for edu in self._pending_receipt_edus:
-            if room_id in edu:
-                self._rrs_pending_flush = True
-                self.attempt_new_transaction()
-                # No use in checking remaining EDUs if the room was found.
-                break
+        self.mark_new_data()

     def send_keyed_edu(self, edu: Edu, key: Hashable) -> None:
         self._pending_edus_keyed[(edu.edu_type, key)] = edu
@@ -603,12 +594,9 @@ class PerDestinationQueue:
             self._destination, last_successful_stream_ordering
         )

-    def _get_receipt_edus(self, force_flush: bool, limit: int) -> Iterable[Edu]:
+    def _get_receipt_edus(self, limit: int) -> Iterable[Edu]:
         if not self._pending_receipt_edus:
             return
-        if not force_flush and not self._rrs_pending_flush:
-            # not yet time for this lot
-            return

         # Send at most limit EDUs for receipts.
         for content in self._pending_receipt_edus[:limit]:
@@ -747,7 +735,7 @@ class _TransactionQueueManager:
         )

         # Add read receipt EDUs.
-        pending_edus.extend(self.queue._get_receipt_edus(force_flush=False, limit=5))
+        pending_edus.extend(self.queue._get_receipt_edus(limit=5))
         edu_limit = MAX_EDUS_PER_TRANSACTION - len(pending_edus)

         # Next, prioritize to-device messages so that existing encryption channels
@@ -795,13 +783,6 @@ class _TransactionQueueManager:
         if not self._pdus and not pending_edus:
             return [], []

-        # if we've decided to send a transaction anyway, and we have room, we
-        # may as well send any pending RRs
-        if edu_limit:
-            pending_edus.extend(
-                self.queue._get_receipt_edus(force_flush=True, limit=edu_limit)
-            )

         if self._pdus:
             self._last_stream_ordering = self._pdus[
                 -1
@@ -124,6 +124,7 @@ class AdminHandler:
             "consent_ts": user_info.consent_ts,
             "user_type": user_info.user_type,
             "is_guest": user_info.is_guest,
+            "suspended": user_info.suspended,
         }

         if self._msc3866_enabled:
@@ -39,6 +39,8 @@ from synapse.replication.http.devices import ReplicationUploadKeysForUserRestServlet
 from synapse.types import (
     JsonDict,
     JsonMapping,
+    ScheduledTask,
+    TaskStatus,
     UserID,
     get_domain_from_id,
     get_verify_key_from_cross_signing_key,
@@ -70,6 +72,7 @@ class E2eKeysHandler:
         self.is_mine = hs.is_mine
         self.clock = hs.get_clock()
         self._worker_lock_handler = hs.get_worker_locks_handler()
+        self._task_scheduler = hs.get_task_scheduler()

         federation_registry = hs.get_federation_registry()
@@ -116,6 +119,10 @@
             hs.config.experimental.msc3984_appservice_key_query
         )

+        self._task_scheduler.register_action(
+            self._delete_old_one_time_keys_task, "delete_old_otks"
+        )
+
     @trace
     @cancellable
     async def query_devices(
@@ -1574,6 +1581,45 @@ class E2eKeysHandler:
                 return True
         return False

+    async def _delete_old_one_time_keys_task(
+        self, task: ScheduledTask
+    ) -> Tuple[TaskStatus, Optional[JsonMapping], Optional[str]]:
+        """Scheduler task to delete old one time keys.
+
+        Until Synapse 1.119, Synapse used to issue one-time-keys in a random order, leading to the possibility
+        that it could still have old OTKs that the client has dropped. This task is scheduled exactly once
+        by a database schema delta file, and it clears out old one-time-keys that look like they came from libolm.
+        """
+        last_user = task.result.get("from_user", "") if task.result else ""
+        while True:
+            # We process users in batches of 100
+            users, rowcount = await self.store.delete_old_otks_for_next_user_batch(
+                last_user, 100
+            )
+            if len(users) == 0:
+                # We're done!
+                return TaskStatus.COMPLETE, None, None
+
+            logger.debug(
+                "Deleted %i old one-time-keys for users '%s'..'%s'",
+                rowcount,
+                users[0],
+                users[-1],
+            )
+            last_user = users[-1]
+
+            # Store our progress
+            await self._task_scheduler.update_task(
+                task.id, result={"from_user": last_user}
+            )
+
+            # Sleep a little before doing the next user.
+            #
+            # matrix.org has about 15M users in the e2e_one_time_keys_json table
+            # (comprising 20M devices). We want this to take about a week, so we need
+            # to do about one batch of 100 users every 4 seconds.
+            await self.clock.sleep(4)

 def _check_cross_signing_key(
     key: JsonDict, user_id: str, key_type: str, signing_key: Optional[VerifyKey] = None
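The task above follows a resume-from-cursor pattern: progress is persisted into `task.result` after every batch, so a restarted worker picks up where the last run stopped rather than rescanning from the beginning. A generic sketch of that pattern follows; the callables here are hypothetical stand-ins, not Synapse's task-scheduler API.

```python
import asyncio
from typing import Awaitable, Callable, Dict, List


async def run_batched_task(
    saved_result: Dict[str, str],
    next_batch: Callable[[str, int], Awaitable[List[str]]],
    save_progress: Callable[[Dict[str, str]], Awaitable[None]],
    batch_size: int = 100,
    pause_s: float = 4.0,
) -> None:
    """Process keyed batches, persisting a cursor so the task can resume."""
    cursor = saved_result.get("from_user", "")
    while True:
        batch = await next_batch(cursor, batch_size)
        if not batch:
            return  # complete
        cursor = batch[-1]
        await save_progress({"from_user": cursor})  # survives restarts
        await asyncio.sleep(pause_s)  # spread the load over days, not minutes
```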
@@ -36,7 +36,6 @@ from typing import (
 )

 import attr
-import multipart
 import treq
 from canonicaljson import encode_canonical_json
 from netaddr import AddrFormatError, IPAddress, IPSet
@@ -93,6 +92,20 @@ from synapse.util.async_helpers import timeout_deferred
 if TYPE_CHECKING:
     from synapse.server import HomeServer

+# Support both import names for the `python-multipart` (PyPI) library,
+# which renamed its package name from `multipart` to `python_multipart`
+# in 0.0.13 (though supports the old import name for compatibility).
+# Note that the `multipart` package name conflicts with `multipart` (PyPI)
+# so we should prefer importing from `python_multipart` when possible.
+try:
+    from python_multipart import MultipartParser
+
+    if TYPE_CHECKING:
+        from python_multipart import multipart
+except ImportError:
+    from multipart import MultipartParser  # type: ignore[no-redef]
+
 logger = logging.getLogger(__name__)

 outgoing_requests_counter = Counter("synapse_http_client_requests", "", ["method"])
@@ -1039,7 +1052,7 @@ class _MultipartParserProtocol(protocol.Protocol):
         self.deferred = deferred
         self.boundary = boundary
         self.max_length = max_length
-        self.parser: Optional[multipart.MultipartParser] = None
+        self.parser: Optional[MultipartParser] = None
         self.multipart_response = MultipartResponse()
         self.has_redirect = False
         self.in_json = False
@@ -1097,12 +1110,12 @@ class _MultipartParserProtocol(protocol.Protocol):
                 self.deferred.errback()
             self.file_length += end - start

-        callbacks: "multipart.multipart.MultipartCallbacks" = {
+        callbacks: "multipart.MultipartCallbacks" = {
             "on_header_field": on_header_field,
             "on_header_value": on_header_value,
             "on_part_data": on_part_data,
         }
-        self.parser = multipart.MultipartParser(self.boundary, callbacks)
+        self.parser = MultipartParser(self.boundary, callbacks)

         self.total_length += len(incoming_data)
         if self.max_length is not None and self.total_length >= self.max_length:
@@ -254,6 +254,7 @@ class HomeServer(metaclass=abc.ABCMeta):
         "auth",
         "deactivate_account",
         "delayed_events",
+        "e2e_keys",  # for the `delete_old_otks` scheduled-task handler
         "message",
         "pagination",
         "profile",
@@ -333,6 +333,7 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
         self._attempt_to_invalidate_cache(
             "get_unread_event_push_actions_by_room_for_user", (room_id,)
         )
+        self._attempt_to_invalidate_cache("get_metadata_for_event", (room_id, event_id))

         self._attempt_to_invalidate_cache("_get_max_event_pos", (room_id,))
@@ -457,6 +458,7 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
         self._attempt_to_invalidate_cache("_get_state_group_for_event", None)
         self._attempt_to_invalidate_cache("get_event_ordering", None)
+        self._attempt_to_invalidate_cache("get_metadata_for_event", (room_id,))
         self._attempt_to_invalidate_cache("is_partial_state_event", None)
         self._attempt_to_invalidate_cache("_get_joined_profile_from_event_id", None)
@@ -1453,6 +1453,54 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorkerStore):
             impl,
         )

+    async def delete_old_otks_for_next_user_batch(
+        self, after_user_id: str, number_of_users: int
+    ) -> Tuple[List[str], int]:
+        """Deletes old OTKs belonging to the next batch of users
+
+        Returns:
+            `(users, rows)`, where:
+             * `users` is the user IDs of the updated users. An empty list if we are done.
+             * `rows` is the number of deleted rows
+        """
+
+        def impl(txn: LoggingTransaction) -> Tuple[List[str], int]:
+            # Find a batch of users
+            txn.execute(
+                """
+                SELECT DISTINCT(user_id) FROM e2e_one_time_keys_json
+                    WHERE user_id > ?
+                    ORDER BY user_id
+                    LIMIT ?
+                """,
+                (after_user_id, number_of_users),
+            )
+            users = [row[0] for row in txn.fetchall()]
+            if len(users) == 0:
+                return users, 0
+
+            # Delete any old OTKs belonging to those users.
+            #
+            # We only actually consider OTKs whose key ID is 6 characters long. These
+            # keys were likely made by libolm rather than Vodozemac; libolm only kept
+            # 100 private OTKs, so was far more vulnerable than Vodozemac to throwing
+            # away keys prematurely.
+            clause, args = make_in_list_sql_clause(
+                txn.database_engine, "user_id", users
+            )
+            sql = f"""
+                DELETE FROM e2e_one_time_keys_json
+                WHERE {clause} AND ts_added_ms < ? AND length(key_id) = 6
+            """
+            args.append(self._clock.time_msec() - (7 * 24 * 3600 * 1000))
+            txn.execute(sql, args)
+
+            return users, txn.rowcount
+
+        return await self.db_pool.runInteraction(
+            "delete_old_otks_for_next_user_batch", impl
+        )

 class EndToEndKeyStore(EndToEndKeyWorkerStore, SQLBaseStore):
     def __init__(
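The store method above uses keyset pagination (`WHERE user_id > ? ORDER BY user_id LIMIT ?`) rather than `OFFSET`, so each batch costs the same no matter how far through the table the task has progressed. A self-contained sqlite3 illustration of the same select-batch-then-delete shape; the table and fixture values are made up for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE otks (user_id TEXT, key_id TEXT, ts_added_ms INTEGER)")
conn.executemany(
    "INSERT INTO otks VALUES (?, ?, ?)",
    [("@a:x", "AAAAAA", 0), ("@a:x", "AAAAAAAAAA", 0), ("@b:x", "BAAAAA", 0)],
)


def delete_next_batch(after_user: str, n: int, cutoff_ms: int):
    # Keyset pagination: resume strictly after the last user we processed.
    users = [
        row[0]
        for row in conn.execute(
            "SELECT DISTINCT user_id FROM otks WHERE user_id > ? "
            "ORDER BY user_id LIMIT ?",
            (after_user, n),
        )
    ]
    if not users:
        return [], 0
    placeholders = ",".join("?" * len(users))
    cur = conn.execute(
        f"DELETE FROM otks WHERE user_id IN ({placeholders}) "
        "AND ts_added_ms < ? AND length(key_id) = 6",
        (*users, cutoff_ms),
    )
    return users, cur.rowcount


print(delete_next_batch("", 100, cutoff_ms=1))  # (['@a:x', '@b:x'], 2)
```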
@@ -193,6 +193,14 @@ class _EventRow:
     outlier: bool

+@attr.s(slots=True, frozen=True, auto_attribs=True)
+class EventMetadata:
+    """Event metadata returned by `get_metadata_for_event(..)`"""
+
+    sender: str
+    received_ts: int
+
 class EventRedactBehaviour(Enum):
     """
     What to do when retrieving a redacted event from the database.
@@ -2580,3 +2588,22 @@ class EventsWorkerStore(SQLBaseStore):
                 _BackgroundUpdates.SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_BG_UPDATE,
             )
         )

+    @cached(tree=True)
+    async def get_metadata_for_event(
+        self, room_id: str, event_id: str
+    ) -> Optional[EventMetadata]:
+        row = await self.db_pool.simple_select_one(
+            table="events",
+            keyvalues={"room_id": room_id, "event_id": event_id},
+            retcols=("sender", "received_ts"),
+            allow_none=True,
+            desc="get_metadata_for_event",
+        )
+        if row is None:
+            return None
+
+        return EventMetadata(
+            sender=row[0],
+            received_ts=row[1],
+        )
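`get_metadata_for_event` is cached with `tree=True`, which is what lets the invalidation hunks earlier drop either one `(room_id, event_id)` entry or everything under `(room_id,)` when a whole room is purged. A toy sketch of that prefix-tree idea, assuming nothing about Synapse's actual cache implementation beyond the prefix-invalidation behaviour:

```python
from typing import Dict, Optional, Tuple


class TreeCache:
    """Toy prefix-invalidatable cache: keys are tuples, and invalidating a
    prefix removes every entry underneath it."""

    def __init__(self) -> None:
        self._data: Dict[Tuple[str, ...], object] = {}

    def set(self, key: Tuple[str, ...], value: object) -> None:
        self._data[key] = value

    def get(self, key: Tuple[str, ...]) -> Optional[object]:
        return self._data.get(key)

    def invalidate_prefix(self, prefix: Tuple[str, ...]) -> None:
        # Drop every cached entry whose key starts with the given prefix.
        for key in [k for k in self._data if k[: len(prefix)] == prefix]:
            del self._data[key]


cache = TreeCache()
cache.set(("!room:a", "$event1"), "meta1")
cache.set(("!room:a", "$event2"), "meta2")
cache.invalidate_prefix(("!room:a",))  # a room purge drops both entries
assert cache.get(("!room:a", "$event1")) is None
```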
@@ -21,7 +21,11 @@ import attr
 from synapse.api.errors import SlidingSyncUnknownPosition
 from synapse.logging.opentracing import log_kv
 from synapse.storage._base import SQLBaseStore, db_to_json
-from synapse.storage.database import LoggingTransaction
+from synapse.storage.database import (
+    DatabasePool,
+    LoggingDatabaseConnection,
+    LoggingTransaction,
+)
 from synapse.types import MultiWriterStreamToken, RoomStreamToken
 from synapse.types.handlers.sliding_sync import (
     HaveSentRoom,
@@ -35,12 +39,28 @@ from synapse.util import json_encoder
 from synapse.util.caches.descriptors import cached

 if TYPE_CHECKING:
+    from synapse.server import HomeServer
     from synapse.storage.databases.main import DataStore

 logger = logging.getLogger(__name__)

 class SlidingSyncStore(SQLBaseStore):
+    def __init__(
+        self,
+        database: DatabasePool,
+        db_conn: LoggingDatabaseConnection,
+        hs: "HomeServer",
+    ):
+        super().__init__(database, db_conn, hs)
+
+        self.db_pool.updates.register_background_index_update(
+            update_name="sliding_sync_connection_room_configs_required_state_id_idx",
+            index_name="sliding_sync_connection_room_configs_required_state_id_idx",
+            table="sliding_sync_connection_room_configs",
+            columns=("required_state_id",),
+        )
+
     async def get_latest_bump_stamp_for_room(
         self,
         room_id: str,
@@ -12,7 +12,7 @@
 -- <https://www.gnu.org/licenses/agpl-3.0.html>.

--- Add an index on (user_id, device_id, algorithm, ts_added_ms) on e2e_one_time_keys_json, so that OTKs can
--- efficiently be issued in the same order they were uploaded.
+-- Add an index on `current_state_delta_stream(room_id, stream_id)` to allow
+-- efficient per-room lookups.
 INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
   (8804, 'current_state_delta_stream_room_index', '{}');
@@ -0,0 +1,19 @@
+--
+-- This file is licensed under the Affero General Public License (AGPL) version 3.
+--
+-- Copyright (C) 2024 New Vector, Ltd
+--
+-- This program is free software: you can redistribute it and/or modify
+-- it under the terms of the GNU Affero General Public License as
+-- published by the Free Software Foundation, either version 3 of the
+-- License, or (at your option) any later version.
+--
+-- See the GNU Affero General Public License for more details:
+-- <https://www.gnu.org/licenses/agpl-3.0.html>.
+
+-- Until Synapse 1.119, Synapse used to issue one-time-keys in a random order, leading to the possibility
+-- that it could still have old OTKs that the client has dropped.
+--
+-- We create a scheduled task which will drop old OTKs, to flush them out.
+INSERT INTO scheduled_tasks(id, action, status, timestamp)
+  VALUES ('delete_old_otks_task', 'delete_old_otks', 'scheduled', extract(epoch from current_timestamp) * 1000);
@@ -0,0 +1,19 @@
+--
+-- This file is licensed under the Affero General Public License (AGPL) version 3.
+--
+-- Copyright (C) 2024 New Vector, Ltd
+--
+-- This program is free software: you can redistribute it and/or modify
+-- it under the terms of the GNU Affero General Public License as
+-- published by the Free Software Foundation, either version 3 of the
+-- License, or (at your option) any later version.
+--
+-- See the GNU Affero General Public License for more details:
+-- <https://www.gnu.org/licenses/agpl-3.0.html>.
+
+-- Until Synapse 1.119, Synapse used to issue one-time-keys in a random order, leading to the possibility
+-- that it could still have old OTKs that the client has dropped.
+--
+-- We create a scheduled task which will drop old OTKs, to flush them out.
+INSERT INTO scheduled_tasks(id, action, status, timestamp)
+  VALUES ('delete_old_otks_task', 'delete_old_otks', 'scheduled', strftime('%s', 'now') * 1000);
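The two deltas differ only in how each database spells "now, in milliseconds since the epoch". For reference, a rough Python equivalent of both expressions (note that Postgres's `extract(epoch ...)` keeps sub-second precision while SQLite's `%s` truncates to whole seconds):

```python
import time

# Roughly what the deltas compute for scheduled_tasks.timestamp:
#   Postgres: extract(epoch from current_timestamp) * 1000
#   SQLite:   strftime('%s', 'now') * 1000
now_ms = int(time.time() * 1000)         # Postgres-style, ms precision
now_ms_sqlite = int(time.time()) * 1000  # SQLite-style, whole seconds only
print(now_ms, now_ms_sqlite)
```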
@@ -0,0 +1,20 @@
+--
+-- This file is licensed under the Affero General Public License (AGPL) version 3.
+--
+-- Copyright (C) 2024 New Vector, Ltd
+--
+-- This program is free software: you can redistribute it and/or modify
+-- it under the terms of the GNU Affero General Public License as
+-- published by the Free Software Foundation, either version 3 of the
+-- License, or (at your option) any later version.
+--
+-- See the GNU Affero General Public License for more details:
+-- <https://www.gnu.org/licenses/agpl-3.0.html>.
+
+-- Add an index on sliding_sync_connection_room_configs(required_state_id), so
+-- that when we delete entries in `sliding_sync_connection_required_state` it's
+-- efficient for Postgres to check they've been deleted from
+-- `sliding_sync_connection_room_configs` too
+INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
+  (8805, 'sliding_sync_connection_room_configs_required_state_id_idx', '{}');
@@ -34,6 +34,7 @@ from synapse.handlers.device import DeviceHandler
 from synapse.rest import admin
 from synapse.rest.client import login
 from synapse.server import HomeServer
+from synapse.storage.databases.main.events_worker import EventMetadata
 from synapse.types import JsonDict, ReadReceipt
 from synapse.util import Clock
@@ -55,12 +56,15 @@ class FederationSenderReceiptsTestCases(HomeserverTestCase):
             federation_transport_client=self.federation_transport_client,
         )

-        hs.get_storage_controllers().state.get_current_hosts_in_room = AsyncMock(  # type: ignore[method-assign]
+        self.main_store = hs.get_datastores().main
+        self.state_controller = hs.get_storage_controllers().state
+
+        self.state_controller.get_current_hosts_in_room = AsyncMock(  # type: ignore[method-assign]
             return_value={"test", "host2"}
         )
-        hs.get_storage_controllers().state.get_current_hosts_in_room_or_partial_state_approximation = (  # type: ignore[method-assign]
-            hs.get_storage_controllers().state.get_current_hosts_in_room
+        self.state_controller.get_current_hosts_in_room_or_partial_state_approximation = (  # type: ignore[method-assign]
+            self.state_controller.get_current_hosts_in_room
         )

         return hs
@@ -185,12 +189,15 @@ class FederationSenderReceiptsTestCases(HomeserverTestCase):
             ],
         )

-    def test_send_receipts_with_backoff(self) -> None:
-        """Send two receipts in quick succession; the second should be flushed, but
-        only after 20ms"""
+    def test_send_receipts_with_backoff_small_room(self) -> None:
+        """Read receipt in small rooms should not be delayed"""
         mock_send_transaction = self.federation_transport_client.send_transaction
         mock_send_transaction.return_value = {}

+        self.state_controller.get_current_hosts_in_room_or_partial_state_approximation = AsyncMock(  # type: ignore[method-assign]
+            return_value={"test", "host2"}
+        )
+
         sender = self.hs.get_federation_sender()
         receipt = ReadReceipt(
             "room_id",
@@ -206,7 +213,104 @@ class FederationSenderReceiptsTestCases(HomeserverTestCase):
         # expect a call to send_transaction
         mock_send_transaction.assert_called_once()
-        json_cb = mock_send_transaction.call_args[0][1]
+        self._assert_edu_in_call(mock_send_transaction.call_args[0][1])
+
+    def test_send_receipts_with_backoff_recent_event(self) -> None:
+        """Read receipt for a recent message should not be delayed"""
+        mock_send_transaction = self.federation_transport_client.send_transaction
+        mock_send_transaction.return_value = {}
+
+        # Pretend this is a big room
+        self.state_controller.get_current_hosts_in_room_or_partial_state_approximation = AsyncMock(  # type: ignore[method-assign]
+            return_value={"test"} | {f"host{i}" for i in range(20)}
+        )
+
+        self.main_store.get_metadata_for_event = AsyncMock(
+            return_value=EventMetadata(
+                received_ts=self.clock.time_msec(),
+                sender="@test:test",
+            )
+        )
+
+        sender = self.hs.get_federation_sender()
+        receipt = ReadReceipt(
+            "room_id",
+            "m.read",
+            "user_id",
+            ["event_id"],
+            thread_id=None,
+            data={"ts": 1234},
+        )
+        self.get_success(sender.send_read_receipt(receipt))
+        self.pump()
+
+        # expect a call to send_transaction for each host
+        self.assertEqual(mock_send_transaction.call_count, 20)
+        self._assert_edu_in_call(mock_send_transaction.call_args.args[1])
+
+        mock_send_transaction.reset_mock()
+
+    def test_send_receipts_with_backoff_sender(self) -> None:
+        """Read receipt for a message should not be delayed to the sender, but
+        is delayed to everyone else"""
+        mock_send_transaction = self.federation_transport_client.send_transaction
+        mock_send_transaction.return_value = {}
+
+        # Pretend this is a big room
+        self.state_controller.get_current_hosts_in_room_or_partial_state_approximation = AsyncMock(  # type: ignore[method-assign]
+            return_value={"test"} | {f"host{i}" for i in range(20)}
+        )
+
+        self.main_store.get_metadata_for_event = AsyncMock(
+            return_value=EventMetadata(
+                received_ts=self.clock.time_msec() - 5 * 60_000,
+                sender="@test:host1",
+            )
+        )
+
+        sender = self.hs.get_federation_sender()
+        receipt = ReadReceipt(
+            "room_id",
+            "m.read",
+            "user_id",
+            ["event_id"],
+            thread_id=None,
+            data={"ts": 1234},
+        )
+        self.get_success(sender.send_read_receipt(receipt))
+        self.pump()
+
+        # First, expect a call to send_transaction for the sending host
+        mock_send_transaction.assert_called()
+        transaction = mock_send_transaction.call_args_list[0].args[0]
+        self.assertEqual(transaction.destination, "host1")
+        self._assert_edu_in_call(mock_send_transaction.call_args_list[0].args[1])
+
+        # We also expect a call to one of the other hosts, as the first
+        # destination to wake up.
+        self.assertEqual(mock_send_transaction.call_count, 2)
+        self._assert_edu_in_call(mock_send_transaction.call_args.args[1])
+
+        mock_send_transaction.reset_mock()
+
+        # We now expect to see 18 more transactions to the remaining hosts
+        # periodically.
+        for _ in range(18):
+            self.reactor.advance(
+                1.0
+                / self.hs.config.ratelimiting.federation_rr_transactions_per_room_per_second
+            )
+
+            mock_send_transaction.assert_called_once()
+            self._assert_edu_in_call(mock_send_transaction.call_args.args[1])
+            mock_send_transaction.reset_mock()
+
+    def _assert_edu_in_call(self, json_cb: Callable[[], JsonDict]) -> None:
+        """Assert that the given `json_cb` from a `send_transaction` has a
+        receipt in it."""
         data = json_cb()
         self.assertEqual(
             data["edus"],
@@ -226,46 +330,6 @@ class FederationSenderReceiptsTestCases(HomeserverTestCase):
                 }
             ],
         )
-        mock_send_transaction.reset_mock()
-
-        # send the second RR
-        receipt = ReadReceipt(
-            "room_id",
-            "m.read",
-            "user_id",
-            ["other_id"],
-            thread_id=None,
-            data={"ts": 1234},
-        )
-        self.successResultOf(defer.ensureDeferred(sender.send_read_receipt(receipt)))
-        self.pump()
-        mock_send_transaction.assert_not_called()
-
-        self.reactor.advance(19)
-        mock_send_transaction.assert_not_called()
-
-        self.reactor.advance(10)
-        mock_send_transaction.assert_called_once()
-        json_cb = mock_send_transaction.call_args[0][1]
-        data = json_cb()
-        self.assertEqual(
-            data["edus"],
-            [
-                {
-                    "edu_type": EduTypes.RECEIPT,
-                    "content": {
-                        "room_id": {
-                            "m.read": {
-                                "user_id": {
-                                    "event_ids": ["other_id"],
-                                    "data": {"ts": 1234},
-                                }
-                            }
-                        }
-                    },
-                }
-            ],
-        )

 class FederationSenderPresenceTestCases(HomeserverTestCase):
@@ -19,6 +19,7 @@
 # [This file includes modifications made by New Vector Limited]
 #
 #
+import time
 from typing import Dict, Iterable
 from unittest import mock
@@ -1826,3 +1827,72 @@ class E2eKeysHandlerTestCase(unittest.HomeserverTestCase):
         )
         self.assertIs(exists, True)
         self.assertIs(replaceable_without_uia, False)

+    def test_delete_old_one_time_keys(self) -> None:
+        """Test the db migration that clears out old OTKs"""
+
+        # We upload two sets of keys, one just over a week ago, and one just less than
+        # a week ago. Each batch contains some keys that match the deletion pattern
+        # (key IDs of 6 chars), and some that do not.
+        #
+        # Finally, set the scheduled task going, and check what gets deleted.
+
+        user_id = "@user000:" + self.hs.hostname
+        device_id = "xyz"
+
+        # The scheduled task should be for "now" in real, wallclock time, so
+        # set the test reactor to just over a week ago.
+        self.reactor.advance(time.time() - 7.5 * 24 * 3600)
+
+        # Upload some keys
+        self.get_success(
+            self.handler.upload_keys_for_user(
+                user_id,
+                device_id,
+                {
+                    "one_time_keys": {
+                        # some keys to delete
+                        "alg1:AAAAAA": "key1",
+                        "alg2:AAAAAB": {"key": "key2", "signatures": {"k1": "sig1"}},
+                        # A key to *not* delete
+                        "alg2:AAAAAAAAAA": {"key": "key3"},
+                    }
+                },
+            )
+        )
+
+        # A day passes
+        self.reactor.advance(24 * 3600)
+
+        # Upload some more keys
+        self.get_success(
+            self.handler.upload_keys_for_user(
+                user_id,
+                device_id,
+                {
+                    "one_time_keys": {
+                        # some keys which match the pattern
+                        "alg1:BAAAAA": "key1",
+                        "alg2:BAAAAB": {"key": "key2", "signatures": {"k1": "sig1"}},
+                        # A key to *not* delete
+                        "alg2:BAAAAAAAAA": {"key": "key3"},
+                    }
+                },
+            )
+        )
+
+        # The rest of the week passes, which should set the scheduled task going.
+        self.reactor.advance(6.5 * 24 * 3600)
+
+        # Check what we're left with in the database
+        remaining_key_ids = {
+            row[0]
+            for row in self.get_success(
+                self.handler.store.db_pool.simple_select_list(
+                    "e2e_one_time_keys_json", None, ["key_id"]
+                )
+            )
+        }
+        self.assertEqual(
+            remaining_key_ids, {"AAAAAAAAAA", "BAAAAA", "BAAAAB", "BAAAAAAAAA"}
+        )
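The test's expected survivors follow directly from the deletion predicate in the store method above: a key is dropped only if its key ID is exactly 6 characters long *and* it was added more than a week before the task runs. A quick check of the fixture data against that rule:

```python
# Which of the test's fixture keys should be deleted? A key dies only if
# len(key_id) == 6 AND it is more than 7 days old when the task fires.
WEEK_MS = 7 * 24 * 3600 * 1000
DAY_MS = 24 * 3600 * 1000


def is_deleted(key_id: str, age_ms: float) -> bool:
    return len(key_id) == 6 and age_ms > WEEK_MS


# First upload is ~7.5 days old when the task runs; the second is ~6.5 days old.
fixtures = [
    ("AAAAAA", 7.5 * DAY_MS),      # deleted: 6 chars, over a week old
    ("AAAAAB", 7.5 * DAY_MS),      # deleted
    ("AAAAAAAAAA", 7.5 * DAY_MS),  # kept: 10-char key ID
    ("BAAAAA", 6.5 * DAY_MS),      # kept: matches the pattern but too young
    ("BAAAAB", 6.5 * DAY_MS),      # kept
    ("BAAAAAAAAA", 6.5 * DAY_MS),  # kept
]
survivors = {k for k, age in fixtures if not is_deleted(k, age)}
assert survivors == {"AAAAAAAAAA", "BAAAAA", "BAAAAB", "BAAAAAAAAA"}
```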
@@ -419,6 +419,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
         return channel

+    @unittest.override_config(
+        {
+            "enable_authenticated_media": False,
+        }
+    )
     def test_handle_missing_content_type(self) -> None:
         channel = self._req(
             b"attachment; filename=out" + self.test_image.extension,
@@ -430,6 +435,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
             headers.getRawHeaders(b"Content-Type"), [b"application/octet-stream"]
         )

+    @unittest.override_config(
+        {
+            "enable_authenticated_media": False,
+        }
+    )
     def test_disposition_filename_ascii(self) -> None:
         """
         If the filename is filename=<ascii> then Synapse will decode it as an
@@ -450,6 +460,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
             ],
         )

+    @unittest.override_config(
+        {
+            "enable_authenticated_media": False,
+        }
+    )
     def test_disposition_filenamestar_utf8escaped(self) -> None:
         """
         If the filename is filename=*utf8''<utf8 escaped> then Synapse will
@@ -475,6 +490,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
             ],
         )

+    @unittest.override_config(
+        {
+            "enable_authenticated_media": False,
+        }
+    )
     def test_disposition_none(self) -> None:
         """
         If there is no filename, Content-Disposition should only
@@ -491,6 +511,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
             [b"inline" if self.test_image.is_inline else b"attachment"],
         )

+    @unittest.override_config(
+        {
+            "enable_authenticated_media": False,
+        }
+    )
     def test_thumbnail_crop(self) -> None:
         """Test that a cropped remote thumbnail is available."""
         self._test_thumbnail(
@@ -500,6 +525,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
             unable_to_thumbnail=self.test_image.unable_to_thumbnail,
         )

+    @unittest.override_config(
+        {
+            "enable_authenticated_media": False,
+        }
+    )
     def test_thumbnail_scale(self) -> None:
         """Test that a scaled remote thumbnail is available."""
         self._test_thumbnail(
@@ -509,6 +539,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
             unable_to_thumbnail=self.test_image.unable_to_thumbnail,
         )

+    @unittest.override_config(
+        {
+            "enable_authenticated_media": False,
+        }
+    )
     def test_invalid_type(self) -> None:
         """An invalid thumbnail type is never available."""
         self._test_thumbnail(
@@ -519,7 +554,10 @@ class MediaRepoTests(unittest.HomeserverTestCase):
         )

     @unittest.override_config(
-        {"thumbnail_sizes": [{"width": 32, "height": 32, "method": "scale"}]}
+        {
+            "thumbnail_sizes": [{"width": 32, "height": 32, "method": "scale"}],
+            "enable_authenticated_media": False,
+        },
     )
     def test_no_thumbnail_crop(self) -> None:
         """
@@ -533,7 +571,10 @@ class MediaRepoTests(unittest.HomeserverTestCase):
         )

     @unittest.override_config(
-        {"thumbnail_sizes": [{"width": 32, "height": 32, "method": "crop"}]}
+        {
+            "thumbnail_sizes": [{"width": 32, "height": 32, "method": "crop"}],
+            "enable_authenticated_media": False,
+        }
     )
     def test_no_thumbnail_scale(self) -> None:
         """
@@ -546,6 +587,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
             unable_to_thumbnail=self.test_image.unable_to_thumbnail,
         )

+    @unittest.override_config(
+        {
+            "enable_authenticated_media": False,
+        }
+    )
     def test_thumbnail_repeated_thumbnail(self) -> None:
         """Test that fetching the same thumbnail works, and deleting the on disk
         thumbnail regenerates it.
@@ -720,6 +766,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
             )
         )

+    @unittest.override_config(
+        {
+            "enable_authenticated_media": False,
+        }
+    )
     def test_x_robots_tag_header(self) -> None:
         """
         Tests that the `X-Robots-Tag` header is present, which informs web crawlers
@@ -733,6 +784,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
             [b"noindex, nofollow, noarchive, noimageindex"],
         )

+    @unittest.override_config(
+        {
+            "enable_authenticated_media": False,
+        }
+    )
     def test_cross_origin_resource_policy_header(self) -> None:
         """
         Test that the Cross-Origin-Resource-Policy header is set to "cross-origin"
@@ -747,6 +803,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
             [b"cross-origin"],
         )

+    @unittest.override_config(
+        {
+            "enable_authenticated_media": False,
+        }
+    )
     def test_unknown_v3_endpoint(self) -> None:
         """
         If the v3 endpoint fails, try the r0 one.
@@ -985,6 +1046,11 @@ class RemoteDownloadLimiterTestCase(unittest.HomeserverTestCase):
             d.callback(52428800)
             return d

+    @override_config(
+        {
+            "enable_authenticated_media": False,
+        }
+    )
     @patch(
         "synapse.http.matrixfederationclient.read_body_with_max_size",
         read_body_with_max_size_30MiB,
@@ -1060,6 +1126,7 @@ class RemoteDownloadLimiterTestCase(unittest.HomeserverTestCase):
         {
             "remote_media_download_per_second": "50M",
             "remote_media_download_burst_count": "50M",
+            "enable_authenticated_media": False,
         }
     )
     @patch(
@@ -1119,7 +1186,12 @@ class RemoteDownloadLimiterTestCase(unittest.HomeserverTestCase):
         )
         assert channel.code == 200

-    @override_config({"remote_media_download_burst_count": "87M"})
+    @override_config(
+        {
+            "remote_media_download_burst_count": "87M",
+            "enable_authenticated_media": False,
+        }
+    )
     @patch(
         "synapse.http.matrixfederationclient.read_body_with_max_size",
         read_body_with_max_size_30MiB,
@@ -1159,7 +1231,7 @@ class RemoteDownloadLimiterTestCase(unittest.HomeserverTestCase):
         )
         assert channel2.code == 429

-    @override_config({"max_upload_size": "29M"})
+    @override_config({"max_upload_size": "29M", "enable_authenticated_media": False})
     @patch(
         "synapse.http.matrixfederationclient.read_body_with_max_size",
         read_body_with_max_size_30MiB,
@@ -40,6 +40,7 @@ from tests.http import (
 from tests.replication._base import BaseMultiWorkerStreamTestCase
 from tests.server import FakeChannel, FakeTransport, make_request
 from tests.test_utils import SMALL_PNG
+from tests.unittest import override_config

 logger = logging.getLogger(__name__)
@@ -148,6 +149,7 @@ class MediaRepoShardTestCase(BaseMultiWorkerStreamTestCase):
         return channel, request

+    @override_config({"enable_authenticated_media": False})
     def test_basic(self) -> None:
         """Test basic fetching of remote media from a single worker."""
         hs1 = self.make_worker_hs("synapse.app.generic_worker")
@@ -164,6 +166,7 @@ class MediaRepoShardTestCase(BaseMultiWorkerStreamTestCase):
         self.assertEqual(channel.code, 200)
         self.assertEqual(channel.result["body"], b"Hello!")

+    @override_config({"enable_authenticated_media": False})
     def test_download_simple_file_race(self) -> None:
         """Test that fetching remote media from two different processes at the
         same time works.
@@ -203,6 +206,7 @@ class MediaRepoShardTestCase(BaseMultiWorkerStreamTestCase):
         # We expect only one new file to have been persisted.
         self.assertEqual(start_count + 1, self._count_remote_media())

+    @override_config({"enable_authenticated_media": False})
     def test_download_image_race(self) -> None:
         """Test that fetching remote *images* from two different processes at
         the same time works.
@@ -30,7 +30,7 @@ from twisted.web.resource import Resource
 import synapse.rest.admin
 from synapse.http.server import JsonResource
 from synapse.rest.admin import VersionServlet
-from synapse.rest.client import login, room
+from synapse.rest.client import login, media, room
 from synapse.server import HomeServer
 from synapse.util import Clock
@@ -60,6 +60,7 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
         synapse.rest.admin.register_servlets,
         synapse.rest.admin.register_servlets_for_media_repo,
         login.register_servlets,
+        media.register_servlets,
         room.register_servlets,
     ]
@@ -74,7 +75,7 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
         """Ensure a piece of media is quarantined when trying to access it."""
         channel = self.make_request(
             "GET",
-            f"/_matrix/media/v3/download/{server_and_media_id}",
+            f"/_matrix/client/v1/media/download/{server_and_media_id}",
             shorthand=False,
             access_token=admin_user_tok,
         )
@@ -131,7 +132,7 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
         # Attempt to access the media
         channel = self.make_request(
             "GET",
-            f"/_matrix/media/v3/download/{server_name_and_media_id}",
+            f"/_matrix/client/v1/media/download/{server_name_and_media_id}",
             shorthand=False,
             access_token=non_admin_user_tok,
         )
@@ -295,7 +296,7 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
         # Attempt to access each piece of media
         channel = self.make_request(
             "GET",
-            f"/_matrix/media/v3/download/{server_and_media_id_2}",
+            f"/_matrix/client/v1/media/download/{server_and_media_id_2}",
             shorthand=False,
             access_token=non_admin_user_tok,
         )
@@ -36,6 +36,7 @@ from synapse.util import Clock
 from tests import unittest
 from tests.test_utils import SMALL_PNG
+from tests.unittest import override_config

 VALID_TIMESTAMP = 1609459200000  # 2021-01-01 in milliseconds
 INVALID_TIMESTAMP_IN_S = 1893456000  # 2030-01-01 in seconds
@@ -126,6 +127,7 @@ class DeleteMediaByIDTestCase(_AdminMediaTests):
         self.assertEqual(400, channel.code, msg=channel.json_body)
         self.assertEqual("Can only delete local media", channel.json_body["error"])

+    @override_config({"enable_authenticated_media": False})
     def test_delete_media(self) -> None:
         """
         Tests that delete a media is successfully
@@ -371,6 +373,7 @@ class DeleteMediaByDateSizeTestCase(_AdminMediaTests):

         self._access_media(server_and_media_id, False)

+    @override_config({"enable_authenticated_media": False})
     def test_keep_media_by_date(self) -> None:
         """
         Tests that media is not deleted if it is newer than `before_ts`
@@ -408,6 +411,7 @@ class DeleteMediaByDateSizeTestCase(_AdminMediaTests):

         self._access_media(server_and_media_id, False)

+    @override_config({"enable_authenticated_media": False})
     def test_keep_media_by_size(self) -> None:
         """
         Tests that media is not deleted if its size is smaller than or equal
@@ -443,6 +447,7 @@ class DeleteMediaByDateSizeTestCase(_AdminMediaTests):

         self._access_media(server_and_media_id, False)

+    @override_config({"enable_authenticated_media": False})
     def test_keep_media_by_user_avatar(self) -> None:
         """
         Tests that we do not delete media if is used as a user avatar
@@ -487,6 +492,7 @@ class DeleteMediaByDateSizeTestCase(_AdminMediaTests):

         self._access_media(server_and_media_id, False)

+    @override_config({"enable_authenticated_media": False})
     def test_keep_media_by_room_avatar(self) -> None:
         """
         Tests that we do not delete media if it is used as a room avatar
@@ -45,6 +45,7 @@ from synapse.rest.client import (
     devices,
     login,
     logout,
+    media,
     profile,
     register,
     room,
@@ -3221,6 +3222,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
         self.assertIn("consent_ts", content)
         self.assertIn("external_ids", content)
         self.assertIn("last_seen_ts", content)
+        self.assertIn("suspended", content)

         # This key was removed intentionally. Ensure it is not accidentally re-included.
         self.assertNotIn("password_hash", content)
@@ -3517,6 +3519,7 @@ class UserMediaRestTestCase(unittest.HomeserverTestCase):
     servlets = [
         synapse.rest.admin.register_servlets,
         login.register_servlets,
+        media.register_servlets,
     ]

     def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
@@ -4023,7 +4026,7 @@ class UserMediaRestTestCase(unittest.HomeserverTestCase):
         # Try to access a media and to create `last_access_ts`
         channel = self.make_request(
             "GET",
-            f"/_matrix/media/v3/download/{server_and_media_id}",
+            f"/_matrix/client/v1/media/download/{server_and_media_id}",
             shorthand=False,
             access_token=user_token,
         )
@@ -91,7 +91,8 @@ class MediaDomainBlockingTests(unittest.HomeserverTestCase):
         {
             # Disable downloads from a domain we won't be requesting downloads from.
             # This proves we haven't broken anything.
-            "prevent_media_downloads_from": ["not-listed.com"]
+            "prevent_media_downloads_from": ["not-listed.com"],
+            "enable_authenticated_media": False,
         }
     )
     def test_remote_media_normally_unblocked(self) -> None:
@@ -132,6 +133,7 @@ class MediaDomainBlockingTests(unittest.HomeserverTestCase):
             # This proves we haven't broken anything.
             "prevent_media_downloads_from": ["not-listed.com"],
             "dynamic_thumbnails": True,
+            "enable_authenticated_media": False,
         }
     )
     def test_remote_media_thumbnail_normally_unblocked(self) -> None:
@@ -42,6 +42,7 @@ from synapse.util.stringutils import parse_and_validate_mxc_uri
 from tests import unittest
 from tests.server import FakeTransport
 from tests.test_utils import SMALL_PNG
+from tests.unittest import override_config

 try:
     import lxml
@@ -1259,6 +1260,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):
         self.assertIsNone(_port)
         return host, media_id

+    @override_config({"enable_authenticated_media": False})
     def test_storage_providers_exclude_files(self) -> None:
         """Test that files are not stored in or fetched from storage providers."""
         host, media_id = self._download_image()
@@ -1301,6 +1303,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):
             "URL cache file was unexpectedly retrieved from a storage provider",
         )

+    @override_config({"enable_authenticated_media": False})
     def test_storage_providers_exclude_thumbnails(self) -> None:
         """Test that thumbnails are not stored in or fetched from storage providers."""
         host, media_id = self._download_image()