Merge branch 'develop' of https://github.com/element-hq/synapse into no-ip-logging

This commit is contained in:
Louis Gregg 2024-12-13 23:01:11 +01:00
commit 2701a2a3b2
86 changed files with 1960 additions and 434 deletions

View file

@ -14,7 +14,7 @@ permissions:
id-token: write # needed for signing the images with GitHub OIDC Token
jobs:
build:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- name: Set up QEMU
id: qemu

View file

@ -14,7 +14,7 @@ jobs:
# There's a 'download artifact' action, but it hasn't been updated for the workflow_run action
# (https://github.com/actions/download-artifact/issues/60) so instead we get this mess:
- name: 📥 Download artifact
uses: dawidd6/action-download-artifact@bf251b5aa9c2f7eeb574a96ee720e24f801b7c11 # v6
uses: dawidd6/action-download-artifact@80620a5d27ce0ae443b965134db88467fc607b43 # v7
with:
workflow: docs-pr.yaml
run_id: ${{ github.event.workflow_run.id }}

View file

@ -212,7 +212,8 @@ jobs:
mv debs*/* debs/
tar -cvJf debs.tar.xz debs
- name: Attach to release
uses: softprops/action-gh-release@v2
# Pinned to work around https://github.com/softprops/action-gh-release/issues/445
uses: softprops/action-gh-release@v0.1.15
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
@ -220,3 +221,7 @@ jobs:
Sdist/*
Wheel*/*
debs.tar.xz
# if it's not already published, keep the release as a draft.
draft: true
# mark it as a prerelease if the tag contains 'rc'.
prerelease: ${{ contains(github.ref, 'rc') }}

View file

@ -1,3 +1,124 @@
# Synapse 1.121.1 (2024-12-11)
This release contains a fix for our docker build CI. It is functionally identical to 1.121.0, whose changelog is below.
### Internal Changes
- Downgrade the Ubuntu GHA runner when building docker images. ([\#18026](https://github.com/element-hq/synapse/issues/18026))
# Synapse 1.121.0 (2024-12-11)
### Internal Changes
- Fix release process to not create duplicate releases. ([\#18025](https://github.com/element-hq/synapse/issues/18025))
# Synapse 1.121.0rc1 (2024-12-04)
### Features
- Support for [MSC4190](https://github.com/matrix-org/matrix-spec-proposals/pull/4190): device management for Application Services. ([\#17705](https://github.com/element-hq/synapse/issues/17705))
- Update [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync to include invite, ban, and kick targets when `$LAZY`-loading room members. ([\#17947](https://github.com/element-hq/synapse/issues/17947))
- Use stable `M_USER_LOCKED` error code for locked accounts, as per [Matrix 1.12](https://spec.matrix.org/v1.12/client-server-api/#account-locking). ([\#17965](https://github.com/element-hq/synapse/issues/17965))
- [MSC4076](https://github.com/matrix-org/matrix-spec-proposals/pull/4076): Add `disable_badge_count` to pusher configuration. ([\#17975](https://github.com/element-hq/synapse/issues/17975))
### Bugfixes
- Fix long-standing bug where read receipts could get overly delayed being sent over federation. ([\#17933](https://github.com/element-hq/synapse/issues/17933))
### Improved Documentation
- Add OIDC example configuration for Forgejo (fork of Gitea). ([\#17872](https://github.com/element-hq/synapse/issues/17872))
- Link to element-docker-demo from contrib/docker*. ([\#17953](https://github.com/element-hq/synapse/issues/17953))
### Internal Changes
- [MSC4108](https://github.com/matrix-org/matrix-spec-proposals/pull/4108): Add a `Content-Type` header on the `PUT` response to work around a faulty behavior in some caching reverse proxies. ([\#17253](https://github.com/element-hq/synapse/issues/17253))
- Fix incorrect comment in new schema delta. ([\#17936](https://github.com/element-hq/synapse/issues/17936))
- Raise setuptools_rust version cap to 1.10.2. ([\#17944](https://github.com/element-hq/synapse/issues/17944))
- Enable encrypted appservice related experimental features in the complement docker image. ([\#17945](https://github.com/element-hq/synapse/issues/17945))
- Return whether the user is suspended when querying the user account in the Admin API. ([\#17952](https://github.com/element-hq/synapse/issues/17952))
- Fix new scheduled tasks jumping the queue. ([\#17962](https://github.com/element-hq/synapse/issues/17962))
- Bump pyo3 and dependencies to v0.23.2. ([\#17966](https://github.com/element-hq/synapse/issues/17966))
- Update setuptools-rust and fix building abi3 wheels in latest version. ([\#17969](https://github.com/element-hq/synapse/issues/17969))
- Consolidate SSO redirects through `/_matrix/client/v3/login/sso/redirect(/{idpId})`. ([\#17972](https://github.com/element-hq/synapse/issues/17972))
- Fix Docker and Complement config to be able to use `public_baseurl`. ([\#17986](https://github.com/element-hq/synapse/issues/17986))
- Fix building wheels for MacOS which was temporarily disabled in Synapse 1.120.2. ([\#17993](https://github.com/element-hq/synapse/issues/17993))
- Fix release process to not create duplicate releases. ([\#17970](https://github.com/element-hq/synapse/issues/17970), [\#17995](https://github.com/element-hq/synapse/issues/17995))
### Updates to locked dependencies
* Bump bytes from 1.8.0 to 1.9.0. ([\#17982](https://github.com/element-hq/synapse/issues/17982))
* Bump pysaml2 from 7.3.1 to 7.5.0. ([\#17978](https://github.com/element-hq/synapse/issues/17978))
* Bump serde_json from 1.0.132 to 1.0.133. ([\#17939](https://github.com/element-hq/synapse/issues/17939))
* Bump tomli from 2.0.2 to 2.1.0. ([\#17959](https://github.com/element-hq/synapse/issues/17959))
* Bump tomli from 2.1.0 to 2.2.1. ([\#17979](https://github.com/element-hq/synapse/issues/17979))
* Bump tornado from 6.4.1 to 6.4.2. ([\#17955](https://github.com/element-hq/synapse/issues/17955))
# Synapse 1.120.2 (2024-12-03)
This version has building of wheels for macOS disabled.
It is functionally identical to 1.120.1, which contains multiple security fixes.
If you are already using 1.120.1, there is no need to upgrade to this version.
# Synapse 1.120.1 (2024-12-03)
This patch release fixes multiple security vulnerabilities, some affecting all prior versions of Synapse. Server administrators are encouraged to update Synapse as soon as possible. We are not aware of these vulnerabilities being exploited in the wild.
Administrators who are unable to update Synapse may use the workarounds described in the linked GitHub Security Advisory below.
### Security advisory
The following issues are fixed in 1.120.1.
- [GHSA-rfq8-j7rh-8hf2](https://github.com/element-hq/synapse/security/advisories/GHSA-rfq8-j7rh-8hf2) / [CVE-2024-52805](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-52805): **Unsupported content types can lead to memory exhaustion**
Synapse instances which have a high `max_upload_size` and which don't have a reverse proxy in front of them that would otherwise limit upload size are affected.
Fixed by [4b7154c58501b4bf5e1c2d6c11ebef96529f2fdf](https://github.com/element-hq/synapse/commit/4b7154c58501b4bf5e1c2d6c11ebef96529f2fdf).
- [GHSA-f3r3-h2mq-hx2h](https://github.com/element-hq/synapse/security/advisories/GHSA-f3r3-h2mq-hx2h) / [CVE-2024-52815](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-52815): **Malicious invites via federation can break a user's sync**
Fixed by [d82e1ed357b7ee21dff83d06cba7a67840cfd464](https://github.com/element-hq/synapse/commit/d82e1ed357b7ee21dff83d06cba7a67840cfd464).
- [GHSA-vp6v-whfm-rv3g](https://github.com/element-hq/synapse/security/advisories/GHSA-vp6v-whfm-rv3g) / [CVE-2024-53863](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-53863): **Synapse can be forced to thumbnail unexpected file formats, invoking potentially untrustworthy decoders**
Synapse instances can disable dynamic thumbnailing by setting `dynamic_thumbnails` to `false` in the configuration file.
Fixed by [b64a4e5fbbbf119b6c65aedf0d999b4237d55503](https://github.com/element-hq/synapse/commit/b64a4e5fbbbf119b6c65aedf0d999b4237d55503).
- [GHSA-56w4-5538-8v8h](https://github.com/element-hq/synapse/security/advisories/GHSA-56w4-5538-8v8h) / [CVE-2024-53867](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-53867): **The Sliding Sync feature on Synapse versions between 1.113.0rc1 and 1.120.0 can leak partial room state changes to users no longer in a room**
Non-state events, like messages, are unaffected.
Synapse instances can disable the Sliding Sync feature by setting `experimental_features.msc3575_enabled` to `false` in the configuration file.
Fixed by [4daa533e82f345ce87b9495d31781af570ba3ead](https://github.com/element-hq/synapse/commit/4daa533e82f345ce87b9495d31781af570ba3ead).
See the advisories for more details. If you have any questions, email [security at element.io](mailto:security@element.io).
### Bugfixes
- Fix release process to not create duplicate releases. ([\#17970](https://github.com/element-hq/synapse/issues/17970))
# Synapse 1.120.0 (2024-11-26)
### Bugfixes
- Fix a bug introduced in Synapse v1.120rc1 which would cause the newly-introduced `delete_old_otks` job to fail in worker-mode deployments. ([\#17960](https://github.com/element-hq/synapse/issues/17960))
# Synapse 1.120.0rc1 (2024-11-20)
This release enables the enforcement of authenticated media by default, with exemptions for media that is already present in the

Cargo.lock generated
View file

@ -35,12 +35,6 @@ version = "0.21.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9d297deb1925b89f2ccc13d7635fa0714f12c87adce1c75356b39ca9b7178567"
[[package]]
name = "bitflags"
version = "2.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf4b9d6a944f767f8e5e0db018570623c85f3d925ac718db4e06d0187adb21c1"
[[package]]
name = "blake2"
version = "0.10.6"
@ -67,9 +61,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"
[[package]]
name = "bytes"
version = "1.8.0"
version = "1.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ac0150caa2ae65ca5bd83f25c7de183dea78d4d366469f148435e2acfbad0da"
checksum = "325918d6fe32f23b19878fe4b34794ae41fc19ddbe53b10571a4874d44ffd39b"
[[package]]
name = "cfg-if"
@ -162,9 +156,9 @@ dependencies = [
[[package]]
name = "heck"
version = "0.4.1"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8"
checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
[[package]]
name = "hex"
@ -222,16 +216,6 @@ version = "0.2.154"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ae743338b92ff9146ce83992f766a31066a91a8c84a45e0e9f21e7cf6de6d346"
[[package]]
name = "lock_api"
version = "0.4.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "07af8b9cdd281b7915f413fa73f29ebd5d55d0d3f0155584dade1ff18cea1b17"
dependencies = [
"autocfg",
"scopeguard",
]
[[package]]
name = "log"
version = "0.4.22"
@ -265,29 +249,6 @@ version = "1.19.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3fdb12b2476b595f9358c5161aa467c2438859caa136dec86c26fdd2efe17b92"
[[package]]
name = "parking_lot"
version = "0.12.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7e4af0ca4f6caed20e900d564c242b8e5d4903fdacf31d3daf527b66fe6f42fb"
dependencies = [
"lock_api",
"parking_lot_core",
]
[[package]]
name = "parking_lot_core"
version = "0.9.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e401f977ab385c9e4e3ab30627d6f26d00e2c73eef317493c4ec6d468726cf8"
dependencies = [
"cfg-if",
"libc",
"redox_syscall",
"smallvec",
"windows-targets",
]
[[package]]
name = "portable-atomic"
version = "1.6.0"
@ -311,16 +272,16 @@ dependencies = [
[[package]]
name = "pyo3"
version = "0.21.2"
version = "0.23.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a5e00b96a521718e08e03b1a622f01c8a8deb50719335de3f60b3b3950f069d8"
checksum = "e484fd2c8b4cb67ab05a318f1fd6fa8f199fcc30819f08f07d200809dba26c15"
dependencies = [
"anyhow",
"cfg-if",
"indoc",
"libc",
"memoffset",
"parking_lot",
"once_cell",
"portable-atomic",
"pyo3-build-config",
"pyo3-ffi",
@ -330,9 +291,9 @@ dependencies = [
[[package]]
name = "pyo3-build-config"
version = "0.21.2"
version = "0.23.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7883df5835fafdad87c0d888b266c8ec0f4c9ca48a5bed6bbb592e8dedee1b50"
checksum = "dc0e0469a84f208e20044b98965e1561028180219e35352a2afaf2b942beff3b"
dependencies = [
"once_cell",
"target-lexicon",
@ -340,9 +301,9 @@ dependencies = [
[[package]]
name = "pyo3-ffi"
version = "0.21.2"
version = "0.23.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "01be5843dc60b916ab4dad1dca6d20b9b4e6ddc8e15f50c47fe6d85f1fb97403"
checksum = "eb1547a7f9966f6f1a0f0227564a9945fe36b90da5a93b3933fc3dc03fae372d"
dependencies = [
"libc",
"pyo3-build-config",
@ -350,9 +311,9 @@ dependencies = [
[[package]]
name = "pyo3-log"
version = "0.10.0"
version = "0.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2af49834b8d2ecd555177e63b273b708dea75150abc6f5341d0a6e1a9623976c"
checksum = "3eb421dc86d38d08e04b927b02424db480be71b777fa3a56f32e2f2a3a1a3b08"
dependencies = [
"arc-swap",
"log",
@ -361,9 +322,9 @@ dependencies = [
[[package]]
name = "pyo3-macros"
version = "0.21.2"
version = "0.23.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77b34069fc0682e11b31dbd10321cbf94808394c56fd996796ce45217dfac53c"
checksum = "fdb6da8ec6fa5cedd1626c886fc8749bdcbb09424a86461eb8cdf096b7c33257"
dependencies = [
"proc-macro2",
"pyo3-macros-backend",
@ -373,9 +334,9 @@ dependencies = [
[[package]]
name = "pyo3-macros-backend"
version = "0.21.2"
version = "0.23.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08260721f32db5e1a5beae69a55553f56b99bd0e1c3e6e0a5e8851a9d0f5a85c"
checksum = "38a385202ff5a92791168b1136afae5059d3ac118457bb7bc304c197c2d33e7d"
dependencies = [
"heck",
"proc-macro2",
@ -386,9 +347,9 @@ dependencies = [
[[package]]
name = "pythonize"
version = "0.21.1"
version = "0.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9d0664248812c38cc55a4ed07f88e4df516ce82604b93b1ffdc041aa77a6cb3c"
checksum = "91a6ee7a084f913f98d70cdc3ebec07e852b735ae3059a1500db2661265da9ff"
dependencies = [
"pyo3",
"serde",
@ -433,15 +394,6 @@ dependencies = [
"getrandom",
]
[[package]]
name = "redox_syscall"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "469052894dcb553421e483e4209ee581a45100d31b4018de03e5a7ad86374a7e"
dependencies = [
"bitflags",
]
[[package]]
name = "regex"
version = "1.11.1"
@ -477,12 +429,6 @@ version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f3cb5ba0dc43242ce17de99c180e96db90b235b8a9fdc9543c96d2209116bd9f"
[[package]]
name = "scopeguard"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
[[package]]
name = "serde"
version = "1.0.215"
@ -537,12 +483,6 @@ dependencies = [
"digest",
]
[[package]]
name = "smallvec"
version = "1.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3c5e1a9a646d36c3599cd173a41282daf47c44583ad367b8e6837255952e5c67"
[[package]]
name = "subtle"
version = "2.5.0"
@ -694,67 +634,3 @@ dependencies = [
"js-sys",
"wasm-bindgen",
]
[[package]]
name = "windows-targets"
version = "0.52.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6f0713a46559409d202e70e28227288446bf7841d3211583a4b53e3f6d96e7eb"
dependencies = [
"windows_aarch64_gnullvm",
"windows_aarch64_msvc",
"windows_i686_gnu",
"windows_i686_gnullvm",
"windows_i686_msvc",
"windows_x86_64_gnu",
"windows_x86_64_gnullvm",
"windows_x86_64_msvc",
]
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.52.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7088eed71e8b8dda258ecc8bac5fb1153c5cffaf2578fc8ff5d61e23578d3263"
[[package]]
name = "windows_aarch64_msvc"
version = "0.52.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9985fd1504e250c615ca5f281c3f7a6da76213ebd5ccc9561496568a2752afb6"
[[package]]
name = "windows_i686_gnu"
version = "0.52.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "88ba073cf16d5372720ec942a8ccbf61626074c6d4dd2e745299726ce8b89670"
[[package]]
name = "windows_i686_gnullvm"
version = "0.52.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "87f4261229030a858f36b459e748ae97545d6f1ec60e5e0d6a3d32e0dc232ee9"
[[package]]
name = "windows_i686_msvc"
version = "0.52.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "db3c2bf3d13d5b658be73463284eaf12830ac9a26a90c717b7f771dfe97487bf"
[[package]]
name = "windows_x86_64_gnu"
version = "0.52.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4e4246f76bdeff09eb48875a0fd3e2af6aada79d409d33011886d3e1581517d9"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.52.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "852298e482cd67c356ddd9570386e2862b5673c85bd5f88df9ab6802b334c596"
[[package]]
name = "windows_x86_64_msvc"
version = "0.52.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bec47e5bfd1bff0eeaf6d8b485cc1074891a197ab4225d504cb7a1ab88b02bf0"

View file

@ -1,8 +1,10 @@
# A build script for poetry that adds the rust extension.
import itertools
import os
from typing import Any, Dict
from packaging.specifiers import SpecifierSet
from setuptools_rust import Binding, RustExtension
@ -14,6 +16,8 @@ def build(setup_kwargs: Dict[str, Any]) -> None:
target="synapse.synapse_rust",
path=cargo_toml_path,
binding=Binding.PyO3,
# This flag is a no-op in the latest versions. Instead, we need to
# specify this in the `bdist_wheel` config below.
py_limited_api=True,
# We force always building in release mode, as we can't tell the
# difference between using `poetry` in development vs production.
@ -21,3 +25,18 @@ def build(setup_kwargs: Dict[str, Any]) -> None:
)
setup_kwargs.setdefault("rust_extensions", []).append(extension)
setup_kwargs["zip_safe"] = False
# We look up the minimum supported Python version by looking at
# `python_requires` (e.g. ">=3.9.0,<4.0.0") and finding the first Python
# version that matches. We then convert that into the `py_limited_api` form,
# e.g. cp39 for Python 3.9.
py_limited_api: str
python_bounds = SpecifierSet(setup_kwargs["python_requires"])
for minor_version in itertools.count(start=8):
if f"3.{minor_version}.0" in python_bounds:
py_limited_api = f"cp3{minor_version}"
break
setup_kwargs.setdefault("options", {}).setdefault("bdist_wheel", {})[
"py_limited_api"
] = py_limited_api
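
As a quick illustration of the lookup above (a sketch only; the `python_requires` bound shown is assumed for the example, not taken from this repo's `pyproject.toml`):

```python
from itertools import count

from packaging.specifiers import SpecifierSet

# Assumed bound for illustration; the real value comes from setup_kwargs.
bounds = SpecifierSet(">=3.9.0,<4.0.0")
# The first minor version admitted by the bounds, rendered as an abi3 tag.
tag = next(f"cp3{minor}" for minor in count(8) if f"3.{minor}.0" in bounds)
assert tag == "cp39"
```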

changelog.d/17846.misc Normal file
View file

@ -0,0 +1 @@
Update Alpine Linux Synapse Package Maintainer within installation.md.

View file

@ -1 +0,0 @@
Add OIDC example configuration for Forgejo (fork of Gitea).

View file

@ -0,0 +1 @@
Module developers will have access to the user ID of the requester when adding `check_username_for_spam` callbacks to `spam_checker_module_callbacks`. Contributed by Wilson@Pangea.chat.

changelog.d/17930.bugfix Normal file
View file

@ -0,0 +1 @@
Fix a bug when rejecting a withdrawn invite with a third_party_rules module, where the invite would be stuck for the client.

View file

@ -1 +0,0 @@
Fix long-standing bug where read receipts could get overly delayed being sent over federation.

View file

@ -1 +0,0 @@
Fix incorrect comment in new schema delta.

View file

@ -1 +0,0 @@
Raise setuptools_rust version cap to 1.10.2.

View file

@ -1 +0,0 @@
Enable encrypted appservice related experimental features in the complement docker image.

View file

@ -1 +0,0 @@
Return whether the user is suspended when querying the user account in the Admin API.

View file

@ -1 +0,0 @@
Link to element-docker-demo from contrib/docker*.

changelog.d/17954.doc Normal file
View file

@ -0,0 +1 @@
Update `synapse.app.generic_worker` documentation to only recommend `GET` requests for stream writer routes by default, unless the worker is also configured as a stream writer. Contributed by @evoL.

View file

@ -0,0 +1 @@
Support stable account suspension from [MSC3823](https://github.com/matrix-org/matrix-spec-proposals/pull/3823).

changelog.d/17996.misc Normal file
View file

@ -0,0 +1 @@
Add `RoomID` & `EventID` rust types.

debian/changelog vendored
View file

@ -1,3 +1,39 @@
matrix-synapse-py3 (1.121.1) stable; urgency=medium
* New Synapse release 1.121.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 11 Dec 2024 18:24:48 +0000
matrix-synapse-py3 (1.121.0) stable; urgency=medium
* New Synapse release 1.121.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 11 Dec 2024 13:12:30 +0100
matrix-synapse-py3 (1.121.0~rc1) stable; urgency=medium
* New Synapse release 1.121.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 04 Dec 2024 14:47:23 +0000
matrix-synapse-py3 (1.120.2) stable; urgency=medium
* New synapse release 1.120.2.
-- Synapse Packaging team <packages@matrix.org> Tue, 03 Dec 2024 15:43:37 +0000
matrix-synapse-py3 (1.120.1) stable; urgency=medium
* New synapse release 1.120.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 03 Dec 2024 09:07:57 +0000
matrix-synapse-py3 (1.120.0) stable; urgency=medium
* New synapse release 1.120.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 26 Nov 2024 13:10:23 +0000
matrix-synapse-py3 (1.120.0~rc1) stable; urgency=medium
* New Synapse release 1.120.0rc1.

View file

@ -7,6 +7,7 @@
#}
## Server ##
public_baseurl: http://127.0.0.1:8008/
report_stats: False
trusted_key_servers: []
enable_registration: true

View file

@ -42,6 +42,6 @@ server {
{% endif %}
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_set_header Host $host:$server_port;
}
}

View file

@ -245,7 +245,7 @@ this callback.
_First introduced in Synapse v1.37.0_
```python
async def check_username_for_spam(user_profile: synapse.module_api.UserProfile) -> bool
async def check_username_for_spam(user_profile: synapse.module_api.UserProfile, requester_id: str) -> bool
```
Called when computing search results in the user directory. The module must return a
@ -264,6 +264,8 @@ The profile is represented as a dictionary with the following keys:
The module is given a copy of the original dictionary, so modifying it from within the
module cannot modify a user's profile when included in user directory search results.
The requester_id parameter is the ID of the user that called the user directory API.
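For example, a minimal callback that keys off `requester_id` might look like this (a sketch only; the blocklist and its contents are hypothetical):
```python
import synapse.module_api

# Hypothetical set of requesters whose directory searches return nothing.
BLOCKED_REQUESTERS = {"@abuser:example.com"}

async def check_username_for_spam(
    user_profile: synapse.module_api.UserProfile, requester_id: str
) -> bool:
    # Treat every result as spam for blocked requesters, hiding all entries.
    return requester_id in BLOCKED_REQUESTERS
```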
If multiple modules implement this callback, they will be considered in order. If a
callback returns `False`, Synapse falls through to the next one. The value of the first
callback that does not return `False` will be used. If this happens, Synapse will not call

View file

@ -157,7 +157,7 @@ sudo pip install py-bcrypt
#### Alpine Linux
6543 maintains [Synapse packages for Alpine Linux](https://pkgs.alpinelinux.org/packages?name=synapse&branch=edge) in the community repository. Install with:
Jahway603 maintains [Synapse packages for Alpine Linux](https://pkgs.alpinelinux.org/packages?name=synapse&branch=edge) in the community repository. Install with:
```sh
sudo apk add synapse

View file

@ -72,8 +72,8 @@ class ExampleSpamChecker:
async def user_may_publish_room(self, userid, room_id):
return True # allow publishing of all rooms
async def check_username_for_spam(self, user_profile):
return False # allow all usernames
async def check_username_for_spam(self, user_profile, requester_id):
return False # allow all usernames regardless of requester
async def check_registration_for_spam(
self,

View file

@ -273,17 +273,6 @@ information.
^/_matrix/client/(api/v1|r0|v3|unstable)/knock/
^/_matrix/client/(api/v1|r0|v3|unstable)/profile/
# Account data requests
^/_matrix/client/(r0|v3|unstable)/.*/tags
^/_matrix/client/(r0|v3|unstable)/.*/account_data
# Receipts requests
^/_matrix/client/(r0|v3|unstable)/rooms/.*/receipt
^/_matrix/client/(r0|v3|unstable)/rooms/.*/read_markers
# Presence requests
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
# User directory search requests
^/_matrix/client/(r0|v3|unstable)/user_directory/search$
@ -292,6 +281,13 @@ Additionally, the following REST endpoints can be handled for GET requests:
^/_matrix/client/(api/v1|r0|v3|unstable)/pushrules/
^/_matrix/client/unstable/org.matrix.msc4140/delayed_events
# Account data requests
^/_matrix/client/(r0|v3|unstable)/.*/tags
^/_matrix/client/(r0|v3|unstable)/.*/account_data
# Presence requests
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
Pagination requests can also be handled, but all requests for a given
room must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests

poetry.lock generated
View file

@ -1917,13 +1917,13 @@ test = ["pretend", "pytest (>=3.0.1)", "pytest-rerunfailures"]
[[package]]
name = "pysaml2"
version = "7.3.1"
version = "7.5.0"
description = "Python implementation of SAML Version 2 Standard"
optional = true
python-versions = ">=3.6.2,<4.0.0"
python-versions = ">=3.9,<4.0"
files = [
{file = "pysaml2-7.3.1-py3-none-any.whl", hash = "sha256:2cc66e7a371d3f5ff9601f0ed93b5276cca816fce82bb38447d5a0651f2f5193"},
{file = "pysaml2-7.3.1.tar.gz", hash = "sha256:eab22d187c6dd7707c58b5bb1688f9b8e816427667fc99d77f54399e15cd0a0a"},
{file = "pysaml2-7.5.0-py3-none-any.whl", hash = "sha256:bc6627cc344476a83c757f440a73fda1369f13b6fda1b4e16bca63ffbabb5318"},
{file = "pysaml2-7.5.0.tar.gz", hash = "sha256:f36871d4e5ee857c6b85532e942550d2cf90ea4ee943d75eb681044bbc4f54f7"},
]
[package.dependencies]
@ -1933,7 +1933,7 @@ pyopenssl = "*"
python-dateutil = "*"
pytz = "*"
requests = ">=2,<3"
xmlschema = ">=1.2.1"
xmlschema = ">=2,<3"
[package.extras]
s2repoze = ["paste", "repoze.who", "zope.interface"]
@ -1954,13 +1954,13 @@ six = ">=1.5"
[[package]]
name = "python-multipart"
version = "0.0.16"
version = "0.0.18"
description = "A streaming multipart parser for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "python_multipart-0.0.16-py3-none-any.whl", hash = "sha256:c2759b7b976ef3937214dfb592446b59dfaa5f04682a076f78b117c94776d87a"},
{file = "python_multipart-0.0.16.tar.gz", hash = "sha256:8dee37b88dab9b59922ca173c35acb627cc12ec74019f5cd4578369c6df36554"},
{file = "python_multipart-0.0.18-py3-none-any.whl", hash = "sha256:efe91480f485f6a361427a541db4796f9e1591afc0fb8e7a4ba06bfbc6708996"},
{file = "python_multipart-0.0.18.tar.gz", hash = "sha256:7a68db60c8bfb82e460637fa4750727b45af1d5e2ed215593f917f64694d34fe"},
]
[[package]]
@ -2405,19 +2405,18 @@ test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "importlib-metadata
[[package]]
name = "setuptools-rust"
version = "1.8.1"
version = "1.10.2"
description = "Setuptools Rust extension plugin"
optional = false
python-versions = ">=3.8"
files = [
{file = "setuptools-rust-1.8.1.tar.gz", hash = "sha256:94b1dd5d5308b3138d5b933c3a2b55e6d6927d1a22632e509fcea9ddd0f7e486"},
{file = "setuptools_rust-1.8.1-py3-none-any.whl", hash = "sha256:b5324493949ccd6aa0c03890c5f6b5f02de4512e3ac1697d02e9a6c02b18aa8e"},
{file = "setuptools_rust-1.10.2-py3-none-any.whl", hash = "sha256:4b39c435ae9670315d522ed08fa0e8cb29f2a6048033966b6be2571a90ce4f1c"},
{file = "setuptools_rust-1.10.2.tar.gz", hash = "sha256:5d73e7eee5f87a6417285b617c97088a7c20d1a70fcea60e3bdc94ff567c29dc"},
]
[package.dependencies]
semantic-version = ">=2.8.2,<3"
setuptools = ">=62.4"
tomli = {version = ">=1.2.1", markers = "python_version < \"3.11\""}
[[package]]
name = "signedjson"
@ -2515,20 +2514,50 @@ twisted = ["twisted"]
[[package]]
name = "tomli"
version = "2.1.0"
version = "2.2.1"
description = "A lil' TOML parser"
optional = false
python-versions = ">=3.8"
files = [
{file = "tomli-2.1.0-py3-none-any.whl", hash = "sha256:a5c57c3d1c56f5ccdf89f6523458f60ef716e210fc47c4cfb188c5ba473e0391"},
{file = "tomli-2.1.0.tar.gz", hash = "sha256:3f646cae2aec94e17d04973e4249548320197cfabdf130015d023de4b74d8ab8"},
{file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"},
{file = "tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6"},
{file = "tomli-2.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ece47d672db52ac607a3d9599a9d48dcb2f2f735c6c2d1f34130085bb12b112a"},
{file = "tomli-2.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6972ca9c9cc9f0acaa56a8ca1ff51e7af152a9f87fb64623e31d5c83700080ee"},
{file = "tomli-2.2.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c954d2250168d28797dd4e3ac5cf812a406cd5a92674ee4c8f123c889786aa8e"},
{file = "tomli-2.2.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8dd28b3e155b80f4d54beb40a441d366adcfe740969820caf156c019fb5c7ec4"},
{file = "tomli-2.2.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:e59e304978767a54663af13c07b3d1af22ddee3bb2fb0618ca1593e4f593a106"},
{file = "tomli-2.2.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:33580bccab0338d00994d7f16f4c4ec25b776af3ffaac1ed74e0b3fc95e885a8"},
{file = "tomli-2.2.1-cp311-cp311-win32.whl", hash = "sha256:465af0e0875402f1d226519c9904f37254b3045fc5084697cefb9bdde1ff99ff"},
{file = "tomli-2.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:2d0f2fdd22b02c6d81637a3c95f8cd77f995846af7414c5c4b8d0545afa1bc4b"},
{file = "tomli-2.2.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:4a8f6e44de52d5e6c657c9fe83b562f5f4256d8ebbfe4ff922c495620a7f6cea"},
{file = "tomli-2.2.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8d57ca8095a641b8237d5b079147646153d22552f1c637fd3ba7f4b0b29167a8"},
{file = "tomli-2.2.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e340144ad7ae1533cb897d406382b4b6fede8890a03738ff1683af800d54192"},
{file = "tomli-2.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db2b95f9de79181805df90bedc5a5ab4c165e6ec3fe99f970d0e302f384ad222"},
{file = "tomli-2.2.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40741994320b232529c802f8bc86da4e1aa9f413db394617b9a256ae0f9a7f77"},
{file = "tomli-2.2.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:400e720fe168c0f8521520190686ef8ef033fb19fc493da09779e592861b78c6"},
{file = "tomli-2.2.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:02abe224de6ae62c19f090f68da4e27b10af2b93213d36cf44e6e1c5abd19fdd"},
{file = "tomli-2.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:b82ebccc8c8a36f2094e969560a1b836758481f3dc360ce9a3277c65f374285e"},
{file = "tomli-2.2.1-cp312-cp312-win32.whl", hash = "sha256:889f80ef92701b9dbb224e49ec87c645ce5df3fa2cc548664eb8a25e03127a98"},
{file = "tomli-2.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:7fc04e92e1d624a4a63c76474610238576942d6b8950a2d7f908a340494e67e4"},
{file = "tomli-2.2.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f4039b9cbc3048b2416cc57ab3bda989a6fcf9b36cf8937f01a6e731b64f80d7"},
{file = "tomli-2.2.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:286f0ca2ffeeb5b9bd4fcc8d6c330534323ec51b2f52da063b11c502da16f30c"},
{file = "tomli-2.2.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a92ef1a44547e894e2a17d24e7557a5e85a9e1d0048b0b5e7541f76c5032cb13"},
{file = "tomli-2.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9316dc65bed1684c9a98ee68759ceaed29d229e985297003e494aa825ebb0281"},
{file = "tomli-2.2.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e85e99945e688e32d5a35c1ff38ed0b3f41f43fad8df0bdf79f72b2ba7bc5272"},
{file = "tomli-2.2.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ac065718db92ca818f8d6141b5f66369833d4a80a9d74435a268c52bdfa73140"},
{file = "tomli-2.2.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:d920f33822747519673ee656a4b6ac33e382eca9d331c87770faa3eef562aeb2"},
{file = "tomli-2.2.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a198f10c4d1b1375d7687bc25294306e551bf1abfa4eace6650070a5c1ae2744"},
{file = "tomli-2.2.1-cp313-cp313-win32.whl", hash = "sha256:d3f5614314d758649ab2ab3a62d4f2004c825922f9e370b29416484086b264ec"},
{file = "tomli-2.2.1-cp313-cp313-win_amd64.whl", hash = "sha256:a38aa0308e754b0e3c67e344754dff64999ff9b513e691d0e786265c93583c69"},
{file = "tomli-2.2.1-py3-none-any.whl", hash = "sha256:cb55c73c5f4408779d0cf3eef9f762b9c9f147a77de7b258bef0a5628adc85cc"},
{file = "tomli-2.2.1.tar.gz", hash = "sha256:cd45e1dc79c835ce60f7404ec8119f2eb06d38b1deba146f07ced3bbc44505ff"},
]
[[package]]
name = "tornado"
version = "6.4.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
optional = false
optional = true
python-versions = ">=3.8"
files = [
{file = "tornado-6.4.2-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e828cce1123e9e44ae2a50a9de3055497ab1d0aeb440c5ac23064d9e44880da1"},

View file

@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.120.0rc1"
version = "1.121.1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@ -386,8 +386,11 @@ build-backend = "poetry.core.masonry.api"
# c.f. https://github.com/matrix-org/synapse/pull/14259
skip = "cp36* cp37* cp38* pp37* pp38* *-musllinux_i686 pp*aarch64 *-musllinux_aarch64"
# We need a rust compiler
before-all = "curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain stable -y --profile minimal"
# We need a rust compiler.
#
# We temporarily pin Rust to 1.82.0 to work around
# https://github.com/element-hq/synapse/issues/17988
before-all = "curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain 1.82.0 -y --profile minimal"
environment= { PATH = "$PATH:$HOME/.cargo/bin" }
# For some reason if we don't manually clean the build directory we

View file

@ -30,14 +30,14 @@ http = "1.1.0"
lazy_static = "1.4.0"
log = "0.4.17"
mime = "0.3.17"
pyo3 = { version = "0.21.0", features = [
pyo3 = { version = "0.23.2", features = [
"macros",
"anyhow",
"abi3",
"abi3-py38",
] }
pyo3-log = "0.10.0"
pythonize = "0.21.0"
pyo3-log = "0.12.0"
pythonize = "0.23.0"
regex = "1.6.0"
sha2 = "0.10.8"
serde = { version = "1.0.144", features = ["derive"] }

View file

@ -32,14 +32,14 @@ use crate::push::utils::{glob_to_regex, GlobMatchType};
/// Called when registering modules with python.
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
let child_module = PyModule::new_bound(py, "acl")?;
let child_module = PyModule::new(py, "acl")?;
child_module.add_class::<ServerAclEvaluator>()?;
m.add_submodule(&child_module)?;
// We need to manually add the module to sys.modules to make `from
// synapse.synapse_rust import acl` work.
py.import_bound("sys")?
py.import("sys")?
.getattr("modules")?
.set_item("synapse.synapse_rust.acl", child_module)?;

View file

@ -41,9 +41,11 @@ use pyo3::{
pybacked::PyBackedStr,
pyclass, pymethods,
types::{PyAnyMethods, PyDict, PyDictMethods, PyString},
Bound, IntoPy, PyAny, PyObject, PyResult, Python,
Bound, IntoPyObject, PyAny, PyObject, PyResult, Python,
};
use crate::UnwrapInfallible;
/// Definitions of the various fields of the internal metadata.
#[derive(Clone)]
enum EventInternalMetadataData {
@ -60,31 +62,59 @@ enum EventInternalMetadataData {
impl EventInternalMetadataData {
/// Convert the field to its name and python object.
fn to_python_pair<'a>(&self, py: Python<'a>) -> (&'a Bound<'a, PyString>, PyObject) {
fn to_python_pair<'a>(&self, py: Python<'a>) -> (&'a Bound<'a, PyString>, Bound<'a, PyAny>) {
match self {
EventInternalMetadataData::OutOfBandMembership(o) => {
(pyo3::intern!(py, "out_of_band_membership"), o.into_py(py))
}
EventInternalMetadataData::SendOnBehalfOf(o) => {
(pyo3::intern!(py, "send_on_behalf_of"), o.into_py(py))
}
EventInternalMetadataData::RecheckRedaction(o) => {
(pyo3::intern!(py, "recheck_redaction"), o.into_py(py))
}
EventInternalMetadataData::SoftFailed(o) => {
(pyo3::intern!(py, "soft_failed"), o.into_py(py))
}
EventInternalMetadataData::ProactivelySend(o) => {
(pyo3::intern!(py, "proactively_send"), o.into_py(py))
}
EventInternalMetadataData::Redacted(o) => {
(pyo3::intern!(py, "redacted"), o.into_py(py))
}
EventInternalMetadataData::TxnId(o) => (pyo3::intern!(py, "txn_id"), o.into_py(py)),
EventInternalMetadataData::TokenId(o) => (pyo3::intern!(py, "token_id"), o.into_py(py)),
EventInternalMetadataData::DeviceId(o) => {
(pyo3::intern!(py, "device_id"), o.into_py(py))
}
EventInternalMetadataData::OutOfBandMembership(o) => (
pyo3::intern!(py, "out_of_band_membership"),
o.into_pyobject(py)
.unwrap_infallible()
.to_owned()
.into_any(),
),
EventInternalMetadataData::SendOnBehalfOf(o) => (
pyo3::intern!(py, "send_on_behalf_of"),
o.into_pyobject(py).unwrap_infallible().into_any(),
),
EventInternalMetadataData::RecheckRedaction(o) => (
pyo3::intern!(py, "recheck_redaction"),
o.into_pyobject(py)
.unwrap_infallible()
.to_owned()
.into_any(),
),
EventInternalMetadataData::SoftFailed(o) => (
pyo3::intern!(py, "soft_failed"),
o.into_pyobject(py)
.unwrap_infallible()
.to_owned()
.into_any(),
),
EventInternalMetadataData::ProactivelySend(o) => (
pyo3::intern!(py, "proactively_send"),
o.into_pyobject(py)
.unwrap_infallible()
.to_owned()
.into_any(),
),
EventInternalMetadataData::Redacted(o) => (
pyo3::intern!(py, "redacted"),
o.into_pyobject(py)
.unwrap_infallible()
.to_owned()
.into_any(),
),
EventInternalMetadataData::TxnId(o) => (
pyo3::intern!(py, "txn_id"),
o.into_pyobject(py).unwrap_infallible().into_any(),
),
EventInternalMetadataData::TokenId(o) => (
pyo3::intern!(py, "token_id"),
o.into_pyobject(py).unwrap_infallible().into_any(),
),
EventInternalMetadataData::DeviceId(o) => (
pyo3::intern!(py, "device_id"),
o.into_pyobject(py).unwrap_infallible().into_any(),
),
}
}
@ -247,7 +277,7 @@ impl EventInternalMetadata {
///
/// Note that `outlier` and `stream_ordering` are stored in separate columns so are not returned here.
fn get_dict(&self, py: Python<'_>) -> PyResult<PyObject> {
let dict = PyDict::new_bound(py);
let dict = PyDict::new(py);
for entry in &self.data {
let (key, value) = entry.to_python_pair(py);

View file

@ -30,7 +30,7 @@ mod internal_metadata;
/// Called when registering modules with python.
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
let child_module = PyModule::new_bound(py, "events")?;
let child_module = PyModule::new(py, "events")?;
child_module.add_class::<internal_metadata::EventInternalMetadata>()?;
child_module.add_function(wrap_pyfunction!(filter::event_visible_to_server_py, m)?)?;
@ -38,7 +38,7 @@ pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()>
// We need to manually add the module to sys.modules to make `from
// synapse.synapse_rust import events` work.
py.import_bound("sys")?
py.import("sys")?
.getattr("modules")?
.set_item("synapse.synapse_rust.events", child_module)?;

View file

@ -70,7 +70,7 @@ pub fn http_request_from_twisted(request: &Bound<'_, PyAny>) -> PyResult<Request
let headers_iter = request
.getattr("requestHeaders")?
.call_method0("getAllRawHeaders")?
.iter()?;
.try_iter()?;
for header in headers_iter {
let header = header?;

View file

@ -71,6 +71,34 @@ impl TryFrom<&str> for UserID {
}
}
impl TryFrom<String> for UserID {
type Error = IdentifierError;
/// Will try creating a `UserID` from the provided `String`.
/// Can fail if the user_id is incorrectly formatted.
fn try_from(s: String) -> Result<Self, Self::Error> {
if !s.starts_with('@') {
return Err(IdentifierError::IncorrectSigil);
}
if s.find(':').is_none() {
return Err(IdentifierError::MissingColon);
}
Ok(UserID(s))
}
}
impl<'de> serde::Deserialize<'de> for UserID {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let s: String = serde::Deserialize::deserialize(deserializer)?;
UserID::try_from(s).map_err(serde::de::Error::custom)
}
}
impl Deref for UserID {
type Target = str;
@ -84,3 +112,141 @@ impl fmt::Display for UserID {
write!(f, "{}", self.0)
}
}
/// A Matrix room_id.
#[derive(Clone, Debug, PartialEq)]
pub struct RoomID(String);
impl RoomID {
/// Returns the `localpart` of the room_id.
pub fn localpart(&self) -> &str {
&self[1..self.colon_pos()]
}
/// Returns the `server_name` / `domain` of the room_id.
pub fn server_name(&self) -> &str {
&self[self.colon_pos() + 1..]
}
/// Returns the position of the ':' inside the room_id.
/// Used when splitting the room_id into its respective parts.
fn colon_pos(&self) -> usize {
self.find(':').unwrap()
}
}
impl TryFrom<&str> for RoomID {
type Error = IdentifierError;
/// Will try creating a `RoomID` from the provided `&str`.
/// Can fail if the room_id is incorrectly formatted.
fn try_from(s: &str) -> Result<Self, Self::Error> {
if !s.starts_with('!') {
return Err(IdentifierError::IncorrectSigil);
}
if s.find(':').is_none() {
return Err(IdentifierError::MissingColon);
}
Ok(RoomID(s.to_string()))
}
}
impl TryFrom<String> for RoomID {
type Error = IdentifierError;
/// Will try creating a `RoomID` from the provided `String`.
/// Can fail if the room_id is incorrectly formatted.
fn try_from(s: String) -> Result<Self, Self::Error> {
if !s.starts_with('!') {
return Err(IdentifierError::IncorrectSigil);
}
if s.find(':').is_none() {
return Err(IdentifierError::MissingColon);
}
Ok(RoomID(s))
}
}
impl<'de> serde::Deserialize<'de> for RoomID {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let s: String = serde::Deserialize::deserialize(deserializer)?;
RoomID::try_from(s).map_err(serde::de::Error::custom)
}
}
impl Deref for RoomID {
type Target = str;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl fmt::Display for RoomID {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.0)
}
}
/// A Matrix event_id.
#[derive(Clone, Debug, PartialEq)]
pub struct EventID(String);
impl TryFrom<&str> for EventID {
type Error = IdentifierError;
/// Will try creating an `EventID` from the provided `&str`.
/// Can fail if the event_id is incorrectly formatted.
fn try_from(s: &str) -> Result<Self, Self::Error> {
if !s.starts_with('$') {
return Err(IdentifierError::IncorrectSigil);
}
Ok(EventID(s.to_string()))
}
}
impl TryFrom<String> for EventID {
type Error = IdentifierError;
/// Will try creating an `EventID` from the provided `String`.
/// Can fail if the event_id is incorrectly formatted.
fn try_from(s: String) -> Result<Self, Self::Error> {
if !s.starts_with('$') {
return Err(IdentifierError::IncorrectSigil);
}
Ok(EventID(s))
}
}
impl<'de> serde::Deserialize<'de> for EventID {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let s: String = serde::Deserialize::deserialize(deserializer)?;
EventID::try_from(s).map_err(serde::de::Error::custom)
}
}
impl Deref for EventID {
type Target = str;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl fmt::Display for EventID {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.0)
}
}

View file

@ -1,3 +1,5 @@
use std::convert::Infallible;
use lazy_static::lazy_static;
use pyo3::prelude::*;
use pyo3_log::ResetHandle;
@ -52,3 +54,16 @@ fn synapse_rust(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
Ok(())
}
pub trait UnwrapInfallible<T> {
fn unwrap_infallible(self) -> T;
}
impl<T> UnwrapInfallible<T> for Result<T, Infallible> {
fn unwrap_infallible(self) -> T {
match self {
Ok(val) => val,
Err(never) => match never {},
}
}
}

View file

@ -167,6 +167,7 @@ impl PushRuleEvaluator {
///
/// Returns the set of actions, if any, that match (filtering out any
/// `dont_notify` and `coalesce` actions).
#[pyo3(signature = (push_rules, user_id=None, display_name=None))]
pub fn run(
&self,
push_rules: &FilteredPushRules,
@ -236,6 +237,7 @@ impl PushRuleEvaluator {
}
/// Check if the given condition matches.
#[pyo3(signature = (condition, user_id=None, display_name=None))]
fn matches(
&self,
condition: Condition,

View file

@ -65,8 +65,8 @@ use anyhow::{Context, Error};
use log::warn;
use pyo3::exceptions::PyTypeError;
use pyo3::prelude::*;
use pyo3::types::{PyBool, PyList, PyLong, PyString};
use pythonize::{depythonize_bound, pythonize};
use pyo3::types::{PyBool, PyInt, PyList, PyString};
use pythonize::{depythonize, pythonize, PythonizeError};
use serde::de::Error as _;
use serde::{Deserialize, Serialize};
use serde_json::Value;
@ -79,7 +79,7 @@ pub mod utils;
/// Called when registering modules with python.
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
let child_module = PyModule::new_bound(py, "push")?;
let child_module = PyModule::new(py, "push")?;
child_module.add_class::<PushRule>()?;
child_module.add_class::<PushRules>()?;
child_module.add_class::<FilteredPushRules>()?;
@ -90,7 +90,7 @@ pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()>
// We need to manually add the module to sys.modules to make `from
// synapse.synapse_rust import push` work.
py.import_bound("sys")?
py.import("sys")?
.getattr("modules")?
.set_item("synapse.synapse_rust.push", child_module)?;
@ -182,12 +182,16 @@ pub enum Action {
Unknown(Value),
}
impl IntoPy<PyObject> for Action {
fn into_py(self, py: Python<'_>) -> PyObject {
impl<'py> IntoPyObject<'py> for Action {
type Target = PyAny;
type Output = Bound<'py, Self::Target>;
type Error = PythonizeError;
fn into_pyobject(self, py: Python<'py>) -> Result<Self::Output, Self::Error> {
// When we pass the `Action` struct to Python we want it to be converted
// to a dict. We use `pythonize`, which converts the struct using the
// `serde` serialization.
pythonize(py, &self).expect("valid action")
pythonize(py, &self)
}
}
@ -270,13 +274,13 @@ pub enum SimpleJsonValue {
}
impl<'source> FromPyObject<'source> for SimpleJsonValue {
fn extract(ob: &'source PyAny) -> PyResult<Self> {
fn extract_bound(ob: &Bound<'source, PyAny>) -> PyResult<Self> {
if let Ok(s) = ob.downcast::<PyString>() {
Ok(SimpleJsonValue::Str(Cow::Owned(s.to_string())))
// A bool *is* an int, ensure we try bool first.
} else if let Ok(b) = ob.downcast::<PyBool>() {
Ok(SimpleJsonValue::Bool(b.extract()?))
} else if let Ok(i) = ob.downcast::<PyLong>() {
} else if let Ok(i) = ob.downcast::<PyInt>() {
Ok(SimpleJsonValue::Int(i.extract()?))
} else if ob.is_none() {
Ok(SimpleJsonValue::Null)
@ -298,15 +302,19 @@ pub enum JsonValue {
}
impl<'source> FromPyObject<'source> for JsonValue {
fn extract(ob: &'source PyAny) -> PyResult<Self> {
fn extract_bound(ob: &Bound<'source, PyAny>) -> PyResult<Self> {
if let Ok(l) = ob.downcast::<PyList>() {
match l.iter().map(SimpleJsonValue::extract).collect() {
match l
.iter()
.map(|it| SimpleJsonValue::extract_bound(&it))
.collect()
{
Ok(a) => Ok(JsonValue::Array(a)),
Err(e) => Err(PyTypeError::new_err(format!(
"Can't convert to JsonValue::Array: {e}"
))),
}
} else if let Ok(v) = SimpleJsonValue::extract(ob) {
} else if let Ok(v) = SimpleJsonValue::extract_bound(ob) {
Ok(JsonValue::Value(v))
} else {
Err(PyTypeError::new_err(format!(
@ -363,15 +371,19 @@ pub enum KnownCondition {
},
}
impl IntoPy<PyObject> for Condition {
fn into_py(self, py: Python<'_>) -> PyObject {
pythonize(py, &self).expect("valid condition")
impl<'source> IntoPyObject<'source> for Condition {
type Target = PyAny;
type Output = Bound<'source, Self::Target>;
type Error = PythonizeError;
fn into_pyobject(self, py: Python<'source>) -> Result<Self::Output, Self::Error> {
pythonize(py, &self)
}
}
impl<'source> FromPyObject<'source> for Condition {
fn extract_bound(ob: &Bound<'source, PyAny>) -> PyResult<Self> {
Ok(depythonize_bound(ob.clone())?)
Ok(depythonize(ob)?)
}
}

View file

@ -29,7 +29,7 @@ use pyo3::{
exceptions::PyValueError,
pyclass, pymethods,
types::{PyAnyMethods, PyModule, PyModuleMethods},
Bound, Py, PyAny, PyObject, PyResult, Python, ToPyObject,
Bound, IntoPyObject, Py, PyAny, PyObject, PyResult, Python,
};
use ulid::Ulid;
@ -37,6 +37,7 @@ use self::session::Session;
use crate::{
errors::{NotFoundError, SynapseError},
http::{http_request_from_twisted, http_response_to_twisted, HeaderMapPyExt},
UnwrapInfallible,
};
mod session;
@ -125,7 +126,11 @@ impl RendezvousHandler {
let base = Uri::try_from(format!("{base}_synapse/client/rendezvous"))
.map_err(|_| PyValueError::new_err("Invalid base URI"))?;
let clock = homeserver.call_method0("get_clock")?.to_object(py);
let clock = homeserver
.call_method0("get_clock")?
.into_pyobject(py)
.unwrap_infallible()
.unbind();
// Construct a Python object so that we can get a reference to the
// evict method and schedule it to run.
@ -288,6 +293,13 @@ impl RendezvousHandler {
let mut response = Response::new(Bytes::new());
*response.status_mut() = StatusCode::ACCEPTED;
prepare_headers(response.headers_mut(), session);
// Even though this isn't mandated by the MSC, we set a Content-Type on the response. It
// doesn't do any harm as the body is empty, but it helps work around a bug in some reverse
// proxy/cache setups which strip the ETag header if there is no Content-Type set.
// Specifically, we noticed this behaviour when placing Synapse behind Cloudflare.
response.headers_mut().typed_insert(ContentType::text());
http_response_to_twisted(twisted_request, response)?;
Ok(())
@ -311,7 +323,7 @@ impl RendezvousHandler {
}
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
let child_module = PyModule::new_bound(py, "rendezvous")?;
let child_module = PyModule::new(py, "rendezvous")?;
child_module.add_class::<RendezvousHandler>()?;
@ -319,7 +331,7 @@ pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()>
// We need to manually add the module to sys.modules to make `from
// synapse.synapse_rust import rendezvous` work.
py.import_bound("sys")?
py.import("sys")?
.getattr("modules")?
.set_item("synapse.synapse_rust.rendezvous", child_module)?;

View file

@ -195,6 +195,10 @@ if [ -z "$skip_docker_build" ]; then
# Build the unified Complement image (from the worker Synapse image we just built).
echo_if_github "::group::Build Docker image: complement/Dockerfile"
$CONTAINER_RUNTIME build -t complement-synapse \
`# This is the tag we end up pushing to the registry (see` \
`# .github/workflows/push_complement_image.yml) so let's just label it now` \
`# so people can reference it by the same name locally.` \
-t ghcr.io/element-hq/synapse/complement-synapse \
-f "docker/complement/Dockerfile" "docker/complement"
echo_if_github "::endgroup::"

View file

@ -231,6 +231,8 @@ class EventContentFields:
ROOM_NAME: Final = "name"
MEMBERSHIP: Final = "membership"
MEMBERSHIP_DISPLAYNAME: Final = "displayname"
MEMBERSHIP_AVATAR_URL: Final = "avatar_url"
# Used in m.room.guest_access events.
GUEST_ACCESS: Final = "guest_access"

View file

@ -87,8 +87,7 @@ class Codes(str, Enum):
WEAK_PASSWORD = "M_WEAK_PASSWORD"
INVALID_SIGNATURE = "M_INVALID_SIGNATURE"
USER_DEACTIVATED = "M_USER_DEACTIVATED"
# USER_LOCKED = "M_USER_LOCKED"
USER_LOCKED = "ORG_MATRIX_MSC3939_USER_LOCKED"
USER_LOCKED = "M_USER_LOCKED"
NOT_YET_UPLOADED = "M_NOT_YET_UPLOADED"
CANNOT_OVERWRITE_MEDIA = "M_CANNOT_OVERWRITE_MEDIA"
@ -101,8 +100,9 @@ class Codes(str, Enum):
# The account has been suspended on the server.
# By opposition to `USER_DEACTIVATED`, this is a reversible measure
# that can possibly be appealed and reverted.
# Part of MSC3823.
USER_ACCOUNT_SUSPENDED = "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
# Introduced by MSC3823
# https://github.com/matrix-org/matrix-spec-proposals/pull/3823
USER_ACCOUNT_SUSPENDED = "M_USER_SUSPENDED"
BAD_ALIAS = "M_BAD_ALIAS"
# For restricted join rules.

View file

@ -23,7 +23,8 @@
import hmac
from hashlib import sha256
from urllib.parse import urlencode
from typing import Optional
from urllib.parse import urlencode, urljoin
from synapse.config import ConfigError
from synapse.config.homeserver import HomeServerConfig
@ -66,3 +67,42 @@ class ConsentURIBuilder:
urlencode({"u": user_id, "h": mac}),
)
return consent_uri
class LoginSSORedirectURIBuilder:
def __init__(self, hs_config: HomeServerConfig):
self._public_baseurl = hs_config.server.public_baseurl
def build_login_sso_redirect_uri(
self, *, idp_id: Optional[str], client_redirect_url: str
) -> str:
"""Build a `/login/sso/redirect` URI for the given identity provider.
Builds `/_matrix/client/v3/login/sso/redirect/{idpId}?redirectUrl=xxx` when `idp_id` is specified.
Otherwise, builds `/_matrix/client/v3/login/sso/redirect?redirectUrl=xxx` when `idp_id` is `None`.
Args:
idp_id: Optional ID of the identity provider
client_redirect_url: URL to redirect the user to after login
Returns:
The URI to follow when choosing a specific identity provider.
"""
base_url = urljoin(
self._public_baseurl,
f"{CLIENT_API_PREFIX}/v3/login/sso/redirect",
)
serialized_query_parameters = urlencode({"redirectUrl": client_redirect_url})
if idp_id:
resultant_url = urljoin(
# We have to add a trailing slash to the base URL to ensure that the
# last path segment is not stripped away when joining with another path.
f"{base_url}/",
f"{idp_id}?{serialized_query_parameters}",
)
else:
resultant_url = f"{base_url}?{serialized_query_parameters}"
return resultant_url
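
For illustration, the builder composes URIs like the following (a sketch assuming a config whose `public_baseurl` is `https://matrix.example.com/`; the redirect URL is made up):

```python
builder = LoginSSORedirectURIBuilder(hs_config)

builder.build_login_sso_redirect_uri(
    idp_id=None, client_redirect_url="http://example.com/redirect"
)
# -> https://matrix.example.com/_matrix/client/v3/login/sso/redirect?redirectUrl=http%3A%2F%2Fexample.com%2Fredirect

builder.build_login_sso_redirect_uri(
    idp_id="oidc", client_redirect_url="http://example.com/redirect"
)
# -> https://matrix.example.com/_matrix/client/v3/login/sso/redirect/oidc?redirectUrl=http%3A%2F%2Fexample.com%2Fredirect
```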

View file

@ -87,6 +87,7 @@ class ApplicationService:
ip_range_whitelist: Optional[IPSet] = None,
supports_ephemeral: bool = False,
msc3202_transaction_extensions: bool = False,
msc4190_device_management: bool = False,
):
self.token = token
self.url = (
@ -100,6 +101,7 @@ class ApplicationService:
self.ip_range_whitelist = ip_range_whitelist
self.supports_ephemeral = supports_ephemeral
self.msc3202_transaction_extensions = msc3202_transaction_extensions
self.msc4190_device_management = msc4190_device_management
if "|" in self.id:
raise Exception("application service ID cannot contain '|' character")

View file

@ -183,6 +183,18 @@ def _load_appservice(
"The `org.matrix.msc3202` option should be true or false if specified."
)
# Opt-in flag for the MSC4190 behaviours.
# When enabled, the following C-S API endpoints change for appservices:
# - POST /register does not return an access token
# - PUT /devices/{device_id} creates a new device if one does not exist
# - DELETE /devices/{device_id} no longer requires UIA
# - POST /delete_devices no longer requires UIA
msc4190_enabled = as_info.get("io.element.msc4190", False)
if not isinstance(msc4190_enabled, bool):
raise ValueError(
"The `io.element.msc4190` option should be true or false if specified."
)
return ApplicationService(
token=as_info["as_token"],
url=as_info["url"],
@ -195,4 +207,5 @@ def _load_appservice(
ip_range_whitelist=ip_range_whitelist,
supports_ephemeral=supports_ephemeral,
msc3202_transaction_extensions=msc3202_transaction_extensions,
msc4190_device_management=msc4190_enabled,
)
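
As a sketch, a registration opting in to the MSC4190 behaviours carries the flag alongside the usual fields; everything here except `io.element.msc4190` is a placeholder:

```python
# Parsed registration dict as consumed by _load_appservice (illustrative).
as_info = {
    "id": "example-bridge",
    "url": "http://localhost:9000",
    "as_token": "<as_token>",
    "hs_token": "<hs_token>",
    "sender_localpart": "bridge",
    "namespaces": {"users": [], "aliases": [], "rooms": []},
    "io.element.msc4190": True,  # opt in to MSC4190 device management
}
```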

View file

@ -20,7 +20,7 @@
#
#
from typing import Any, List
from typing import Any, List, Optional
from synapse.config.sso import SsoAttributeRequirement
from synapse.types import JsonDict
@ -46,7 +46,9 @@ class CasConfig(Config):
# TODO Update this to a _synapse URL.
public_baseurl = self.root.server.public_baseurl
self.cas_service_url = public_baseurl + "_matrix/client/r0/login/cas/ticket"
self.cas_service_url: Optional[str] = (
public_baseurl + "_matrix/client/r0/login/cas/ticket"
)
self.cas_protocol_version = cas_config.get("protocol_version")
if (

View file

@ -436,10 +436,6 @@ class ExperimentalConfig(Config):
("experimental", "msc4108_delegation_endpoint"),
)
self.msc3823_account_suspension = experimental.get(
"msc3823_account_suspension", False
)
# MSC4151: Report room API (Client-Server API)
self.msc4151_enabled: bool = experimental.get("msc4151_enabled", False)
@ -448,3 +444,6 @@ class ExperimentalConfig(Config):
# MSC4222: Adding `state_after` to sync v2
self.msc4222_enabled: bool = experimental.get("msc4222_enabled", False)
# MSC4076: Add `disable_badge_count`` to pusher configuration
self.msc4076_enabled: bool = experimental.get("msc4076_enabled", False)

View file

@ -360,5 +360,6 @@ def setup_logging(
"Licensed under the AGPL 3.0 license. Website: https://github.com/element-hq/synapse"
)
logging.info("Server hostname: %s", config.server.server_name)
logging.info("Public Base URL: %s", config.server.public_baseurl)
logging.info("Instance name: %s", hs.get_instance_name())
logging.info("Twisted reactor: %s", type(reactor).__name__)

View file

@ -332,8 +332,14 @@ class ServerConfig(Config):
logger.info("Using default public_baseurl %s", public_baseurl)
else:
self.serve_client_wellknown = True
# Ensure that public_baseurl ends with a trailing slash
if public_baseurl[-1] != "/":
public_baseurl += "/"
# Scrutinize user-provided config
if not isinstance(public_baseurl, str):
raise ConfigError("Must be a string", ("public_baseurl",))
self.public_baseurl = public_baseurl
# check that public_baseurl is valid

View file

@ -248,7 +248,7 @@ class EventContext(UnpersistedEventContextBase):
@tag_args
async def get_current_state_ids(
self, state_filter: Optional["StateFilter"] = None
) -> Optional[StateMap[str]]:
) -> StateMap[str]:
"""
Gets the room state map, including this event - ie, the state in ``state_group``
@ -256,13 +256,12 @@ class EventContext(UnpersistedEventContextBase):
not make it into the room state. This method will raise an exception if
``rejected`` is set.
It is also an error to access this for an outlier event.
Args:
state_filter: specifies the type of state events to fetch from the DB, e.g. EventTypes.JoinRules
Returns:
Returns None if state_group is None, which happens when the associated
event is an outlier.
Maps a (type, state_key) to the event ID of the state event matching
this tuple.
"""
@ -300,7 +299,8 @@ class EventContext(UnpersistedEventContextBase):
this tuple.
"""
assert self.state_group_before_event is not None
if self.state_group_before_event is None:
return {}
return await self._storage.state.get_state_ids_for_group(
self.state_group_before_event, state_filter
)

View file

@ -509,6 +509,9 @@ class FederationV2InviteServlet(BaseFederationServerServlet):
event = content["event"]
invite_room_state = content.get("invite_room_state", [])
if not isinstance(invite_room_state, list):
invite_room_state = []
# Synapse expects invite_room_state to be in unsigned, as it is in v1
# API

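The same defensive pattern, ignoring an `invite_room_state`/`knock_room_state` value of the wrong type rather than trusting remote input, recurs in several hunks in this change-set. A hypothetical distillation:

def coerce_stripped_state(value: object) -> list:
    # Stripped-state fields arrive in `unsigned` from remote servers, so
    # anything that is not a list is discarded instead of trusted.
    return value if isinstance(value, list) else []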
View file

@ -729,6 +729,40 @@ class DeviceHandler(DeviceWorkerHandler):
await self.notify_device_update(user_id, device_ids)
async def upsert_device(
self, user_id: str, device_id: str, display_name: Optional[str] = None
) -> bool:
"""Create or update a device
Args:
user_id: The user to update devices of.
device_id: The device to update.
display_name: The new display name for this device.
Returns:
True if the device was created, False if it was updated.
"""
# Reject a new displayname which is too long.
self._check_device_name_length(display_name)
created = await self.store.store_device(
user_id,
device_id,
initial_device_display_name=display_name,
)
if not created:
await self.store.update_device(
user_id,
device_id,
new_display_name=display_name,
)
await self.notify_device_update(user_id, [device_id])
return created
async def update_device(self, user_id: str, device_id: str, content: dict) -> None:
"""Update the given device

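A hypothetical caller of the new method, mirroring the MSC4190 servlet change further down (IDs are made up, and `device_handler` is assumed to be in scope inside an async function):

created = await device_handler.upsert_device(
    user_id="@bridge_alice:example.com",
    device_id="AABBCCDD",
    display_name="Bridge-managed device",
)
# The PUT /devices/{device_id} handler below maps this to a status code:
status = 201 if created else 200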
View file

@ -880,6 +880,9 @@ class FederationHandler:
if stripped_room_state is None:
raise KeyError("Missing 'knock_room_state' field in send_knock response")
if not isinstance(stripped_room_state, list):
raise TypeError("'knock_room_state' has wrong type")
event.unsigned["knock_room_state"] = stripped_room_state
context = EventContext.for_outlier(self._storage_controllers)

View file

@ -630,7 +630,9 @@ class RegistrationHandler:
"""
await self._auto_join_rooms(user_id)
async def appservice_register(self, user_localpart: str, as_token: str) -> str:
async def appservice_register(
self, user_localpart: str, as_token: str
) -> Tuple[str, ApplicationService]:
user = UserID(user_localpart, self.hs.hostname)
user_id = user.to_string()
service = self.store.get_app_service_by_token(as_token)
@ -653,7 +655,7 @@ class RegistrationHandler:
appservice_id=service_id,
create_profile_with_displayname=user.localpart,
)
return user_id
return (user_id, service)
def check_user_id_not_appservice_exclusive(
self, user_id: str, allowed_appservice: Optional[ApplicationService] = None

View file

@ -39,6 +39,7 @@ from synapse.logging.opentracing import (
trace,
)
from synapse.storage.databases.main.roommember import extract_heroes_from_room_summary
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.storage.databases.main.stream import PaginateFunction
from synapse.storage.roommember import (
MemberSummary,
@ -48,6 +49,7 @@ from synapse.types import (
MutableStateMap,
PersistedEventPosition,
Requester,
RoomStreamToken,
SlidingSyncStreamToken,
StateMap,
StrCollection,
@ -470,6 +472,64 @@ class SlidingSyncHandler:
return state_map
@trace
async def get_current_state_deltas_for_room(
self,
room_id: str,
room_membership_for_user_at_to_token: RoomsForUserType,
from_token: RoomStreamToken,
to_token: RoomStreamToken,
) -> List[StateDelta]:
"""
Get the state deltas between two tokens taking into account the user's
membership. If the user is LEAVE/BAN, we will only get the state deltas up to
their LEAVE/BAN event (inclusive).
(> `from_token` and <= `to_token`)
"""
membership = room_membership_for_user_at_to_token.membership
# We don't know how to handle `membership` values other than these. The
# code below would need to be updated.
assert membership in (
Membership.JOIN,
Membership.INVITE,
Membership.KNOCK,
Membership.LEAVE,
Membership.BAN,
)
# People shouldn't see past their leave/ban event
if membership in (
Membership.LEAVE,
Membership.BAN,
):
to_bound = (
room_membership_for_user_at_to_token.event_pos.to_room_stream_token()
)
# If we are participating in the room, we can get the latest current state in
# the room
elif membership == Membership.JOIN:
to_bound = to_token
# We can only rely on the stripped state included in the invite/knock event
# itself so there will never be any state deltas to send down.
elif membership in (Membership.INVITE, Membership.KNOCK):
return []
else:
# We don't know how to handle this type of membership yet
#
# FIXME: We should use `assert_never` here but for some reason
# the exhaustive matching doesn't recognize the `Never` here.
# assert_never(membership)
raise AssertionError(
f"Unexpected membership {membership} that we don't know how to handle yet"
)
return await self.store.get_current_state_deltas_for_room(
room_id=room_id,
from_token=from_token,
to_token=to_bound,
)
@trace
async def get_room_sync_data(
self,
@ -755,13 +815,19 @@ class SlidingSyncHandler:
stripped_state = []
if invite_or_knock_event.membership == Membership.INVITE:
stripped_state.extend(
invite_or_knock_event.unsigned.get("invite_room_state", [])
invite_state = invite_or_knock_event.unsigned.get(
"invite_room_state", []
)
if not isinstance(invite_state, list):
invite_state = []
stripped_state.extend(invite_state)
elif invite_or_knock_event.membership == Membership.KNOCK:
stripped_state.extend(
invite_or_knock_event.unsigned.get("knock_room_state", [])
)
knock_state = invite_or_knock_event.unsigned.get("knock_room_state", [])
if not isinstance(knock_state, list):
knock_state = []
stripped_state.extend(knock_state)
stripped_state.append(strip_event(invite_or_knock_event))
@ -790,8 +856,9 @@ class SlidingSyncHandler:
# TODO: Limit the number of state events we're about to send down
# the room; if it's too many we should change this to an
# `initial=True`?
deltas = await self.store.get_current_state_deltas_for_room(
deltas = await self.get_current_state_deltas_for_room(
room_id=room_id,
room_membership_for_user_at_to_token=room_membership_for_user_at_to_token,
from_token=from_bound,
to_token=to_token.room_key,
)
@ -955,15 +1022,21 @@ class SlidingSyncHandler:
and state_key == StateValues.LAZY
):
lazy_load_room_members = True
# Everyone in the timeline is relevant
#
# FIXME: We probably also care about invite, ban, kick, targets, etc
# but the spec only mentions "senders".
timeline_membership: Set[str] = set()
if timeline_events is not None:
for timeline_event in timeline_events:
# Anyone who sent a message is relevant
timeline_membership.add(timeline_event.sender)
# We also care about invite, ban, kick, targets,
# etc.
if timeline_event.type == EventTypes.Member:
timeline_membership.add(
timeline_event.state_key
)
# Update the required state filter so we pick up the new
# membership
for user_id in timeline_membership:

View file

@ -161,7 +161,7 @@ class UserDirectoryHandler(StateDeltasHandler):
non_spammy_users = []
for user in results["results"]:
if not await self._spam_checker_module_callbacks.check_username_for_spam(
user
user, user_id
):
non_spammy_users.append(user)
results["results"] = non_spammy_users

View file

@ -21,6 +21,7 @@
import contextlib
import logging
import time
from http import HTTPStatus
from typing import TYPE_CHECKING, Any, Generator, Optional, Tuple, Union
import attr
@ -139,6 +140,41 @@ class SynapseRequest(Request):
self.synapse_site.site_tag,
)
# Twisted machinery: this method is called by the Channel once the full request has
# been received, to dispatch the request to a resource.
#
# We're patching Twisted to bail/abort early when we see someone trying to upload
# `multipart/form-data` so we can avoid Twisted parsing the entire request body
# into memory (a problem specific to this `Content-Type`). This protects us from
# an attacker uploading something bigger than the available RAM and crashing the
# server with a `MemoryError`, or carefully blocking just enough resources to
# cause all other requests to fail.
#
# FIXME: This can be removed once Twisted releases a fix and we update to a
# patched version.
def requestReceived(self, command: bytes, path: bytes, version: bytes) -> None:
if command == b"POST":
ctype = self.requestHeaders.getRawHeaders(b"content-type")
if ctype and b"multipart/form-data" in ctype[0]:
self.method, self.uri = command, path
self.clientproto = version
self.code = HTTPStatus.UNSUPPORTED_MEDIA_TYPE.value
self.code_message = bytes(
HTTPStatus.UNSUPPORTED_MEDIA_TYPE.phrase, "ascii"
)
self.responseHeaders.setRawHeaders(b"content-length", [b"0"])
logger.warning(
"Aborting connection from %s because `content-type: multipart/form-data` is unsupported: %s %s",
self.client,
command,
path,
)
self.write(b"")
self.loseConnection()
return
return super().requestReceived(command, path, version)
def handleContentChunk(self, data: bytes) -> None:
# we should have a `content` by now.
assert self.content, "handleContentChunk() called before gotLength()"

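A rough way to observe the new behaviour against a running homeserver; this sketch assumes a listener on localhost:8008 and uses the third-party `requests` library:

import requests

# `files=` forces a `content-type: multipart/form-data` request body.
resp = requests.post(
    "http://localhost:8008/_matrix/client/versions",
    files={"file": b"payload"},
)
assert resp.status_code == 415  # rejected before the body is parsed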
View file

@ -67,6 +67,11 @@ class ThumbnailError(Exception):
class Thumbnailer:
FORMATS = {"image/jpeg": "JPEG", "image/png": "PNG"}
# Which image formats we allow Pillow to open.
# This should intentionally be kept restrictive, because the decoder of any
# format in this list becomes part of our trusted computing base.
PILLOW_FORMATS = ("jpeg", "png", "webp", "gif")
@staticmethod
def set_limits(max_image_pixels: int) -> None:
Image.MAX_IMAGE_PIXELS = max_image_pixels
@ -76,7 +81,7 @@ class Thumbnailer:
self._closed = False
try:
self.image = Image.open(input_path)
self.image = Image.open(input_path, formats=self.PILLOW_FORMATS)
except OSError as e:
# If an error occurs opening the image, a thumbnail won't be able to
# be generated.

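To illustrate the effect of the allow-list: Pillow refuses to identify an image whose format is not listed, so a hypothetical TIFF upload fails at open time instead of exercising the TIFF decoder.

import io
from PIL import Image, UnidentifiedImageError

tiff_bytes = io.BytesIO(b"II*\x00" + b"\x00" * 8)  # minimal little-endian TIFF header
try:
    Image.open(tiff_bytes, formats=("jpeg", "png", "webp", "gif"))
except UnidentifiedImageError:
    # UnidentifiedImageError subclasses OSError, so the `except OSError`
    # in the constructor above handles this case too.
    pass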
View file

@ -31,6 +31,7 @@ from typing import (
Optional,
Tuple,
Union,
cast,
)
# `Literal` appears with Python 3.8.
@ -168,7 +169,10 @@ USER_MAY_PUBLISH_ROOM_CALLBACK = Callable[
]
],
]
CHECK_USERNAME_FOR_SPAM_CALLBACK = Callable[[UserProfile], Awaitable[bool]]
CHECK_USERNAME_FOR_SPAM_CALLBACK = Union[
Callable[[UserProfile], Awaitable[bool]],
Callable[[UserProfile, str], Awaitable[bool]],
]
LEGACY_CHECK_REGISTRATION_FOR_SPAM_CALLBACK = Callable[
[
Optional[dict],
@ -716,7 +720,9 @@ class SpamCheckerModuleApiCallbacks:
return self.NOT_SPAM
async def check_username_for_spam(self, user_profile: UserProfile) -> bool:
async def check_username_for_spam(
self, user_profile: UserProfile, requester_id: str
) -> bool:
"""Checks if a user ID or display name are considered "spammy" by this server.
If the server considers a username spammy, then it will not be included in
@ -727,15 +733,33 @@ class SpamCheckerModuleApiCallbacks:
* user_id
* display_name
* avatar_url
requester_id: The user ID of the user making the user directory search request.
Returns:
True if the user is spammy.
"""
for callback in self._check_username_for_spam_callbacks:
with Measure(self.clock, f"{callback.__module__}.{callback.__qualname__}"):
checker_args = inspect.signature(callback)
# Make a copy of the user profile object to ensure the spam checker cannot
# modify it.
res = await delay_cancellation(callback(user_profile.copy()))
# Also ensure backwards compatibility with spam checker callbacks
# that don't expect the requester_id argument.
if len(checker_args.parameters) == 2:
callback_with_requester_id = cast(
Callable[[UserProfile, str], Awaitable[bool]], callback
)
res = await delay_cancellation(
callback_with_requester_id(user_profile.copy(), requester_id)
)
else:
callback_without_requester_id = cast(
Callable[[UserProfile], Awaitable[bool]], callback
)
res = await delay_cancellation(
callback_without_requester_id(user_profile.copy())
)
if res:
return True

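A sketch of a spam-checker module callback written against the new two-argument form (the quarantined user ID is made up):

from synapse.types import UserProfile

async def check_username_for_spam(
    user_profile: UserProfile, requester_id: str
) -> bool:
    # Hide every user-directory result from one (hypothetical) requester.
    return requester_id == "@quarantined:example.com"

Single-argument callbacks keep working: the arity check above dispatches on the callback's signature.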
View file

@ -127,6 +127,11 @@ class HttpPusher(Pusher):
if self.data is None:
raise PusherConfigException("'data' key can not be null for HTTP pusher")
# Check if badge counts should be disabled for this push gateway
self.disable_badge_count = self.hs.config.experimental.msc4076_enabled and bool(
self.data.get("org.matrix.msc4076.disable_badge_count", False)
)
self.name = "%s/%s/%s" % (
pusher_config.user_name,
pusher_config.app_id,
@ -461,9 +466,10 @@ class HttpPusher(Pusher):
content: JsonDict = {
"event_id": event.event_id,
"room_id": event.room_id,
"counts": {"unread": badge},
"prio": priority,
}
if not self.disable_badge_count:
content["counts"] = {"unread": badge}
# event_id_only doesn't include the tweaks, so override them.
tweaks = {}
else:
@ -478,10 +484,10 @@ class HttpPusher(Pusher):
"type": event.type,
"sender": event.user_id,
"prio": priority,
"counts": {
}
if not self.disable_badge_count:
content["counts"] = {
"unread": badge,
# 'missed_calls': 2
},
}
if event.type == "m.room.member" and event.is_state():
content["membership"] = event.content["membership"]

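For reference, a sketch of the client-side opt-out: the pusher's `data` carries the unstable field, which only takes effect when the server has `msc4076_enabled` set (all values below are made up):

pusher_request_body = {
    "app_id": "m.http",
    "kind": "http",
    "pushkey": "a@example.com",
    "app_display_name": "HTTP Push Notifications",
    "device_display_name": "pushy push",
    "lang": "en",
    "data": {
        "url": "http://example.com/_matrix/push/v1/notify",
        "org.matrix.msc4076.disable_badge_count": True,
    },
}
# POSTed by the client to /_matrix/client/v3/pushers/set.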
View file

@ -74,9 +74,13 @@ async def get_context_for_event(
room_state = []
if ev.content.get("membership") == Membership.INVITE:
room_state = ev.unsigned.get("invite_room_state", [])
invite_room_state = ev.unsigned.get("invite_room_state", [])
if isinstance(invite_room_state, list):
room_state = invite_room_state
elif ev.content.get("membership") == Membership.KNOCK:
room_state = ev.unsigned.get("knock_room_state", [])
knock_room_state = ev.unsigned.get("knock_room_state", [])
if isinstance(knock_room_state, list):
room_state = knock_room_state
# Ideally we'd reuse the logic in `calculate_room_name`, but that gets
# complicated to handle partial events vs pulling events from the DB.

View file

@ -495,7 +495,7 @@ class LockReleasedCommand(Command):
class NewActiveTaskCommand(_SimpleCommand):
"""Sent to inform instance handling background tasks that a new active task is available to run.
"""Sent to inform instance handling background tasks that a new task is ready to run.
Format::

View file

@ -727,7 +727,7 @@ class ReplicationCommandHandler:
) -> None:
"""Called when get a new NEW_ACTIVE_TASK command."""
if self._task_scheduler:
self._task_scheduler.launch_task_by_id(cmd.data)
self._task_scheduler.on_new_task(cmd.data)
def new_connection(self, connection: IReplicationConnection) -> None:
"""Called when we have a new connection."""

View file

@ -332,7 +332,6 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
BackgroundUpdateRestServlet(hs).register(http_server)
BackgroundUpdateStartJobRestServlet(hs).register(http_server)
ExperimentalFeaturesRestServlet(hs).register(http_server)
if hs.config.experimental.msc3823_account_suspension:
SuspendAccountRestServlet(hs).register(http_server)

View file

@ -114,6 +114,10 @@ class DeleteDevicesRestServlet(RestServlet):
else:
raise e
if requester.app_service and requester.app_service.msc4190_device_management:
# MSC4190 can skip UIA for this endpoint
pass
else:
await self.auth_handler.validate_user_via_ui_auth(
requester,
request,
@ -175,9 +179,6 @@ class DeviceRestServlet(RestServlet):
async def on_DELETE(
self, request: SynapseRequest, device_id: str
) -> Tuple[int, JsonDict]:
if self._msc3861_oauth_delegation_enabled:
raise UnrecognizedRequestError(code=404)
requester = await self.auth.get_user_by_req(request)
try:
@ -192,6 +193,15 @@ class DeviceRestServlet(RestServlet):
else:
raise
if requester.app_service and requester.app_service.msc4190_device_management:
# MSC4190 allows appservices to delete devices through this endpoint without
# UIA. It's also allowed when MSC3861 is enabled.
pass
else:
if self._msc3861_oauth_delegation_enabled:
raise UnrecognizedRequestError(code=404)
await self.auth_handler.validate_user_via_ui_auth(
requester,
request,
@ -216,6 +226,16 @@ class DeviceRestServlet(RestServlet):
requester = await self.auth.get_user_by_req(request, allow_guest=True)
body = parse_and_validate_json_object_from_request(request, self.PutBody)
# MSC4190 allows appservices to create devices through this endpoint
if requester.app_service and requester.app_service.msc4190_device_management:
created = await self.device_handler.upsert_device(
user_id=requester.user.to_string(),
device_id=device_id,
display_name=body.display_name,
)
return 201 if created else 200, {}
await self.device_handler.update_device(
requester.user.to_string(), device_id, body.dict()
)

View file

@ -771,9 +771,12 @@ class RegisterRestServlet(RestServlet):
body: JsonDict,
should_issue_refresh_token: bool = False,
) -> JsonDict:
user_id = await self.registration_handler.appservice_register(
user_id, appservice = await self.registration_handler.appservice_register(
username, as_token
)
if appservice.msc4190_device_management:
body["inhibit_login"] = True
return await self._create_registration_details(
user_id,
body,
@ -937,7 +940,7 @@ class RegisterAppServiceOnlyRestServlet(RestServlet):
as_token = self.auth.get_access_token_from_request(request)
user_id = await self.registration_handler.appservice_register(
user_id, _ = await self.registration_handler.appservice_register(
desired_username, as_token
)
return 200, {"user_id": user_id}

View file

@ -436,7 +436,12 @@ class SyncRestServlet(RestServlet):
)
unsigned = dict(invite.get("unsigned", {}))
invite["unsigned"] = unsigned
invited_state = list(unsigned.pop("invite_room_state", []))
invited_state = unsigned.pop("invite_room_state", [])
if not isinstance(invited_state, list):
invited_state = []
invited_state = list(invited_state)
invited_state.append(invite)
invited[room.room_id] = {"invite_state": {"events": invited_state}}
@ -476,7 +481,10 @@ class SyncRestServlet(RestServlet):
# Extract the stripped room state from the unsigned dict
# This is for clients to get a little bit of information about
# the room they've knocked on, without revealing any sensitive information
knocked_state = list(unsigned.pop("knock_room_state", []))
knocked_state = unsigned.pop("knock_room_state", [])
if not isinstance(knocked_state, list):
knocked_state = []
knocked_state = list(knocked_state)
# Append the actual knock membership event itself as well. This provides
# the client with:

View file

@ -21,6 +21,7 @@
import logging
from typing import TYPE_CHECKING
from synapse.api.urls import LoginSSORedirectURIBuilder
from synapse.http.server import (
DirectServeHtmlResource,
finish_request,
@ -49,6 +50,8 @@ class PickIdpResource(DirectServeHtmlResource):
hs.config.sso.sso_login_idp_picker_template
)
self._server_name = hs.hostname
self._public_baseurl = hs.config.server.public_baseurl
self._login_sso_redirect_url_builder = LoginSSORedirectURIBuilder(hs.config)
async def _async_render_GET(self, request: SynapseRequest) -> None:
client_redirect_url = parse_string(
@ -56,25 +59,23 @@ class PickIdpResource(DirectServeHtmlResource):
)
idp = parse_string(request, "idp", required=False)
# if we need to pick an IdP, do so
# If we need to pick an IdP, do so
if not idp:
return await self._serve_id_picker(request, client_redirect_url)
# otherwise, redirect to the IdP's redirect URI
providers = self._sso_handler.get_identity_providers()
auth_provider = providers.get(idp)
if not auth_provider:
logger.info("Unknown idp %r", idp)
self._sso_handler.render_error(
request, "unknown_idp", "Unknown identity provider ID"
# Otherwise, redirect to the login SSO redirect endpoint for the given IdP
# (which will in turn take us to the IdP's redirect URI).
#
# We could go directly to the IdP's redirect URI, but this way we ensure that
# the user goes through the same logic as the normal flow. Additionally, if a
# proxy needs to intercept the request, it only needs to intercept the one
# endpoint.
sso_login_redirect_url = (
self._login_sso_redirect_url_builder.build_login_sso_redirect_uri(
idp_id=idp, client_redirect_url=client_redirect_url
)
return
sso_url = await auth_provider.handle_redirect_request(
request, client_redirect_url.encode("utf8")
)
logger.info("Redirecting to %s", sso_url)
request.redirect(sso_url)
logger.info("Redirecting to %s", sso_login_redirect_url)
request.redirect(sso_login_redirect_url)
finish_request(request)
async def _serve_id_picker(

View file

@ -254,6 +254,7 @@ class HomeServer(metaclass=abc.ABCMeta):
"auth",
"deactivate_account",
"delayed_events",
"e2e_keys", # for the `delete_old_otks` scheduled-task handler
"message",
"pagination",
"profile",

View file

@ -243,6 +243,13 @@ class StateDeltasStore(SQLBaseStore):
(> `from_token` and <= `to_token`)
"""
# We can bail early if the `from_token` is after the `to_token`
if (
to_token is not None
and from_token is not None
and to_token.is_before_or_eq(from_token)
):
return []
if (
from_token is not None

View file

@ -407,8 +407,8 @@ class StateValues:
# Include all state events of the given type
WILDCARD: Final = "*"
# Lazy-load room membership events (include room membership events for any event
# `sender` in the timeline). We only give special meaning to this value when it's a
# `state_key`.
# `sender` or membership change target in the timeline). We only give special
# meaning to this value when it's a `state_key`.
LAZY: Final = "$LAZY"
# Substitute with the requester's user ID. Typically used by clients to get
# the user's membership.
@ -641,9 +641,10 @@ class RoomSyncConfig:
if user_id == StateValues.ME:
continue
# We're lazy-loading membership so we can just return the state we have.
# Lazy-loading means we include membership for any event `sender` in the
# timeline but since we had to auth those timeline events, we will have the
# membership state for them (including from remote senders).
# Lazy-loading means we include membership for any event `sender` or
# membership change target in the timeline but since we had to auth those
# timeline events, we will have the membership state for them (including
# from remote senders).
elif user_id == StateValues.LAZY:
continue
elif user_id == StateValues.WILDCARD:

View file

@ -174,9 +174,10 @@ class TaskScheduler:
The id of the scheduled task
"""
status = TaskStatus.SCHEDULED
start_now = False
if timestamp is None or timestamp < self._clock.time_msec():
timestamp = self._clock.time_msec()
status = TaskStatus.ACTIVE
start_now = True
task = ScheduledTask(
random_string(16),
@ -190,9 +191,11 @@ class TaskScheduler:
)
await self._store.insert_scheduled_task(task)
if status == TaskStatus.ACTIVE:
# If the task is ready to run immediately, run the scheduling algorithm now
# rather than waiting
if start_now:
if self._run_background_tasks:
await self._launch_task(task)
self._launch_scheduled_tasks()
else:
self._hs.get_replication_command_handler().send_new_active_task(task.id)
@ -300,23 +303,13 @@ class TaskScheduler:
raise Exception(f"Task {id} is currently ACTIVE and can't be deleted")
await self._store.delete_scheduled_task(id)
def launch_task_by_id(self, id: str) -> None:
"""Try launching the task with the given ID."""
# Don't bother trying to launch new tasks if we're already at capacity.
if len(self._running_tasks) >= TaskScheduler.MAX_CONCURRENT_RUNNING_TASKS:
return
def on_new_task(self, task_id: str) -> None:
"""Handle a notification that a new ready-to-run task has been added to the queue"""
# Just run the scheduler
self._launch_scheduled_tasks()
run_as_background_process("launch_task_by_id", self._launch_task_by_id, id)
async def _launch_task_by_id(self, id: str) -> None:
"""Helper async function for `launch_task_by_id`."""
task = await self.get_task(id)
if task:
await self._launch_task(task)
@wrap_as_background_process("launch_scheduled_tasks")
async def _launch_scheduled_tasks(self) -> None:
"""Retrieve and launch scheduled tasks that should be running at that time."""
def _launch_scheduled_tasks(self) -> None:
"""Retrieve and launch scheduled tasks that should be running at this time."""
# Don't bother trying to launch new tasks if we're already at capacity.
if len(self._running_tasks) >= TaskScheduler.MAX_CONCURRENT_RUNNING_TASKS:
return
@ -326,10 +319,14 @@ class TaskScheduler:
self._launching_new_tasks = True
async def inner() -> None:
try:
for task in await self.get_tasks(
statuses=[TaskStatus.ACTIVE], limit=self.MAX_CONCURRENT_RUNNING_TASKS
statuses=[TaskStatus.ACTIVE],
limit=self.MAX_CONCURRENT_RUNNING_TASKS,
):
# _launch_task will ignore tasks that we're already running, and
# will also do nothing if we're already at the maximum capacity.
await self._launch_task(task)
for task in await self.get_tasks(
statuses=[TaskStatus.SCHEDULED],
@ -341,6 +338,8 @@ class TaskScheduler:
finally:
self._launching_new_tasks = False
run_as_background_process("launch_scheduled_tasks", inner)
@wrap_as_background_process("clean_scheduled_tasks")
async def _clean_scheduled_tasks(self) -> None:
"""Clean old complete or failed jobs to avoid clutter the DB."""

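The net effect of the refactor, sketched as hypothetical usage (`hs` stands for a HomeServer instance): a replication notification no longer launches one specific task, it simply nudges the shared scheduling pass, which enforces the concurrency cap in a single place.

scheduler = hs.get_task_scheduler()
scheduler.on_new_task("some-task-id")  # just kicks the scheduling pass
# _launch_scheduled_tasks() then launches ACTIVE and ready SCHEDULED tasks,
# up to MAX_CONCURRENT_RUNNING_TASKS, as a background process.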
tests/api/test_urls.py (new file, 55 lines)
View file

@ -0,0 +1,55 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.urls import LoginSSORedirectURIBuilder
from synapse.server import HomeServer
from synapse.util import Clock
from tests.unittest import HomeserverTestCase
# a (valid) url with some annoying characters in. %3D is =, %26 is &, %2B is +
TRICKY_TEST_CLIENT_REDIRECT_URL = 'https://x?<ab c>&q"+%3D%2B"="fö%26=o"'
class LoginSSORedirectURIBuilderTestCase(HomeserverTestCase):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.login_sso_redirect_url_builder = LoginSSORedirectURIBuilder(hs.config)
def test_no_idp_id(self) -> None:
self.assertEqual(
self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
idp_id=None, client_redirect_url="http://example.com/redirect"
),
"https://test/_matrix/client/v3/login/sso/redirect?redirectUrl=http%3A%2F%2Fexample.com%2Fredirect",
)
def test_explicit_idp_id(self) -> None:
self.assertEqual(
self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
idp_id="oidc-github", client_redirect_url="http://example.com/redirect"
),
"https://test/_matrix/client/v3/login/sso/redirect/oidc-github?redirectUrl=http%3A%2F%2Fexample.com%2Fredirect",
)
def test_tricky_redirect_uri(self) -> None:
self.assertEqual(
self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
idp_id="oidc-github",
client_redirect_url=TRICKY_TEST_CLIENT_REDIRECT_URL,
),
"https://test/_matrix/client/v3/login/sso/redirect/oidc-github?redirectUrl=https%3A%2F%2Fx%3F%3Cab+c%3E%26q%22%2B%253D%252B%22%3D%22f%C3%B6%2526%3Do%22",
)

View file

@ -1165,12 +1165,23 @@ class ApplicationServicesHandlerOtkCountsTestCase(unittest.HomeserverTestCase):
self.hs.get_datastores().main.services_cache = [self._service]
# Register some appservice users
self._sender_user, self._sender_device = self.register_appservice_user(
user_id, device_id = self.register_appservice_user(
"as.sender", self._service_token
)
self._namespaced_user, self._namespaced_device = self.register_appservice_user(
# With MSC4190 enabled, there will not be a device created
# during AS registration. However MSC4190 is not enabled
# in this test. It may become the default behaviour in the
# future, in which case this test will need to be updated.
assert device_id is not None
self._sender_user = user_id
self._sender_device = device_id
user_id, device_id = self.register_appservice_user(
"_as_user1", self._service_token
)
assert device_id is not None
self._namespaced_user = user_id
self._namespaced_device = device_id
# Register a real user as well.
self._real_user = self.register_user("real.user", "meow")

View file

@ -560,9 +560,15 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
self.assertEqual(channel.code, 401, channel.json_body)
def expect_unrecognized(
self, method: str, path: str, content: Union[bytes, str, JsonDict] = ""
self,
method: str,
path: str,
content: Union[bytes, str, JsonDict] = "",
auth: bool = False,
) -> None:
channel = self.make_request(method, path, content)
channel = self.make_request(
method, path, content, access_token="token" if auth else None
)
self.assertEqual(channel.code, 404, channel.json_body)
self.assertEqual(
@ -648,8 +654,25 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
def test_device_management_endpoints_removed(self) -> None:
"""Test that device management endpoints that were removed in MSC2964 are no longer available."""
self.expect_unrecognized("POST", "/_matrix/client/v3/delete_devices")
self.expect_unrecognized("DELETE", "/_matrix/client/v3/devices/{DEVICE}")
# Because we still support these endpoints for appservices, the server checks
# the access token before returning 404
self.http_client.request = AsyncMock(
return_value=FakeResponse.json(
code=200,
payload={
"active": True,
"sub": SUBJECT,
"scope": " ".join([MATRIX_USER_SCOPE, MATRIX_DEVICE_SCOPE]),
"username": USERNAME,
},
)
)
self.expect_unrecognized("POST", "/_matrix/client/v3/delete_devices", auth=True)
self.expect_unrecognized(
"DELETE", "/_matrix/client/v3/devices/{DEVICE}", auth=True
)
def test_openid_endpoints_removed(self) -> None:
"""Test that OpenID id_token endpoints that were removed in MSC2964 are no longer available."""

View file

@ -796,6 +796,7 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
s = self.get_success(self.handler.search_users(u1, "user2", 10))
self.assertEqual(len(s["results"]), 1)
# Kept old spam checker tests without `requester_id` for backwards compatibility.
async def allow_all(user_profile: UserProfile) -> bool:
# Allow all users.
return False
@ -809,6 +810,7 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
s = self.get_success(self.handler.search_users(u1, "user2", 10))
self.assertEqual(len(s["results"]), 1)
# Kept old spam checker tests without `requester_id` for backwards compatibility.
# Configure a spam checker that filters all users.
async def block_all(user_profile: UserProfile) -> bool:
# All users are spammy.
@ -820,6 +822,40 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
s = self.get_success(self.handler.search_users(u1, "user2", 10))
self.assertEqual(len(s["results"]), 0)
async def allow_all_expects_requester_id(
user_profile: UserProfile, requester_id: str
) -> bool:
self.assertEqual(requester_id, u1)
# Allow all users.
return False
# Configure a spam checker that does not filter any users.
spam_checker = self.hs.get_module_api_callbacks().spam_checker
spam_checker._check_username_for_spam_callbacks = [
allow_all_expects_requester_id
]
# The results do not change:
# We get one search result when searching for user2 by user1.
s = self.get_success(self.handler.search_users(u1, "user2", 10))
self.assertEqual(len(s["results"]), 1)
# Configure a spam checker that filters all users.
async def block_all_expects_requester_id(
user_profile: UserProfile, requester_id: str
) -> bool:
self.assertEqual(requester_id, u1)
# All users are spammy.
return True
spam_checker._check_username_for_spam_callbacks = [
block_all_expects_requester_id
]
# User1 now gets no search results for any of the other users.
s = self.get_success(self.handler.search_users(u1, "user2", 10))
self.assertEqual(len(s["results"]), 0)
@override_config(
{
"spam_checker": {

View file

@ -90,3 +90,56 @@ class SynapseRequestTestCase(HomeserverTestCase):
# default max upload size is 50M, so it should drop on the next buffer after
# that.
self.assertEqual(sent, 50 * 1024 * 1024 + 1024)
def test_content_type_multipart(self) -> None:
"""HTTP POST requests with `content-type: multipart/form-data` should be rejected"""
self.hs.start_listening()
# find the HTTP server which is configured to listen on port 0
(port, factory, _backlog, interface) = self.reactor.tcpServers[0]
self.assertEqual(interface, "::")
self.assertEqual(port, 0)
# as a control case, first send a regular request.
# complete the connection and wire it up to a fake transport
client_address = IPv6Address("TCP", "::1", 2345)
protocol = factory.buildProtocol(client_address)
transport = StringTransport()
protocol.makeConnection(transport)
protocol.dataReceived(
b"POST / HTTP/1.1\r\n"
b"Connection: close\r\n"
b"Transfer-Encoding: chunked\r\n"
b"\r\n"
b"0\r\n"
b"\r\n"
)
while not transport.disconnecting:
self.reactor.advance(1)
# we should get a 404
self.assertRegex(transport.value().decode(), r"^HTTP/1\.1 404 ")
# now send request with content-type header
protocol = factory.buildProtocol(client_address)
transport = StringTransport()
protocol.makeConnection(transport)
protocol.dataReceived(
b"POST / HTTP/1.1\r\n"
b"Connection: close\r\n"
b"Transfer-Encoding: chunked\r\n"
b"Content-Type: multipart/form-data\r\n"
b"\r\n"
b"0\r\n"
b"\r\n"
)
while not transport.disconnecting:
self.reactor.advance(1)
# we should get a 415
self.assertRegex(transport.value().decode(), r"^HTTP/1\.1 415 ")

View file

@ -17,9 +17,11 @@
# [This file includes modifications made by New Vector Limited]
#
#
from typing import Any, List, Tuple
from typing import Any, Dict, List, Tuple
from unittest.mock import Mock
from parameterized import parameterized
from twisted.internet.defer import Deferred
from twisted.test.proto_helpers import MemoryReactor
@ -1085,3 +1087,83 @@ class HTTPPusherTests(HomeserverTestCase):
self.pump()
self.assertEqual(len(self.push_attempts), 11)
@parameterized.expand(
[
# Badge count disabled
(True, True),
(True, False),
# Badge count enabled
(False, True),
(False, False),
]
)
@override_config({"experimental_features": {"msc4076_enabled": True}})
def test_msc4076_badge_count(
self, disable_badge_count: bool, event_id_only: bool
) -> None:
# Register the user who gets notified
user_id = self.register_user("user", "pass")
access_token = self.login("user", "pass")
# Register the user who sends the message
other_user_id = self.register_user("otheruser", "pass")
other_access_token = self.login("otheruser", "pass")
# Register the pusher, with `disable_badge_count` set according to the test parameters
user_tuple = self.get_success(
self.hs.get_datastores().main.get_user_by_access_token(access_token)
)
assert user_tuple is not None
device_id = user_tuple.device_id
# Set the push data dict based on test input parameters
push_data: Dict[str, Any] = {
"url": "http://example.com/_matrix/push/v1/notify",
}
if disable_badge_count:
push_data["org.matrix.msc4076.disable_badge_count"] = True
if event_id_only:
push_data["format"] = "event_id_only"
self.get_success(
self.hs.get_pusherpool().add_or_update_pusher(
user_id=user_id,
device_id=device_id,
kind="http",
app_id="m.http",
app_display_name="HTTP Push Notifications",
device_display_name="pushy push",
pushkey="a@example.com",
lang=None,
data=push_data,
)
)
# Create a room
room = self.helper.create_room_as(user_id, tok=access_token)
# The other user joins
self.helper.join(room=room, user=other_user_id, tok=other_access_token)
# The other user sends a message
self.helper.send(room, body="Hi!", tok=other_access_token)
# Advance time a bit, so the pusher will register something has happened
self.pump()
# Exactly one push should have been attempted
self.assertEqual(len(self.push_attempts), 1)
self.assertEqual(
self.push_attempts[0][1], "http://example.com/_matrix/push/v1/notify"
)
if disable_badge_count:
# Verify that the notification DOESN'T contain a counts field
self.assertNotIn("counts", self.push_attempts[0][2]["notification"])
else:
# Ensure that the notification DOES contain a counts field
self.assertIn("counts", self.push_attempts[0][2]["notification"])
self.assertEqual(
self.push_attempts[0][2]["notification"]["counts"]["unread"], 1
)

View file

@ -5031,7 +5031,6 @@ class UserSuspensionTestCase(unittest.HomeserverTestCase):
self.store = hs.get_datastores().main
@override_config({"experimental_features": {"msc3823_account_suspension": True}})
def test_suspend_user(self) -> None:
# test that suspending user works
channel = self.make_request(

View file

@ -11,6 +11,7 @@
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import enum
import logging
from parameterized import parameterized, parameterized_class
@ -18,9 +19,9 @@ from parameterized import parameterized, parameterized_class
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
from synapse.api.constants import EventTypes, Membership
from synapse.api.constants import EventContentFields, EventTypes, JoinRules, Membership
from synapse.handlers.sliding_sync import StateValues
from synapse.rest.client import login, room, sync
from synapse.rest.client import knock, login, room, sync
from synapse.server import HomeServer
from synapse.util import Clock
@ -30,6 +31,17 @@ from tests.test_utils.event_injection import mark_event_as_partial_state
logger = logging.getLogger(__name__)
# Inherit from `str` so that they show up in the test description when we
# `@parameterized.expand(...)` the first parameter
class MembershipAction(str, enum.Enum):
INVITE = "invite"
JOIN = "join"
KNOCK = "knock"
LEAVE = "leave"
BAN = "ban"
KICK = "kick"
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
@ -52,6 +64,7 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
servlets = [
synapse.rest.admin.register_servlets,
login.register_servlets,
knock.register_servlets,
room.register_servlets,
sync.register_servlets,
]
@ -496,6 +509,153 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
)
self.assertIsNone(response_body["rooms"][room_id1].get("invite_state"))
@parameterized.expand(
[
(MembershipAction.LEAVE,),
(MembershipAction.INVITE,),
(MembershipAction.KNOCK,),
(MembershipAction.JOIN,),
(MembershipAction.BAN,),
(MembershipAction.KICK,),
]
)
def test_rooms_required_state_changed_membership_in_timeline_lazy_loading_room_members_incremental_sync(
self,
room_membership_action: str,
) -> None:
"""
On incremental sync, test `rooms.required_state` returns people relevant to the
timeline when lazy-loading room members, `["m.room.member","$LAZY"]` **including
changes to membership**.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
user3_id = self.register_user("user3", "pass")
user3_tok = self.login(user3_id, "pass")
user4_id = self.register_user("user4", "pass")
user4_tok = self.login(user4_id, "pass")
user5_id = self.register_user("user5", "pass")
user5_tok = self.login(user5_id, "pass")
room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok, is_public=True)
# If we're testing knocks, set the room to knock
if room_membership_action == MembershipAction.KNOCK:
self.helper.send_state(
room_id1,
EventTypes.JoinRules,
{"join_rule": JoinRules.KNOCK},
tok=user2_tok,
)
# Join the test users to the room
self.helper.invite(room_id1, src=user2_id, targ=user1_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)
self.helper.invite(room_id1, src=user2_id, targ=user3_id, tok=user2_tok)
self.helper.join(room_id1, user3_id, tok=user3_tok)
self.helper.invite(room_id1, src=user2_id, targ=user4_id, tok=user2_tok)
self.helper.join(room_id1, user4_id, tok=user4_tok)
if room_membership_action in (
MembershipAction.LEAVE,
MembershipAction.BAN,
MembershipAction.JOIN,
):
self.helper.invite(room_id1, src=user2_id, targ=user5_id, tok=user2_tok)
self.helper.join(room_id1, user5_id, tok=user5_tok)
# Send some messages to fill up the space
self.helper.send(room_id1, "1", tok=user2_tok)
self.helper.send(room_id1, "2", tok=user2_tok)
self.helper.send(room_id1, "3", tok=user2_tok)
# Make the Sliding Sync request with lazy loading for the room members
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
[EventTypes.Member, StateValues.LAZY],
],
"timeline_limit": 3,
}
}
}
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
# Send more timeline events into the room
self.helper.send(room_id1, "4", tok=user2_tok)
self.helper.send(room_id1, "5", tok=user4_tok)
# The third event will be our membership event concerning user5
if room_membership_action == MembershipAction.LEAVE:
# User 5 leaves
self.helper.leave(room_id1, user5_id, tok=user5_tok)
elif room_membership_action == MembershipAction.INVITE:
# User 5 is invited
self.helper.invite(room_id1, src=user2_id, targ=user5_id, tok=user2_tok)
elif room_membership_action == MembershipAction.KNOCK:
# User 5 knocks
self.helper.knock(room_id1, user5_id, tok=user5_tok)
# The admin of the room accepts the knock
self.helper.invite(room_id1, src=user2_id, targ=user5_id, tok=user2_tok)
elif room_membership_action == MembershipAction.JOIN:
# Update the display name of user5 (causing a membership change)
self.helper.send_state(
room_id1,
event_type=EventTypes.Member,
state_key=user5_id,
body={
EventContentFields.MEMBERSHIP: Membership.JOIN,
EventContentFields.MEMBERSHIP_DISPLAYNAME: "quick changer",
},
tok=user5_tok,
)
elif room_membership_action == MembershipAction.BAN:
self.helper.ban(room_id1, src=user2_id, targ=user5_id, tok=user2_tok)
elif room_membership_action == MembershipAction.KICK:
# Kick user5 from the room
self.helper.change_membership(
room=room_id1,
src=user2_id,
targ=user5_id,
tok=user2_tok,
membership=Membership.LEAVE,
extra_data={
"reason": "Bad manners",
},
)
else:
raise AssertionError(
f"Unknown room_membership_action: {room_membership_action}"
)
# Make an incremental Sliding Sync request
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)
# user2 and user4 sent messages in the last 3 timeline events, and user5 is
# the sender or target of the membership change event.
self._assertRequiredStateIncludes(
response_body["rooms"][room_id1]["required_state"],
{
# This appears because *some* membership in the room changed, causing the
# heroes to be recalculated; it's thrown in because we have it. But this
# is technically optional and not needed because we've already seen user2
# in the last sync (and their membership hasn't changed).
state_map[(EventTypes.Member, user2_id)],
# Appears because there is a message in the timeline from this user
state_map[(EventTypes.Member, user4_id)],
# Appears because there is a membership event in the timeline from this user
state_map[(EventTypes.Member, user5_id)],
},
exact=True,
)
self.assertIsNone(response_body["rooms"][room_id1].get("invite_state"))
def test_rooms_required_state_expand_lazy_loading_room_members_incremental_sync(
self,
) -> None:
@ -751,9 +911,10 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
self.assertIsNone(response_body["rooms"][room_id1].get("invite_state"))
@parameterized.expand([(Membership.LEAVE,), (Membership.BAN,)])
def test_rooms_required_state_leave_ban(self, stop_membership: str) -> None:
def test_rooms_required_state_leave_ban_initial(self, stop_membership: str) -> None:
"""
Test `rooms.required_state` should not return state past a leave/ban event.
Test `rooms.required_state` should not return state past a leave/ban event when
it's the first "initial" time the room is being sent down the connection.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
@ -788,6 +949,13 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
body={"foo": "bar"},
tok=user2_tok,
)
self.helper.send_state(
room_id1,
event_type="org.matrix.bar_state",
state_key="",
body={"bar": "bar"},
tok=user2_tok,
)
if stop_membership == Membership.LEAVE:
# User 1 leaves
@ -796,6 +964,8 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
# User 1 is banned
self.helper.ban(room_id1, src=user2_id, targ=user1_id, tok=user2_tok)
# Get the state_map before we change the state as this is the final state we
# expect User1 to be able to see
state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)
@ -808,12 +978,36 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
body={"foo": "qux"},
tok=user2_tok,
)
self.helper.send_state(
room_id1,
event_type="org.matrix.bar_state",
state_key="",
body={"bar": "qux"},
tok=user2_tok,
)
self.helper.leave(room_id1, user3_id, tok=user3_tok)
# Make an incremental Sliding Sync request
#
# Also expand the required state to include the `org.matrix.bar_state` event.
# This is just an extra complication of the test.
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
[EventTypes.Member, "*"],
["org.matrix.foo_state", ""],
["org.matrix.bar_state", ""],
],
"timeline_limit": 3,
}
}
}
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
# Only user2 and user3 sent events in the 3 events we see in the `timeline`
# We should only see the state up to the leave/ban event
self._assertRequiredStateIncludes(
response_body["rooms"][room_id1]["required_state"],
{
@ -822,6 +1016,126 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
state_map[(EventTypes.Member, user2_id)],
state_map[(EventTypes.Member, user3_id)],
state_map[("org.matrix.foo_state", "")],
state_map[("org.matrix.bar_state", "")],
},
exact=True,
)
self.assertIsNone(response_body["rooms"][room_id1].get("invite_state"))
@parameterized.expand([(Membership.LEAVE,), (Membership.BAN,)])
def test_rooms_required_state_leave_ban_incremental(
self, stop_membership: str
) -> None:
"""
Test `rooms.required_state` should not return state past a leave/ban event on
incremental sync.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
user3_id = self.register_user("user3", "pass")
user3_tok = self.login(user3_id, "pass")
room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)
self.helper.join(room_id1, user3_id, tok=user3_tok)
self.helper.send_state(
room_id1,
event_type="org.matrix.foo_state",
state_key="",
body={"foo": "bar"},
tok=user2_tok,
)
self.helper.send_state(
room_id1,
event_type="org.matrix.bar_state",
state_key="",
body={"bar": "bar"},
tok=user2_tok,
)
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
[EventTypes.Member, "*"],
["org.matrix.foo_state", ""],
],
"timeline_limit": 3,
}
}
}
_, from_token = self.do_sync(sync_body, tok=user1_tok)
if stop_membership == Membership.LEAVE:
# User 1 leaves
self.helper.leave(room_id1, user1_id, tok=user1_tok)
elif stop_membership == Membership.BAN:
# User 1 is banned
self.helper.ban(room_id1, src=user2_id, targ=user1_id, tok=user2_tok)
# Get the state_map before we change the state as this is the final state we
# expect User1 to be able to see
state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)
# Change the state after user 1 leaves
self.helper.send_state(
room_id1,
event_type="org.matrix.foo_state",
state_key="",
body={"foo": "qux"},
tok=user2_tok,
)
self.helper.send_state(
room_id1,
event_type="org.matrix.bar_state",
state_key="",
body={"bar": "qux"},
tok=user2_tok,
)
self.helper.leave(room_id1, user3_id, tok=user3_tok)
# Make an incremental Sliding Sync request
#
# Also expand the required state to include the `org.matrix.bar_state` event.
# This is just an extra complication of the test.
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
[EventTypes.Member, "*"],
["org.matrix.foo_state", ""],
["org.matrix.bar_state", ""],
],
"timeline_limit": 3,
}
}
}
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
# User1 should only see the state up to the leave/ban event
self._assertRequiredStateIncludes(
response_body["rooms"][room_id1]["required_state"],
{
# User1 should see their leave/ban membership
state_map[(EventTypes.Member, user1_id)],
state_map[("org.matrix.bar_state", "")],
# The commented out state events were already returned in the initial
# sync so we shouldn't see them again on the incremental sync. And we
# shouldn't see the state events that changed after the leave/ban event.
#
# state_map[(EventTypes.Create, "")],
# state_map[(EventTypes.Member, user2_id)],
# state_map[(EventTypes.Member, user3_id)],
# state_map[("org.matrix.foo_state", "")],
},
exact=True,
)
@ -1243,7 +1557,7 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
# Update the room name
self.helper.send_state(
room_id1, "m.room.name", {"name": "Bar"}, state_key="", tok=user1_tok
room_id1, EventTypes.Name, {"name": "Bar"}, state_key="", tok=user1_tok
)
# Update the sliding sync requests to exclude the room name again

View file

@ -24,6 +24,7 @@ from twisted.internet.defer import ensureDeferred
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.errors import NotFoundError
from synapse.appservice import ApplicationService
from synapse.rest import admin, devices, sync
from synapse.rest.client import keys, login, register
from synapse.server import HomeServer
@ -455,3 +456,183 @@ class DehydratedDeviceTestCase(unittest.HomeserverTestCase):
token,
)
self.assertEqual(channel.json_body["device_keys"], {"@mikey:test": {}})
class MSC4190AppserviceDevicesTestCase(unittest.HomeserverTestCase):
servlets = [
register.register_servlets,
devices.register_servlets,
]
def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
self.hs = self.setup_test_homeserver()
# This application service uses the new MSC4190 behaviours
self.msc4190_service = ApplicationService(
id="msc4190",
token="some_token",
hs_token="some_token",
sender="@as:example.com",
namespaces={
ApplicationService.NS_USERS: [{"regex": "@.*", "exclusive": False}]
},
msc4190_device_management=True,
)
# This application service doesn't use the new MSC4190 behaviours
self.pre_msc_service = ApplicationService(
id="regular",
token="other_token",
hs_token="other_token",
sender="@as2:example.com",
namespaces={
ApplicationService.NS_USERS: [{"regex": "@.*", "exclusive": False}]
},
msc4190_device_management=False,
)
self.hs.get_datastores().main.services_cache.append(self.msc4190_service)
self.hs.get_datastores().main.services_cache.append(self.pre_msc_service)
return self.hs
def test_PUT_device(self) -> None:
self.register_appservice_user("alice", self.msc4190_service.token)
self.register_appservice_user("bob", self.pre_msc_service.token)
channel = self.make_request(
"GET",
"/_matrix/client/v3/devices?user_id=@alice:test",
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 200, channel.json_body)
self.assertEqual(channel.json_body, {"devices": []})
channel = self.make_request(
"PUT",
"/_matrix/client/v3/devices/AABBCCDD?user_id=@alice:test",
content={"display_name": "Alice's device"},
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 201, channel.json_body)
channel = self.make_request(
"GET",
"/_matrix/client/v3/devices?user_id=@alice:test",
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 200, channel.json_body)
self.assertEqual(len(channel.json_body["devices"]), 1)
self.assertEqual(channel.json_body["devices"][0]["device_id"], "AABBCCDD")
# Doing a second time should return a 200 instead of a 201
channel = self.make_request(
"PUT",
"/_matrix/client/v3/devices/AABBCCDD?user_id=@alice:test",
content={"display_name": "Alice's device"},
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 200, channel.json_body)
# On the regular service, that API should not allow for the
# creation of new devices.
channel = self.make_request(
"PUT",
"/_matrix/client/v3/devices/AABBCCDD?user_id=@bob:test",
content={"display_name": "Bob's device"},
access_token=self.pre_msc_service.token,
)
self.assertEqual(channel.code, 404, channel.json_body)
def test_DELETE_device(self) -> None:
self.register_appservice_user("alice", self.msc4190_service.token)
# There should be no device
channel = self.make_request(
"GET",
"/_matrix/client/v3/devices?user_id=@alice:test",
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 200, channel.json_body)
self.assertEqual(channel.json_body, {"devices": []})
# Create a device
channel = self.make_request(
"PUT",
"/_matrix/client/v3/devices/AABBCCDD?user_id=@alice:test",
content={},
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 201, channel.json_body)
# There should be one device
channel = self.make_request(
"GET",
"/_matrix/client/v3/devices?user_id=@alice:test",
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 200, channel.json_body)
self.assertEqual(len(channel.json_body["devices"]), 1)
# Delete the device. UIA should not be required.
channel = self.make_request(
"DELETE",
"/_matrix/client/v3/devices/AABBCCDD?user_id=@alice:test",
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 200, channel.json_body)
# There should be no device again
channel = self.make_request(
"GET",
"/_matrix/client/v3/devices?user_id=@alice:test",
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 200, channel.json_body)
self.assertEqual(channel.json_body, {"devices": []})
def test_POST_delete_devices(self) -> None:
self.register_appservice_user("alice", self.msc4190_service.token)
# There should be no device
channel = self.make_request(
"GET",
"/_matrix/client/v3/devices?user_id=@alice:test",
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 200, channel.json_body)
self.assertEqual(channel.json_body, {"devices": []})
# Create a device
channel = self.make_request(
"PUT",
"/_matrix/client/v3/devices/AABBCCDD?user_id=@alice:test",
content={},
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 201, channel.json_body)
# There should be one device
channel = self.make_request(
"GET",
"/_matrix/client/v3/devices?user_id=@alice:test",
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 200, channel.json_body)
self.assertEqual(len(channel.json_body["devices"]), 1)
# Delete the device with delete_devices
# UIA should not be required.
channel = self.make_request(
"POST",
"/_matrix/client/v3/delete_devices?user_id=@alice:test",
content={"devices": ["AABBCCDD"]},
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 200, channel.json_body)
# There should be no device again
channel = self.make_request(
"GET",
"/_matrix/client/v3/devices?user_id=@alice:test",
access_token=self.msc4190_service.token,
)
self.assertEqual(channel.code, 200, channel.json_body)
self.assertEqual(channel.json_body, {"devices": []})

View file

@ -43,6 +43,7 @@ from twisted.web.resource import Resource
import synapse.rest.admin
from synapse.api.constants import ApprovalNoticeMedium, LoginType
from synapse.api.errors import Codes
from synapse.api.urls import LoginSSORedirectURIBuilder
from synapse.appservice import ApplicationService
from synapse.http.client import RawHeaders
from synapse.module_api import ModuleApi
@ -69,6 +70,10 @@ try:
except ImportError:
HAS_JWT = False
import logging
logger = logging.getLogger(__name__)
# synapse server name: used to populate public_baseurl in some tests
SYNAPSE_SERVER_PUBLIC_HOSTNAME = "synapse"
@ -77,7 +82,7 @@ SYNAPSE_SERVER_PUBLIC_HOSTNAME = "synapse"
# FakeChannel.isSecure() returns False, so synapse will see the requested uri as
# http://..., so using http in the public_baseurl stops Synapse trying to redirect to
# https://....
BASE_URL = "http://%s/" % (SYNAPSE_SERVER_PUBLIC_HOSTNAME,)
PUBLIC_BASEURL = "http://%s/" % (SYNAPSE_SERVER_PUBLIC_HOSTNAME,)
# CAS server used in some tests
CAS_SERVER = "https://fake.test"
@ -109,6 +114,23 @@ ADDITIONAL_LOGIN_FLOWS = [
]
def get_relative_uri_from_absolute_uri(absolute_uri: str) -> str:
"""
Extracts the path and query string from an absolute URI. Useful when
interacting with the `make_request(...)` util function, which expects a
relative path instead of a full URI.
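For example (illustrative), "https://fake.test/foo/bar?baz=1" becomes
"/foo/bar?baz=1".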
"""
parsed_uri = urllib.parse.urlparse(absolute_uri)
# Sanity check that we're working with an absolute URI
assert parsed_uri.scheme in ("http", "https")
relative_uri = parsed_uri.path
if parsed_uri.query:
relative_uri += "?" + parsed_uri.query
return relative_uri
class TestSpamChecker:
def __init__(self, config: None, api: ModuleApi):
api.register_spam_checker_callbacks(
@ -614,7 +636,7 @@ class MultiSSOTestCase(unittest.HomeserverTestCase):
def default_config(self) -> Dict[str, Any]:
config = super().default_config()
config["public_baseurl"] = BASE_URL
config["public_baseurl"] = PUBLIC_BASEURL
config["cas_config"] = {
"enabled": True,
@ -653,6 +675,9 @@ class MultiSSOTestCase(unittest.HomeserverTestCase):
]
return config
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
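# Used in the tests below to compute the expected SSO login redirect URIs.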
self.login_sso_redirect_url_builder = LoginSSORedirectURIBuilder(hs.config)
def create_resource_dict(self) -> Dict[str, Resource]:
d = super().create_resource_dict()
d.update(build_synapse_client_resource_tree(self.hs))
@ -725,6 +750,32 @@ class MultiSSOTestCase(unittest.HomeserverTestCase):
+ "&idp=cas",
shorthand=False,
)
self.assertEqual(channel.code, 302, channel.result)
location_headers = channel.headers.getRawHeaders("Location")
assert location_headers
sso_login_redirect_uri = location_headers[0]
# it should redirect us to the standard login SSO redirect flow
self.assertEqual(
sso_login_redirect_uri,
self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
idp_id="cas", client_redirect_url=TEST_CLIENT_REDIRECT_URL
),
)
# follow the redirect
channel = self.make_request(
"GET",
# We have to make this relative to be compatible with `make_request(...)`
get_relative_uri_from_absolute_uri(sso_login_redirect_uri),
# We have to set the Host header to match the `public_baseurl` so that
# the `SsoRedirectServlet` doesn't issue an extra redirect and the
# cookies are visible.
custom_headers=[
("Host", SYNAPSE_SERVER_PUBLIC_HOSTNAME),
],
)
self.assertEqual(channel.code, 302, channel.result)
location_headers = channel.headers.getRawHeaders("Location")
assert location_headers
@ -750,6 +801,32 @@ class MultiSSOTestCase(unittest.HomeserverTestCase):
+ urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL)
+ "&idp=saml",
)
self.assertEqual(channel.code, 302, channel.result)
location_headers = channel.headers.getRawHeaders("Location")
assert location_headers
sso_login_redirect_uri = location_headers[0]
# it should redirect us to the standard login SSO redirect flow
self.assertEqual(
sso_login_redirect_uri,
self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
idp_id="saml", client_redirect_url=TEST_CLIENT_REDIRECT_URL
),
)
# follow the redirect
channel = self.make_request(
"GET",
# We have to make this relative to be compatible with `make_request(...)`
get_relative_uri_from_absolute_uri(sso_login_redirect_uri),
# We have to set the Host header to match the `public_baseurl` so that
# the `SsoRedirectServlet` doesn't issue an extra redirect and the
# cookies are visible.
custom_headers=[
("Host", SYNAPSE_SERVER_PUBLIC_HOSTNAME),
],
)
self.assertEqual(channel.code, 302, channel.result)
location_headers = channel.headers.getRawHeaders("Location")
assert location_headers
@ -773,13 +850,38 @@ class MultiSSOTestCase(unittest.HomeserverTestCase):
# pick the default OIDC provider
channel = self.make_request(
"GET",
"/_synapse/client/pick_idp?redirectUrl="
+ urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL)
+ "&idp=oidc",
f"/_synapse/client/pick_idp?redirectUrl={urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL)}&idp=oidc",
)
self.assertEqual(channel.code, 302, channel.result)
location_headers = channel.headers.getRawHeaders("Location")
assert location_headers
sso_login_redirect_uri = location_headers[0]
# it should redirect us to the standard login SSO redirect flow
self.assertEqual(
sso_login_redirect_uri,
self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
idp_id="oidc", client_redirect_url=TEST_CLIENT_REDIRECT_URL
),
)
with fake_oidc_server.patch_homeserver(hs=self.hs):
# follow the redirect
channel = self.make_request(
"GET",
# We have to make this relative to be compatible with `make_request(...)`
get_relative_uri_from_absolute_uri(sso_login_redirect_uri),
# We have to set the Host header to match the `public_baseurl` so that
# the `SsoRedirectServlet` doesn't issue an extra redirect and the
# cookies are visible.
custom_headers=[
("Host", SYNAPSE_SERVER_PUBLIC_HOSTNAME),
],
)
self.assertEqual(channel.code, 302, channel.result)
location_headers = channel.headers.getRawHeaders("Location")
assert location_headers
oidc_uri = location_headers[0]
oidc_uri_path, oidc_uri_query = oidc_uri.split("?", 1)
@ -838,12 +940,38 @@ class MultiSSOTestCase(unittest.HomeserverTestCase):
self.assertEqual(chan.json_body["user_id"], "@user1:test")
def test_multi_sso_redirect_to_unknown(self) -> None:
"""An unknown IdP should cause a 400"""
"""An unknown IdP should cause a 404"""
channel = self.make_request(
"GET",
"/_synapse/client/pick_idp?redirectUrl=http://x&idp=xyz",
)
self.assertEqual(channel.code, 400, channel.result)
self.assertEqual(channel.code, 302, channel.result)
location_headers = channel.headers.getRawHeaders("Location")
assert location_headers
sso_login_redirect_uri = location_headers[0]
# it should redirect us to the standard login SSO redirect flow
self.assertEqual(
sso_login_redirect_uri,
self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
idp_id="xyz", client_redirect_url="http://x"
),
)
# follow the redirect
channel = self.make_request(
"GET",
# We have to make this relative to be compatible with `make_request(...)`
get_relative_uri_from_absolute_uri(sso_login_redirect_uri),
# We have to set the Host header to match the `public_baseurl` so that
# the `SsoRedirectServlet` doesn't issue an extra redirect and the
# cookies are visible.
custom_headers=[
("Host", SYNAPSE_SERVER_PUBLIC_HOSTNAME),
],
)
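# The unknown IdP is only rejected at this point, when the login SSO
# redirect endpoint handles the request.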
self.assertEqual(channel.code, 404, channel.result)
def test_client_idp_redirect_to_unknown(self) -> None:
"""If the client tries to pick an unknown IdP, return a 404"""
@ -1473,7 +1601,7 @@ class UsernamePickerTestCase(HomeserverTestCase):
def default_config(self) -> Dict[str, Any]:
config = super().default_config()
config["public_baseurl"] = BASE_URL
config["public_baseurl"] = PUBLIC_BASEURL
config["oidc_config"] = {}
config["oidc_config"].update(TEST_OIDC_CONFIG)

View file

@ -120,6 +120,34 @@ class RegisterRestServletTestCase(unittest.HomeserverTestCase):
self.assertEqual(channel.code, 401, msg=channel.result)
def test_POST_appservice_msc4190_enabled(self) -> None:
# With MSC4190 enabled, the registration should *not* return an access token
user_id = "@as_user_kermit:test"
as_token = "i_am_an_app_service"
appservice = ApplicationService(
as_token,
id="1234",
namespaces={"users": [{"regex": r"@as_user.*", "exclusive": True}]},
sender="@as:test",
msc4190_device_management=True,
)
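# Make the homeserver aware of the appservice by adding it to the
# datastore's cache of registered services.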
self.hs.get_datastores().main.services_cache.append(appservice)
request_data = {
"username": "as_user_kermit",
"type": APP_SERVICE_REGISTRATION_TYPE,
}
channel = self.make_request(
b"POST", self.url + b"?access_token=i_am_an_app_service", request_data
)
self.assertEqual(channel.code, 200, msg=channel.result)
det_data = {"user_id": user_id, "home_server": self.hs.hostname}
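# The response should contain at least these fields (dict-items `<=` is a subset check).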
self.assertLessEqual(det_data.items(), channel.json_body.items())
self.assertNotIn("access_token", channel.json_body)
def test_POST_bad_password(self) -> None:
request_data = {"username": "kermit", "password": 666}
channel = self.make_request(b"POST", self.url, request_data)

View file

@ -1337,17 +1337,13 @@ class RoomJoinTestCase(RoomBase):
"POST", f"/join/{self.room1}", access_token=self.tok2
)
self.assertEqual(channel.code, 403)
self.assertEqual(
channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
)
self.assertEqual(channel.json_body["errcode"], "M_USER_SUSPENDED")
channel = self.make_request(
"POST", f"/rooms/{self.room1}/join", access_token=self.tok2
)
self.assertEqual(channel.code, 403)
self.assertEqual(
channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
)
self.assertEqual(channel.json_body["errcode"], "M_USER_SUSPENDED")
def test_suspended_user_cannot_knock_on_room(self) -> None:
# set the user as suspended
@ -1361,9 +1357,7 @@ class RoomJoinTestCase(RoomBase):
shorthand=False,
)
self.assertEqual(channel.code, 403)
self.assertEqual(
channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
)
self.assertEqual(channel.json_body["errcode"], "M_USER_SUSPENDED")
def test_suspended_user_cannot_invite_to_room(self) -> None:
# set the user as suspended
@ -1376,9 +1370,7 @@ class RoomJoinTestCase(RoomBase):
access_token=self.tok1,
content={"user_id": self.user2},
)
self.assertEqual(
channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
)
self.assertEqual(channel.json_body["errcode"], "M_USER_SUSPENDED")
class RoomAppserviceTsParamTestCase(unittest.HomeserverTestCase):
@ -4011,9 +4003,7 @@ class UserSuspensionTests(unittest.HomeserverTestCase):
access_token=self.tok1,
content={"body": "hello", "msgtype": "m.text"},
)
self.assertEqual(
channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
)
self.assertEqual(channel.json_body["errcode"], "M_USER_SUSPENDED")
def test_suspended_user_cannot_change_profile_data(self) -> None:
# set the user as suspended
@ -4026,9 +4016,7 @@ class UserSuspensionTests(unittest.HomeserverTestCase):
content={"avatar_url": "mxc://matrix.org/wefh34uihSDRGhw34"},
shorthand=False,
)
self.assertEqual(
channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
)
self.assertEqual(channel.json_body["errcode"], "M_USER_SUSPENDED")
channel2 = self.make_request(
"PUT",
@ -4037,9 +4025,7 @@ class UserSuspensionTests(unittest.HomeserverTestCase):
content={"displayname": "something offensive"},
shorthand=False,
)
self.assertEqual(
channel2.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
)
self.assertEqual(channel2.json_body["errcode"], "M_USER_SUSPENDED")
def test_suspended_user_cannot_redact_messages_other_than_their_own(self) -> None:
# first user sends message
@ -4073,9 +4059,7 @@ class UserSuspensionTests(unittest.HomeserverTestCase):
content={"reason": "bogus"},
shorthand=False,
)
self.assertEqual(
channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
)
self.assertEqual(channel.json_body["errcode"], "M_USER_SUSPENDED")
# but can redact their own
channel = self.make_request(

View file

@ -889,7 +889,7 @@ class RestHelper:
"GET",
uri,
)
assert channel.code == 302
assert channel.code == 302, f"Expected 302 for {uri}, got {channel.code}"
# hit the redirect url again with the right Host header, which should now issue
# a cookie and redirect to the SSO provider.
@ -901,17 +901,18 @@ class RestHelper:
location = get_location(channel)
parts = urllib.parse.urlsplit(location)
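# Strip the scheme and netloc, leaving a relative URI that make_request(...) accepts.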
next_uri = urllib.parse.urlunsplit(("", "") + parts[2:])
channel = make_request(
self.reactor,
self.site,
"GET",
urllib.parse.urlunsplit(("", "") + parts[2:]),
next_uri,
custom_headers=[
("Host", parts[1]),
],
)
assert channel.code == 302
assert channel.code == 302, f"Expected 302 for {next_uri}, got {channel.code}"
channel.extract_cookies(cookies)
return get_location(channel)

View file

@ -781,7 +781,7 @@ class HomeserverTestCase(TestCase):
self,
username: str,
appservice_token: str,
) -> Tuple[str, str]:
) -> Tuple[str, Optional[str]]:
"""Register an appservice user as an application service.
Requires the client-facing registration API be registered.
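The returned device ID may be None: with MSC4190 device management
enabled, no device is created on registration.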
@ -805,7 +805,7 @@ class HomeserverTestCase(TestCase):
access_token=appservice_token,
)
self.assertEqual(channel.code, 200, channel.json_body)
return channel.json_body["user_id"], channel.json_body["device_id"]
return channel.json_body["user_id"], channel.json_body.get("device_id")
def login(
self,

View file

@ -18,8 +18,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
from typing import Optional, Tuple
from typing import List, Optional, Tuple
from twisted.internet.task import deferLater
from twisted.test.proto_helpers import MemoryReactor
@ -104,33 +103,43 @@ class TestTaskScheduler(HomeserverTestCase):
)
)
# This is to give the time to the active tasks to finish
self.reactor.advance(1)
# Check that only MAX_CONCURRENT_RUNNING_TASKS tasks has run and that one
# is still scheduled.
tasks = [
def get_tasks_of_status(status: TaskStatus) -> List[ScheduledTask]:
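# Fetch each scheduled task by ID and keep only those in the given status.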
tasks = (
self.get_success(self.task_scheduler.get_task(task_id))
for task_id in task_ids
]
)
return [t for t in tasks if t is not None and t.status == status]
# At this point, there should be MAX_CONCURRENT_RUNNING_TASKS active tasks and
# one scheduled task.
self.assertEquals(
len(
[t for t in tasks if t is not None and t.status == TaskStatus.COMPLETE]
),
len(get_tasks_of_status(TaskStatus.ACTIVE)),
TaskScheduler.MAX_CONCURRENT_RUNNING_TASKS,
)
self.assertEquals(
len(get_tasks_of_status(TaskStatus.SCHEDULED)),
1,
)
scheduled_tasks = [
t for t in tasks if t is not None and t.status == TaskStatus.ACTIVE
]
self.assertEquals(len(scheduled_tasks), 1)
# We need to wait for the next run of the scheduler loop
self.reactor.advance(TaskScheduler.SCHEDULE_INTERVAL_MS / 1000)
# Give the active tasks time to finish
self.reactor.advance(1)
# Check that the last task has been properly executed after the next scheduler loop run
# Check that MAX_CONCURRENT_RUNNING_TASKS tasks have run and that one
# is still scheduled.
self.assertEquals(
len(get_tasks_of_status(TaskStatus.COMPLETE)),
TaskScheduler.MAX_CONCURRENT_RUNNING_TASKS,
)
scheduled_tasks = get_tasks_of_status(TaskStatus.SCHEDULED)
self.assertEquals(len(scheduled_tasks), 1)
# The scheduled task should start 0.1s after the first of the active tasks
# finishes
self.reactor.advance(0.1)
self.assertEquals(len(get_tasks_of_status(TaskStatus.ACTIVE)), 1)
# ... and should finally complete after another second
self.reactor.advance(1)
prev_scheduled_task = self.get_success(
self.task_scheduler.get_task(scheduled_tasks[0].id)
)