Mirror of https://github.com/element-hq/synapse.git (synced 2025-03-13 19:28:44 +00:00)

commit ca87366454
Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes

39 changed files with 1014 additions and 83 deletions
.github/workflows/release-artifacts.yml (vendored) | 13

@@ -5,7 +5,7 @@ name: Build release artifacts
 on:
   # we build on PRs and develop to (hopefully) get early warning
   # of things breaking (but only build one set of debs). PRs skip
-  # building wheels on macOS & ARM.
+  # building wheels on ARM.
   pull_request:
   push:
     branches: ["develop", "release-*"]

@@ -111,7 +111,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        os: [ubuntu-22.04, macos-13]
+        os: [ubuntu-22.04]
         arch: [x86_64, aarch64]
         # is_pr is a flag used to exclude certain jobs from the matrix on PRs.
         # It is not read by the rest of the workflow.

@@ -119,12 +119,6 @@ jobs:
           - ${{ startsWith(github.ref, 'refs/pull/') }}

         exclude:
-          # Don't build macos wheels on PR CI.
-          - is_pr: true
-            os: "macos-13"
-          # Don't build aarch64 wheels on mac.
-          - os: "macos-13"
-            arch: aarch64
           # Don't build aarch64 wheels on PR CI.
           - is_pr: true
             arch: aarch64

@@ -212,7 +206,8 @@ jobs:
           mv debs*/* debs/
           tar -cvJf debs.tar.xz debs
       - name: Attach to release
-        uses: softprops/action-gh-release@v2
+        # Pinned to work around https://github.com/softprops/action-gh-release/issues/445
+        uses: softprops/action-gh-release@v2.0.5
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         with:
CHANGES.md | 50

@@ -1,3 +1,53 @@
+# Synapse 1.120.2 (2024-12-03)
+
+This version disables the building of wheels for macOS.
+It is functionally identical to 1.120.1, which contains multiple security fixes.
+If you are already using 1.120.1, there is no need to upgrade to this version.
+
+
+# Synapse 1.120.1 (2024-12-03)
+
+This patch release fixes multiple security vulnerabilities, some affecting all prior versions of Synapse. Server administrators are encouraged to update Synapse as soon as possible. We are not aware of these vulnerabilities being exploited in the wild.
+
+Administrators who are unable to update Synapse may use the workarounds described in the linked GitHub Security Advisories below.
+
+### Security advisory
+
+The following issues are fixed in 1.120.1.
+
+- [GHSA-rfq8-j7rh-8hf2](https://github.com/element-hq/synapse/security/advisories/GHSA-rfq8-j7rh-8hf2) / [CVE-2024-52805](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-52805): **Unsupported content types can lead to memory exhaustion**
+
+  Synapse instances which have a high `max_upload_size` and which don't have a reverse proxy in front of them that would otherwise limit upload size are affected.
+
+  Fixed by [4b7154c58501b4bf5e1c2d6c11ebef96529f2fdf](https://github.com/element-hq/synapse/commit/4b7154c58501b4bf5e1c2d6c11ebef96529f2fdf).
+
+- [GHSA-f3r3-h2mq-hx2h](https://github.com/element-hq/synapse/security/advisories/GHSA-f3r3-h2mq-hx2h) / [CVE-2024-52815](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-52815): **Malicious invites via federation can break a user's sync**
+
+  Fixed by [d82e1ed357b7ee21dff83d06cba7a67840cfd464](https://github.com/element-hq/synapse/commit/d82e1ed357b7ee21dff83d06cba7a67840cfd464).
+
+- [GHSA-vp6v-whfm-rv3g](https://github.com/element-hq/synapse/security/advisories/GHSA-vp6v-whfm-rv3g) / [CVE-2024-53863](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-53863): **Synapse can be forced to thumbnail unexpected file formats, invoking potentially untrustworthy decoders**
+
+  Synapse instances can disable dynamic thumbnailing by setting `dynamic_thumbnails` to `false` in the configuration file.
+
+  Fixed by [b64a4e5fbbbf119b6c65aedf0d999b4237d55503](https://github.com/element-hq/synapse/commit/b64a4e5fbbbf119b6c65aedf0d999b4237d55503).
+
+- [GHSA-56w4-5538-8v8h](https://github.com/element-hq/synapse/security/advisories/GHSA-56w4-5538-8v8h) / [CVE-2024-53867](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-53867): **The Sliding Sync feature on Synapse versions between 1.113.0rc1 and 1.120.0 can leak partial room state changes to users no longer in a room**
+
+  Non-state events, like messages, are unaffected.
+
+  Synapse instances can disable the Sliding Sync feature by setting `experimental_features.msc3575_enabled` to `false` in the configuration file.
+
+  Fixed by [4daa533e82f345ce87b9495d31781af570ba3ead](https://github.com/element-hq/synapse/commit/4daa533e82f345ce87b9495d31781af570ba3ead).
+
+See the advisories for more details. If you have any questions, email [security at element.io](mailto:security@element.io).
+
+### Bugfixes
+
+- Fix release process to not create duplicate releases. ([\#17970](https://github.com/element-hq/synapse/issues/17970))
+
+
 # Synapse 1.120.0 (2024-11-26)

 ### Bugfixes
Cargo.lock (generated) | 4

@@ -61,9 +61,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"

 [[package]]
 name = "bytes"
-version = "1.8.0"
+version = "1.9.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9ac0150caa2ae65ca5bd83f25c7de183dea78d4d366469f148435e2acfbad0da"
+checksum = "325918d6fe32f23b19878fe4b34794ae41fc19ddbe53b10571a4874d44ffd39b"

 [[package]]
 name = "cfg-if"
changelog.d/17947.feature (new file) | 1

@@ -0,0 +1 @@
+Update [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync to include invite, ban, kick, targets when `$LAZY`-loading room members.

changelog.d/17965.feature (new file) | 1

@@ -0,0 +1 @@
+Use stable `M_USER_LOCKED` error code for locked accounts, as per [Matrix 1.12](https://spec.matrix.org/v1.12/client-server-api/#account-locking).

changelog.d/17970.bugfix (new file) | 1

@@ -0,0 +1 @@
+Fix release process to not create duplicate releases.

changelog.d/17972.misc (new file) | 1

@@ -0,0 +1 @@
+Consolidate SSO redirects through `/_matrix/client/v3/login/sso/redirect(/{idpId})`.

changelog.d/17975.feature (new file) | 1

@@ -0,0 +1 @@
+[MSC4076](https://github.com/matrix-org/matrix-spec-proposals/pull/4076): Add `disable_badge_count` to pusher configuration.

changelog.d/17986.misc (new file) | 1

@@ -0,0 +1 @@
+Fix Docker and Complement config to be able to use `public_baseurl`.
debian/changelog (vendored) | 12

@@ -1,3 +1,15 @@
+matrix-synapse-py3 (1.120.2) stable; urgency=medium
+
+  * New synapse release 1.120.2.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 03 Dec 2024 15:43:37 +0000
+
+matrix-synapse-py3 (1.120.1) stable; urgency=medium
+
+  * New synapse release 1.120.1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 03 Dec 2024 09:07:57 +0000
+
 matrix-synapse-py3 (1.120.0) stable; urgency=medium

   * New synapse release 1.120.0.
@@ -7,6 +7,7 @@
 #}

 ## Server ##
+public_baseurl: http://127.0.0.1:8008/
 report_stats: False
 trusted_key_servers: []
 enable_registration: true
@@ -42,6 +42,6 @@ server {
     {% endif %}
     proxy_set_header X-Forwarded-For $remote_addr;
     proxy_set_header X-Forwarded-Proto $scheme;
-    proxy_set_header Host $host;
+    proxy_set_header Host $host:$server_port;
   }
 }
poetry.lock (generated) | 48

@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 1.8.4 and should not be changed by hand.
+# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.

 [[package]]
 name = "annotated-types"

@@ -1917,13 +1917,13 @@ test = ["pretend", "pytest (>=3.0.1)", "pytest-rerunfailures"]

 [[package]]
 name = "pysaml2"
-version = "7.3.1"
+version = "7.5.0"
 description = "Python implementation of SAML Version 2 Standard"
 optional = true
-python-versions = ">=3.6.2,<4.0.0"
+python-versions = ">=3.9,<4.0"
 files = [
-    {file = "pysaml2-7.3.1-py3-none-any.whl", hash = "sha256:2cc66e7a371d3f5ff9601f0ed93b5276cca816fce82bb38447d5a0651f2f5193"},
-    {file = "pysaml2-7.3.1.tar.gz", hash = "sha256:eab22d187c6dd7707c58b5bb1688f9b8e816427667fc99d77f54399e15cd0a0a"},
+    {file = "pysaml2-7.5.0-py3-none-any.whl", hash = "sha256:bc6627cc344476a83c757f440a73fda1369f13b6fda1b4e16bca63ffbabb5318"},
+    {file = "pysaml2-7.5.0.tar.gz", hash = "sha256:f36871d4e5ee857c6b85532e942550d2cf90ea4ee943d75eb681044bbc4f54f7"},
 ]

 [package.dependencies]

@@ -1933,7 +1933,7 @@ pyopenssl = "*"
 python-dateutil = "*"
 pytz = "*"
 requests = ">=2,<3"
-xmlschema = ">=1.2.1"
+xmlschema = ">=2,<3"

 [package.extras]
 s2repoze = ["paste", "repoze.who", "zope.interface"]

@@ -2514,13 +2514,43 @@ twisted = ["twisted"]

 [[package]]
 name = "tomli"
-version = "2.1.0"
+version = "2.2.1"
 description = "A lil' TOML parser"
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "tomli-2.1.0-py3-none-any.whl", hash = "sha256:a5c57c3d1c56f5ccdf89f6523458f60ef716e210fc47c4cfb188c5ba473e0391"},
-    {file = "tomli-2.1.0.tar.gz", hash = "sha256:3f646cae2aec94e17d04973e4249548320197cfabdf130015d023de4b74d8ab8"},
+    {file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"},
+    {file = "tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6"},
+    {file = "tomli-2.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ece47d672db52ac607a3d9599a9d48dcb2f2f735c6c2d1f34130085bb12b112a"},
+    {file = "tomli-2.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6972ca9c9cc9f0acaa56a8ca1ff51e7af152a9f87fb64623e31d5c83700080ee"},
+    {file = "tomli-2.2.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c954d2250168d28797dd4e3ac5cf812a406cd5a92674ee4c8f123c889786aa8e"},
+    {file = "tomli-2.2.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8dd28b3e155b80f4d54beb40a441d366adcfe740969820caf156c019fb5c7ec4"},
+    {file = "tomli-2.2.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:e59e304978767a54663af13c07b3d1af22ddee3bb2fb0618ca1593e4f593a106"},
+    {file = "tomli-2.2.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:33580bccab0338d00994d7f16f4c4ec25b776af3ffaac1ed74e0b3fc95e885a8"},
+    {file = "tomli-2.2.1-cp311-cp311-win32.whl", hash = "sha256:465af0e0875402f1d226519c9904f37254b3045fc5084697cefb9bdde1ff99ff"},
+    {file = "tomli-2.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:2d0f2fdd22b02c6d81637a3c95f8cd77f995846af7414c5c4b8d0545afa1bc4b"},
+    {file = "tomli-2.2.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:4a8f6e44de52d5e6c657c9fe83b562f5f4256d8ebbfe4ff922c495620a7f6cea"},
+    {file = "tomli-2.2.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8d57ca8095a641b8237d5b079147646153d22552f1c637fd3ba7f4b0b29167a8"},
+    {file = "tomli-2.2.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e340144ad7ae1533cb897d406382b4b6fede8890a03738ff1683af800d54192"},
+    {file = "tomli-2.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db2b95f9de79181805df90bedc5a5ab4c165e6ec3fe99f970d0e302f384ad222"},
+    {file = "tomli-2.2.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40741994320b232529c802f8bc86da4e1aa9f413db394617b9a256ae0f9a7f77"},
+    {file = "tomli-2.2.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:400e720fe168c0f8521520190686ef8ef033fb19fc493da09779e592861b78c6"},
+    {file = "tomli-2.2.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:02abe224de6ae62c19f090f68da4e27b10af2b93213d36cf44e6e1c5abd19fdd"},
+    {file = "tomli-2.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:b82ebccc8c8a36f2094e969560a1b836758481f3dc360ce9a3277c65f374285e"},
+    {file = "tomli-2.2.1-cp312-cp312-win32.whl", hash = "sha256:889f80ef92701b9dbb224e49ec87c645ce5df3fa2cc548664eb8a25e03127a98"},
+    {file = "tomli-2.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:7fc04e92e1d624a4a63c76474610238576942d6b8950a2d7f908a340494e67e4"},
+    {file = "tomli-2.2.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f4039b9cbc3048b2416cc57ab3bda989a6fcf9b36cf8937f01a6e731b64f80d7"},
+    {file = "tomli-2.2.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:286f0ca2ffeeb5b9bd4fcc8d6c330534323ec51b2f52da063b11c502da16f30c"},
+    {file = "tomli-2.2.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a92ef1a44547e894e2a17d24e7557a5e85a9e1d0048b0b5e7541f76c5032cb13"},
+    {file = "tomli-2.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9316dc65bed1684c9a98ee68759ceaed29d229e985297003e494aa825ebb0281"},
+    {file = "tomli-2.2.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e85e99945e688e32d5a35c1ff38ed0b3f41f43fad8df0bdf79f72b2ba7bc5272"},
+    {file = "tomli-2.2.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ac065718db92ca818f8d6141b5f66369833d4a80a9d74435a268c52bdfa73140"},
+    {file = "tomli-2.2.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:d920f33822747519673ee656a4b6ac33e382eca9d331c87770faa3eef562aeb2"},
+    {file = "tomli-2.2.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a198f10c4d1b1375d7687bc25294306e551bf1abfa4eace6650070a5c1ae2744"},
+    {file = "tomli-2.2.1-cp313-cp313-win32.whl", hash = "sha256:d3f5614314d758649ab2ab3a62d4f2004c825922f9e370b29416484086b264ec"},
+    {file = "tomli-2.2.1-cp313-cp313-win_amd64.whl", hash = "sha256:a38aa0308e754b0e3c67e344754dff64999ff9b513e691d0e786265c93583c69"},
+    {file = "tomli-2.2.1-py3-none-any.whl", hash = "sha256:cb55c73c5f4408779d0cf3eef9f762b9c9f147a77de7b258bef0a5628adc85cc"},
+    {file = "tomli-2.2.1.tar.gz", hash = "sha256:cd45e1dc79c835ce60f7404ec8119f2eb06d38b1deba146f07ced3bbc44505ff"},
 ]

 [[package]]
@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"

 [tool.poetry]
 name = "matrix-synapse"
-version = "1.120.0"
+version = "1.120.2"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "AGPL-3.0-or-later"
@@ -195,6 +195,10 @@ if [ -z "$skip_docker_build" ]; then
   # Build the unified Complement image (from the worker Synapse image we just built).
   echo_if_github "::group::Build Docker image: complement/Dockerfile"
   $CONTAINER_RUNTIME build -t complement-synapse \
+    `# This is the tag we end up pushing to the registry (see` \
+    `# .github/workflows/push_complement_image.yml) so let's just label it now` \
+    `# so people can reference it by the same name locally.` \
+    -t ghcr.io/element-hq/synapse/complement-synapse \
     -f "docker/complement/Dockerfile" "docker/complement"
   echo_if_github "::endgroup::"
@@ -231,6 +231,8 @@ class EventContentFields:
     ROOM_NAME: Final = "name"

     MEMBERSHIP: Final = "membership"
+    MEMBERSHIP_DISPLAYNAME: Final = "displayname"
+    MEMBERSHIP_AVATAR_URL: Final = "avatar_url"

     # Used in m.room.guest_access events.
     GUEST_ACCESS: Final = "guest_access"
@@ -87,8 +87,7 @@ class Codes(str, Enum):
     WEAK_PASSWORD = "M_WEAK_PASSWORD"
     INVALID_SIGNATURE = "M_INVALID_SIGNATURE"
     USER_DEACTIVATED = "M_USER_DEACTIVATED"
-    # USER_LOCKED = "M_USER_LOCKED"
-    USER_LOCKED = "ORG_MATRIX_MSC3939_USER_LOCKED"
+    USER_LOCKED = "M_USER_LOCKED"
     NOT_YET_UPLOADED = "M_NOT_YET_UPLOADED"
     CANNOT_OVERWRITE_MEDIA = "M_CANNOT_OVERWRITE_MEDIA"
@@ -23,7 +23,8 @@

 import hmac
 from hashlib import sha256
-from urllib.parse import urlencode
+from typing import Optional
+from urllib.parse import urlencode, urljoin

 from synapse.config import ConfigError
 from synapse.config.homeserver import HomeServerConfig

@@ -66,3 +67,42 @@ class ConsentURIBuilder:
             urlencode({"u": user_id, "h": mac}),
         )
         return consent_uri
+
+
+class LoginSSORedirectURIBuilder:
+    def __init__(self, hs_config: HomeServerConfig):
+        self._public_baseurl = hs_config.server.public_baseurl
+
+    def build_login_sso_redirect_uri(
+        self, *, idp_id: Optional[str], client_redirect_url: str
+    ) -> str:
+        """Build a `/login/sso/redirect` URI for the given identity provider.
+
+        Builds `/_matrix/client/v3/login/sso/redirect/{idpId}?redirectUrl=xxx` when `idp_id` is specified.
+        Otherwise, builds `/_matrix/client/v3/login/sso/redirect?redirectUrl=xxx` when `idp_id` is `None`.
+
+        Args:
+            idp_id: Optional ID of the identity provider
+            client_redirect_url: URL to redirect the user to after login
+
+        Returns:
+            The URI to follow when choosing a specific identity provider.
+        """
+        base_url = urljoin(
+            self._public_baseurl,
+            f"{CLIENT_API_PREFIX}/v3/login/sso/redirect",
+        )
+
+        serialized_query_parameters = urlencode({"redirectUrl": client_redirect_url})
+
+        if idp_id:
+            resultant_url = urljoin(
+                # We have to add a trailing slash to the base URL to ensure that the
+                # last path segment is not stripped away when joining with another path.
+                f"{base_url}/",
+                f"{idp_id}?{serialized_query_parameters}",
+            )
+        else:
+            resultant_url = f"{base_url}?{serialized_query_parameters}"
+
+        return resultant_url
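The trailing-slash comment in the diff above reflects a genuine `urljoin` pitfall worth seeing in isolation. A minimal standalone sketch (the base URL here is a hypothetical example, not taken from the diff):

```python
from urllib.parse import urljoin

# Hypothetical homeserver endpoint, for illustration only.
base = "https://matrix.example.com/_matrix/client/v3/login/sso/redirect"

# Without a trailing slash, urljoin REPLACES the last path segment:
print(urljoin(base, "oidc-github"))
# https://matrix.example.com/_matrix/client/v3/login/sso/oidc-github

# With a trailing slash, the new segment is appended instead:
print(urljoin(base + "/", "oidc-github"))
# https://matrix.example.com/_matrix/client/v3/login/sso/redirect/oidc-github
```

This is why the builder formats the base as `f"{base_url}/"` before joining the `idp_id`.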
@@ -20,7 +20,7 @@
 #
 #

-from typing import Any, List
+from typing import Any, List, Optional

 from synapse.config.sso import SsoAttributeRequirement
 from synapse.types import JsonDict

@@ -46,7 +46,9 @@ class CasConfig(Config):

         # TODO Update this to a _synapse URL.
         public_baseurl = self.root.server.public_baseurl
-        self.cas_service_url = public_baseurl + "_matrix/client/r0/login/cas/ticket"
+        self.cas_service_url: Optional[str] = (
+            public_baseurl + "_matrix/client/r0/login/cas/ticket"
+        )

         self.cas_protocol_version = cas_config.get("protocol_version")
         if (
@@ -448,3 +448,6 @@ class ExperimentalConfig(Config):

         # MSC4222: Adding `state_after` to sync v2
         self.msc4222_enabled: bool = experimental.get("msc4222_enabled", False)
+
+        # MSC4076: Add `disable_badge_count` to pusher configuration
+        self.msc4076_enabled: bool = experimental.get("msc4076_enabled", False)
@@ -360,5 +360,6 @@ def setup_logging(
         "Licensed under the AGPL 3.0 license. Website: https://github.com/element-hq/synapse"
     )
     logging.info("Server hostname: %s", config.server.server_name)
+    logging.info("Public Base URL: %s", config.server.public_baseurl)
     logging.info("Instance name: %s", hs.get_instance_name())
     logging.info("Twisted reactor: %s", type(reactor).__name__)
@@ -332,8 +332,14 @@ class ServerConfig(Config):
             logger.info("Using default public_baseurl %s", public_baseurl)
         else:
             self.serve_client_wellknown = True
+            # Ensure that public_baseurl ends with a trailing slash
+            if public_baseurl[-1] != "/":
+                public_baseurl += "/"

+        # Scrutinize user-provided config
+        if not isinstance(public_baseurl, str):
+            raise ConfigError("Must be a string", ("public_baseurl",))

         self.public_baseurl = public_baseurl

         # check that public_baseurl is valid
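The normalization in the hunk above is small but easy to get wrong. A standalone sketch of the same two checks (`normalize_public_baseurl` is an illustrative helper, not a Synapse function, and it raises `ValueError` where Synapse would raise `ConfigError`):

```python
def normalize_public_baseurl(public_baseurl: object) -> str:
    # Reject non-string values before doing anything else with them.
    if not isinstance(public_baseurl, str):
        raise ValueError("public_baseurl must be a string")
    # Ensure a trailing slash so later path concatenation is safe.
    if public_baseurl[-1] != "/":
        public_baseurl += "/"
    return public_baseurl

print(normalize_public_baseurl("https://matrix.example.com"))
# https://matrix.example.com/
```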
@@ -509,6 +509,9 @@ class FederationV2InviteServlet(BaseFederationServerServlet):
         event = content["event"]
         invite_room_state = content.get("invite_room_state", [])

+        if not isinstance(invite_room_state, list):
+            invite_room_state = []
+
         # Synapse expects invite_room_state to be in unsigned, as it is in v1
         # API
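This `isinstance` guard, repeated in several hunks below for `invite_room_state` and `knock_room_state`, boils down to one rule: a federation peer controls this field, so anything that is not a list is discarded rather than trusted. A minimal sketch of the pattern (`sanitize_stripped_state` is an illustrative helper, not part of Synapse):

```python
from typing import Any, List

def sanitize_stripped_state(value: Any) -> List[Any]:
    # Untrusted input: keep it only if it is actually a list,
    # otherwise fall back to an empty list.
    return value if isinstance(value, list) else []

print(sanitize_stripped_state([{"type": "m.room.name"}]))  # kept as-is
print(sanitize_stripped_state({"not": "a list"}))          # []
```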
@@ -880,6 +880,9 @@ class FederationHandler:
         if stripped_room_state is None:
             raise KeyError("Missing 'knock_room_state' field in send_knock response")

+        if not isinstance(stripped_room_state, list):
+            raise TypeError("'knock_room_state' has wrong type")
+
         event.unsigned["knock_room_state"] = stripped_room_state

         context = EventContext.for_outlier(self._storage_controllers)
@@ -39,6 +39,7 @@ from synapse.logging.opentracing import (
     trace,
 )
 from synapse.storage.databases.main.roommember import extract_heroes_from_room_summary
+from synapse.storage.databases.main.state_deltas import StateDelta
 from synapse.storage.databases.main.stream import PaginateFunction
 from synapse.storage.roommember import (
     MemberSummary,

@@ -48,6 +49,7 @@ from synapse.types import (
     MutableStateMap,
     PersistedEventPosition,
     Requester,
+    RoomStreamToken,
     SlidingSyncStreamToken,
     StateMap,
     StrCollection,

@@ -470,6 +472,64 @@ class SlidingSyncHandler:

         return state_map

+    @trace
+    async def get_current_state_deltas_for_room(
+        self,
+        room_id: str,
+        room_membership_for_user_at_to_token: RoomsForUserType,
+        from_token: RoomStreamToken,
+        to_token: RoomStreamToken,
+    ) -> List[StateDelta]:
+        """
+        Get the state deltas between two tokens, taking into account the user's
+        membership. If the user is LEAVE/BAN, we will only get the state deltas up to
+        their LEAVE/BAN event (inclusive).
+
+        (> `from_token` and <= `to_token`)
+        """
+        membership = room_membership_for_user_at_to_token.membership
+        # We don't know how to handle `membership` values other than these. The
+        # code below would need to be updated.
+        assert membership in (
+            Membership.JOIN,
+            Membership.INVITE,
+            Membership.KNOCK,
+            Membership.LEAVE,
+            Membership.BAN,
+        )
+
+        # People shouldn't see past their leave/ban event
+        if membership in (
+            Membership.LEAVE,
+            Membership.BAN,
+        ):
+            to_bound = (
+                room_membership_for_user_at_to_token.event_pos.to_room_stream_token()
+            )
+        # If we are participating in the room, we can get the latest current state in
+        # the room
+        elif membership == Membership.JOIN:
+            to_bound = to_token
+        # We can only rely on the stripped state included in the invite/knock event
+        # itself, so there will never be any state deltas to send down.
+        elif membership in (Membership.INVITE, Membership.KNOCK):
+            return []
+        else:
+            # We don't know how to handle this type of membership yet
+            #
+            # FIXME: We should use `assert_never` here but for some reason
+            # the exhaustive matching doesn't recognize the `Never` here.
+            # assert_never(membership)
+            raise AssertionError(
+                f"Unexpected membership {membership} that we don't know how to handle yet"
+            )
+
+        return await self.store.get_current_state_deltas_for_room(
+            room_id=room_id,
+            from_token=from_token,
+            to_token=to_bound,
+        )
+
     @trace
     async def get_room_sync_data(
         self,

@@ -755,13 +815,19 @@ class SlidingSyncHandler:

         stripped_state = []
         if invite_or_knock_event.membership == Membership.INVITE:
-            stripped_state.extend(
-                invite_or_knock_event.unsigned.get("invite_room_state", [])
-            )
+            invite_state = invite_or_knock_event.unsigned.get(
+                "invite_room_state", []
+            )
+            if not isinstance(invite_state, list):
+                invite_state = []
+
+            stripped_state.extend(invite_state)
         elif invite_or_knock_event.membership == Membership.KNOCK:
-            stripped_state.extend(
-                invite_or_knock_event.unsigned.get("knock_room_state", [])
-            )
+            knock_state = invite_or_knock_event.unsigned.get("knock_room_state", [])
+            if not isinstance(knock_state, list):
+                knock_state = []
+
+            stripped_state.extend(knock_state)

         stripped_state.append(strip_event(invite_or_knock_event))

@@ -790,8 +856,9 @@ class SlidingSyncHandler:
         # TODO: Limit the number of state events we're about to send down
         # the room; if it's too many, we should change this to an
         # `initial=True`?
-        deltas = await self.store.get_current_state_deltas_for_room(
+        deltas = await self.get_current_state_deltas_for_room(
             room_id=room_id,
+            room_membership_for_user_at_to_token=room_membership_for_user_at_to_token,
             from_token=from_bound,
             to_token=to_token.room_key,
         )

@@ -955,15 +1022,21 @@ class SlidingSyncHandler:
             and state_key == StateValues.LAZY
         ):
             lazy_load_room_members = True

             # Everyone in the timeline is relevant
-            #
-            # FIXME: We probably also care about invite, ban, kick, targets, etc
-            # but the spec only mentions "senders".
             timeline_membership: Set[str] = set()
             if timeline_events is not None:
                 for timeline_event in timeline_events:
+                    # Anyone who sent a message is relevant
                     timeline_membership.add(timeline_event.sender)

+                    # We also care about invite, ban, kick, targets,
+                    # etc.
+                    if timeline_event.type == EventTypes.Member:
+                        timeline_membership.add(
+                            timeline_event.state_key
+                        )
+
             # Update the required state filter so we pick up the new
             # membership
             for user_id in timeline_membership:
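The membership-dependent bounding in `get_current_state_deltas_for_room` above is the heart of the CVE-2024-53867 fix: leavers and banned users must never see state past their own leave/ban event. A standalone sketch of just that decision, with stream positions reduced to plain integers (all names here are illustrative, not Synapse APIs):

```python
from typing import Optional

def delta_upper_bound(
    membership: str, leave_or_ban_pos: int, sync_to_token: int
) -> Optional[int]:
    # Joined users see deltas all the way up to the sync token.
    if membership == "join":
        return sync_to_token
    # Leavers/banned users are capped at their leave/ban event (inclusive).
    if membership in ("leave", "ban"):
        return leave_or_ban_pos
    # Invite/knock only ever get stripped state, so no deltas at all.
    if membership in ("invite", "knock"):
        return None
    raise AssertionError(f"Unexpected membership {membership}")

print(delta_upper_bound("join", 5, 10))   # 10
print(delta_upper_bound("leave", 5, 10))  # 5
print(delta_upper_bound("invite", 5, 10)) # None
```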
@@ -21,6 +21,7 @@
 import contextlib
 import logging
 import time
+from http import HTTPStatus
 from typing import TYPE_CHECKING, Any, Generator, Optional, Tuple, Union

 import attr

@@ -139,6 +140,41 @@ class SynapseRequest(Request):
             self.synapse_site.site_tag,
         )

+    # Twisted machinery: this method is called by the Channel once the full request has
+    # been received, to dispatch the request to a resource.
+    #
+    # We're patching Twisted to bail out early when we see someone trying to upload
+    # `multipart/form-data`, so we can avoid Twisted parsing the entire request body
+    # into memory (a problem specific to this `Content-Type`). This protects us
+    # from an attacker uploading something bigger than the available RAM and crashing
+    # the server with a `MemoryError`, or carefully blocking just enough resources to
+    # cause all other requests to fail.
+    #
+    # FIXME: This can be removed once Twisted releases a fix and we update to a
+    # version that is patched.
+    def requestReceived(self, command: bytes, path: bytes, version: bytes) -> None:
+        if command == b"POST":
+            ctype = self.requestHeaders.getRawHeaders(b"content-type")
+            if ctype and b"multipart/form-data" in ctype[0]:
+                self.method, self.uri = command, path
+                self.clientproto = version
+                self.code = HTTPStatus.UNSUPPORTED_MEDIA_TYPE.value
+                self.code_message = bytes(
+                    HTTPStatus.UNSUPPORTED_MEDIA_TYPE.phrase, "ascii"
+                )
+                self.responseHeaders.setRawHeaders(b"content-length", [b"0"])
+
+                logger.warning(
+                    "Aborting connection from %s because `content-type: multipart/form-data` is unsupported: %s %s",
+                    self.client,
+                    command,
+                    path,
+                )
+                self.write(b"")
+                self.loseConnection()
+                return
+        return super().requestReceived(command, path, version)
+
     def handleContentChunk(self, data: bytes) -> None:
         # we should have a `content` by now.
         assert self.content, "handleContentChunk() called before gotLength()"
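Stripped of the Twisted plumbing, the early-rejection logic in `requestReceived` above is a single header check performed before any of the body is buffered. A minimal sketch of that decision (`should_reject_early` is an illustrative helper; the real code lives on the Twisted `Request` subclass and also writes the 415 response):

```python
from typing import List, Optional

def should_reject_early(
    method: bytes, content_type_headers: Optional[List[bytes]]
) -> bool:
    # Only POSTs carry a body that Twisted would buffer into memory.
    if method != b"POST":
        return False
    # No Content-Type header: nothing to reject on.
    if not content_type_headers:
        return False
    # Refuse multipart/form-data before reading any of the body.
    return b"multipart/form-data" in content_type_headers[0]

print(should_reject_early(b"POST", [b"multipart/form-data; boundary=x"]))  # True
print(should_reject_early(b"POST", [b"application/json"]))                 # False
```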
@@ -67,6 +67,11 @@ class ThumbnailError(Exception):
 class Thumbnailer:
     FORMATS = {"image/jpeg": "JPEG", "image/png": "PNG"}

+    # Which image formats we allow Pillow to open.
+    # This should intentionally be kept restrictive, because the decoder of any
+    # format in this list becomes part of our trusted computing base.
+    PILLOW_FORMATS = ("jpeg", "png", "webp", "gif")
+
     @staticmethod
     def set_limits(max_image_pixels: int) -> None:
         Image.MAX_IMAGE_PIXELS = max_image_pixels

@@ -76,7 +81,7 @@ class Thumbnailer:
         self._closed = False

         try:
-            self.image = Image.open(input_path)
+            self.image = Image.open(input_path, formats=self.PILLOW_FORMATS)
         except OSError as e:
             # If an error occurs opening the image, a thumbnail won't be able to
             # be generated.
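Pillow's `formats=` argument does the real gating above, by limiting which decoders may even try the file. The idea — identify the format from its magic bytes before handing the data to a decoder — can be sketched without Pillow. (Illustrative only; WebP needs a two-part RIFF check and is omitted here.)

```python
from typing import Optional

# Magic-byte prefixes for some of the formats the allow-list above covers.
MAGIC_PREFIXES = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def sniff_format(header: bytes) -> Optional[str]:
    # Return the allow-listed format name, or None for anything unrecognized.
    for prefix, fmt in MAGIC_PREFIXES.items():
        if header.startswith(prefix):
            return fmt
    return None

print(sniff_format(b"\x89PNG\r\n\x1a\n" + b"\x00" * 8))  # png
print(sniff_format(b"BM\x00\x00"))                       # None (e.g. BMP stays blocked)
```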
@@ -127,6 +127,11 @@ class HttpPusher(Pusher):
         if self.data is None:
             raise PusherConfigException("'data' key can not be null for HTTP pusher")

+        # Check if badge counts should be disabled for this push gateway
+        self.disable_badge_count = self.hs.config.experimental.msc4076_enabled and bool(
+            self.data.get("org.matrix.msc4076.disable_badge_count", False)
+        )
+
         self.name = "%s/%s/%s" % (
             pusher_config.user_name,
             pusher_config.app_id,

@@ -466,9 +471,10 @@ class HttpPusher(Pusher):
             content: JsonDict = {
                 "event_id": event.event_id,
                 "room_id": event.room_id,
-                "counts": {"unread": badge},
                 "prio": priority,
             }
+            if not self.disable_badge_count:
+                content["counts"] = {"unread": badge}
             # event_id_only doesn't include the tweaks, so override them.
             tweaks = {}
         else:

@@ -483,11 +489,11 @@ class HttpPusher(Pusher):
                 "type": event.type,
                 "sender": event.user_id,
                 "prio": priority,
-                "counts": {
-                    "unread": badge,
-                    # 'missed_calls': 2
-                },
             }
+            if not self.disable_badge_count:
+                content["counts"] = {
+                    "unread": badge,
+                }
             if event.type == "m.room.member" and event.is_state():
                 content["membership"] = event.content["membership"]
                 content["user_is_target"] = event.state_key == self.user_id
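The effect of the MSC4076 change above is that an opted-out pusher receives a payload with no `counts` object at all, rather than a zeroed one. A minimal sketch of that payload logic (`build_push_content` is an illustrative helper, not the Synapse implementation):

```python
def build_push_content(event_id: str, badge: int, disable_badge_count: bool) -> dict:
    content = {"event_id": event_id, "prio": "high"}
    # MSC4076: a pusher that set disable_badge_count gets no `counts` key.
    if not disable_badge_count:
        content["counts"] = {"unread": badge}
    return content

print(build_push_content("$ev1", 3, disable_badge_count=False))
# {'event_id': '$ev1', 'prio': 'high', 'counts': {'unread': 3}}
print(build_push_content("$ev1", 3, disable_badge_count=True))
# {'event_id': '$ev1', 'prio': 'high'}
```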
@@ -74,9 +74,13 @@ async def get_context_for_event(
 
     room_state = []
     if ev.content.get("membership") == Membership.INVITE:
-        room_state = ev.unsigned.get("invite_room_state", [])
+        invite_room_state = ev.unsigned.get("invite_room_state", [])
+        if isinstance(invite_room_state, list):
+            room_state = invite_room_state
     elif ev.content.get("membership") == Membership.KNOCK:
-        room_state = ev.unsigned.get("knock_room_state", [])
+        knock_room_state = ev.unsigned.get("knock_room_state", [])
+        if isinstance(knock_room_state, list):
+            room_state = knock_room_state
 
     # Ideally we'd reuse the logic in `calculate_room_name`, but that gets
     # complicated to handle partial events vs pulling events from the DB.

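The pattern in this hunk (and the sync-servlet hunks that follow) is defensive validation: `invite_room_state`/`knock_room_state` arrive in an event's `unsigned` section from a remote server, so they must be type-checked before use. A standalone sketch (the helper name is ours, not Synapse's):

```python
def stripped_state_from_unsigned(unsigned: dict, key: str) -> list:
    # The stripped-state list comes from an untrusted remote server; only
    # accept it when it really is a list, otherwise fall back to empty.
    stripped_state = unsigned.get(key, [])
    if not isinstance(stripped_state, list):
        return []
    return stripped_state
```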
@@ -436,7 +436,12 @@ class SyncRestServlet(RestServlet):
             )
             unsigned = dict(invite.get("unsigned", {}))
             invite["unsigned"] = unsigned
-            invited_state = list(unsigned.pop("invite_room_state", []))
+
+            invited_state = unsigned.pop("invite_room_state", [])
+            if not isinstance(invited_state, list):
+                invited_state = []
+
+            invited_state = list(invited_state)
             invited_state.append(invite)
             invited[room.room_id] = {"invite_state": {"events": invited_state}}
 
@@ -476,7 +481,10 @@ class SyncRestServlet(RestServlet):
             # Extract the stripped room state from the unsigned dict
             # This is for clients to get a little bit of information about
             # the room they've knocked on, without revealing any sensitive information
-            knocked_state = list(unsigned.pop("knock_room_state", []))
+            knocked_state = unsigned.pop("knock_room_state", [])
+            if not isinstance(knocked_state, list):
+                knocked_state = []
+            knocked_state = list(knocked_state)
 
             # Append the actual knock membership event itself as well. This provides
             # the client with:

@@ -21,6 +21,7 @@
 import logging
 from typing import TYPE_CHECKING
 
+from synapse.api.urls import LoginSSORedirectURIBuilder
 from synapse.http.server import (
     DirectServeHtmlResource,
     finish_request,
@@ -49,6 +50,8 @@ class PickIdpResource(DirectServeHtmlResource):
             hs.config.sso.sso_login_idp_picker_template
         )
         self._server_name = hs.hostname
+        self._public_baseurl = hs.config.server.public_baseurl
+        self._login_sso_redirect_url_builder = LoginSSORedirectURIBuilder(hs.config)
 
     async def _async_render_GET(self, request: SynapseRequest) -> None:
         client_redirect_url = parse_string(
@@ -56,25 +59,23 @@ class PickIdpResource(DirectServeHtmlResource):
         )
         idp = parse_string(request, "idp", required=False)
 
-        # if we need to pick an IdP, do so
+        # If we need to pick an IdP, do so
         if not idp:
             return await self._serve_id_picker(request, client_redirect_url)
 
-        # otherwise, redirect to the IdP's redirect URI
-        providers = self._sso_handler.get_identity_providers()
-        auth_provider = providers.get(idp)
-        if not auth_provider:
-            logger.info("Unknown idp %r", idp)
-            self._sso_handler.render_error(
-                request, "unknown_idp", "Unknown identity provider ID"
+        # Otherwise, redirect to the login SSO redirect endpoint for the given IdP
+        # (which will in turn take us to the the IdP's redirect URI).
+        #
+        # We could go directly to the IdP's redirect URI, but this way we ensure that
+        # the user goes through the same logic as normal flow. Additionally, if a proxy
+        # needs to intercept the request, it only needs to intercept the one endpoint.
+        sso_login_redirect_url = (
+            self._login_sso_redirect_url_builder.build_login_sso_redirect_uri(
+                idp_id=idp, client_redirect_url=client_redirect_url
             )
-            return
-
-        sso_url = await auth_provider.handle_redirect_request(
-            request, client_redirect_url.encode("utf8")
         )
-        logger.info("Redirecting to %s", sso_url)
-        request.redirect(sso_url)
+        logger.info("Redirecting to %s", sso_login_redirect_url)
+        request.redirect(sso_login_redirect_url)
         finish_request(request)
 
     async def _serve_id_picker(

@@ -243,6 +243,13 @@ class StateDeltasStore(SQLBaseStore):
 
             (> `from_token` and <= `to_token`)
         """
+        # We can bail early if the `from_token` is after the `to_token`
+        if (
+            to_token is not None
+            and from_token is not None
+            and to_token.is_before_or_eq(from_token)
+        ):
+            return []
+
         if (
             from_token is not None

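The early return added here avoids running the query at all when the requested token range is empty. With plain integers standing in for Synapse's stream tokens (an assumption for illustration only), the shape of the check is:

```python
from typing import List, Optional

def get_state_deltas(
    from_token: Optional[int], to_token: Optional[int], deltas: List[int]
) -> List[int]:
    # Bail early: an inverted (or empty) token range can never match anything.
    if to_token is not None and from_token is not None and to_token <= from_token:
        return []
    # Stand-in for the real query: deltas strictly after from_token and up to
    # and including to_token.
    return [
        d
        for d in deltas
        if (from_token is None or d > from_token)
        and (to_token is None or d <= to_token)
    ]
```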
@@ -407,8 +407,8 @@ class StateValues:
     # Include all state events of the given type
     WILDCARD: Final = "*"
     # Lazy-load room membership events (include room membership events for any event
-    # `sender` in the timeline). We only give special meaning to this value when it's a
-    # `state_key`.
+    # `sender` or membership change target in the timeline). We only give special
+    # meaning to this value when it's a `state_key`.
     LAZY: Final = "$LAZY"
     # Subsitute with the requester's user ID. Typically used by clients to get
    # the user's membership.
@@ -641,9 +641,10 @@ class RoomSyncConfig:
             if user_id == StateValues.ME:
                 continue
             # We're lazy-loading membership so we can just return the state we have.
-            # Lazy-loading means we include membership for any event `sender` in the
-            # timeline but since we had to auth those timeline events, we will have the
-            # membership state for them (including from remote senders).
+            # Lazy-loading means we include membership for any event `sender` or
+            # membership change target in the timeline but since we had to auth those
+            # timeline events, we will have the membership state for them (including
+            # from remote senders).
             elif user_id == StateValues.LAZY:
                 continue
             elif user_id == StateValues.WILDCARD:

55	tests/api/test_urls.py	Normal file
@@ -0,0 +1,55 @@
+#
+# This file is licensed under the Affero General Public License (AGPL) version 3.
+#
+# Copyright (C) 2024 New Vector, Ltd
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+#
+# See the GNU Affero General Public License for more details:
+# <https://www.gnu.org/licenses/agpl-3.0.html>.
+#
+
+
+from twisted.test.proto_helpers import MemoryReactor
+
+from synapse.api.urls import LoginSSORedirectURIBuilder
+from synapse.server import HomeServer
+from synapse.util import Clock
+
+from tests.unittest import HomeserverTestCase
+
+# a (valid) url with some annoying characters in. %3D is =, %26 is &, %2B is +
+TRICKY_TEST_CLIENT_REDIRECT_URL = 'https://x?<ab c>&q"+%3D%2B"="fö%26=o"'
+
+
+class LoginSSORedirectURIBuilderTestCase(HomeserverTestCase):
+    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
+        self.login_sso_redirect_url_builder = LoginSSORedirectURIBuilder(hs.config)
+
+    def test_no_idp_id(self) -> None:
+        self.assertEqual(
+            self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
+                idp_id=None, client_redirect_url="http://example.com/redirect"
+            ),
+            "https://test/_matrix/client/v3/login/sso/redirect?redirectUrl=http%3A%2F%2Fexample.com%2Fredirect",
+        )
+
+    def test_explicit_idp_id(self) -> None:
+        self.assertEqual(
+            self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
+                idp_id="oidc-github", client_redirect_url="http://example.com/redirect"
+            ),
+            "https://test/_matrix/client/v3/login/sso/redirect/oidc-github?redirectUrl=http%3A%2F%2Fexample.com%2Fredirect",
+        )
+
+    def test_tricky_redirect_uri(self) -> None:
+        self.assertEqual(
+            self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
+                idp_id="oidc-github",
+                client_redirect_url=TRICKY_TEST_CLIENT_REDIRECT_URL,
+            ),
+            "https://test/_matrix/client/v3/login/sso/redirect/oidc-github?redirectUrl=https%3A%2F%2Fx%3F%3Cab+c%3E%26q%22%2B%253D%252B%22%3D%22f%C3%B6%2526%3Do%22",
+        )

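The expected URLs in these tests can be reproduced with nothing but `urllib.parse`. A hypothetical re-implementation of the builder (the real `LoginSSORedirectURIBuilder` lives in `synapse.api.urls`; names and structure here are an assumption) looks roughly like:

```python
from typing import Optional
from urllib.parse import quote, urlencode

def build_login_sso_redirect_uri(
    public_baseurl: str, idp_id: Optional[str], client_redirect_url: str
) -> str:
    # Hypothetical sketch of the builder under test: append the optional IdP
    # to the path and percent-encode the client redirect URL as a query arg.
    path = "_matrix/client/v3/login/sso/redirect"
    if idp_id is not None:
        path += "/" + quote(idp_id)
    query = urlencode({"redirectUrl": client_redirect_url})
    return f"{public_baseurl}{path}?{query}"
```

`urlencode` uses `quote_plus` semantics, which is why spaces become `+` and any literal `%` in the redirect URL is double-encoded to `%25…` in the tricky test case above.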
@@ -90,3 +90,56 @@ class SynapseRequestTestCase(HomeserverTestCase):
         # default max upload size is 50M, so it should drop on the next buffer after
         # that.
         self.assertEqual(sent, 50 * 1024 * 1024 + 1024)
+
+    def test_content_type_multipart(self) -> None:
+        """HTTP POST requests with `content-type: multipart/form-data` should be rejected"""
+        self.hs.start_listening()
+
+        # find the HTTP server which is configured to listen on port 0
+        (port, factory, _backlog, interface) = self.reactor.tcpServers[0]
+        self.assertEqual(interface, "::")
+        self.assertEqual(port, 0)
+
+        # as a control case, first send a regular request.
+
+        # complete the connection and wire it up to a fake transport
+        client_address = IPv6Address("TCP", "::1", 2345)
+        protocol = factory.buildProtocol(client_address)
+        transport = StringTransport()
+        protocol.makeConnection(transport)
+
+        protocol.dataReceived(
+            b"POST / HTTP/1.1\r\n"
+            b"Connection: close\r\n"
+            b"Transfer-Encoding: chunked\r\n"
+            b"\r\n"
+            b"0\r\n"
+            b"\r\n"
+        )
+
+        while not transport.disconnecting:
+            self.reactor.advance(1)
+
+        # we should get a 404
+        self.assertRegex(transport.value().decode(), r"^HTTP/1\.1 404 ")
+
+        # now send request with content-type header
+        protocol = factory.buildProtocol(client_address)
+        transport = StringTransport()
+        protocol.makeConnection(transport)
+
+        protocol.dataReceived(
+            b"POST / HTTP/1.1\r\n"
+            b"Connection: close\r\n"
+            b"Transfer-Encoding: chunked\r\n"
+            b"Content-Type: multipart/form-data\r\n"
+            b"\r\n"
+            b"0\r\n"
+            b"\r\n"
+        )
+
+        while not transport.disconnecting:
+            self.reactor.advance(1)
+
+        # we should get a 415
+        self.assertRegex(transport.value().decode(), r"^HTTP/1\.1 415 ")

@@ -17,9 +17,11 @@
 # [This file includes modifications made by New Vector Limited]
 #
 #
-from typing import Any, List, Tuple
+from typing import Any, Dict, List, Tuple
 from unittest.mock import Mock
 
+from parameterized import parameterized
+
 from twisted.internet.defer import Deferred
 from twisted.test.proto_helpers import MemoryReactor
 
@@ -1085,3 +1087,83 @@ class HTTPPusherTests(HomeserverTestCase):
         self.pump()
 
         self.assertEqual(len(self.push_attempts), 11)
+
+    @parameterized.expand(
+        [
+            # Badge count disabled
+            (True, True),
+            (True, False),
+            # Badge count enabled
+            (False, True),
+            (False, False),
+        ]
+    )
+    @override_config({"experimental_features": {"msc4076_enabled": True}})
+    def test_msc4076_badge_count(
+        self, disable_badge_count: bool, event_id_only: bool
+    ) -> None:
+        # Register the user who gets notified
+        user_id = self.register_user("user", "pass")
+        access_token = self.login("user", "pass")
+
+        # Register the user who sends the message
+        other_user_id = self.register_user("otheruser", "pass")
+        other_access_token = self.login("otheruser", "pass")
+
+        # Register the pusher with disable_badge_count set to True
+        user_tuple = self.get_success(
+            self.hs.get_datastores().main.get_user_by_access_token(access_token)
+        )
+        assert user_tuple is not None
+        device_id = user_tuple.device_id
+
+        # Set the push data dict based on test input parameters
+        push_data: Dict[str, Any] = {
+            "url": "http://example.com/_matrix/push/v1/notify",
+        }
+        if disable_badge_count:
+            push_data["org.matrix.msc4076.disable_badge_count"] = True
+        if event_id_only:
+            push_data["format"] = "event_id_only"
+
+        self.get_success(
+            self.hs.get_pusherpool().add_or_update_pusher(
+                user_id=user_id,
+                device_id=device_id,
+                kind="http",
+                app_id="m.http",
+                app_display_name="HTTP Push Notifications",
+                device_display_name="pushy push",
+                pushkey="a@example.com",
+                lang=None,
+                data=push_data,
+            )
+        )
+
+        # Create a room
+        room = self.helper.create_room_as(user_id, tok=access_token)
+
+        # The other user joins
+        self.helper.join(room=room, user=other_user_id, tok=other_access_token)
+
+        # The other user sends a message
+        self.helper.send(room, body="Hi!", tok=other_access_token)
+
+        # Advance time a bit, so the pusher will register something has happened
+        self.pump()
+
+        # One push was attempted to be sent
+        self.assertEqual(len(self.push_attempts), 1)
+        self.assertEqual(
+            self.push_attempts[0][1], "http://example.com/_matrix/push/v1/notify"
+        )
+
+        if disable_badge_count:
+            # Verify that the notification DOESN'T contain a counts field
+            self.assertNotIn("counts", self.push_attempts[0][2]["notification"])
+        else:
+            # Ensure that the notification DOES contain a counts field
+            self.assertIn("counts", self.push_attempts[0][2]["notification"])
+            self.assertEqual(
+                self.push_attempts[0][2]["notification"]["counts"]["unread"], 1
+            )

@@ -11,6 +11,7 @@
 # See the GNU Affero General Public License for more details:
 # <https://www.gnu.org/licenses/agpl-3.0.html>.
 #
+import enum
 import logging
 
 from parameterized import parameterized, parameterized_class
@@ -18,9 +19,9 @@ from parameterized import parameterized, parameterized_class
 from twisted.test.proto_helpers import MemoryReactor
 
 import synapse.rest.admin
-from synapse.api.constants import EventTypes, Membership
+from synapse.api.constants import EventContentFields, EventTypes, JoinRules, Membership
 from synapse.handlers.sliding_sync import StateValues
-from synapse.rest.client import login, room, sync
+from synapse.rest.client import knock, login, room, sync
 from synapse.server import HomeServer
 from synapse.util import Clock
 
@@ -30,6 +31,17 @@ from tests.test_utils.event_injection import mark_event_as_partial_state
 logger = logging.getLogger(__name__)
 
 
+# Inherit from `str` so that they show up in the test description when we
+# `@parameterized.expand(...)` the first parameter
+class MembershipAction(str, enum.Enum):
+    INVITE = "invite"
+    JOIN = "join"
+    KNOCK = "knock"
+    LEAVE = "leave"
+    BAN = "ban"
+    KICK = "kick"
+
+
 # FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
 # foreground update for
 # `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
@@ -52,6 +64,7 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
     servlets = [
         synapse.rest.admin.register_servlets,
         login.register_servlets,
+        knock.register_servlets,
         room.register_servlets,
         sync.register_servlets,
     ]
@@ -496,6 +509,153 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
         )
         self.assertIsNone(response_body["rooms"][room_id1].get("invite_state"))
 
+    @parameterized.expand(
+        [
+            (MembershipAction.LEAVE,),
+            (MembershipAction.INVITE,),
+            (MembershipAction.KNOCK,),
+            (MembershipAction.JOIN,),
+            (MembershipAction.BAN,),
+            (MembershipAction.KICK,),
+        ]
+    )
+    def test_rooms_required_state_changed_membership_in_timeline_lazy_loading_room_members_incremental_sync(
+        self,
+        room_membership_action: str,
+    ) -> None:
+        """
+        On incremental sync, test `rooms.required_state` returns people relevant to the
+        timeline when lazy-loading room members, `["m.room.member","$LAZY"]` **including
+        changes to membership**.
+        """
+        user1_id = self.register_user("user1", "pass")
+        user1_tok = self.login(user1_id, "pass")
+        user2_id = self.register_user("user2", "pass")
+        user2_tok = self.login(user2_id, "pass")
+        user3_id = self.register_user("user3", "pass")
+        user3_tok = self.login(user3_id, "pass")
+        user4_id = self.register_user("user4", "pass")
+        user4_tok = self.login(user4_id, "pass")
+        user5_id = self.register_user("user5", "pass")
+        user5_tok = self.login(user5_id, "pass")
+
+        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok, is_public=True)
+        # If we're testing knocks, set the room to knock
+        if room_membership_action == MembershipAction.KNOCK:
+            self.helper.send_state(
+                room_id1,
+                EventTypes.JoinRules,
+                {"join_rule": JoinRules.KNOCK},
+                tok=user2_tok,
+            )
+
+        # Join the test users to the room
+        self.helper.invite(room_id1, src=user2_id, targ=user1_id, tok=user2_tok)
+        self.helper.join(room_id1, user1_id, tok=user1_tok)
+        self.helper.invite(room_id1, src=user2_id, targ=user3_id, tok=user2_tok)
+        self.helper.join(room_id1, user3_id, tok=user3_tok)
+        self.helper.invite(room_id1, src=user2_id, targ=user4_id, tok=user2_tok)
+        self.helper.join(room_id1, user4_id, tok=user4_tok)
+        if room_membership_action in (
+            MembershipAction.LEAVE,
+            MembershipAction.BAN,
+            MembershipAction.JOIN,
+        ):
+            self.helper.invite(room_id1, src=user2_id, targ=user5_id, tok=user2_tok)
+            self.helper.join(room_id1, user5_id, tok=user5_tok)
+
+        # Send some messages to fill up the space
+        self.helper.send(room_id1, "1", tok=user2_tok)
+        self.helper.send(room_id1, "2", tok=user2_tok)
+        self.helper.send(room_id1, "3", tok=user2_tok)
+
+        # Make the Sliding Sync request with lazy loading for the room members
+        sync_body = {
+            "lists": {
+                "foo-list": {
+                    "ranges": [[0, 1]],
+                    "required_state": [
+                        [EventTypes.Create, ""],
+                        [EventTypes.Member, StateValues.LAZY],
+                    ],
+                    "timeline_limit": 3,
+                }
+            }
+        }
+        response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
+
+        # Send more timeline events into the room
+        self.helper.send(room_id1, "4", tok=user2_tok)
+        self.helper.send(room_id1, "5", tok=user4_tok)
+        # The third event will be our membership event concerning user5
+        if room_membership_action == MembershipAction.LEAVE:
+            # User 5 leaves
+            self.helper.leave(room_id1, user5_id, tok=user5_tok)
+        elif room_membership_action == MembershipAction.INVITE:
+            # User 5 is invited
+            self.helper.invite(room_id1, src=user2_id, targ=user5_id, tok=user2_tok)
+        elif room_membership_action == MembershipAction.KNOCK:
+            # User 5 knocks
+            self.helper.knock(room_id1, user5_id, tok=user5_tok)
+            # The admin of the room accepts the knock
+            self.helper.invite(room_id1, src=user2_id, targ=user5_id, tok=user2_tok)
+        elif room_membership_action == MembershipAction.JOIN:
+            # Update the display name of user5 (causing a membership change)
+            self.helper.send_state(
+                room_id1,
+                event_type=EventTypes.Member,
+                state_key=user5_id,
+                body={
+                    EventContentFields.MEMBERSHIP: Membership.JOIN,
+                    EventContentFields.MEMBERSHIP_DISPLAYNAME: "quick changer",
+                },
+                tok=user5_tok,
+            )
+        elif room_membership_action == MembershipAction.BAN:
+            self.helper.ban(room_id1, src=user2_id, targ=user5_id, tok=user2_tok)
+        elif room_membership_action == MembershipAction.KICK:
+            # Kick user5 from the room
+            self.helper.change_membership(
+                room=room_id1,
+                src=user2_id,
+                targ=user5_id,
+                tok=user2_tok,
+                membership=Membership.LEAVE,
+                extra_data={
+                    "reason": "Bad manners",
+                },
+            )
+        else:
+            raise AssertionError(
+                f"Unknown room_membership_action: {room_membership_action}"
+            )
+
+        # Make an incremental Sliding Sync request
+        response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
+
+        state_map = self.get_success(
+            self.storage_controllers.state.get_current_state(room_id1)
+        )
+
+        # Only user2, user4, and user5 sent events in the last 3 events we see in the
+        # `timeline`.
+        self._assertRequiredStateIncludes(
+            response_body["rooms"][room_id1]["required_state"],
+            {
+                # This appears because *some* membership in the room changed and the
+                # heroes are recalculated and is thrown in because we have it. But this
+                # is technically optional and not needed because we've already seen user2
+                # in the last sync (and their membership hasn't changed).
+                state_map[(EventTypes.Member, user2_id)],
+                # Appears because there is a message in the timeline from this user
+                state_map[(EventTypes.Member, user4_id)],
+                # Appears because there is a membership event in the timeline from this user
+                state_map[(EventTypes.Member, user5_id)],
+            },
+            exact=True,
+        )
+        self.assertIsNone(response_body["rooms"][room_id1].get("invite_state"))
+
     def test_rooms_required_state_expand_lazy_loading_room_members_incremental_sync(
         self,
     ) -> None:
@@ -751,9 +911,10 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
         self.assertIsNone(response_body["rooms"][room_id1].get("invite_state"))
 
     @parameterized.expand([(Membership.LEAVE,), (Membership.BAN,)])
-    def test_rooms_required_state_leave_ban(self, stop_membership: str) -> None:
+    def test_rooms_required_state_leave_ban_initial(self, stop_membership: str) -> None:
         """
-        Test `rooms.required_state` should not return state past a leave/ban event.
+        Test `rooms.required_state` should not return state past a leave/ban event when
+        it's the first "initial" time the room is being sent down the connection.
         """
         user1_id = self.register_user("user1", "pass")
         user1_tok = self.login(user1_id, "pass")
@@ -788,6 +949,13 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
             body={"foo": "bar"},
             tok=user2_tok,
         )
+        self.helper.send_state(
+            room_id1,
+            event_type="org.matrix.bar_state",
+            state_key="",
+            body={"bar": "bar"},
+            tok=user2_tok,
+        )
 
         if stop_membership == Membership.LEAVE:
             # User 1 leaves
@@ -796,6 +964,8 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
             # User 1 is banned
             self.helper.ban(room_id1, src=user2_id, targ=user1_id, tok=user2_tok)
 
+        # Get the state_map before we change the state as this is the final state we
+        # expect User1 to be able to see
         state_map = self.get_success(
             self.storage_controllers.state.get_current_state(room_id1)
         )
@@ -808,12 +978,36 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
             body={"foo": "qux"},
             tok=user2_tok,
         )
+        self.helper.send_state(
+            room_id1,
+            event_type="org.matrix.bar_state",
+            state_key="",
+            body={"bar": "qux"},
+            tok=user2_tok,
+        )
         self.helper.leave(room_id1, user3_id, tok=user3_tok)
 
         # Make an incremental Sliding Sync request
+        #
+        # Also expand the required state to include the `org.matrix.bar_state` event.
+        # This is just an extra complication of the test.
+        sync_body = {
+            "lists": {
+                "foo-list": {
+                    "ranges": [[0, 1]],
+                    "required_state": [
+                        [EventTypes.Create, ""],
+                        [EventTypes.Member, "*"],
+                        ["org.matrix.foo_state", ""],
+                        ["org.matrix.bar_state", ""],
+                    ],
+                    "timeline_limit": 3,
+                }
+            }
+        }
         response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
 
-        # Only user2 and user3 sent events in the 3 events we see in the `timeline`
+        # We should only see the state up to the leave/ban event
         self._assertRequiredStateIncludes(
             response_body["rooms"][room_id1]["required_state"],
             {
@@ -822,6 +1016,126 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
                 state_map[(EventTypes.Member, user2_id)],
                 state_map[(EventTypes.Member, user3_id)],
                 state_map[("org.matrix.foo_state", "")],
+                state_map[("org.matrix.bar_state", "")],
             },
             exact=True,
         )
+        self.assertIsNone(response_body["rooms"][room_id1].get("invite_state"))
+
+    @parameterized.expand([(Membership.LEAVE,), (Membership.BAN,)])
+    def test_rooms_required_state_leave_ban_incremental(
+        self, stop_membership: str
+    ) -> None:
+        """
+        Test `rooms.required_state` should not return state past a leave/ban event on
+        incremental sync.
+        """
+        user1_id = self.register_user("user1", "pass")
+        user1_tok = self.login(user1_id, "pass")
+        user2_id = self.register_user("user2", "pass")
+        user2_tok = self.login(user2_id, "pass")
+        user3_id = self.register_user("user3", "pass")
+        user3_tok = self.login(user3_id, "pass")
+
+        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
+        self.helper.join(room_id1, user1_id, tok=user1_tok)
+        self.helper.join(room_id1, user3_id, tok=user3_tok)
+
+        self.helper.send_state(
+            room_id1,
+            event_type="org.matrix.foo_state",
+            state_key="",
+            body={"foo": "bar"},
+            tok=user2_tok,
+        )
+        self.helper.send_state(
+            room_id1,
+            event_type="org.matrix.bar_state",
+            state_key="",
+            body={"bar": "bar"},
+            tok=user2_tok,
+        )
+
+        sync_body = {
+            "lists": {
+                "foo-list": {
+                    "ranges": [[0, 1]],
+                    "required_state": [
+                        [EventTypes.Create, ""],
+                        [EventTypes.Member, "*"],
+                        ["org.matrix.foo_state", ""],
+                    ],
+                    "timeline_limit": 3,
+                }
+            }
+        }
+        _, from_token = self.do_sync(sync_body, tok=user1_tok)
+
+        if stop_membership == Membership.LEAVE:
+            # User 1 leaves
+            self.helper.leave(room_id1, user1_id, tok=user1_tok)
+        elif stop_membership == Membership.BAN:
+            # User 1 is banned
+            self.helper.ban(room_id1, src=user2_id, targ=user1_id, tok=user2_tok)
+
+        # Get the state_map before we change the state as this is the final state we
+        # expect User1 to be able to see
+        state_map = self.get_success(
+            self.storage_controllers.state.get_current_state(room_id1)
+        )
+
+        # Change the state after user 1 leaves
+        self.helper.send_state(
+            room_id1,
+            event_type="org.matrix.foo_state",
+            state_key="",
+            body={"foo": "qux"},
+            tok=user2_tok,
+        )
+        self.helper.send_state(
+            room_id1,
+            event_type="org.matrix.bar_state",
+            state_key="",
+            body={"bar": "qux"},
+            tok=user2_tok,
+        )
+        self.helper.leave(room_id1, user3_id, tok=user3_tok)
+
+        # Make an incremental Sliding Sync request
+        #
+        # Also expand the required state to include the `org.matrix.bar_state` event.
+        # This is just an extra complication of the test.
+        sync_body = {
+            "lists": {
+                "foo-list": {
+                    "ranges": [[0, 1]],
+                    "required_state": [
+                        [EventTypes.Create, ""],
+                        [EventTypes.Member, "*"],
+                        ["org.matrix.foo_state", ""],
+                        ["org.matrix.bar_state", ""],
+                    ],
+                    "timeline_limit": 3,
+                }
+            }
+        }
+        response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
+
+        # User1 should only see the state up to the leave/ban event
+        self._assertRequiredStateIncludes(
+            response_body["rooms"][room_id1]["required_state"],
+            {
+                # User1 should see their leave/ban membership
+                state_map[(EventTypes.Member, user1_id)],
+                state_map[("org.matrix.bar_state", "")],
+                # The commented out state events were already returned in the initial
+                # sync so we shouldn't see them again on the incremental sync. And we
+                # shouldn't see the state events that changed after the leave/ban event.
+                #
+                # state_map[(EventTypes.Create, "")],
+                # state_map[(EventTypes.Member, user2_id)],
+                # state_map[(EventTypes.Member, user3_id)],
+                # state_map[("org.matrix.foo_state", "")],
+            },
+            exact=True,
+        )
@@ -1243,7 +1557,7 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
 
         # Update the room name
         self.helper.send_state(
-            room_id1, "m.room.name", {"name": "Bar"}, state_key="", tok=user1_tok
+            room_id1, EventTypes.Name, {"name": "Bar"}, state_key="", tok=user1_tok
         )
 
         # Update the sliding sync requests to exclude the room name again

@@ -43,6 +43,7 @@ from twisted.web.resource import Resource
 import synapse.rest.admin
 from synapse.api.constants import ApprovalNoticeMedium, LoginType
 from synapse.api.errors import Codes
+from synapse.api.urls import LoginSSORedirectURIBuilder
 from synapse.appservice import ApplicationService
 from synapse.http.client import RawHeaders
 from synapse.module_api import ModuleApi
@@ -69,6 +70,10 @@ try:
 except ImportError:
     HAS_JWT = False
 
+import logging
+
+logger = logging.getLogger(__name__)
+
 
 # synapse server name: used to populate public_baseurl in some tests
 SYNAPSE_SERVER_PUBLIC_HOSTNAME = "synapse"
@@ -77,7 +82,7 @@ SYNAPSE_SERVER_PUBLIC_HOSTNAME = "synapse"
 # FakeChannel.isSecure() returns False, so synapse will see the requested uri as
 # http://..., so using http in the public_baseurl stops Synapse trying to redirect to
 # https://....
-BASE_URL = "http://%s/" % (SYNAPSE_SERVER_PUBLIC_HOSTNAME,)
+PUBLIC_BASEURL = "http://%s/" % (SYNAPSE_SERVER_PUBLIC_HOSTNAME,)
 
 # CAS server used in some tests
 CAS_SERVER = "https://fake.test"
@@ -109,6 +114,23 @@ ADDITIONAL_LOGIN_FLOWS = [
 ]
 
 
+def get_relative_uri_from_absolute_uri(absolute_uri: str) -> str:
+    """
+    Peels off the path and query string from an absolute URI. Useful when interacting
+    with `make_request(...)` util function which expects a relative path instead of a
+    full URI.
+    """
+    parsed_uri = urllib.parse.urlparse(absolute_uri)
+    # Sanity check that we're working with an absolute URI
+    assert parsed_uri.scheme == "http" or parsed_uri.scheme == "https"
+
+    relative_uri = parsed_uri.path
+    if parsed_uri.query:
+        relative_uri += "?" + parsed_uri.query
+
+    return relative_uri
+
+
 class TestSpamChecker:
     def __init__(self, config: None, api: ModuleApi):
         api.register_spam_checker_callbacks(
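Run on its own, the helper above reduces to `urllib.parse.urlparse` plus reassembly of the path and query. A standalone restatement, for illustration:

```python
from urllib.parse import urlparse

def get_relative_uri_from_absolute_uri(absolute_uri: str) -> str:
    # Same logic as the helper in the diff: keep only the path (plus query),
    # dropping the scheme and host.
    parsed_uri = urlparse(absolute_uri)
    # Sanity check that we're working with an absolute URI
    assert parsed_uri.scheme in ("http", "https")
    relative_uri = parsed_uri.path
    if parsed_uri.query:
        relative_uri += "?" + parsed_uri.query
    return relative_uri
```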
@@ -614,7 +636,7 @@ class MultiSSOTestCase(unittest.HomeserverTestCase):
     def default_config(self) -> Dict[str, Any]:
         config = super().default_config()
 
-        config["public_baseurl"] = BASE_URL
+        config["public_baseurl"] = PUBLIC_BASEURL
 
         config["cas_config"] = {
             "enabled": True,
@@ -653,6 +675,9 @@ class MultiSSOTestCase(unittest.HomeserverTestCase):
         ]
         return config
 
+    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
+        self.login_sso_redirect_url_builder = LoginSSORedirectURIBuilder(hs.config)
+
     def create_resource_dict(self) -> Dict[str, Resource]:
         d = super().create_resource_dict()
         d.update(build_synapse_client_resource_tree(self.hs))
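The `prepare` hunk above wires up `LoginSSORedirectURIBuilder`, whose output the new assertions compare against. As a rough, self-contained sketch of what such a builder presumably produces — the endpoint path and parameter handling here are assumptions for illustration, not taken from this diff:

```python
import urllib.parse


def build_login_sso_redirect_uri(
    public_baseurl: str, idp_id: str, client_redirect_url: str
) -> str:
    # Sketch: clients are sent to /_matrix/client/v3/login/sso/redirect/{idp}
    # with the client's redirectUrl carried as a query parameter. This mirrors
    # what LoginSSORedirectURIBuilder is expected to produce; details may differ.
    return (
        public_baseurl.rstrip("/")
        + "/_matrix/client/v3/login/sso/redirect/"
        + urllib.parse.quote(idp_id)
        + "?redirectUrl="
        + urllib.parse.quote_plus(client_redirect_url)
    )


print(build_login_sso_redirect_uri("http://synapse/", "cas", "http://x"))
# -> http://synapse/_matrix/client/v3/login/sso/redirect/cas?redirectUrl=http%3A%2F%2Fx
```

In the tests below, the real builder is constructed from `hs.config`, so the `public_baseurl` argument here is only a stand-in.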
@@ -725,6 +750,32 @@ class MultiSSOTestCase(unittest.HomeserverTestCase):
             + "&idp=cas",
             shorthand=False,
         )
+        self.assertEqual(channel.code, 302, channel.result)
+        location_headers = channel.headers.getRawHeaders("Location")
+        assert location_headers
+        sso_login_redirect_uri = location_headers[0]
+
+        # it should redirect us to the standard login SSO redirect flow
+        self.assertEqual(
+            sso_login_redirect_uri,
+            self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
+                idp_id="cas", client_redirect_url=TEST_CLIENT_REDIRECT_URL
+            ),
+        )
+
+        # follow the redirect
+        channel = self.make_request(
+            "GET",
+            # We have to make this relative to be compatible with `make_request(...)`
+            get_relative_uri_from_absolute_uri(sso_login_redirect_uri),
+            # We have to set the Host header to match the `public_baseurl` to avoid
+            # the extra redirect in the `SsoRedirectServlet` in order for the
+            # cookies to be visible.
+            custom_headers=[
+                ("Host", SYNAPSE_SERVER_PUBLIC_HOSTNAME),
+            ],
+        )
+
+        self.assertEqual(channel.code, 302, channel.result)
         location_headers = channel.headers.getRawHeaders("Location")
         assert location_headers
@@ -750,6 +801,32 @@ class MultiSSOTestCase(unittest.HomeserverTestCase):
             + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL)
             + "&idp=saml",
         )
+        self.assertEqual(channel.code, 302, channel.result)
+        location_headers = channel.headers.getRawHeaders("Location")
+        assert location_headers
+        sso_login_redirect_uri = location_headers[0]
+
+        # it should redirect us to the standard login SSO redirect flow
+        self.assertEqual(
+            sso_login_redirect_uri,
+            self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
+                idp_id="saml", client_redirect_url=TEST_CLIENT_REDIRECT_URL
+            ),
+        )
+
+        # follow the redirect
+        channel = self.make_request(
+            "GET",
+            # We have to make this relative to be compatible with `make_request(...)`
+            get_relative_uri_from_absolute_uri(sso_login_redirect_uri),
+            # We have to set the Host header to match the `public_baseurl` to avoid
+            # the extra redirect in the `SsoRedirectServlet` in order for the
+            # cookies to be visible.
+            custom_headers=[
+                ("Host", SYNAPSE_SERVER_PUBLIC_HOSTNAME),
+            ],
+        )
+
+        self.assertEqual(channel.code, 302, channel.result)
         location_headers = channel.headers.getRawHeaders("Location")
         assert location_headers
@@ -773,13 +850,38 @@ class MultiSSOTestCase(unittest.HomeserverTestCase):
         # pick the default OIDC provider
         channel = self.make_request(
             "GET",
-            "/_synapse/client/pick_idp?redirectUrl="
-            + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL)
-            + "&idp=oidc",
+            f"/_synapse/client/pick_idp?redirectUrl={urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL)}&idp=oidc",
         )
+        self.assertEqual(channel.code, 302, channel.result)
+        location_headers = channel.headers.getRawHeaders("Location")
+        assert location_headers
+        sso_login_redirect_uri = location_headers[0]
+
+        # it should redirect us to the standard login SSO redirect flow
+        self.assertEqual(
+            sso_login_redirect_uri,
+            self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
+                idp_id="oidc", client_redirect_url=TEST_CLIENT_REDIRECT_URL
+            ),
+        )
+
+        with fake_oidc_server.patch_homeserver(hs=self.hs):
+            # follow the redirect
+            channel = self.make_request(
+                "GET",
+                # We have to make this relative to be compatible with `make_request(...)`
+                get_relative_uri_from_absolute_uri(sso_login_redirect_uri),
+                # We have to set the Host header to match the `public_baseurl` to avoid
+                # the extra redirect in the `SsoRedirectServlet` in order for the
+                # cookies to be visible.
+                custom_headers=[
+                    ("Host", SYNAPSE_SERVER_PUBLIC_HOSTNAME),
+                ],
+            )
+
         self.assertEqual(channel.code, 302, channel.result)
         location_headers = channel.headers.getRawHeaders("Location")
         assert location_headers
         oidc_uri = location_headers[0]
         oidc_uri_path, oidc_uri_query = oidc_uri.split("?", 1)
 
@@ -838,12 +940,38 @@ class MultiSSOTestCase(unittest.HomeserverTestCase):
         self.assertEqual(chan.json_body["user_id"], "@user1:test")
 
     def test_multi_sso_redirect_to_unknown(self) -> None:
-        """An unknown IdP should cause a 400"""
+        """An unknown IdP should cause a 404"""
         channel = self.make_request(
             "GET",
             "/_synapse/client/pick_idp?redirectUrl=http://x&idp=xyz",
         )
-        self.assertEqual(channel.code, 400, channel.result)
+        self.assertEqual(channel.code, 302, channel.result)
+        location_headers = channel.headers.getRawHeaders("Location")
+        assert location_headers
+        sso_login_redirect_uri = location_headers[0]
+
+        # it should redirect us to the standard login SSO redirect flow
+        self.assertEqual(
+            sso_login_redirect_uri,
+            self.login_sso_redirect_url_builder.build_login_sso_redirect_uri(
+                idp_id="xyz", client_redirect_url="http://x"
+            ),
+        )
+
+        # follow the redirect
+        channel = self.make_request(
+            "GET",
+            # We have to make this relative to be compatible with `make_request(...)`
+            get_relative_uri_from_absolute_uri(sso_login_redirect_uri),
+            # We have to set the Host header to match the `public_baseurl` to avoid
+            # the extra redirect in the `SsoRedirectServlet` in order for the
+            # cookies to be visible.
+            custom_headers=[
+                ("Host", SYNAPSE_SERVER_PUBLIC_HOSTNAME),
+            ],
+        )
+
+        self.assertEqual(channel.code, 404, channel.result)
 
     def test_client_idp_redirect_to_unknown(self) -> None:
         """If the client tries to pick an unknown IdP, return a 404"""
@@ -1473,7 +1601,7 @@ class UsernamePickerTestCase(HomeserverTestCase):
 
     def default_config(self) -> Dict[str, Any]:
         config = super().default_config()
-        config["public_baseurl"] = BASE_URL
+        config["public_baseurl"] = PUBLIC_BASEURL
 
        config["oidc_config"] = {}
         config["oidc_config"].update(TEST_OIDC_CONFIG)
@@ -889,7 +889,7 @@ class RestHelper:
             "GET",
             uri,
         )
-        assert channel.code == 302
+        assert channel.code == 302, f"Expected 302 for {uri}, got {channel.code}"
 
         # hit the redirect url again with the right Host header, which should now issue
         # a cookie and redirect to the SSO provider.
@@ -901,17 +901,18 @@ class RestHelper:
 
         location = get_location(channel)
         parts = urllib.parse.urlsplit(location)
+        next_uri = urllib.parse.urlunsplit(("", "") + parts[2:])
         channel = make_request(
             self.reactor,
             self.site,
             "GET",
-            urllib.parse.urlunsplit(("", "") + parts[2:]),
+            next_uri,
             custom_headers=[
                 ("Host", parts[1]),
             ],
         )
 
-        assert channel.code == 302
+        assert channel.code == 302, f"Expected 302 for {next_uri}, got {channel.code}"
         channel.extract_cookies(cookies)
         return get_location(channel)
 
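The `get_relative_uri_from_absolute_uri` helper added earlier in this commit is self-contained, so it can be sanity-checked outside the test suite. The example URI below is purely illustrative:

```python
import urllib.parse


def get_relative_uri_from_absolute_uri(absolute_uri: str) -> str:
    """Peel the path and query string off an absolute URI (as in the diff above)."""
    parsed_uri = urllib.parse.urlparse(absolute_uri)
    # Sanity check that we're working with an absolute URI
    assert parsed_uri.scheme == "http" or parsed_uri.scheme == "https"

    relative_uri = parsed_uri.path
    if parsed_uri.query:
        relative_uri += "?" + parsed_uri.query

    return relative_uri


print(
    get_relative_uri_from_absolute_uri(
        "http://synapse/_matrix/client/v3/login/sso/redirect/oidc?redirectUrl=https%3A%2F%2Fx"
    )
)
# -> /_matrix/client/v3/login/sso/redirect/oidc?redirectUrl=https%3A%2F%2Fx
```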