Merge branch 'master' into adb/add-send-msg

This commit is contained in:
Asiel Díaz Benítez
2023-02-26 05:14:12 -05:00
committed by GitHub
50 changed files with 1265 additions and 736 deletions


@@ -1,5 +1,11 @@
 name: Rust CI
+
+# Cancel previously started workflow runs
+# when the branch is updated.
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
 on:
   pull_request:
   push:


@@ -2,6 +2,10 @@
 name: Build deltachat-rpc-server binaries
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
 on:
   workflow_dispatch:
@@ -47,6 +51,50 @@ jobs:
           path: target/aarch64-unknown-linux-musl/release/deltachat-rpc-server
           if-no-files-found: error

+  build_android:
+    name: Cross-compile deltachat-rpc-server for Android (armeabi-v7a, arm64-v8a, x86 and x86_64)
+    runs-on: ubuntu-22.04
+    steps:
+      - uses: actions/checkout@v3
+
+      - uses: nttld/setup-ndk@v1
+        id: setup-ndk
+        with:
+          ndk-version: r21d
+
+      - name: Build
+        env:
+          ANDROID_NDK_ROOT: ${{ steps.setup-ndk.outputs.ndk-path }}
+        run: sh scripts/android-rpc-server.sh
+
+      - name: Upload binary
+        uses: actions/upload-artifact@v3
+        with:
+          name: deltachat-rpc-server-android-armv7
+          path: target/armv7-linux-androideabi/release/deltachat-rpc-server
+          if-no-files-found: error
+
+      - name: Upload binary
+        uses: actions/upload-artifact@v3
+        with:
+          name: deltachat-rpc-server-android-aarch64
+          path: target/aarch64-linux-android/release/deltachat-rpc-server
+          if-no-files-found: error
+
+      - name: Upload binary
+        uses: actions/upload-artifact@v3
+        with:
+          name: deltachat-rpc-server-android-i686
+          path: target/i686-linux-android/release/deltachat-rpc-server
+          if-no-files-found: error
+
+      - name: Upload binary
+        uses: actions/upload-artifact@v3
+        with:
+          name: deltachat-rpc-server-android-x86_64
+          path: target/x86_64-linux-android/release/deltachat-rpc-server
+          if-no-files-found: error
+
   build_windows:
     name: Build deltachat-rpc-server for Windows
     strategy:


@@ -1,11 +1,16 @@
 name: "node.js tests"
+
+# Cancel previously started workflow runs
+# when the branch is updated.
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
 on:
   pull_request:
   push:
     branches:
       - master
-      - staging
-      - trying
 jobs:
   tests:


@@ -2,6 +2,17 @@
 ## Unreleased
+
+### Changes
+- Make smeared timestamp generation non-async. #4075
+
+### Fixes
+- Do not block async task executor while decrypting the messages. #4079
+
+### API-Changes
+
+## 1.110.0
+
 ### Changes
 - use transaction in `Contact::add_or_lookup()` #4059
 - Organize the connection pool as a stack rather than a queue to ensure that

@@ -11,12 +22,15 @@
 - Remove `Sql.get_conn()` interface in favor of `.call()` and `.transaction()`. #4055
 - Updated provider database.
 - Disable DKIM-Checks again #4076
+- Switch from "X.Y.Z" and "py-X.Y.Z" to "vX.Y.Z" tags. #4089
+- mimeparser: handle headers from the signed part of unencrypted signed message #4013

 ### Fixes
 - Start SQL transactions with IMMEDIATE behaviour rather than default DEFERRED one. #4063
 - Fix a problem with Gmail where (auto-)deleted messages would get archived instead of deleted.
   Move them to the Trash folder for Gmail which auto-deletes trashed messages in 30 days #3972
 - Clear config cache after backup import. This bug sometimes resulted in the import to seemingly work at first. #4067
+- Update timestamps in `param` columns with transactions. #4083

 ### API-Changes
 - jsonrpc: add more advanced API to send a message.

@@ -51,6 +65,7 @@
 - Prefer TLS over STARTTLS during autoconfiguration #4021
 - Use SOCKS5 configuration for HTTP requests #4017
 - Show non-deltachat emails by default for new installations #4019
+- Re-enabled SMTP pipelining after disabling it in #4006

 ### Fixes
 - Fix Securejoin for multiple devices on a joining side #3982

@@ -438,7 +453,7 @@
 - Auto accept contact requests if `Config::Bot` is set for a client #3567
 - Don't prepend the subject to chat messages in mailinglists
 - fix `set_core_version.py` script to also update version in `deltachat-jsonrpc/typescript/package.json` #3585
-- Reject webxcd-updates from contacts who are not group members #3568
+- Reject webxdc-updates from contacts who are not group members #3568

 ## 1.93.0

Cargo.lock generated

@@ -834,7 +834,7 @@ checksum = "23d8666cb01533c39dde32bcbab8e227b4ed6679b2c925eba05feabea39508fb"

 [[package]]
 name = "deltachat"
-version = "1.109.0"
+version = "1.110.0"
 dependencies = [
  "ansi_term",
  "anyhow",

@@ -905,7 +905,7 @@ dependencies = [

 [[package]]
 name = "deltachat-jsonrpc"
-version = "1.109.0"
+version = "1.110.0"
 dependencies = [
  "anyhow",
  "async-channel",

@@ -927,7 +927,7 @@ dependencies = [

 [[package]]
 name = "deltachat-repl"
-version = "1.109.0"
+version = "1.110.0"
 dependencies = [
  "ansi_term",
  "anyhow",

@@ -942,7 +942,7 @@ dependencies = [

 [[package]]
 name = "deltachat-rpc-server"
-version = "1.109.0"
+version = "1.110.0"
 dependencies = [
  "anyhow",
  "deltachat-jsonrpc",

@@ -965,7 +965,7 @@ dependencies = [

 [[package]]
 name = "deltachat_ffi"
-version = "1.109.0"
+version = "1.110.0"
 dependencies = [
  "anyhow",
  "deltachat",


@@ -1,6 +1,6 @@
 [package]
 name = "deltachat"
-version = "1.109.0"
+version = "1.110.0"
 edition = "2021"
 license = "MPL-2.0"
 rust-version = "1.63"


@@ -1,6 +1,6 @@
 [package]
 name = "deltachat_ffi"
-version = "1.109.0"
+version = "1.110.0"
 description = "Deltachat FFI"
 edition = "2018"
 readme = "README.md"


@@ -1,6 +1,6 @@
 [package]
 name = "deltachat-jsonrpc"
-version = "1.109.0"
+version = "1.110.0"
 description = "DeltaChat JSON-RPC API"
 edition = "2021"
 default-run = "deltachat-jsonrpc-server"


@@ -3,7 +3,7 @@
   "dependencies": {
     "@deltachat/tiny-emitter": "3.0.0",
     "isomorphic-ws": "^4.0.1",
-    "yerpc": "^0.3.3"
+    "yerpc": "^0.4.3"
   },
   "devDependencies": {
     "@types/chai": "^4.2.21",

@@ -26,8 +26,8 @@
   },
   "exports": {
     ".": {
-      "require": "./dist/deltachat.cjs",
-      "import": "./dist/deltachat.js"
+      "import": "./dist/deltachat.js",
+      "require": "./dist/deltachat.cjs"
     }
   },
   "license": "MPL-2.0",

@@ -36,8 +36,8 @@
   "scripts": {
     "build": "run-s generate-bindings extract-constants build:tsc build:bundle build:cjs",
     "build:bundle": "esbuild --format=esm --bundle dist/deltachat.js --outfile=dist/deltachat.bundle.js",
-    "build:tsc": "tsc",
     "build:cjs": "esbuild --format=cjs --bundle --packages=external dist/deltachat.js --outfile=dist/deltachat.cjs",
+    "build:tsc": "tsc",
     "docs": "typedoc --out docs deltachat.ts",
     "example": "run-s build example:build example:start",
     "example:build": "esbuild --bundle dist/example/example.js --outfile=dist/example.bundle.js",

@@ -55,5 +55,5 @@
   },
   "type": "module",
   "types": "dist/deltachat.d.ts",
-  "version": "1.109.0"
+  "version": "1.110.0"
 }


@@ -1,6 +1,6 @@
[package] [package]
name = "deltachat-repl" name = "deltachat-repl"
version = "1.109.0" version = "1.110.0"
edition = "2021" edition = "2021"
[dependencies] [dependencies]


@@ -194,19 +194,23 @@ class Chat:
         """Add contacts to this group."""
         for cnt in contact:
             if isinstance(cnt, str):
-                cnt = (await self.account.create_contact(cnt)).id
+                contact_id = (await self.account.create_contact(cnt)).id
             elif not isinstance(cnt, int):
-                cnt = cnt.id
-            await self._rpc.add_contact_to_chat(self.account.id, self.id, cnt)
+                contact_id = cnt.id
+            else:
+                contact_id = cnt
+            await self._rpc.add_contact_to_chat(self.account.id, self.id, contact_id)

     async def remove_contact(self, *contact: Union[int, str, Contact]) -> None:
         """Remove members from this group."""
         for cnt in contact:
             if isinstance(cnt, str):
-                cnt = (await self.account.create_contact(cnt)).id
+                contact_id = (await self.account.create_contact(cnt)).id
             elif not isinstance(cnt, int):
-                cnt = cnt.id
-            await self._rpc.remove_contact_from_chat(self.account.id, self.id, cnt)
+                contact_id = cnt.id
+            else:
+                contact_id = cnt
+            await self._rpc.remove_contact_from_chat(self.account.id, self.id, contact_id)

     async def get_contacts(self) -> List[Contact]:
         """Get the contacts belonging to this chat.

@@ -242,9 +246,9 @@ class Chat:
         locations = []
         contacts: Dict[int, Contact] = {}
         for loc in result:
-            loc = AttrDict(loc)
-            loc["chat"] = self
-            loc["contact"] = contacts.setdefault(loc.contact_id, Contact(self.account, loc.contact_id))
-            loc["message"] = Message(self.account, loc.msg_id)
-            locations.append(loc)
+            location = AttrDict(loc)
+            location["chat"] = self
+            location["contact"] = contacts.setdefault(location.contact_id, Contact(self.account, location.contact_id))
+            location["message"] = Message(self.account, location.msg_id)
+            locations.append(location)
         return locations
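The `add_contact`/`remove_contact` change above replaces in-place rebinding of the loop variable `cnt` with a separate `contact_id`, which also makes the plain-int case explicit. The normalization can be sketched as a standalone helper; this is an illustration only, where `Contact` and `ADDRESS_BOOK` are hypothetical stand-ins for the real binding's contact class and the `create_contact()` RPC:

```python
class Contact:
    """Stand-in for the real deltachat Contact class (illustration only)."""
    def __init__(self, id: int):
        self.id = id

# hypothetical address book standing in for the create_contact() RPC
ADDRESS_BOOK = {"alice@example.org": 7}

def normalize_contact_id(cnt):
    """Normalize a str / int / Contact argument to a plain contact id
    without rebinding the loop variable, as the fixed code does."""
    if isinstance(cnt, str):
        contact_id = ADDRESS_BOOK[cnt]
    elif not isinstance(cnt, int):
        contact_id = cnt.id
    else:
        contact_id = cnt
    return contact_id

print(normalize_contact_id("alice@example.org"))  # 7
print(normalize_contact_id(Contact(3)))           # 3
print(normalize_contact_id(5))                    # 5
```

Keeping the loop variable untouched means each iteration sees the caller's original argument, which is what makes the `else` branch necessary and the control flow easy to follow.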


@@ -1,6 +1,6 @@
[package] [package]
name = "deltachat-rpc-server" name = "deltachat-rpc-server"
version = "1.109.0" version = "1.110.0"
description = "DeltaChat JSON-RPC server" description = "DeltaChat JSON-RPC server"
edition = "2021" edition = "2021"
readme = "README.md" readme = "README.md"


@@ -60,5 +60,5 @@
     "test:mocha": "mocha -r esm node/test/test.js --growl --reporter=spec --bail --exit"
   },
   "types": "node/dist/index.d.ts",
-  "version": "1.109.0"
+  "version": "1.110.0"
 }


@@ -44,8 +44,8 @@ deltachat = [
 [tool.setuptools_scm]
 root = ".."
-tag_regex = '^(?P<prefix>py-)?(?P<version>[^\+]+)(?P<suffix>.*)?$'
-git_describe_command = "git describe --dirty --tags --long --match py-*.*"
+tag_regex = '^(?P<prefix>v)?(?P<version>[^\+]+)(?P<suffix>.*)?$'
+git_describe_command = "git describe --dirty --tags --long --match v*.*"

 [tool.black]
 line-length = 120
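The new `tag_regex` can be exercised directly with Python's `re` module; this snippet only tests the pattern taken verbatim from the diff above.

```python
import re

# tag_regex from pyproject.toml after the change: optional "v" prefix,
# version runs up to the first "+", the rest lands in "suffix"
tag_regex = r'^(?P<prefix>v)?(?P<version>[^\+]+)(?P<suffix>.*)?$'

m = re.match(tag_regex, "v1.110.0")
print(m.group("prefix"))   # v
print(m.group("version"))  # 1.110.0

# a local-version part after "+" is captured separately
m2 = re.match(tag_regex, "v1.110.0+dirty")
print(m2.group("version"), m2.group("suffix"))  # 1.110.0 +dirty
```

Note that the old `py-` prefix is no longer stripped: a legacy `py-1.109.0` tag would now have the whole string captured as `version`, which is why `git_describe_command` switches to matching `v*.*` tags only.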

scripts/android-rpc-server.sh Executable file

@@ -0,0 +1,44 @@
+#!/bin/sh
+# Build deltachat-rpc-server for Android.
+set -e
+
+test -n "$ANDROID_NDK_ROOT" || exit 1
+
+RUSTUP_TOOLCHAIN="1.64.0"
+rustup install "$RUSTUP_TOOLCHAIN"
+rustup target add armv7-linux-androideabi aarch64-linux-android i686-linux-android x86_64-linux-android --toolchain "$RUSTUP_TOOLCHAIN"
+
+KERNEL="$(uname -s | tr '[:upper:]' '[:lower:]')"
+ARCH="$(uname -m)"
+NDK_HOST_TAG="$KERNEL-$ARCH"
+TOOLCHAIN="$ANDROID_NDK_ROOT/toolchains/llvm/prebuilt/$NDK_HOST_TAG"
+export PATH="$PATH:$TOOLCHAIN/bin/"
+
+PACKAGE="deltachat-rpc-server"
+
+export CARGO_PROFILE_RELEASE_LTO=on
+
+CARGO_TARGET_ARMV7_LINUX_ANDROIDEABI_LINKER="$TOOLCHAIN/bin/armv7a-linux-androideabi16-clang" \
+CFLAGS=-D__ANDROID_API__=16 \
+TARGET_CC=armv7a-linux-androideabi16-clang \
+TARGET_AR=llvm-ar \
+cargo "+$RUSTUP_TOOLCHAIN" rustc --release --target armv7-linux-androideabi -p $PACKAGE
+
+CARGO_TARGET_AARCH64_LINUX_ANDROID_LINKER="$TOOLCHAIN/bin/aarch64-linux-android21-clang" \
+CFLAGS=-D__ANDROID_API__=21 \
+TARGET_CC=aarch64-linux-android21-clang \
+TARGET_AR=llvm-ar \
+cargo "+$RUSTUP_TOOLCHAIN" rustc --release --target aarch64-linux-android -p $PACKAGE
+
+CARGO_TARGET_I686_LINUX_ANDROID_LINKER="$TOOLCHAIN/bin/i686-linux-android16-clang" \
+CFLAGS=-D__ANDROID_API__=16 \
+TARGET_CC=i686-linux-android16-clang \
+TARGET_AR=llvm-ar \
+cargo "+$RUSTUP_TOOLCHAIN" rustc --release --target i686-linux-android -p $PACKAGE
+
+CARGO_TARGET_X86_64_LINUX_ANDROID_LINKER="$TOOLCHAIN/bin/x86_64-linux-android21-clang" \
+CFLAGS=-D__ANDROID_API__=21 \
+TARGET_CC=x86_64-linux-android21-clang \
+TARGET_AR=llvm-ar \
+cargo "+$RUSTUP_TOOLCHAIN" rustc --release --target x86_64-linux-android -p $PACKAGE
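The script selects a per-target linker through Cargo's environment-variable convention: `CARGO_TARGET_<TRIPLE>_LINKER`, where the target triple is uppercased and dashes become underscores. A small sketch of that naming rule, reproducing the variable names used above:

```python
def cargo_linker_var(target_triple: str) -> str:
    """Name of the env var Cargo reads for a target's linker:
    the triple uppercased, with '-' replaced by '_'."""
    return "CARGO_TARGET_{}_LINKER".format(
        target_triple.upper().replace("-", "_")
    )

for triple in [
    "armv7-linux-androideabi",
    "aarch64-linux-android",
    "i686-linux-android",
    "x86_64-linux-android",
]:
    print(cargo_linker_var(triple))
# CARGO_TARGET_ARMV7_LINUX_ANDROIDEABI_LINKER
# CARGO_TARGET_AARCH64_LINUX_ANDROID_LINKER
# CARGO_TARGET_I686_LINUX_ANDROID_LINKER
# CARGO_TARGET_X86_64_LINUX_ANDROID_LINKER
```

Setting these per-invocation (rather than in `.cargo/config.toml`) keeps the script self-contained and lets each of the four builds point at a different NDK clang wrapper.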


@@ -12,7 +12,7 @@ resources:
     source:
       branch: master
       uri: https://github.com/deltachat/deltachat-core-rust.git
-      tag_filter: "py-*"
+      tag_filter: "v*"

 jobs:
 - name: doxygen


@@ -115,10 +115,8 @@ def main():
     print("after commit, on master make sure to: ")
     print("")
-    print(f"   git tag -a {newversion}")
-    print(f"   git push origin {newversion}")
-    print(f"   git tag -a py-{newversion}")
-    print(f"   git push origin py-{newversion}")
+    print(f"   git tag -a v{newversion}")
+    print(f"   git push origin v{newversion}")
     print("")


@@ -476,10 +476,13 @@ impl Config {
 struct AccountConfig {
     /// Unique id.
     pub id: u32,

     /// Root directory for all data for this account.
     ///
     /// The path is relative to the account manager directory.
     pub dir: std::path::PathBuf,

+    /// Universally unique account identifier.
     pub uuid: Uuid,
 }


@@ -276,7 +276,7 @@ impl ChatId {
                 grpname,
                 grpid,
                 create_blocked,
-                create_smeared_timestamp(context).await,
+                create_smeared_timestamp(context),
                 create_protected,
                 param.unwrap_or_default(),
             ],

@@ -482,7 +482,7 @@ impl ChatId {
             self,
             &msg_text,
             cmd,
-            create_smeared_timestamp(context).await,
+            create_smeared_timestamp(context),
             None,
             None,
             None,

@@ -1881,7 +1881,10 @@ pub(crate) async fn update_special_chat_names(context: &Context) -> Result<()> {
 /// [`Deref`]: std::ops::Deref
 #[derive(Debug)]
 pub(crate) struct ChatIdBlocked {
+    /// Chat ID.
     pub id: ChatId,
+
+    /// Whether the chat is blocked, unblocked or a contact request.
     pub blocked: Blocked,
 }

@@ -1953,7 +1956,6 @@ impl ChatIdBlocked {
             _ => (),
         }

-        let created_timestamp = create_smeared_timestamp(context).await;
         let chat_id = context
             .sql
             .transaction(move |transaction| {

@@ -1966,7 +1968,7 @@
                         chat_name,
                         params.to_string(),
                         create_blocked as u8,
-                        created_timestamp,
+                        create_smeared_timestamp(context),
                     ],
                 )?;
                 let chat_id = ChatId::new(

@@ -2114,7 +2116,7 @@ async fn prepare_msg_common(
         context,
         msg,
         update_msg_id,
-        create_smeared_timestamp(context).await,
+        create_smeared_timestamp(context),
     )
     .await?;
     msg.chat_id = chat_id;

@@ -2839,7 +2841,7 @@ pub async fn create_group_chat(
             Chattype::Group,
             chat_name,
             grpid,
-            create_smeared_timestamp(context).await,
+            create_smeared_timestamp(context),
         ],
     )
     .await?;

@@ -2897,7 +2899,7 @@ pub async fn create_broadcast_list(context: &Context) -> Result<ChatId> {
             Chattype::Broadcast,
             chat_name,
             grpid,
-            create_smeared_timestamp(context).await,
+            create_smeared_timestamp(context),
         ],
     )
     .await?;

@@ -3358,7 +3360,7 @@ pub async fn forward_msgs(context: &Context, msg_ids: &[MsgId], chat_id: ChatId)
     if let Some(reason) = chat.why_cant_send(context).await? {
         bail!("cannot send to {}: {}", chat_id, reason);
     }
-    curr_timestamp = create_smeared_timestamps(context, msg_ids.len()).await;
+    curr_timestamp = create_smeared_timestamps(context, msg_ids.len());
     let ids = context
         .sql
         .query_map(

@@ -3560,7 +3562,7 @@ pub async fn add_device_msg_with_importance(
     msg.try_calc_and_set_dimensions(context).await.ok();
     prepare_msg_blob(context, msg).await?;

-    let timestamp_sent = create_smeared_timestamp(context).await;
+    let timestamp_sent = create_smeared_timestamp(context);

     // makes sure, the added message is the last one,
     // even if the date is wrong (useful esp. when warning about bad dates)

@@ -4088,7 +4090,6 @@ mod tests {
         send_text_msg(&alice, alice_chat_id, "populate".to_string()).await?;

         add_contact_to_chat(&alice, alice_chat_id, bob_id).await?;
-        tokio::time::sleep(std::time::Duration::from_millis(1100)).await;
         let add1 = alice.pop_sent_msg().await;

         add_contact_to_chat(&alice, alice_chat_id, claire_id).await?;

@@ -4107,29 +4108,18 @@
         remove_contact_from_chat(&alice, alice_chat_id, daisy_id).await?;
         let remove2 = alice.pop_sent_msg().await;
-        tokio::time::sleep(std::time::Duration::from_millis(1100)).await;

         assert_eq!(get_chat_contacts(&alice, alice_chat_id).await?.len(), 2);

         // Bob receives the add and deletion messages out of order
         let bob = TestContext::new_bob().await;
         bob.recv_msg(&add1).await;
-        tokio::time::sleep(std::time::Duration::from_millis(1100)).await;
         bob.recv_msg(&add3).await;
-        tokio::time::sleep(std::time::Duration::from_millis(1100)).await;
         let bob_chat_id = bob.recv_msg(&add2).await.chat_id;
-        tokio::time::sleep(std::time::Duration::from_millis(1100)).await;
         assert_eq!(get_chat_contacts(&bob, bob_chat_id).await?.len(), 4);

         bob.recv_msg(&remove2).await;
-        tokio::time::sleep(std::time::Duration::from_millis(1100)).await;
         bob.recv_msg(&remove1).await;
-        tokio::time::sleep(std::time::Duration::from_millis(1100)).await;
         assert_eq!(get_chat_contacts(&bob, bob_chat_id).await?.len(), 2);

         Ok(())
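The repeated `.await` removals in this file come from `create_smeared_timestamp` becoming a plain synchronous call backed by a `SmearedTimestamp` value (see the `context.rs` changes further down). The idea behind timestamp smearing can be sketched in a few lines; this is an illustration of the concept only, not the actual Rust implementation, and while the constant's name `MAX_SECONDS_TO_LEND_FROM_FUTURE` appears in the diff, its value here is assumed:

```python
import threading

MAX_SECONDS_TO_LEND_FROM_FUTURE = 30  # name from the diff; value assumed

class SmearedTimestamp:
    """Hand out strictly increasing timestamps, borrowing at most a
    bounded number of seconds from the future (illustration only)."""

    def __init__(self):
        self._last = 0
        self._lock = threading.Lock()

    def create(self, now: int) -> int:
        with self._lock:
            # never go backwards; bump by one second per message,
            # but never run past the allowed lend window
            smeared = max(now, self._last + 1)
            smeared = min(smeared, now + MAX_SECONDS_TO_LEND_FROM_FUTURE)
            self._last = smeared
            return smeared

ts = SmearedTimestamp()
print(ts.create(100))  # 100
print(ts.create(100))  # 101
print(ts.create(100))  # 102
```

Guarding the state with an ordinary lock instead of an async `RwLock` is what lets callers drop the `.await`: the critical section is a few integer operations, so there is no reason to yield to the executor.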


@@ -300,6 +300,9 @@ pub enum Config {
     /// See `crate::authres::update_authservid_candidates`.
     AuthservIdCandidates,

+    /// Make all outgoing messages with Autocrypt header "multipart/signed".
+    SignUnencrypted,
+
     /// Let the core save all events to the database.
     /// This value is used internally to remember the MsgId of the logging xdc
     #[strum(props(default = "0"))]


@@ -646,10 +646,14 @@ async fn try_smtp_one_param(
     }
 }

+/// Failure to connect and login with email client configuration.
 #[derive(Debug, thiserror::Error)]
 #[error("Trying {config}…\nError: {msg}")]
 pub struct ConfigurationError {
+    /// Tried configuration description.
     config: String,
+
+    /// Error message.
     msg: String,
 }


@@ -190,11 +190,11 @@ pub const DC_LP_AUTH_NORMAL: i32 = 0x4;
 pub const DC_LP_AUTH_FLAGS: i32 = DC_LP_AUTH_OAUTH2 | DC_LP_AUTH_NORMAL;

 /// How many existing messages shall be fetched after configuration.
-pub const DC_FETCH_EXISTING_MSGS_COUNT: i64 = 100;
+pub(crate) const DC_FETCH_EXISTING_MSGS_COUNT: i64 = 100;

 // max. width/height of an avatar
-pub const BALANCED_AVATAR_SIZE: u32 = 256;
-pub const WORSE_AVATAR_SIZE: u32 = 128;
+pub(crate) const BALANCED_AVATAR_SIZE: u32 = 256;
+pub(crate) const WORSE_AVATAR_SIZE: u32 = 128;

 // max. width/height of images
 pub const BALANCED_IMAGE_SIZE: u32 = 1280;


@@ -4,6 +4,7 @@ use std::collections::{BTreeMap, HashMap};
 use std::ffi::OsString;
 use std::ops::Deref;
 use std::path::{Path, PathBuf};
+use std::sync::atomic::AtomicBool;
 use std::sync::Arc;
 use std::time::{Duration, Instant, SystemTime};

@@ -26,6 +27,7 @@ use crate::quota::QuotaInfo;
 use crate::scheduler::Scheduler;
 use crate::sql::Sql;
 use crate::stock_str::StockStrings;
+use crate::timesmearing::SmearedTimestamp;
 use crate::tools::{duration_to_str, time};

 /// Builder for the [`Context`].

@@ -188,7 +190,7 @@ pub struct InnerContext {
     /// Blob directory path
     pub(crate) blobdir: PathBuf,
     pub(crate) sql: Sql,
-    pub(crate) last_smeared_timestamp: RwLock<i64>,
+    pub(crate) smeared_timestamp: SmearedTimestamp,
     running_state: RwLock<RunningState>,
     /// Mutex to avoid generating the key for the user more than once.
     pub(crate) generating_key_mutex: Mutex<()>,

@@ -206,6 +208,9 @@ pub struct InnerContext {
     /// Set to `None` if quota was never tried to load.
     pub(crate) quota: RwLock<Option<QuotaInfo>>,

+    /// Set to true if quota update is requested.
+    pub(crate) quota_update_request: AtomicBool,
+
     /// Server ID response if ID capability is supported
     /// and the server returned non-NIL on the inbox connection.
     /// <https://datatracker.ietf.org/doc/html/rfc2971>

@@ -356,7 +361,7 @@ impl Context {
             blobdir,
             running_state: RwLock::new(Default::default()),
             sql: Sql::new(dbfile),
-            last_smeared_timestamp: RwLock::new(0),
+            smeared_timestamp: SmearedTimestamp::new(),
             generating_key_mutex: Mutex::new(()),
             oauth2_mutex: Mutex::new(()),
             wrong_pw_warning_mutex: Mutex::new(()),

@@ -365,6 +370,7 @@
             scheduler: RwLock::new(None),
             ratelimit: RwLock::new(Ratelimit::new(Duration::new(60, 0), 6.0)), // Allow to send 6 messages immediately, no more than once every 10 seconds.
             quota: RwLock::new(None),
+            quota_update_request: AtomicBool::new(false),
             server_id: RwLock::new(None),
             creation_time: std::time::SystemTime::now(),
             last_full_folder_scan: Mutex::new(None),

@@ -757,6 +763,12 @@
                 .await?
                 .unwrap_or_default(),
         );
+        res.insert(
+            "sign_unencrypted",
+            self.get_config_int(Config::SignUnencrypted)
+                .await?
+                .to_string(),
+        );
         res.insert(
             "debug_logging",


@@ -13,7 +13,6 @@ use crate::imap::{Imap, ImapActionResult};
 use crate::job::{self, Action, Job, Status};
 use crate::message::{Message, MsgId, Viewtype};
 use crate::mimeparser::{MimeMessage, Part};
-use crate::param::Params;
 use crate::tools::time;
 use crate::{job_try, stock_str, EventType};

@@ -86,11 +85,7 @@
             DownloadState::Available | DownloadState::Failure => {
                 self.update_download_state(context, DownloadState::InProgress)
                     .await?;
-                job::add(
-                    context,
-                    Job::new(Action::DownloadMsg, self.to_u32(), Params::new(), 0),
-                )
-                .await?;
+                job::add(context, Job::new(Action::DownloadMsg, self.to_u32())).await?;
             }
         }
         Ok(())


@@ -124,6 +124,19 @@ impl EncryptHelper {
         Ok(ctext)
     }

+    /// Signs the passed-in `mail` using the private key from `context`.
+    /// Returns the payload and the signature.
+    pub async fn sign(
+        self,
+        context: &Context,
+        mail: lettre_email::PartBuilder,
+    ) -> Result<(lettre_email::MimeMessage, String)> {
+        let sign_key = SignedSecretKey::load_self(context).await?;
+        let mime_message = mail.build();
+        let signature = pgp::pk_calc_signature(mime_message.as_string().as_bytes(), &sign_key)?;
+        Ok((mime_message, signature))
+    }
 }

 /// Ensures a private key exists for the configured user.
/// Ensures a private key exists for the configured user. /// Ensures a private key exists for the configured user.


@@ -650,7 +650,7 @@ mod tests {
     use crate::download::DownloadState;
     use crate::receive_imf::receive_imf;
     use crate::test_utils::TestContext;
-    use crate::tools::MAX_SECONDS_TO_LEND_FROM_FUTURE;
+    use crate::timesmearing::MAX_SECONDS_TO_LEND_FROM_FUTURE;
     use crate::{
         chat::{self, create_group_chat, send_text_msg, Chat, ChatItem, ProtectionStatus},
         tools::IsNoneOrEmpty,


@@ -116,6 +116,8 @@ impl async_imap::Authenticator for OAuth2 {
 #[derive(Debug, Display, PartialEq, Eq, Clone, Copy)]
 pub enum FolderMeaning {
     Unknown,
+
+    /// Spam folder.
     Spam,
     Inbox,
     Mvbox,

@@ -149,8 +151,11 @@ impl FolderMeaning {
 #[derive(Debug)]
 struct ImapConfig {
+    /// Email address.
     pub addr: String,
     pub lp: ServerLoginParam,
+
+    /// SOCKS 5 configuration.
     pub socks5_config: Option<Socks5Config>,
     pub strict_tls: bool,
 }


@@ -11,9 +11,9 @@ use tokio::io::BufWriter;
 use super::capabilities::Capabilities;
 use super::session::Session;
 use crate::context::Context;
-use crate::login_param::build_tls;
 use crate::net::connect_tcp;
 use crate::net::session::SessionStream;
+use crate::net::tls::wrap_tls;
 use crate::socks::Socks5Config;

 /// IMAP write and read timeout.

@@ -95,8 +95,7 @@ impl Client {
         strict_tls: bool,
     ) -> Result<Self> {
         let tcp_stream = connect_tcp(context, hostname, port, IMAP_TIMEOUT, strict_tls).await?;
-        let tls = build_tls(strict_tls);
-        let tls_stream = tls.connect(hostname, tcp_stream).await?;
+        let tls_stream = wrap_tls(strict_tls, hostname, tcp_stream).await?;
         let buffered_stream = BufWriter::new(tls_stream);
         let session_stream: Box<dyn SessionStream> = Box::new(buffered_stream);
         let mut client = ImapClient::new(session_stream);

@@ -142,9 +141,7 @@
             .context("STARTTLS command failed")?;

         let tcp_stream = client.into_inner();
-        let tls = build_tls(strict_tls);
-        let tls_stream = tls
-            .connect(hostname, tcp_stream)
+        let tls_stream = wrap_tls(strict_tls, hostname, tcp_stream)
             .await
             .context("STARTTLS upgrade failed")?;

@@ -165,8 +162,7 @@
         let socks5_stream = socks5_config
             .connect(context, domain, port, IMAP_TIMEOUT, strict_tls)
             .await?;
-        let tls = build_tls(strict_tls);
-        let tls_stream = tls.connect(domain, socks5_stream).await?;
+        let tls_stream = wrap_tls(strict_tls, domain, socks5_stream).await?;
         let buffered_stream = BufWriter::new(tls_stream);
         let session_stream: Box<dyn SessionStream> = Box::new(buffered_stream);
         let mut client = ImapClient::new(session_stream);

@@ -221,9 +217,7 @@
             .context("STARTTLS command failed")?;

         let socks5_stream = client.into_inner();
-        let tls = build_tls(strict_tls);
-        let tls_stream = tls
-            .connect(hostname, socks5_stream)
+        let tls_stream = wrap_tls(strict_tls, hostname, socks5_stream)
             .await
             .context("STARTTLS upgrade failed")?;
         let buffered_stream = BufWriter::new(tls_stream);


@@ -1,3 +1,5 @@
+//! # IMAP folder selection module.
+
 use anyhow::Context as _;

 use super::session::Session as ImapSession;

View File

@@ -13,7 +13,6 @@ use rand::{thread_rng, Rng};
 use crate::context::Context;
 use crate::imap::{get_folder_meaning, FolderMeaning, Imap};
-use crate::param::Params;
 use crate::scheduler::InterruptInfo;
 use crate::tools::time;
@@ -58,9 +57,6 @@ macro_rules! job_try {
 )]
 #[repr(u32)]
 pub enum Action {
-    // this is user initiated so it should have a fairly high priority
-    UpdateRecentQuota = 140,
-
     // This job will download partially downloaded messages completely
     // and is added when download_full() is called.
     // Most messages are downloaded automatically on fetch
@@ -80,7 +76,6 @@ pub struct Job {
     pub desired_timestamp: i64,
     pub added_timestamp: i64,
    pub tries: u32,
-    pub param: Params,
 }

 impl fmt::Display for Job {
@@ -90,24 +85,19 @@ impl fmt::Display for Job {
 }

 impl Job {
-    pub fn new(action: Action, foreign_id: u32, param: Params, delay_seconds: i64) -> Self {
+    pub fn new(action: Action, foreign_id: u32) -> Self {
         let timestamp = time();

         Self {
             job_id: 0,
             action,
             foreign_id,
-            desired_timestamp: timestamp + delay_seconds,
+            desired_timestamp: timestamp,
             added_timestamp: timestamp,
             tries: 0,
-            param,
         }
     }

-    pub fn delay_seconds(&self) -> i64 {
-        self.desired_timestamp - self.added_timestamp
-    }
-
     /// Deletes the job from the database.
     async fn delete(self, context: &Context) -> Result<()> {
         if self.job_id != 0 {
@@ -130,23 +120,21 @@ impl Job {
             context
                 .sql
                 .execute(
-                    "UPDATE jobs SET desired_timestamp=?, tries=?, param=? WHERE id=?;",
+                    "UPDATE jobs SET desired_timestamp=?, tries=? WHERE id=?;",
                     paramsv![
                         self.desired_timestamp,
                         i64::from(self.tries),
-                        self.param.to_string(),
                         self.job_id as i32,
                     ],
                 )
                 .await?;
         } else {
             context.sql.execute(
-                "INSERT INTO jobs (added_timestamp, action, foreign_id, param, desired_timestamp) VALUES (?,?,?,?,?);",
+                "INSERT INTO jobs (added_timestamp, action, foreign_id, desired_timestamp) VALUES (?,?,?,?);",
                 paramsv![
                     self.added_timestamp,
                     self.action,
                     self.foreign_id,
-                    self.param.to_string(),
                     self.desired_timestamp
                 ]
             ).await?;
@@ -202,17 +190,6 @@ pub async fn kill_action(context: &Context, action: Action) -> Result<()> {
     Ok(())
 }

-pub async fn action_exists(context: &Context, action: Action) -> Result<bool> {
-    let exists = context
-        .sql
-        .exists(
-            "SELECT COUNT(*) FROM jobs WHERE action=?;",
-            paramsv![action],
-        )
-        .await?;
-    Ok(exists)
-}
-
 pub(crate) enum Connection<'a> {
     Inbox(&'a mut Imap),
 }
@@ -240,7 +217,7 @@ pub(crate) async fn perform_job(context: &Context, mut connection: Connection<'_
     if tries < JOB_RETRIES {
         info!(context, "increase job {} tries to {}", job, tries);
         job.tries = tries;
-        let time_offset = get_backoff_time_offset(tries, job.action);
+        let time_offset = get_backoff_time_offset(tries);
         job.desired_timestamp = time() + time_offset;
         info!(
             context,
@@ -289,10 +266,6 @@ async fn perform_job_action(
     let try_res = match job.action {
         Action::ResyncFolders => job.resync_folders(context, connection.inbox()).await,
-        Action::UpdateRecentQuota => match context.update_recent_quota(connection.inbox()).await {
-            Ok(status) => status,
-            Err(err) => Status::Finished(Err(err)),
-        },
         Action::DownloadMsg => job.download_msg(context, connection.inbox()).await,
     };
@@ -301,13 +274,7 @@
     try_res
 }

-fn get_backoff_time_offset(tries: u32, action: Action) -> i64 {
-    match action {
-        // Just try every 10s to update the quota
-        // If all retries are exhausted, a new job will be created when the quota information is needed
-        Action::UpdateRecentQuota => 10,
-
-        _ => {
-            // Exponential backoff
-            let n = 2_i32.pow(tries - 1) * 60;
-            let mut rng = thread_rng();
+fn get_backoff_time_offset(tries: u32) -> i64 {
+    // Exponential backoff
+    let n = 2_i32.pow(tries - 1) * 60;
+    let mut rng = thread_rng();
@@ -318,33 +285,19 @@ fn get_backoff_time_offset(tries: u32, action: Action) -> i64 {
     }
     i64::from(seconds)
-        }
-    }
 }

 pub(crate) async fn schedule_resync(context: &Context) -> Result<()> {
     kill_action(context, Action::ResyncFolders).await?;
-    add(
-        context,
-        Job::new(Action::ResyncFolders, 0, Params::new(), 0),
-    )
-    .await?;
+    add(context, Job::new(Action::ResyncFolders, 0)).await?;
     Ok(())
 }

 /// Adds a job to the database, scheduling it.
 pub async fn add(context: &Context, job: Job) -> Result<()> {
-    let action = job.action;
-    let delay_seconds = job.delay_seconds();
     job.save(context).await.context("failed to save job")?;

-    if delay_seconds == 0 {
-        match action {
-            Action::ResyncFolders | Action::UpdateRecentQuota | Action::DownloadMsg => {
-                info!(context, "interrupt: imap");
-                context.interrupt_inbox(InterruptInfo::new(false)).await;
-            }
-        }
-    }
+    info!(context, "interrupt: imap");
+    context.interrupt_inbox(InterruptInfo::new(false)).await;

     Ok(())
 }
@@ -396,7 +349,6 @@ LIMIT 1;
             desired_timestamp: row.get("desired_timestamp")?,
             added_timestamp: row.get("added_timestamp")?,
             tries: row.get("tries")?,
-            param: row.get::<_, String>("param")?.parse().unwrap_or_default(),
         };

         Ok(job)
@@ -436,8 +388,8 @@ mod tests {
             .sql
             .execute(
                 "INSERT INTO jobs
-                   (added_timestamp, action, foreign_id, param, desired_timestamp)
-                 VALUES (?, ?, ?, ?, ?);",
+                   (added_timestamp, action, foreign_id, desired_timestamp)
+                 VALUES (?, ?, ?, ?);",
                 paramsv![
                     now,
                     if valid {
@@ -446,7 +398,6 @@ mod tests {
                         -1
                     },
                     foreign_id,
-                    Params::new().to_string(),
                    now
                 ],
            )
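The retry schedule in `get_backoff_time_offset` above is plain exponential backoff (the real code additionally draws random jitter from `thread_rng`). A std-only sketch of the deterministic bound, with the hypothetical helper name `backoff_seconds`:

```rust
// Deterministic bound of the job-retry backoff: 2^(tries-1) * 60 seconds.
// The real implementation picks a random value up to this bound as jitter.
fn backoff_seconds(tries: u32) -> i64 {
    assert!(tries >= 1, "first retry is tries == 1");
    2_i64.pow(tries - 1) * 60
}

fn main() {
    // First retry within a minute, then the window doubles each time.
    assert_eq!(backoff_seconds(1), 60);
    assert_eq!(backoff_seconds(2), 120);
    assert_eq!(backoff_seconds(3), 240);
    println!("ok");
}
```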


@@ -92,6 +92,7 @@ mod smtp;
 mod socks;
 pub mod stock_str;
 mod sync;
+mod timesmearing;
 mod token;
 mod update_helper;
 pub mod webxdc;
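The new `timesmearing` module backs the now-synchronous `create_smeared_timestamp`/`smeared_time` calls seen later in this diff: it keeps issued timestamps strictly increasing so message order survives a stalled or backwards-jumping clock. A minimal sketch of the idea with hypothetical names, not the module's actual API (the real module also bounds the allowed skew):

```rust
// Never hand out a timestamp less than or equal to the previous one.
struct Smear {
    last: i64, // last issued timestamp
}

impl Smear {
    fn smeared_timestamp(&mut self, now: i64) -> i64 {
        // If the clock did not advance (or went backwards), bump by one.
        let ts = if now > self.last { now } else { self.last + 1 };
        self.last = ts;
        ts
    }
}

fn main() {
    let mut s = Smear { last: 0 };
    assert_eq!(s.smeared_timestamp(100), 100);
    assert_eq!(s.smeared_timestamp(100), 101); // same second: bump by one
    assert_eq!(s.smeared_timestamp(99), 102); // clock went backwards: keep increasing
    println!("ok");
}
```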


@@ -3,8 +3,6 @@
 use std::fmt;

 use anyhow::{ensure, Result};
-use async_native_tls::Certificate;
-use once_cell::sync::Lazy;

 use crate::constants::{DC_LP_AUTH_FLAGS, DC_LP_AUTH_NORMAL, DC_LP_AUTH_OAUTH2};
 use crate::provider::{get_provider_by_id, Provider};
@@ -306,28 +304,6 @@ fn unset_empty(s: &str) -> &str {
     }
 }

-// this certificate is missing on older android devices (eg. lg with android6 from 2017)
-// certificate downloaded from https://letsencrypt.org/certificates/
-static LETSENCRYPT_ROOT: Lazy<Certificate> = Lazy::new(|| {
-    Certificate::from_der(include_bytes!(
-        "../assets/root-certificates/letsencrypt/isrgrootx1.der"
-    ))
-    .unwrap()
-});
-
-pub fn build_tls(strict_tls: bool) -> async_native_tls::TlsConnector {
-    let tls_builder =
-        async_native_tls::TlsConnector::new().add_root_certificate(LETSENCRYPT_ROOT.clone());
-    if strict_tls {
-        tls_builder
-    } else {
-        tls_builder
-            .danger_accept_invalid_hostnames(true)
-            .danger_accept_invalid_certs(true)
-    }
-}
-
 #[cfg(test)]
 mod tests {
     use super::*;
@@ -378,13 +354,4 @@ mod tests {
         assert_eq!(param, loaded);
         Ok(())
     }
-
-    #[tokio::test(flavor = "multi_thread", worker_threads = 2)]
-    async fn test_build_tls() -> Result<()> {
-        // we are using some additional root certificates.
-        // make sure, they do not break construction of TlsConnector
-        let _ = build_tls(true);
-        let _ = build_tls(false);
-        Ok(())
-    }
 }


@@ -1771,12 +1771,7 @@ async fn ndn_maybe_add_info_msg(
             // Tell the user which of the recipients failed if we know that (because in
             // a group, this might otherwise be unclear)
             let text = stock_str::failed_sending_to(context, contact.get_display_name()).await;
-            chat::add_info_msg(
-                context,
-                chat_id,
-                &text,
-                create_smeared_timestamp(context).await,
-            )
+            chat::add_info_msg(context, chat_id, &text, create_smeared_timestamp(context))
                 .await?;
             context.emit_event(EventType::ChatModified(chat_id));
         }


@@ -250,7 +250,7 @@ impl<'a> MimeFactory<'a> {
             .get_config(Config::Selfstatus)
             .await?
             .unwrap_or_default();
-        let timestamp = create_smeared_timestamp(context).await;
+        let timestamp = create_smeared_timestamp(context);

         let res = MimeFactory::<'a> {
             from_addr,
@@ -779,10 +779,36 @@ impl<'a> MimeFactory<'a> {
             };

             // Store protected headers in the outer message.
-            headers
+            let message = headers
                 .protected
                 .into_iter()
-                .fold(message, |message, header| message.header(header))
+                .fold(message, |message, header| message.header(header));
+
+            if self.should_skip_autocrypt()
+                || !context.get_config_bool(Config::SignUnencrypted).await?
+            {
+                message
+            } else {
+                let (payload, signature) = encrypt_helper.sign(context, message).await?;
+                PartBuilder::new()
+                    .header((
+                        "Content-Type".to_string(),
+                        "multipart/signed; protocol=\"application/pgp-signature\"".to_string(),
+                    ))
+                    .child(payload)
+                    .child(
+                        PartBuilder::new()
+                            .content_type(
+                                &"application/pgp-signature; name=\"signature.asc\""
+                                    .parse::<mime::Mime>()
+                                    .unwrap(),
+                            )
+                            .header(("Content-Description", "OpenPGP digital signature"))
+                            .header(("Content-Disposition", "attachment; filename=\"signature\";"))
+                            .body(signature)
+                            .build(),
+                    )
+            }
         };

         // Store the unprotected headers on the outer message.
@@ -2140,6 +2166,96 @@ mod tests {
         Ok(())
     }

+    #[tokio::test(flavor = "multi_thread", worker_threads = 2)]
+    async fn test_selfavatar_unencrypted_signed() {
+        // create chat with bob, set selfavatar
+        let t = TestContext::new_alice().await;
+        t.set_config(Config::SignUnencrypted, Some("1"))
+            .await
+            .unwrap();
+        let chat = t.create_chat_with_contact("bob", "bob@example.org").await;
+        let file = t.dir.path().join("avatar.png");
+        let bytes = include_bytes!("../test-data/image/avatar64x64.png");
+        tokio::fs::write(&file, bytes).await.unwrap();
+        t.set_config(Config::Selfavatar, Some(file.to_str().unwrap()))
+            .await
+            .unwrap();
+
+        // send message to bob: that should get multipart/mixed because of the avatar moved to inner header;
+        // make sure, `Subject:` stays in the outer header (imf header)
+        let mut msg = Message::new(Viewtype::Text);
+        msg.set_text(Some("this is the text!".to_string()));
+
+        let sent_msg = t.send_msg(chat.id, &mut msg).await;
+        let mut payload = sent_msg.payload().splitn(4, "\r\n\r\n");
+
+        let part = payload.next().unwrap();
+        assert_eq!(part.match_indices("multipart/signed").count(), 1);
+        assert_eq!(part.match_indices("Subject:").count(), 0);
+        assert_eq!(part.match_indices("Autocrypt:").count(), 1);
+        assert_eq!(part.match_indices("Chat-User-Avatar:").count(), 0);
+
+        let part = payload.next().unwrap();
+        assert_eq!(part.match_indices("multipart/mixed").count(), 1);
+        assert_eq!(part.match_indices("Subject:").count(), 1);
+        assert_eq!(part.match_indices("Autocrypt:").count(), 0);
+        assert_eq!(part.match_indices("Chat-User-Avatar:").count(), 0);
+
+        let part = payload.next().unwrap();
+        assert_eq!(part.match_indices("text/plain").count(), 1);
+        assert_eq!(part.match_indices("Chat-User-Avatar:").count(), 1);
+        assert_eq!(part.match_indices("Subject:").count(), 0);
+
+        let body = payload.next().unwrap();
+        assert_eq!(body.match_indices("this is the text!").count(), 1);
+
+        let bob = TestContext::new_bob().await;
+        bob.recv_msg(&sent_msg).await;
+        let alice_id = Contact::lookup_id_by_addr(&bob.ctx, "alice@example.org", Origin::Unknown)
+            .await
+            .unwrap()
+            .unwrap();
+        let alice_contact = Contact::load_from_db(&bob.ctx, alice_id).await.unwrap();
+        assert!(alice_contact
+            .get_profile_image(&bob.ctx)
+            .await
+            .unwrap()
+            .is_some());
+
+        // if another message is sent, that one must not contain the avatar
+        // and no artificial multipart/mixed nesting
+        let sent_msg = t.send_msg(chat.id, &mut msg).await;
+        let mut payload = sent_msg.payload().splitn(3, "\r\n\r\n");
+
+        let part = payload.next().unwrap();
+        assert_eq!(part.match_indices("multipart/signed").count(), 1);
+        assert_eq!(part.match_indices("Subject:").count(), 0);
+        assert_eq!(part.match_indices("Autocrypt:").count(), 1);
+        assert_eq!(part.match_indices("Chat-User-Avatar:").count(), 0);
+
+        let part = payload.next().unwrap();
+        assert_eq!(part.match_indices("text/plain").count(), 1);
+        assert_eq!(part.match_indices("Subject:").count(), 1);
+        assert_eq!(part.match_indices("Autocrypt:").count(), 0);
+        assert_eq!(part.match_indices("multipart/mixed").count(), 0);
+        assert_eq!(part.match_indices("Chat-User-Avatar:").count(), 0);
+
+        let body = payload.next().unwrap();
+        assert_eq!(body.match_indices("this is the text!").count(), 1);
+        assert_eq!(body.match_indices("text/plain").count(), 0);
+        assert_eq!(body.match_indices("Chat-User-Avatar:").count(), 0);
+        assert_eq!(body.match_indices("Subject:").count(), 0);
+
+        bob.recv_msg(&sent_msg).await;
+        let alice_contact = Contact::load_from_db(&bob.ctx, alice_id).await.unwrap();
+        assert!(alice_contact
+            .get_profile_image(&bob.ctx)
+            .await
+            .unwrap()
+            .is_some());
+    }
+
     /// Test that removed member address does not go into the `To:` field.
     #[tokio::test(flavor = "multi_thread", worker_threads = 2)]
     async fn test_remove_member_bcc() -> Result<()> {
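The `multipart/signed` container built in the hunk above wraps the original message as the first child and a detached `application/pgp-signature` part as the second; that two-child shape is what the test's `splitn(.., "\r\n\r\n")` assertions probe. A rough string-level sketch of the layout (the boundary string and the `signed_layout` helper are illustrative, not the crate's API):

```rust
// Illustrative shape of a signed-but-unencrypted message:
// outer multipart/signed, first child = payload, second child = signature.
fn signed_layout(inner: &str, signature: &str) -> String {
    let b = "boundary42"; // hypothetical MIME boundary
    format!(
        "Content-Type: multipart/signed; protocol=\"application/pgp-signature\"; boundary=\"{b}\"\r\n\r\n--{b}\r\n{inner}\r\n--{b}\r\nContent-Type: application/pgp-signature; name=\"signature.asc\"\r\n\r\n{signature}\r\n--{b}--\r\n"
    )
}

fn main() {
    let msg = signed_layout(
        "Content-Type: text/plain\r\n\r\nthis is the text!",
        "-----BEGIN PGP SIGNATURE-----",
    );
    // Outer header names the container once; "application/pgp-signature"
    // appears as the protocol and again as the signature part's type.
    assert_eq!(msg.match_indices("multipart/signed").count(), 1);
    assert_eq!(msg.match_indices("application/pgp-signature").count(), 2);
    assert_eq!(msg.match_indices("this is the text!").count(), 1);
    println!("ok");
}
```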


@@ -224,8 +224,32 @@ impl MimeMessage {
         // Parse hidden headers.
         let mimetype = mail.ctype.mimetype.parse::<Mime>()?;
-        if mimetype.type_() == mime::MULTIPART && mimetype.subtype().as_str() == "mixed" {
-            if let Some(part) = mail.subparts.first() {
+        let (part, mimetype) =
+            if mimetype.type_() == mime::MULTIPART && mimetype.subtype().as_str() == "signed" {
+                if let Some(part) = mail.subparts.first() {
+                    // We don't remove "subject" from `headers` because currently just signed
+                    // messages are shown as unencrypted anyway.
+                    MimeMessage::merge_headers(
+                        context,
+                        &mut headers,
+                        &mut recipients,
+                        &mut from,
+                        &mut list_post,
+                        &mut chat_disposition_notification_to,
+                        &part.headers,
+                    );
+                    (part, part.ctype.mimetype.parse::<Mime>()?)
+                } else {
+                    // If it's a partially fetched message, there are no subparts.
+                    (&mail, mimetype)
+                }
+            } else {
+                // Currently we do not sign unencrypted messages by default.
+                (&mail, mimetype)
+            };
+
+        if mimetype.type_() == mime::MULTIPART && mimetype.subtype().as_str() == "mixed" {
+            if let Some(part) = part.subparts.first() {
                 for field in &part.headers {
                     let key = field.get_key().to_lowercase();
@@ -256,8 +280,9 @@ impl MimeMessage {
         hop_info += &decryption_info.dkim_results.to_string();

         let public_keyring = keyring_from_peerstate(decryption_info.peerstate.as_ref());
-        let (mail, mut signatures, encrypted) =
-            match try_decrypt(context, &mail, &private_keyring, &public_keyring) {
+        let (mail, mut signatures, encrypted) = match tokio::task::block_in_place(|| {
+            try_decrypt(context, &mail, &private_keyring, &public_keyring)
+        }) {
             Ok(Some((raw, signatures))) => {
                 mail_raw = raw;
                 let decrypted_mail = mailparse::parse_mail(&mail_raw)?;


@@ -13,6 +13,7 @@ use crate::context::Context;
 use crate::tools::time;

 pub(crate) mod session;
+pub(crate) mod tls;

 async fn connect_tcp_inner(addr: SocketAddr, timeout_val: Duration) -> Result<TcpStream> {
     let tcp_stream = timeout(timeout_val, TcpStream::connect(addr))

src/net/tls.rs (new file, 50 lines)

@@ -0,0 +1,50 @@
+//! TLS support.
+use anyhow::Result;
+use async_native_tls::{Certificate, TlsConnector, TlsStream};
+use once_cell::sync::Lazy;
+use tokio::io::{AsyncRead, AsyncWrite};
+
+// this certificate is missing on older android devices (eg. lg with android6 from 2017)
+// certificate downloaded from https://letsencrypt.org/certificates/
+static LETSENCRYPT_ROOT: Lazy<Certificate> = Lazy::new(|| {
+    Certificate::from_der(include_bytes!(
+        "../../assets/root-certificates/letsencrypt/isrgrootx1.der"
+    ))
+    .unwrap()
+});
+
+pub fn build_tls(strict_tls: bool) -> TlsConnector {
+    let tls_builder = TlsConnector::new().add_root_certificate(LETSENCRYPT_ROOT.clone());
+    if strict_tls {
+        tls_builder
+    } else {
+        tls_builder
+            .danger_accept_invalid_hostnames(true)
+            .danger_accept_invalid_certs(true)
+    }
+}
+
+pub async fn wrap_tls<T: AsyncRead + AsyncWrite + Unpin>(
+    strict_tls: bool,
+    hostname: &str,
+    stream: T,
+) -> Result<TlsStream<T>> {
+    let tls = build_tls(strict_tls);
+    let tls_stream = tls.connect(hostname, stream).await?;
+    Ok(tls_stream)
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_build_tls() {
+        // we are using some additional root certificates.
+        // make sure, they do not break construction of TlsConnector
+        let _ = build_tls(true);
+        let _ = build_tls(false);
+    }
+}


@@ -262,6 +262,20 @@ pub async fn pk_encrypt(
     .await?
 }

+/// Signs `plain` text using `private_key_for_signing`.
+pub fn pk_calc_signature(
+    plain: &[u8],
+    private_key_for_signing: &SignedSecretKey,
+) -> Result<String> {
+    let msg = Message::new_literal_bytes("", plain).sign(
+        private_key_for_signing,
+        || "".into(),
+        Default::default(),
+    )?;
+    let signature = msg.into_signature().to_armored_string(None)?;
+    Ok(signature)
+}
+
 /// Decrypts the message with keys from the private key keyring.
 ///
 /// Receiver private keys are provided in


@@ -473,11 +473,15 @@ fn decode_webrtc_instance(_context: &Context, qr: &str) -> Result<Qr> {
 #[derive(Debug, Deserialize)]
 struct CreateAccountSuccessResponse {
+    /// Email address.
     email: String,
+
+    /// Password.
     password: String,
 }

 #[derive(Debug, Deserialize)]
 struct CreateAccountErrorResponse {
+    /// Reason for the failure to create account returned by the server.
     reason: String,
 }


@@ -1,6 +1,7 @@
 //! # Support for IMAP QUOTA extension.

 use std::collections::BTreeMap;
+use std::sync::atomic::Ordering;

 use anyhow::{anyhow, Context as _, Result};
 use async_imap::types::{Quota, QuotaResource};
@@ -11,11 +12,10 @@ use crate::context::Context;
 use crate::imap::scan_folders::get_watched_folders;
 use crate::imap::session::Session as ImapSession;
 use crate::imap::Imap;
-use crate::job::{Action, Status};
 use crate::message::{Message, Viewtype};
-use crate::param::Params;
+use crate::scheduler::InterruptInfo;
 use crate::tools::time;
-use crate::{job, stock_str, EventType};
+use crate::{stock_str, EventType};

 /// warn about a nearly full mailbox after this usage percentage is reached.
 /// quota icon is "yellow".
@@ -112,12 +112,10 @@ pub fn needs_quota_warning(curr_percentage: u64, warned_at_percentage: u64) -> b
 impl Context {
     // Adds a job to update `quota.recent`
     pub(crate) async fn schedule_quota_update(&self) -> Result<()> {
-        if !job::action_exists(self, Action::UpdateRecentQuota).await? {
-            job::add(
-                self,
-                job::Job::new(Action::UpdateRecentQuota, 0, Params::new(), 0),
-            )
-            .await?;
+        let requested = self.quota_update_request.swap(true, Ordering::Relaxed);
+        if !requested {
+            // Quota update was not requested before.
+            self.interrupt_inbox(InterruptInfo::new(false)).await;
         }
         Ok(())
     }
@@ -132,10 +130,10 @@
     /// and new space is allocated as needed.
     ///
     /// Called in response to `Action::UpdateRecentQuota`.
-    pub(crate) async fn update_recent_quota(&self, imap: &mut Imap) -> Result<Status> {
+    pub(crate) async fn update_recent_quota(&self, imap: &mut Imap) -> Result<()> {
         if let Err(err) = imap.prepare(self).await {
             warn!(self, "could not connect: {:#}", err);
-            return Ok(Status::RetryNow);
+            return Ok(());
         }

         let session = imap.session.as_mut().context("no session")?;
@@ -166,13 +164,16 @@
             }
         }

+        // Clear the request to update quota.
+        self.quota_update_request.store(false, Ordering::Relaxed);
+
         *self.quota.write().await = Some(QuotaInfo {
             recent: quota,
             modified: time(),
         });

         self.emit_event(EventType::ConnectivityChanged);
-        Ok(Status::Finished(Ok(())))
+        Ok(())
     }
 }
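The `quota_update_request` flag above replaces the old `UpdateRecentQuota` job with a coalescing request: the scheduling side does `swap(true)` and interrupts the inbox loop only on the first request, and the loop side takes the flag with `swap(false)`. A self-contained sketch of that handshake (hypothetical struct, std atomics only):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Sketch of the request-flag pattern: the requester sets the flag,
// the inbox loop takes (clears) it; duplicate requests are coalesced.
struct Ctx {
    quota_update_request: AtomicBool,
}

impl Ctx {
    /// Returns true if this call should interrupt the inbox loop,
    /// i.e. if the flag was not already set.
    fn schedule_quota_update(&self) -> bool {
        !self.quota_update_request.swap(true, Ordering::Relaxed)
    }

    /// Called by the loop: returns true if an update was requested,
    /// clearing the flag in the same atomic step.
    fn take_request(&self) -> bool {
        self.quota_update_request.swap(false, Ordering::Relaxed)
    }
}

fn main() {
    let ctx = Ctx { quota_update_request: AtomicBool::new(false) };
    assert!(ctx.schedule_quota_update()); // first request triggers interrupt
    assert!(!ctx.schedule_quota_update()); // duplicate is coalesced
    assert!(ctx.take_request()); // loop observes one pending request
    assert!(!ctx.take_request()); // flag is cleared afterwards
    println!("ok");
}
```

The single atomic `swap` avoids a check-then-set race between a requester and the loop: whichever side runs first, exactly one quota update is performed per burst of requests.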


@@ -203,7 +203,7 @@ pub(crate) async fn receive_imf_inner(
     )
     .await?;

-    let rcvd_timestamp = smeared_time(context).await;
+    let rcvd_timestamp = smeared_time(context);

     // Sender timestamp is allowed to be a bit in the future due to
     // unsynchronized clocks, but not too much.
@@ -1149,7 +1149,10 @@ async fn add_parts(
     // also change `MsgId::trash()` and `delete_expired_messages()`
     let trash = chat_id.is_trash() || (is_location_kml && msg.is_empty());

-    let row_id = context.sql.insert(
+    let row_id = context
+        .sql
+        .call(|conn| {
+            let mut stmt = conn.prepare_cached(
                 r#"
 INSERT INTO msgs
   (
@@ -1179,8 +1182,8 @@ SET rfc724_mid=excluded.rfc724_mid, chat_id=excluded.chat_id,
     bytes=excluded.bytes, mime_headers=excluded.mime_headers, mime_in_reply_to=excluded.mime_in_reply_to,
     mime_references=excluded.mime_references, mime_modified=excluded.mime_modified, error=excluded.error, ephemeral_timer=excluded.ephemeral_timer,
     ephemeral_timestamp=excluded.ephemeral_timestamp, download_state=excluded.download_state, hop_info=excluded.hop_info
-        "#,
-        paramsv![
+        "#)?;
+            stmt.execute(params![
                 replace_msg_id,
                 rfc724_mid,
                 if trash { DC_CHAT_ID_TRASH } else { chat_id },
@@ -1219,7 +1222,11 @@ SET rfc724_mid=excluded.rfc724_mid, chat_id=excluded.chat_id,
                     DownloadState::Done
                 },
                 mime_parser.hop_info
-    ]).await?;
+            ])?;
+            let row_id = conn.last_insert_rowid();
+            Ok(row_id)
+        })
+        .await?;

     // We only replace placeholder with a first part,
     // afterwards insert additional parts.
@@ -1373,7 +1380,7 @@ async fn calc_sort_timestamp(
         }
     }

-    Ok(min(sort_timestamp, smeared_time(context).await))
+    Ok(min(sort_timestamp, smeared_time(context)))
 }

 async fn lookup_chat_by_reply(


@@ -2133,6 +2133,76 @@ Original signature updated",
         Ok(())
     }

+    #[tokio::test(flavor = "multi_thread", worker_threads = 2)]
+    async fn test_ignore_old_status_updates() -> Result<()> {
+        let t = TestContext::new_alice().await;
+        let bob_id = Contact::add_or_lookup(
+            &t,
+            "",
+            ContactAddress::new("bob@example.net")?,
+            Origin::AddressBook,
+        )
+        .await?
+        .0;
+
+        receive_imf(
+            &t,
+            b"From: Bob <bob@example.net>
+To: Alice <alice@example.org>
+Message-ID: <2@example.org>
+Date: Wed, 22 Feb 2023 20:00:00 +0000
+
+body
+
+--
+sig wednesday",
+            false,
+        )
+        .await?;
+        let chat_id = t.get_last_msg().await.chat_id;
+        let bob = Contact::load_from_db(&t, bob_id).await?;
+        assert_eq!(bob.get_status(), "sig wednesday");
+        assert_eq!(get_chat_msgs(&t, chat_id).await?.len(), 1);
+
+        receive_imf(
+            &t,
+            b"From: Bob <bob@example.net>
+To: Alice <alice@example.org>
+Message-ID: <1@example.org>
+Date: Tue, 21 Feb 2023 20:00:00 +0000
+
+body
+
+--
+sig tuesday",
+            false,
+        )
+        .await?;
+        let bob = Contact::load_from_db(&t, bob_id).await?;
+        assert_eq!(bob.get_status(), "sig wednesday");
+        assert_eq!(get_chat_msgs(&t, chat_id).await?.len(), 2);
+
+        receive_imf(
+            &t,
+            b"From: Bob <bob@example.net>
+To: Alice <alice@example.org>
+Message-ID: <3@example.org>
+Date: Thu, 23 Feb 2023 20:00:00 +0000
+
+body
+
+--
+sig thursday",
+            false,
+        )
+        .await?;
+        let bob = Contact::load_from_db(&t, bob_id).await?;
+        assert_eq!(bob.get_status(), "sig thursday");
+        assert_eq!(get_chat_msgs(&t, chat_id).await?.len(), 3);
+
+        Ok(())
+    }
+
     #[tokio::test(flavor = "multi_thread", worker_threads = 2)]
     async fn test_chat_assignment_private_classical_reply() {
         for outgoing_is_classical in &[true, false] {


@@ -1,4 +1,5 @@
 use std::iter::{self, once};
+use std::sync::atomic::Ordering;

 use anyhow::{bail, Context as _, Result};
 use async_channel::{self as channel, Receiver, Sender};
@@ -128,6 +129,13 @@ async fn inbox_loop(ctx: Context, started: Sender<()>, inbox_handlers: ImapConne
                 info = Default::default();
             }
             None => {
+                let requested = ctx.quota_update_request.swap(false, Ordering::Relaxed);
+                if requested {
+                    if let Err(err) = ctx.update_recent_quota(&mut connection).await {
+                        warn!(ctx, "Failed to update quota: {:#}.", err);
+                    }
+                }
+
                 maybe_add_time_based_warnings(&ctx).await;

                 match ctx.get_config_i64(Config::LastHousekeeping).await {


@@ -13,12 +13,13 @@ use tokio::task;
use crate::config::Config; use crate::config::Config;
use crate::contact::{Contact, ContactId}; use crate::contact::{Contact, ContactId};
use crate::events::EventType; use crate::events::EventType;
use crate::login_param::{build_tls, CertificateChecks, LoginParam, ServerLoginParam}; use crate::login_param::{CertificateChecks, LoginParam, ServerLoginParam};
use crate::message::Message; use crate::message::Message;
use crate::message::{self, MsgId}; use crate::message::{self, MsgId};
use crate::mimefactory::MimeFactory; use crate::mimefactory::MimeFactory;
use crate::net::connect_tcp; use crate::net::connect_tcp;
use crate::net::session::SessionStream; use crate::net::session::SessionStream;
use crate::net::tls::wrap_tls;
use crate::oauth2::get_oauth2_access_token; use crate::oauth2::get_oauth2_access_token;
use crate::provider::Socket; use crate::provider::Socket;
use crate::socks::Socks5Config; use crate::socks::Socks5Config;
@@ -119,8 +120,7 @@ impl Smtp {
let socks5_stream = socks5_config let socks5_stream = socks5_config
.connect(context, hostname, port, SMTP_TIMEOUT, strict_tls) .connect(context, hostname, port, SMTP_TIMEOUT, strict_tls)
.await?; .await?;
let tls = build_tls(strict_tls); let tls_stream = wrap_tls(strict_tls, hostname, socks5_stream).await?;
let tls_stream = tls.connect(hostname, socks5_stream).await?;
let buffered_stream = BufWriter::new(tls_stream); let buffered_stream = BufWriter::new(tls_stream);
let session_stream: Box<dyn SessionStream> = Box::new(buffered_stream); let session_stream: Box<dyn SessionStream> = Box::new(buffered_stream);
let client = smtp::SmtpClient::new().smtp_utf8(true); let client = smtp::SmtpClient::new().smtp_utf8(true);
@@ -144,9 +144,7 @@ impl Smtp {
let client = smtp::SmtpClient::new().smtp_utf8(true); let client = smtp::SmtpClient::new().smtp_utf8(true);
let transport = SmtpTransport::new(client, socks5_stream).await?; let transport = SmtpTransport::new(client, socks5_stream).await?;
let tcp_stream = transport.starttls().await?; let tcp_stream = transport.starttls().await?;
let tls = build_tls(strict_tls); let tls_stream = wrap_tls(strict_tls, hostname, tcp_stream)
let tls_stream = tls
.connect(hostname, tcp_stream)
.await .await
.context("STARTTLS upgrade failed")?; .context("STARTTLS upgrade failed")?;
let buffered_stream = BufWriter::new(tls_stream); let buffered_stream = BufWriter::new(tls_stream);
@@ -181,8 +179,7 @@ impl Smtp {
strict_tls: bool, strict_tls: bool,
) -> Result<SmtpTransport<Box<dyn SessionStream>>> { ) -> Result<SmtpTransport<Box<dyn SessionStream>>> {
let tcp_stream = connect_tcp(context, hostname, port, SMTP_TIMEOUT, false).await?; let tcp_stream = connect_tcp(context, hostname, port, SMTP_TIMEOUT, false).await?;
let tls = build_tls(strict_tls); let tls_stream = wrap_tls(strict_tls, hostname, tcp_stream).await?;
let tls_stream = tls.connect(hostname, tcp_stream).await?;
let buffered_stream = BufWriter::new(tls_stream); let buffered_stream = BufWriter::new(tls_stream);
let session_stream: Box<dyn SessionStream> = Box::new(buffered_stream); let session_stream: Box<dyn SessionStream> = Box::new(buffered_stream);
let client = smtp::SmtpClient::new().smtp_utf8(true); let client = smtp::SmtpClient::new().smtp_utf8(true);
@@ -203,9 +200,7 @@ impl Smtp {
let client = smtp::SmtpClient::new().smtp_utf8(true); let client = smtp::SmtpClient::new().smtp_utf8(true);
let transport = SmtpTransport::new(client, tcp_stream).await?; let transport = SmtpTransport::new(client, tcp_stream).await?;
let tcp_stream = transport.starttls().await?; let tcp_stream = transport.starttls().await?;
let tls = build_tls(strict_tls); let tls_stream = wrap_tls(strict_tls, hostname, tcp_stream)
let tls_stream = tls
.connect(hostname, tcp_stream)
.await .await
.context("STARTTLS upgrade failed")?; .context("STARTTLS upgrade failed")?;
let buffered_stream = BufWriter::new(tls_stream); let buffered_stream = BufWriter::new(tls_stream);
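The hunks above fold the repeated `build_tls(strict_tls)` + `tls.connect(hostname, stream)` pair into a single `wrap_tls(strict_tls, hostname, stream)` call at every site. A standalone sketch of why such a helper removes the duplication; the `Tls` and `TlsStream` types here are simplified stand-ins, not the real crate API:

```rust
/// Stand-in for a TLS configuration builder (illustrative, not the real API).
struct Tls {
    strict: bool,
}

/// Stand-in for a TLS-wrapped stream.
struct TlsStream<S> {
    inner: S,
    hostname: String,
    strict: bool,
}

impl Tls {
    fn new(strict: bool) -> Self {
        Tls { strict }
    }

    fn connect<S>(&self, hostname: &str, stream: S) -> Result<TlsStream<S>, String> {
        Ok(TlsStream {
            inner: stream,
            hostname: hostname.to_string(),
            strict: self.strict,
        })
    }
}

/// The consolidated helper: each call site now needs one line
/// instead of building a config and connecting separately.
fn wrap_tls<S>(strict_tls: bool, hostname: &str, stream: S) -> Result<TlsStream<S>, String> {
    Tls::new(strict_tls).connect(hostname, stream)
}

fn main() {
    let tcp = "fake tcp stream";
    let tls = wrap_tls(true, "example.org", tcp).unwrap();
    assert_eq!(tls.hostname, "example.org");
    assert!(tls.strict);
    assert_eq!(tls.inner, "fake tcp stream");
}
```

The payoff is visible in the diff itself: both the direct-TLS and the STARTTLS paths shrink from three lines to one, and the strictness policy lives in a single place.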


@@ -983,7 +983,7 @@ mod tests {
assert_eq!(avatar_bytes, &tokio::fs::read(&a).await.unwrap()[..]); assert_eq!(avatar_bytes, &tokio::fs::read(&a).await.unwrap()[..]);
t.sql.close().await; t.sql.close().await;
housekeeping(&t).await.unwrap_err(); // housekeeping should fail as the db is closed housekeeping(&t).await.unwrap(); // housekeeping should emit warnings but not fail
t.sql.open(&t, "".to_string()).await.unwrap(); t.sql.open(&t, "".to_string()).await.unwrap();
let a = t.get_config(Config::Selfavatar).await.unwrap().unwrap(); let a = t.get_config(Config::Selfavatar).await.unwrap().unwrap();

src/timesmearing.rs Normal file

@@ -0,0 +1,193 @@
//! # Time smearing.
//!
//! As e-mails typically only use a second-based resolution for timestamps,
//! the order of two mails sent within one second is unclear.
//! This is bad e.g. when forwarding some messages from a chat:
//! these messages may easily appear out of order at the recipient.
//!
//! We work around this issue by never sending out two mails with the same timestamp.
//! For this purpose, in short, we track the last timestamp used in `last_smeared_timestamp`;
//! when another timestamp is needed in the same second, we use `last_smeared_timestamp+1`.
//! After some moments without messages sent out,
//! `last_smeared_timestamp` is again in sync with the normal time.
//!
//! However, we do not lend arbitrarily far from the future,
//! but at most `MAX_SECONDS_TO_LEND_FROM_FUTURE` seconds.
use std::cmp::{max, min};
use std::sync::atomic::{AtomicI64, Ordering};
pub(crate) const MAX_SECONDS_TO_LEND_FROM_FUTURE: i64 = 5;
/// Smeared timestamp generator.
#[derive(Debug)]
pub struct SmearedTimestamp {
/// Next timestamp available for allocation.
smeared_timestamp: AtomicI64,
}
impl SmearedTimestamp {
/// Creates a new smeared timestamp generator.
pub fn new() -> Self {
Self {
smeared_timestamp: AtomicI64::new(0),
}
}
/// Allocates `count` unique timestamps.
///
/// Returns the first allocated timestamp.
pub fn create_n(&self, now: i64, count: i64) -> i64 {
let mut prev = self.smeared_timestamp.load(Ordering::Relaxed);
loop {
// Advance the timestamp if it is in the past,
// but keep `count - 1` timestamps from the past if possible.
let t = max(prev, now - count + 1);
// Rewind the time back if there is no room
// to allocate `count` timestamps without going too far into the future.
// Not going too far into the future
// is more important than generating unique timestamps.
let first = min(t, now + MAX_SECONDS_TO_LEND_FROM_FUTURE - count + 1);
// Allocate `count` timestamps by advancing the current timestamp.
let next = first + count;
if let Err(x) = self.smeared_timestamp.compare_exchange_weak(
prev,
next,
Ordering::Relaxed,
Ordering::Relaxed,
) {
prev = x;
} else {
return first;
}
}
}
/// Creates a single timestamp.
pub fn create(&self, now: i64) -> i64 {
self.create_n(now, 1)
}
/// Returns the current smeared timestamp.
pub fn current(&self) -> i64 {
self.smeared_timestamp.load(Ordering::Relaxed)
}
}
#[cfg(test)]
mod tests {
use std::time::SystemTime;
use super::*;
use crate::test_utils::TestContext;
use crate::tools::{create_smeared_timestamp, create_smeared_timestamps, smeared_time, time};
#[test]
fn test_smeared_timestamp() {
let smeared_timestamp = SmearedTimestamp::new();
let now = time();
assert_eq!(smeared_timestamp.current(), 0);
for i in 0..MAX_SECONDS_TO_LEND_FROM_FUTURE {
assert_eq!(smeared_timestamp.create(now), now + i);
}
assert_eq!(
smeared_timestamp.create(now),
now + MAX_SECONDS_TO_LEND_FROM_FUTURE
);
assert_eq!(
smeared_timestamp.create(now),
now + MAX_SECONDS_TO_LEND_FROM_FUTURE
);
// System time rewinds back by 1000 seconds.
let now = now - 1000;
assert_eq!(
smeared_timestamp.create(now),
now + MAX_SECONDS_TO_LEND_FROM_FUTURE
);
assert_eq!(
smeared_timestamp.create(now),
now + MAX_SECONDS_TO_LEND_FROM_FUTURE
);
assert_eq!(
smeared_timestamp.create(now + 1),
now + MAX_SECONDS_TO_LEND_FROM_FUTURE + 1
);
assert_eq!(smeared_timestamp.create(now + 100), now + 100);
assert_eq!(smeared_timestamp.create(now + 100), now + 101);
assert_eq!(smeared_timestamp.create(now + 100), now + 102);
}
#[test]
fn test_create_n_smeared_timestamps() {
let smeared_timestamp = SmearedTimestamp::new();
let now = time();
// Create a single timestamp to initialize the generator.
assert_eq!(smeared_timestamp.create(now), now);
// Wait a minute.
let now = now + 60;
// Simulate forwarding 7 messages.
let forwarded_messages = 7;
// We have not sent anything for a minute,
// so we can take the current timestamp and take 6 timestamps from the past.
assert_eq!(smeared_timestamp.create_n(now, forwarded_messages), now - 6);
assert_eq!(smeared_timestamp.current(), now + 1);
// Wait 4 seconds.
// Now we have 3 free timestamps in the past.
let now = now + 4;
assert_eq!(smeared_timestamp.current(), now - 3);
// Forward another 7 messages.
// We can only lend 3 timestamps from the past.
assert_eq!(smeared_timestamp.create_n(now, forwarded_messages), now - 3);
// We had to borrow 3 timestamps from the future
// because there were not enough timestamps in the past.
assert_eq!(smeared_timestamp.current(), now + 4);
// Forward another 7 messages.
// We cannot use more than 5 timestamps from the future,
// so we use 5 timestamps from the future,
// the current timestamp and one timestamp from the past.
assert_eq!(smeared_timestamp.create_n(now, forwarded_messages), now - 1);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn test_create_smeared_timestamp() {
let t = TestContext::new().await;
assert_ne!(create_smeared_timestamp(&t), create_smeared_timestamp(&t));
assert!(
create_smeared_timestamp(&t)
>= SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap()
.as_secs() as i64
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn test_create_smeared_timestamps() {
let t = TestContext::new().await;
let count = MAX_SECONDS_TO_LEND_FROM_FUTURE - 1;
let start = create_smeared_timestamps(&t, count as usize);
let next = smeared_time(&t);
assert!((start + count - 1) < next);
let count = MAX_SECONDS_TO_LEND_FROM_FUTURE + 30;
let start = create_smeared_timestamps(&t, count as usize);
let next = smeared_time(&t);
assert!((start + count - 1) < next);
}
}
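The borrow-from-the-past / lend-from-the-future arithmetic in `create_n` above can be traced with a plain, single-threaded re-implementation (a sketch for working the numbers by hand; the real code keeps the state in an `AtomicI64` and retries via compare-exchange):

```rust
use std::cmp::{max, min};

const MAX_SECONDS_TO_LEND_FROM_FUTURE: i64 = 5;

/// Non-atomic restatement of the `create_n` arithmetic (illustration only).
/// `state` is the next timestamp available for allocation.
fn create_n(state: &mut i64, now: i64, count: i64) -> i64 {
    // Advance past the last allocated timestamp, but keep up to
    // `count - 1` timestamps from the past if possible.
    let t = max(*state, now - count + 1);
    // Never end up more than MAX_SECONDS_TO_LEND_FROM_FUTURE ahead of `now`;
    // staying close to real time beats strict uniqueness.
    let first = min(t, now + MAX_SECONDS_TO_LEND_FROM_FUTURE - count + 1);
    *state = first + count;
    first
}

fn main() {
    let mut state = 0;
    let now = 1_000_000;
    // A fresh generator hands out the current time.
    assert_eq!(create_n(&mut state, now, 1), now);
    // A minute later, forwarding 7 messages borrows 6 timestamps
    // from the past and none from the future.
    let now = now + 60;
    assert_eq!(create_n(&mut state, now, 7), now - 6);
    // The next free timestamp is now + 1.
    assert_eq!(state, now + 1);
}
```

This mirrors the first steps of `test_create_n_smeared_timestamps` above; only the concurrency handling is stripped away.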


@@ -3,7 +3,6 @@
#![allow(missing_docs)] #![allow(missing_docs)]
use core::cmp::{max, min};
use std::borrow::Cow; use std::borrow::Cow;
use std::fmt; use std::fmt;
use std::io::Cursor; use std::io::Cursor;
@@ -140,63 +139,27 @@ pub(crate) fn gm2local_offset() -> i64 {
i64::from(lt.offset().local_minus_utc()) i64::from(lt.offset().local_minus_utc())
} }
// timesmearing
// - as e-mails typically only use a second-based-resolution for timestamps,
// the order of two mails sent withing one second is unclear.
// this is bad eg. when forwarding some messages from a chat -
// these messages will appear at the recipient easily out of order.
// - we work around this issue by not sending out two mails with the same timestamp.
// - for this purpose, in short, we track the last timestamp used in `last_smeared_timestamp`
// when another timestamp is needed in the same second, we use `last_smeared_timestamp+1`
// - after some moments without messages sent out,
// `last_smeared_timestamp` is again in sync with the normal time.
// - however, we do not do all this for the far future,
// but at max `MAX_SECONDS_TO_LEND_FROM_FUTURE`
pub(crate) const MAX_SECONDS_TO_LEND_FROM_FUTURE: i64 = 5;
/// Returns the current smeared timestamp, /// Returns the current smeared timestamp,
/// ///
/// The returned timestamp MUST NOT be sent out. /// The returned timestamp MUST NOT be sent out.
pub(crate) async fn smeared_time(context: &Context) -> i64 { pub(crate) fn smeared_time(context: &Context) -> i64 {
let mut now = time(); let now = time();
let ts = *context.last_smeared_timestamp.read().await; let ts = context.smeared_timestamp.current();
if ts >= now { std::cmp::max(ts, now)
now = ts + 1;
}
now
} }
/// Returns a timestamp that is guaranteed to be unique. /// Returns a timestamp that is guaranteed to be unique.
pub(crate) async fn create_smeared_timestamp(context: &Context) -> i64 { pub(crate) fn create_smeared_timestamp(context: &Context) -> i64 {
let now = time(); let now = time();
let mut ret = now; context.smeared_timestamp.create(now)
let mut last_smeared_timestamp = context.last_smeared_timestamp.write().await;
if ret <= *last_smeared_timestamp {
ret = *last_smeared_timestamp + 1;
if ret - now > MAX_SECONDS_TO_LEND_FROM_FUTURE {
ret = now + MAX_SECONDS_TO_LEND_FROM_FUTURE
}
}
*last_smeared_timestamp = ret;
ret
} }
// creates `count` timestamps that are guaranteed to be unique. // creates `count` timestamps that are guaranteed to be unique.
// the frist created timestamps is returned directly, // the first created timestamps is returned directly,
// get the other timestamps just by adding 1..count-1 // get the other timestamps just by adding 1..count-1
pub(crate) async fn create_smeared_timestamps(context: &Context, count: usize) -> i64 { pub(crate) fn create_smeared_timestamps(context: &Context, count: usize) -> i64 {
let now = time(); let now = time();
let count = count as i64; context.smeared_timestamp.create_n(now, count as i64)
let mut start = now + min(count, MAX_SECONDS_TO_LEND_FROM_FUTURE) - count;
let mut last_smeared_timestamp = context.last_smeared_timestamp.write().await;
start = max(*last_smeared_timestamp + 1, start);
*last_smeared_timestamp = start + count - 1;
start
} }
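After this refactoring, `smeared_time` reduces to taking the later of the generator state and the wall clock. Restated as a pure function over plain integers (a sketch under that reading of the new code):

```rust
use std::cmp::max;

/// Pure restatement of the new `smeared_time`: the current smeared
/// timestamp is the later of the generator state and wall-clock time.
/// The returned value must not be sent out; it is for reading only.
fn smeared_time(smeared_timestamp: i64, now: i64) -> i64 {
    max(smeared_timestamp, now)
}

fn main() {
    // Timestamps were lent from the future: the generator state wins.
    assert_eq!(smeared_time(105, 100), 105);
    // The clock has caught up: plain wall-clock time is returned.
    assert_eq!(smeared_time(90, 100), 100);
}
```

Note that the helper is no longer `async`: the old version had to take an RwLock on `last_smeared_timestamp`, while the atomic-based generator can be read synchronously.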
// if the system time is not plausible, once a day, add a device message. // if the system time is not plausible, once a day, add a device message.
@@ -592,6 +555,8 @@ pub(crate) fn improve_single_line_input(input: &str) -> String {
} }
pub(crate) trait IsNoneOrEmpty<T> { pub(crate) trait IsNoneOrEmpty<T> {
/// Returns true if an Option does not contain a string
/// or contains an empty string.
fn is_none_or_empty(&self) -> bool; fn is_none_or_empty(&self) -> bool;
} }
impl<T> IsNoneOrEmpty<T> for Option<T> impl<T> IsNoneOrEmpty<T> for Option<T>
@@ -1069,36 +1034,6 @@ DKIM Results: Passed=true, Works=true, Allow_Keychange=true";
assert!(!file_exist!(context, &fn0)); assert!(!file_exist!(context, &fn0));
} }
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn test_create_smeared_timestamp() {
let t = TestContext::new().await;
assert_ne!(
create_smeared_timestamp(&t).await,
create_smeared_timestamp(&t).await
);
assert!(
create_smeared_timestamp(&t).await
>= SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap()
.as_secs() as i64
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn test_create_smeared_timestamps() {
let t = TestContext::new().await;
let count = MAX_SECONDS_TO_LEND_FROM_FUTURE - 1;
let start = create_smeared_timestamps(&t, count as usize).await;
let next = smeared_time(&t).await;
assert!((start + count - 1) < next);
let count = MAX_SECONDS_TO_LEND_FROM_FUTURE + 30;
let start = create_smeared_timestamps(&t, count as usize).await;
let next = smeared_time(&t).await;
assert!((start + count - 1) < next);
}
#[test] #[test]
fn test_duration_to_str() { fn test_duration_to_str() {
assert_eq!(duration_to_str(Duration::from_secs(0)), "0h 0m 0s"); assert_eq!(duration_to_str(Duration::from_secs(0)), "0h 0m 0s");


@@ -2,8 +2,8 @@
use anyhow::Result; use anyhow::Result;
use crate::chat::{Chat, ChatId}; use crate::chat::ChatId;
use crate::contact::{Contact, ContactId}; use crate::contact::ContactId;
use crate::context::Context; use crate::context::Context;
use crate::param::{Param, Params}; use crate::param::{Param, Params};
@@ -17,12 +17,26 @@ impl Context {
scope: Param, scope: Param,
new_timestamp: i64, new_timestamp: i64,
) -> Result<bool> { ) -> Result<bool> {
let mut contact = Contact::load_from_db(self, contact_id).await?; self.sql
if contact.param.update_timestamp(scope, new_timestamp)? { .transaction(|transaction| {
contact.update_param(self).await?; let mut param: Params = transaction.query_row(
return Ok(true); "SELECT param FROM contacts WHERE id=?",
[contact_id],
|row| {
let param: String = row.get(0)?;
Ok(param.parse().unwrap_or_default())
},
)?;
let update = param.update_timestamp(scope, new_timestamp)?;
if update {
transaction.execute(
"UPDATE contacts SET param=? WHERE id=?",
params![param.to_string(), contact_id],
)?;
} }
Ok(false) Ok(update)
})
.await
} }
} }
@@ -35,12 +49,24 @@ impl ChatId {
scope: Param, scope: Param,
new_timestamp: i64, new_timestamp: i64,
) -> Result<bool> { ) -> Result<bool> {
let mut chat = Chat::load_from_db(context, *self).await?; context
if chat.param.update_timestamp(scope, new_timestamp)? { .sql
chat.update_param(context).await?; .transaction(|transaction| {
return Ok(true); let mut param: Params =
transaction.query_row("SELECT param FROM chats WHERE id=?", [self], |row| {
let param: String = row.get(0)?;
Ok(param.parse().unwrap_or_default())
})?;
let update = param.update_timestamp(scope, new_timestamp)?;
if update {
transaction.execute(
"UPDATE chats SET param=? WHERE id=?",
params![param.to_string(), self],
)?;
} }
Ok(false) Ok(update)
})
.await
} }
} }
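The diffs above move the load-object / mutate / save sequence into a single SQL transaction, so two concurrent timestamp updates cannot silently overwrite each other's param changes. A minimal in-memory sketch of the same read-modify-write rule; the `Db` type and the strictly-greater comparison are illustrative assumptions, not the real schema or `Params` API:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

/// Toy "table" mapping id -> stored timestamp.
struct Db {
    params: Mutex<HashMap<u32, i64>>,
}

impl Db {
    /// Read-modify-write under one lock, mirroring the SQL transaction:
    /// the stored timestamp is only bumped forward, never backward.
    /// Returns whether an update actually happened.
    fn update_timestamp(&self, id: u32, new_timestamp: i64) -> bool {
        let mut params = self.params.lock().unwrap();
        let entry = params.entry(id).or_insert(0);
        if new_timestamp > *entry {
            *entry = new_timestamp;
            true
        } else {
            false
        }
    }
}

fn main() {
    let db = Db { params: Mutex::new(HashMap::new()) };
    assert!(db.update_timestamp(1, 100)); // first write succeeds
    assert!(!db.update_timestamp(1, 90)); // stale update is rejected
    assert!(db.update_timestamp(1, 101)); // newer timestamp is accepted
}
```

Holding the transaction across read and write is what makes the check-then-update atomic; with the old `load_from_db` / `update_param` pair, a second caller could load between the two steps and later clobber the first caller's write.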
@@ -60,6 +86,7 @@ impl Params {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::*; use super::*;
use crate::chat::Chat;
use crate::receive_imf::receive_imf; use crate::receive_imf::receive_imf;
use crate::test_utils::TestContext; use crate::test_utils::TestContext;
use crate::tools::time; use crate::tools::time;


@@ -408,7 +408,7 @@ impl Context {
.create_status_update_record( .create_status_update_record(
&mut instance, &mut instance,
update_str, update_str,
create_smeared_timestamp(self).await, create_smeared_timestamp(self),
send_now, send_now,
ContactId::SELF, ContactId::SELF,
) )


@@ -3,26 +3,54 @@
Some of the standards Delta Chat is based on:
Tasks | Standards
-------------------------------- | ---------------------------------------------
Transport | IMAP v4 ([RFC 3501][]), SMTP ([RFC 5321][]) and Internet Message Format (IMF, [RFC 5322][])
Proxy | SOCKS5 ([RFC 1928][])
Embedded media | MIME Document Series ([RFC 2045][], [RFC 2046][]), Content-Disposition Header ([RFC 2183][]), Multipart/Related ([RFC 2387][])
Text and Quote encoding | Fixed, Flowed ([RFC 3676][])
Reactions | Reaction: Indicating Summary Reaction to a Message ([RFC 9078][])
Filename encoding | Encoded Words ([RFC 2047][]), Encoded Word Extensions ([RFC 2231][])
Identify server folders | IMAP LIST Extension ([RFC 6154][])
Push | IMAP IDLE ([RFC 2177][])
Quota | IMAP QUOTA extension ([RFC 2087][])
Seen status synchronization | IMAP CONDSTORE extension ([RFC 7162][])
Client/server identification | IMAP ID extension ([RFC 2971][])
Authorization | OAuth2 ([RFC 6749][])
End-to-end encryption | [Autocrypt Level 1][], OpenPGP ([RFC 4880][]), Security Multiparts for MIME ([RFC 1847][]) and [“Mixed Up” Encryption repairing](https://tools.ietf.org/id/draft-dkg-openpgp-pgpmime-message-mangling-00.html)
Header encryption | [Protected Headers for Cryptographic E-mail](https://datatracker.ietf.org/doc/draft-autocrypt-lamps-protected-headers/)
Configuration assistance | [Autoconfigure](https://web.archive.org/web/20210402044801/https://developer.mozilla.org/en-US/docs/Mozilla/Thunderbird/Autoconfiguration) and [Autodiscover][]
Messenger functions | [Chat-over-Email](https://github.com/deltachat/deltachat-core-rust/blob/master/spec.md#chat-mail-specification)
Detect mailing list | List-Id ([RFC 2919][]) and Precedence ([RFC 3834][])
User and chat colors | [XEP-0392][]: Consistent Color Generation
Send and receive system messages | Multipart/Report Media Type ([RFC 6522][])
Return receipts | Message Disposition Notification (MDN, [RFC 8098][], [RFC 3503][]) using the Chat-Disposition-Notification-To header
Locations | KML ([Open Geospatial Consortium](http://www.opengeospatial.org/standards/kml/), [Google Dev](https://developers.google.com/kml/))
[Autocrypt Level 1]: https://autocrypt.org/level1.html
[Autodiscover]: https://learn.microsoft.com/en-us/exchange/autodiscover-service-for-exchange-2013
[XEP-0392]: https://xmpp.org/extensions/xep-0392.html
[RFC 1847]: https://tools.ietf.org/html/rfc1847
[RFC 1928]: https://tools.ietf.org/html/rfc1928
[RFC 2045]: https://tools.ietf.org/html/rfc2045
[RFC 2046]: https://tools.ietf.org/html/rfc2046
[RFC 2047]: https://tools.ietf.org/html/rfc2047
[RFC 2087]: https://tools.ietf.org/html/rfc2087
[RFC 2177]: https://tools.ietf.org/html/rfc2177
[RFC 2183]: https://tools.ietf.org/html/rfc2183
[RFC 2231]: https://tools.ietf.org/html/rfc2231
[RFC 2387]: https://tools.ietf.org/html/rfc2387
[RFC 2919]: https://tools.ietf.org/html/rfc2919
[RFC 2971]: https://tools.ietf.org/html/rfc2971
[RFC 3501]: https://tools.ietf.org/html/rfc3501
[RFC 3503]: https://tools.ietf.org/html/rfc3503
[RFC 3676]: https://tools.ietf.org/html/rfc3676
[RFC 3834]: https://tools.ietf.org/html/rfc3834
[RFC 4880]: https://tools.ietf.org/html/rfc4880
[RFC 5321]: https://tools.ietf.org/html/rfc5321
[RFC 5322]: https://tools.ietf.org/html/rfc5322
[RFC 6154]: https://tools.ietf.org/html/rfc6154
[RFC 6522]: https://tools.ietf.org/html/rfc6522
[RFC 6749]: https://tools.ietf.org/html/rfc6749
[RFC 7162]: https://tools.ietf.org/html/rfc7162
[RFC 8098]: https://tools.ietf.org/html/rfc8098
[RFC 9078]: https://tools.ietf.org/html/rfc9078