Compare commits

...

83 Commits

Author SHA1 Message Date
B. Petersen
966f863926 Resolved merge conflict in CHANGELOG 2023-04-04 17:09:53 +02:00
dependabot[bot]
2ab7a37d85 Merge pull request #4297 from deltachat/dependabot/cargo/iroh-0.4.1 2023-04-04 09:02:11 +00:00
dependabot[bot]
0d1c12115d Merge pull request #4298 from deltachat/dependabot/cargo/fast-socks5-0.8.2 2023-04-04 08:34:30 +00:00
link2xt
3bced80b23 imap: use try_next() to avoid the need to transpose Option and Result 2023-04-04 08:23:37 +00:00
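The transpose/try_next change above can be sketched with std types alone (the real code operates on an async-imap stream; this synchronous stand-in only shows the Option/Result flip, and the function name is illustrative):

```rust
// A fallible stream yields items of Result<T, E>; calling next() on it
// gives Option<Result<T, E>>. To use `?` on the error you first have to
// transpose that into Result<Option<T>, E>:
fn next_transposed(
    it: &mut impl Iterator<Item = Result<u32, String>>,
) -> Result<Option<u32>, String> {
    it.next().transpose()
}

// futures::TryStreamExt::try_next() performs the same flip for async
// streams directly, returning Result<Option<T>, E> without the manual
// transpose step.
```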
link2xt
f46f3e939e clippy fix 2023-04-04 09:50:42 +02:00
B. Petersen
9d8e836fdd show some text instead of nothing on empty quota list
some providers say that they support QUOTA in the IMAP CAPABILITY response,
but then return an empty list without any quota information.

in our "Connectivity", this looks a bit of an error.

i have not seen this error often - only for testrun.org -
if it turns out to be common, we could also just say "not supported"
(as we do in case QUOTA is not returned).
a translation seems not to be needed for now:
the case seems unusual, and the other errors are not translated either.
2023-04-04 09:50:42 +02:00
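A minimal sketch of the fallback described above (the function name and the exact display string are assumptions, not the real Connectivity code):

```rust
/// Render quota resources for the Connectivity view.
/// Some providers advertise QUOTA in the IMAP CAPABILITY response
/// but then return an empty resource list; show a short note instead
/// of rendering nothing.
fn quota_text(resources: &[(String, u64, u64)]) -> String {
    if resources.is_empty() {
        "Empty list returned by provider.".to_string()
    } else {
        resources
            .iter()
            .map(|(name, usage, limit)| format!("{name}: {usage}/{limit}"))
            .collect::<Vec<_>>()
            .join("\n")
    }
}
```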
dependabot[bot]
132d34db5c cargo: bump fast-socks5 from 0.8.1 to 0.8.2
Bumps [fast-socks5](https://github.com/dizda/fast-socks5) from 0.8.1 to 0.8.2.
- [Release notes](https://github.com/dizda/fast-socks5/releases)
- [Commits](https://github.com/dizda/fast-socks5/compare/v0.8.1...v0.8.2)

---
updated-dependencies:
- dependency-name: fast-socks5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-03 22:22:51 +00:00
dependabot[bot]
16edb7b35a cargo: bump iroh from 0.4.0 to 0.4.1
Bumps [iroh](https://github.com/n0-computer/iroh) from 0.4.0 to 0.4.1.
- [Release notes](https://github.com/n0-computer/iroh/releases)
- [Changelog](https://github.com/n0-computer/iroh/blob/main/CHANGELOG.md)
- [Commits](https://github.com/n0-computer/iroh/compare/v0.4.0...v0.4.1)

---
updated-dependencies:
- dependency-name: iroh
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-03 22:21:52 +00:00
link2xt
044478a044 Update fd-lock, is-terminal and tempfile 2023-04-03 21:56:16 +00:00
dependabot[bot]
f6ea48d666 Merge pull request #4275 from deltachat/dependabot/cargo/axum-0.6.12 2023-04-03 21:43:43 +00:00
dependabot[bot]
6ed3ed1617 cargo: bump axum from 0.6.11 to 0.6.12
Bumps [axum](https://github.com/tokio-rs/axum) from 0.6.11 to 0.6.12.
- [Release notes](https://github.com/tokio-rs/axum/releases)
- [Changelog](https://github.com/tokio-rs/axum/blob/main/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/axum/compare/axum-v0.6.11...axum-v0.6.12)

---
updated-dependencies:
- dependency-name: axum
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-03 19:48:44 +00:00
link2xt
9c9c401e66 ci: use --all-targets --all-features for cargo check 2023-04-03 19:27:11 +00:00
link2xt
72031edfbe ci: split CI into more parallel jobs
With this, all the builds happen in parallel in 3-6 minutes,
and then python tests run for 8-9 minutes.

Before, it was 6-17 minutes for compilation,
then 19-31 minutes for python tests.
Compilation of the library and test binary was not parallelized,
and then python tests included async python tests
and deltachat-rpc-server binary compilation.

Now, deltachat-rpc-server compilation,
Rust testing and async python tests are separate jobs.
2023-04-03 19:27:11 +00:00
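The job graph resulting from this split can be sketched as follows (job names are taken from the workflow diff further down; job bodies elided):

```yaml
jobs:
  python_lint: {}                     # tox -e lint, no Rust build needed
  c_library: {}                       # builds libdeltachat.a, 1-day artifact
  rpc_server: {}                      # builds deltachat-rpc-server binary
  rust_tests: {}                      # cargo test, independent of the above
  python_tests:
    needs: ["c_library", "python_lint"]
  async_python_tests:
    needs: ["python_lint", "rpc_server"]
```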
iequidoo
41456aa2ab jsonrpc API: Fix documentation of search_messages() (#4288) 2023-04-03 14:17:39 -04:00
link2xt
a70f29381a location: remove bitflags dependency 2023-04-03 16:56:37 +00:00
dependabot[bot]
3b0ad73732 Merge pull request #4270 from deltachat/dependabot/cargo/async_zip-0.0.12 2023-04-03 16:08:41 +00:00
dependabot[bot]
8c302648bb Merge pull request #4263 from deltachat/dependabot/cargo/tokio-1.27.0 2023-04-03 16:06:43 +00:00
dependabot[bot]
01b888c341 cargo: bump tokio from 1.26.0 to 1.27.0
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.26.0 to 1.27.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.26.0...tokio-1.27.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-03 12:33:21 +00:00
dependabot[bot]
f8e87a8b6c cargo: bump async_zip from 0.0.11 to 0.0.12
Bumps [async_zip](https://github.com/Majored/rs-async-zip) from 0.0.11 to 0.0.12.
- [Release notes](https://github.com/Majored/rs-async-zip/releases)
- [Commits](https://github.com/Majored/rs-async-zip/commits)

---
updated-dependencies:
- dependency-name: async_zip
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-03 12:33:19 +00:00
link2xt
01af83946c Update to released version of async-imap 2023-04-03 12:31:16 +00:00
link2xt
386a9ad0b0 Update from yanked spin 0.9.7 to 0.9.8 2023-04-03 10:21:11 +00:00
Floris Bruynooghe
c6c20d8f3c ref(scheduler): Make InnerSchedulerState an enum (#4251)
This is more verbose, but makes reasoning about things easier.
2023-04-03 11:13:44 +02:00
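A sketch of the shape this refactor gives (only the enum name comes from the commit; the variants here are hypothetical, and the real enum carries the scheduler handles):

```rust
// Encoding the scheduler's lifecycle as an enum makes illegal states
// unrepresentable: you cannot be "started" and "stopped" at once, and a
// pause guard only exists in the Paused variant.
enum InnerSchedulerState {
    Started,
    Stopped,
    Paused { pause_guards_count: u32 },
}

fn is_running(state: &InnerSchedulerState) -> bool {
    // An exhaustive match forces every call site to consider all states.
    match state {
        InnerSchedulerState::Started => true,
        InnerSchedulerState::Stopped | InnerSchedulerState::Paused { .. } => false,
    }
}
```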
dependabot[bot]
40b072711e Merge pull request #4283 from deltachat/dependabot/cargo/rusqlite-0.29.0 2023-04-03 08:25:57 +00:00
Floris Bruynooghe
2469964a44 ref(sql): Do not warn if BLOBS_BACKUP_NAME does not exist (#4256)

It is perfectly normal for this directory to not exist.
2023-04-03 09:28:17 +02:00
link2xt
40e0924768 Add bitflags exception to deny.toml 2023-04-03 07:25:44 +00:00
dependabot[bot]
e016440fb3 Merge pull request #4267 from deltachat/dependabot/cargo/serde_json-1.0.95 2023-04-02 22:06:59 +00:00
dependabot[bot]
a37aaa5585 Merge pull request #4271 from deltachat/dependabot/cargo/image-0.24.6 2023-04-02 22:06:26 +00:00
dependabot[bot]
1ee551d53e cargo: bump serde_json from 1.0.93 to 1.0.95
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.93 to 1.0.95.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.93...v1.0.95)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-02 19:55:56 +00:00
dependabot[bot]
2341eed796 Merge pull request #4282 from deltachat/dependabot/cargo/futures-0.3.28 2023-04-02 19:54:10 +00:00
dependabot[bot]
49cfd79505 Merge pull request #4265 from deltachat/dependabot/cargo/serde-1.0.159 2023-04-02 19:41:36 +00:00
dependabot[bot]
9ac0d1a1ce Merge pull request #4280 from deltachat/dependabot/cargo/thiserror-1.0.40 2023-04-02 19:00:26 +00:00
dependabot[bot]
6046dfabe9 cargo: bump rusqlite from 0.28.0 to 0.29.0
Bumps [rusqlite](https://github.com/rusqlite/rusqlite) from 0.28.0 to 0.29.0.
- [Release notes](https://github.com/rusqlite/rusqlite/releases)
- [Changelog](https://github.com/rusqlite/rusqlite/blob/master/Changelog.md)
- [Commits](https://github.com/rusqlite/rusqlite/compare/v0.28.0...v0.29.0)

---
updated-dependencies:
- dependency-name: rusqlite
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-02 18:32:07 +00:00
dependabot[bot]
c2269bf777 cargo: bump futures from 0.3.26 to 0.3.28
Bumps [futures](https://github.com/rust-lang/futures-rs) from 0.3.26 to 0.3.28.
- [Release notes](https://github.com/rust-lang/futures-rs/releases)
- [Changelog](https://github.com/rust-lang/futures-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/futures-rs/compare/0.3.26...0.3.28)

---
updated-dependencies:
- dependency-name: futures
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-02 17:40:49 +00:00
dependabot[bot]
b81a958021 Merge pull request #4278 from deltachat/dependabot/cargo/walkdir-2.3.3 2023-04-02 17:16:24 +00:00
dependabot[bot]
9e37a5643c cargo: bump thiserror from 1.0.38 to 1.0.40
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.38 to 1.0.40.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.38...1.0.40)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-02 16:42:22 +00:00
dependabot[bot]
50a213c2e1 cargo: bump serde from 1.0.152 to 1.0.159
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.152 to 1.0.159.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.152...v1.0.159)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-02 16:42:10 +00:00
link2xt
c6e9fbd501 Update syn crate in deltachat_derive 2023-04-02 16:40:23 +00:00
dependabot[bot]
720b5aa7c1 Merge pull request #4284 from deltachat/dependabot/cargo/toml-0.7.3 2023-04-02 16:30:33 +00:00
dependabot[bot]
7be7640d3f Merge pull request #4266 from deltachat/dependabot/cargo/reqwest-0.11.16 2023-04-02 16:07:43 +00:00
dependabot[bot]
e4d97278ee cargo: bump reqwest from 0.11.14 to 0.11.16
Bumps [reqwest](https://github.com/seanmonstar/reqwest) from 0.11.14 to 0.11.16.
- [Release notes](https://github.com/seanmonstar/reqwest/releases)
- [Changelog](https://github.com/seanmonstar/reqwest/blob/master/CHANGELOG.md)
- [Commits](https://github.com/seanmonstar/reqwest/compare/v0.11.14...v0.11.16)

---
updated-dependencies:
- dependency-name: reqwest
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-02 15:07:16 +00:00
dependabot[bot]
793ad82d42 Merge pull request #4277 from deltachat/dependabot/cargo/libc-0.2.140 2023-04-02 15:04:51 +00:00
dependabot[bot]
2b4d846f0e Merge pull request #4285 from deltachat/dependabot/cargo/regex-1.7.3 2023-04-02 15:03:47 +00:00
dependabot[bot]
379eadab4f Merge pull request #4281 from deltachat/dependabot/cargo/anyhow-1.0.70 2023-04-02 14:45:24 +00:00
dependabot[bot]
53a41fcffe cargo: bump anyhow from 1.0.69 to 1.0.70
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.69 to 1.0.70.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.69...1.0.70)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-02 09:52:31 +00:00
link2xt
5e597e1f09 Merge tag 'v1.112.5' 2023-04-02 09:50:16 +00:00
dependabot[bot]
e1b260d0ec cargo: bump regex from 1.7.1 to 1.7.3
Bumps [regex](https://github.com/rust-lang/regex) from 1.7.1 to 1.7.3.
- [Release notes](https://github.com/rust-lang/regex/releases)
- [Changelog](https://github.com/rust-lang/regex/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/regex/compare/1.7.1...1.7.3)

---
updated-dependencies:
- dependency-name: regex
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-02 09:19:42 +00:00
dependabot[bot]
480b05063d cargo: bump libc from 0.2.139 to 0.2.140
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.139 to 0.2.140.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.139...0.2.140)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-02 09:17:54 +00:00
dependabot[bot]
8477f7b472 Merge pull request #4276 from deltachat/dependabot/cargo/quick-xml-0.28.1 2023-04-02 09:17:40 +00:00
dependabot[bot]
91ae4eab06 Merge pull request #4268 from deltachat/dependabot/cargo/dirs-5.0.0 2023-04-02 09:14:21 +00:00
dependabot[bot]
7b86681cb2 Merge pull request #4274 from deltachat/dependabot/cargo/testdir-0.7.3 2023-04-02 08:38:19 +00:00
dependabot[bot]
1de815441a Merge pull request #4272 from deltachat/dependabot/cargo/chrono-0.4.24 2023-04-02 08:37:11 +00:00
dependabot[bot]
44a36025b5 cargo: bump toml from 0.7.2 to 0.7.3
Bumps [toml](https://github.com/toml-rs/toml) from 0.7.2 to 0.7.3.
- [Release notes](https://github.com/toml-rs/toml/releases)
- [Commits](https://github.com/toml-rs/toml/compare/toml-v0.7.2...toml-v0.7.3)

---
updated-dependencies:
- dependency-name: toml
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-01 22:04:59 +00:00
dependabot[bot]
00017ab653 cargo: bump walkdir from 2.3.2 to 2.3.3
Bumps [walkdir](https://github.com/BurntSushi/walkdir) from 2.3.2 to 2.3.3.
- [Release notes](https://github.com/BurntSushi/walkdir/releases)
- [Commits](https://github.com/BurntSushi/walkdir/compare/2.3.2...2.3.3)

---
updated-dependencies:
- dependency-name: walkdir
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-01 22:03:47 +00:00
dependabot[bot]
411adc861e cargo: bump quick-xml from 0.27.1 to 0.28.1
Bumps [quick-xml](https://github.com/tafia/quick-xml) from 0.27.1 to 0.28.1.
- [Release notes](https://github.com/tafia/quick-xml/releases)
- [Changelog](https://github.com/tafia/quick-xml/blob/master/Changelog.md)
- [Commits](https://github.com/tafia/quick-xml/compare/v0.27.1...v0.28.1)

---
updated-dependencies:
- dependency-name: quick-xml
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-01 22:03:25 +00:00
dependabot[bot]
c5b3939ddb cargo: bump testdir from 0.7.2 to 0.7.3
Bumps testdir from 0.7.2 to 0.7.3.

---
updated-dependencies:
- dependency-name: testdir
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-01 22:02:46 +00:00
dependabot[bot]
a9e3ea56ec cargo: bump chrono from 0.4.23 to 0.4.24
Bumps [chrono](https://github.com/chronotope/chrono) from 0.4.23 to 0.4.24.
- [Release notes](https://github.com/chronotope/chrono/releases)
- [Changelog](https://github.com/chronotope/chrono/blob/main/CHANGELOG.md)
- [Commits](https://github.com/chronotope/chrono/compare/v0.4.23...v0.4.24)

---
updated-dependencies:
- dependency-name: chrono
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-01 22:02:27 +00:00
dependabot[bot]
6a0f17a983 cargo: bump image from 0.24.5 to 0.24.6
Bumps [image](https://github.com/image-rs/image) from 0.24.5 to 0.24.6.
- [Release notes](https://github.com/image-rs/image/releases)
- [Changelog](https://github.com/image-rs/image/blob/master/CHANGES.md)
- [Commits](https://github.com/image-rs/image/commits)

---
updated-dependencies:
- dependency-name: image
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-01 22:02:14 +00:00
dependabot[bot]
28e6f457b1 cargo: bump dirs from 4.0.0 to 5.0.0
Bumps [dirs](https://github.com/soc/dirs-rs) from 4.0.0 to 5.0.0.
- [Release notes](https://github.com/soc/dirs-rs/releases)
- [Commits](https://github.com/soc/dirs-rs/commits)

---
updated-dependencies:
- dependency-name: dirs
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-01 22:01:31 +00:00
link2xt
fdf46054e2 ci: limit artifact retention time for libdeltachat.a to 1 day
These artifacts only need to be downloaded
by the python test job immediately after building.
2023-04-01 17:37:52 +00:00
link2xt
47a3ee1ff1 ci: run mypy and doc with installed library
These require that the package can be installed.
2023-04-01 13:29:55 +00:00
link2xt
a0b7b4b5c4 ci: use ' instead of " for condition quote 2023-04-01 12:58:29 +00:00
link2xt
b9bc0a4047 ci: fixup "if" expression 2023-04-01 12:56:36 +00:00
link2xt
e57abf92f3 ci: split library build and python tests into separate jobs
This way, if flaky python tests fail,
it is possible to rerun them without having to rerun library compilation.
Python test jobs will redownload the already built library artifact
and compile the bindings against it.
2023-04-01 12:53:49 +00:00
iequidoo
b56a7bc139 Fix CHANGELOG.md after c401780c 2023-04-01 09:13:08 -03:00
iequidoo
c401780c15 Compress mime_headers column with HTML emails stored in database (#4077)
Co-authored-by: bjoern <r10s@b44t.com>
2023-03-31 19:45:34 -03:00
iequidoo
df96a1daac test_moved_markseen: Expect DC_EVENT_MSGS_CHANGED also 2023-03-31 17:15:02 -03:00
iequidoo
0a2200c2c8 python: Wait for initial Resync in the bot tests 2023-03-31 17:15:02 -03:00
Floris Bruynooghe
61b8d04418 ref(logging): remove LogExt::log_or_ok (#4250)
This further reduces the cognitive overload of having many ways to do
something.  The same is very easily done using composition.  Followup
from 82ace72527.
2023-03-31 12:15:17 +02:00
link2xt
fd7cc83537 Merge tag 'v1.112.4'
Release 1.112.4
2023-03-31 01:13:56 +00:00
link2xt
ae5f72cf4f Remove install_python_bindings.py script and update docs
`install_python_bindings.py` was not used by CI
or other scripts, except for `scripts/run-python-test.sh`,
which only used it to invoke `cargo`.

Instead of using an additional script,
run cargo directly.

The documentation is updated to remove
references to `install_python_bindings.py`.
The section "Installing bindings from source"
was in fact incorrect as it suggested
running `tox --devenv` first and only then
compiling the library with `install_python_bindings.py`.
Now it explicitly says to run cargo first
and then install the package without using `tox`.
`tox` is still used for running the tests
and setting up the development environment.
2023-03-31 00:27:35 +00:00
link2xt
3e65b6f3a6 Update to rPGP 0.10.1 2023-03-30 21:09:29 +00:00
link2xt
a4a53d5299 Increase MSRV to 1.65.0 2023-03-30 21:09:29 +00:00
link2xt
4223cac7a5 set_core_version.py: add newline to the end of package.json 2023-03-30 20:38:47 +00:00
link2xt
0073a09da6 Merge v1.112.3 2023-03-30 20:37:43 +00:00
link2xt
c4cf0f12c9 Prettier lint fix 2023-03-30 16:08:02 +00:00
link2xt
68635be8a2 Merge v1.112.2 into master 2023-03-30 15:42:27 +00:00
link2xt
776df7505e Remove upper limit on the attachment size
Recommended file size is used for recoding media.
For the upper size, we rely on the provider to tell us
if the message cannot be delivered to some recipients.
This allows sending large files, such as APKs,
when using providers that don't impose such strict limits.
2023-03-30 15:01:02 +00:00
Floris Bruynooghe
91c10b3ac6 feat(scheduler): Allow overlapping IoPauseGuards (#4246)
This enables using multiple pause guards at the same time.
2023-03-30 12:40:13 +02:00
Floris Bruynooghe
82ace72527 ref(logging): Remove message from LogExt::log_err (#4235)
This removes the message that needed to be supplied to LogExt::log_err
calls.  This was from a time before we adopted anyhow and now we are
better off using anyhow::Context::context for the message: it is more
consistent, composes better and is less custom.

The benefit of the composition can be seen in the FFI calls which need
to both log the error as well as return it to the caller via
the set_last_error mechanism.

It also removes the LogExt::ok_or_log_msg function for the same reason:
the message is obsoleted by anyhow's context.
2023-03-30 10:13:07 +02:00
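The composition described above can be sketched with plain std types (this stand-in trait mimics LogExt::log_err and a helper mimics anyhow::Context::context; the real code works on anyhow::Result and logs through a deltachat Context):

```rust
// A tiny log-and-pass-through extension, analogous to LogExt::log_err:
// it logs the error if present and hands the Result back unchanged,
// so error reporting composes with normal `?`/unwrap_or handling.
trait LogErrExt<T> {
    fn log_err(self) -> Result<T, String>;
}

impl<T> LogErrExt<T> for Result<T, String> {
    fn log_err(self) -> Result<T, String> {
        if let Err(e) = &self {
            eprintln!("{e}");
        }
        self
    }
}

// Stand-in for anyhow::Context::context(): prepend a message to the
// error instead of passing a message to log_err itself.
fn with_context<T>(r: Result<T, String>, msg: &str) -> Result<T, String> {
    r.map_err(|e| format!("{msg}: {e}"))
}

fn open() -> Result<u32, String> {
    Err("database locked".to_string())
}

fn ffi_open() -> u32 {
    // The FFI pattern from the diff below: add context once, log it,
    // and still return a plain value to the C caller.
    with_context(open(), "dc_context_open() failed")
        .log_err()
        .unwrap_or(0)
}
```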
link2xt
585b8ece58 Remove outdated comment. 2023-03-30 08:07:21 +00:00
link2xt
3400f5641e Build zig-rpc-server for i686
cargo-zigbuild is replaced with a simple zig-cc wrapper.
cargo-zigbuild still tries to use i386-linux-musl triple,
while in zig 0.11 x86-linux-musl should be used [1].

This also gives us more control over compilation flags used
and prevents changes due to cargo-zigbuild upgrades,
as its version was not pinned.

[1] Merged PR "all: rename i386 to x86 "
    <https://github.com/ziglang/zig/pull/13101>
2023-03-29 10:56:22 +00:00
link2xt
5068648960 Update yanked spin 0.9.6
Otherwise cargo-deny complains.
2023-03-29 10:56:02 +00:00
link2xt
1bf5064039 Add unreleased section to changelog 2023-03-27 16:13:11 +00:00
43 changed files with 1547 additions and 760 deletions


@@ -16,11 +16,11 @@ env:
RUSTFLAGS: -Dwarnings
jobs:
lint:
name: Rustfmt and Clippy
lint_rust:
name: Lint Rust
runs-on: ubuntu-latest
env:
RUSTUP_TOOLCHAIN: 1.68.0
RUSTUP_TOOLCHAIN: 1.68.2
steps:
- uses: actions/checkout@v3
- name: Install rustfmt and clippy
@@ -31,6 +31,8 @@ jobs:
run: cargo fmt --all -- --check
- name: Run clippy
run: scripts/clippy.sh
- name: Check
run: cargo check --workspace --all-targets --all-features
cargo_deny:
name: cargo deny
@@ -64,34 +66,28 @@ jobs:
- name: Rustdoc
run: cargo doc --document-private-items --no-deps
build_and_test:
name: Build and test
rust_tests:
name: Rust tests
strategy:
fail-fast: false
matrix:
include:
# Currently used Rust version.
- os: ubuntu-latest
rust: 1.68.0
python: 3.9
rust: 1.68.2
- os: windows-latest
rust: 1.68.0
python: false # Python bindings compilation on Windows is not supported.
rust: 1.68.2
- os: macos-latest
rust: 1.68.0
python: 3.9
rust: 1.68.2
# Minimum Supported Rust Version = 1.64.0
# Minimum Supported Rust Version = 1.65.0
#
# Minimum Supported Python Version = 3.7
# This is the minimum version for which manylinux Python wheels are
# built.
- os: ubuntu-latest
rust: 1.64.0
python: 3.7
rust: 1.65.0
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@master
- uses: actions/checkout@v3
- name: Install Rust ${{ matrix.rust }}
run: rustup toolchain install --profile minimal ${{ matrix.rust }}
@@ -100,64 +96,176 @@ jobs:
- name: Cache rust cargo artifacts
uses: swatinem/rust-cache@v2
- name: Check
run: cargo check --workspace --bins --examples --tests --benches
- name: Tests
run: cargo test --workspace
- name: Test cargo vendor
run: cargo vendor
c_library:
name: Build C library
strategy:
matrix:
os: [ubuntu-latest, macos-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v3
- name: Cache rust cargo artifacts
uses: swatinem/rust-cache@v2
- name: Build C library
run: cargo build -p deltachat_ffi --features jsonrpc
- name: Upload C library
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.os }}-${{matrix.rust}}-libdeltachat.a
path: target/debug/libdeltachat.a
retention-days: 1
rpc_server:
name: Build deltachat-rpc-server
strategy:
matrix:
os: [ubuntu-latest, macos-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v3
- name: Cache rust cargo artifacts
uses: swatinem/rust-cache@v2
- name: Build deltachat-rpc-server
run: cargo build -p deltachat-rpc-server
- name: Upload deltachat-rpc-server
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.os }}-${{matrix.rust}}-deltachat-rpc-server
path: target/debug/deltachat-rpc-server
retention-days: 1
python_lint:
name: Python lint
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v3
- name: Install tox
run: pip install tox
- name: Lint Python bindings
working-directory: python
run: tox -e lint
- name: Lint deltachat-rpc-client
working-directory: deltachat-rpc-client
run: tox -e lint
python_tests:
name: Python tests
needs: ["c_library", "python_lint"]
strategy:
fail-fast: false
matrix:
include:
# Currently used Rust version.
- os: ubuntu-latest
python: 3.11
- os: macos-latest
python: 3.11
# PyPy tests
- os: ubuntu-latest
python: pypy3.9
- os: macos-latest
python: pypy3.9
# Minimum Supported Python Version = 3.7
# This is the minimum version for which manylinux Python wheels are
# built. Test it with minimum supported Rust version.
- os: ubuntu-latest
python: 3.7
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v3
- name: Download libdeltachat.a
uses: actions/download-artifact@v3
with:
name: ${{ matrix.os }}-${{matrix.rust}}-libdeltachat.a
path: target/debug
- name: Install python
if: ${{ matrix.python }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python }}
- name: Install tox
if: ${{ matrix.python }}
run: pip install tox
- name: Build C library
if: ${{ matrix.python }}
run: cargo build -p deltachat_ffi --features jsonrpc
- name: Run python tests
if: ${{ matrix.python }}
env:
DCC_NEW_TMP_EMAIL: ${{ secrets.DCC_NEW_TMP_EMAIL }}
DCC_RS_TARGET: debug
DCC_RS_DEV: ${{ github.workspace }}
working-directory: python
run: tox -e lint,mypy,doc,py3
run: tox -e mypy,doc,py
- name: Build deltachat-rpc-server
if: ${{ matrix.python }}
run: cargo build -p deltachat-rpc-server
async_python_tests:
name: Async Python tests
needs: ["python_lint", "rpc_server"]
strategy:
fail-fast: false
matrix:
include:
# Currently used Rust version.
- os: ubuntu-latest
python: 3.11
- os: macos-latest
python: 3.11
# PyPy tests
- os: ubuntu-latest
python: pypy3.9
- os: macos-latest
python: pypy3.9
# Minimum Supported Python Version = 3.7
# This is the minimum version for which manylinux Python wheels are
# built. Test it with minimum supported Rust version.
- os: ubuntu-latest
python: 3.7
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v3
- name: Install python
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python }}
- name: Install tox
run: pip install tox
- name: Download deltachat-rpc-server
uses: actions/download-artifact@v3
with:
name: ${{ matrix.os }}-${{matrix.rust}}-deltachat-rpc-server
path: target/debug
- name: Make deltachat-rpc-server executable
run: chmod +x target/debug/deltachat-rpc-server
- name: Add deltachat-rpc-server to path
if: ${{ matrix.python }}
run: echo ${{ github.workspace }}/target/debug >> $GITHUB_PATH
- name: Run deltachat-rpc-client tests
if: ${{ matrix.python }}
env:
DCC_NEW_TMP_EMAIL: ${{ secrets.DCC_NEW_TMP_EMAIL }}
working-directory: deltachat-rpc-client
run: tox -e py3,lint
- name: Install pypy
if: ${{ matrix.python }}
uses: actions/setup-python@v4
with:
python-version: "pypy${{ matrix.python }}"
- name: Run pypy tests
if: ${{ matrix.python }}
env:
DCC_NEW_TMP_EMAIL: ${{ secrets.DCC_NEW_TMP_EMAIL }}
DCC_RS_TARGET: debug
DCC_RS_DEV: ${{ github.workspace }}
working-directory: python
run: tox -e pypy3
run: tox -e py


@@ -15,7 +15,7 @@ jobs:
# Build a version statically linked against musl libc
# to avoid problems with glibc version incompatibility.
build_linux:
name: Cross-compile deltachat-rpc-server for x86_64, aarch64 and armv7 Linux
name: Cross-compile deltachat-rpc-server for x86_64, i686, aarch64 and armv7 Linux
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
@@ -30,6 +30,13 @@ jobs:
path: target/x86_64-unknown-linux-musl/release/deltachat-rpc-server
if-no-files-found: error
- name: Upload i686 binary
uses: actions/upload-artifact@v3
with:
name: deltachat-rpc-server-i686
path: target/i686-unknown-linux-musl/release/deltachat-rpc-server
if-no-files-found: error
- name: Upload aarch64 binary
uses: actions/upload-artifact@v3
with:
@@ -90,7 +97,7 @@ jobs:
- name: Compose dist/ directory
run: |
mkdir dist
for x in x86_64 aarch64 armv7 win32.exe win64.exe; do
for x in x86_64 i686 aarch64 armv7 win32.exe win64.exe; do
mv "deltachat-rpc-server-$x"/* "dist/deltachat-rpc-server-$x"
done


@@ -1,5 +1,17 @@
# Changelog
## [Unreleased]
### Changes
- Increase MSRV to 1.65.0. #4236
- Remove upper limit on the attachment size. #4253
- Update rPGP to 0.10.1. #4236
- Compress `mime_headers` column with HTML emails stored in database
### Fixes
- Fix python bindings README documentation on installing the bindings from source.
- Show a warning if quota list is empty #4261
## [1.112.6] - 2023-04-04
### Changes
@@ -2366,6 +2378,7 @@ For a full list of changes, please see our closed Pull Requests:
https://github.com/deltachat/deltachat-core-rust/pulls?q=is%3Apr+is%3Aclosed
[unreleased]: https://github.com/deltachat/deltachat-core-rust/compare/v1.112.5...HEAD
[1.111.0]: https://github.com/deltachat/deltachat-core-rust/compare/v1.110.0...v1.111.0
[1.112.0]: https://github.com/deltachat/deltachat-core-rust/compare/v1.111.0...v1.112.0
[1.112.1]: https://github.com/deltachat/deltachat-core-rust/compare/v1.112.0...v1.112.1

Cargo.lock (generated): 871 changed lines, diff suppressed because it is too large.


@@ -3,7 +3,7 @@ name = "deltachat"
version = "1.112.6"
edition = "2021"
license = "MPL-2.0"
rust-version = "1.64"
rust-version = "1.65"
[profile.dev]
debug = 0
@@ -35,13 +35,13 @@ ratelimit = { path = "./deltachat-ratelimit" }
anyhow = "1"
async-channel = "1.8.0"
async-imap = { git = "https://github.com/async-email/async-imap", branch = "master", default-features = false, features = ["runtime-tokio"] }
async-imap = { version = "0.7.0", default-features = false, features = ["runtime-tokio"] }
async-native-tls = { version = "0.5", default-features = false, features = ["runtime-tokio"] }
async-smtp = { version = "0.9", default-features = false, features = ["runtime-tokio"] }
async_zip = { version = "0.0.11", default-features = false, features = ["deflate", "fs"] }
async_zip = { version = "0.0.12", default-features = false, features = ["deflate", "fs"] }
backtrace = "0.3"
base64 = "0.21"
bitflags = "1.3"
brotli = "3.3"
chrono = { version = "0.4", default-features=false, features = ["clock", "std"] }
email = { git = "https://github.com/deltachat/rust-email", branch = "master" }
encoded-words = { git = "https://github.com/async-email/encoded-words", branch = "master" }
@@ -51,7 +51,11 @@ futures = "0.3"
futures-lite = "1.12.0"
hex = "0.4.0"
humansize = "2"
image = { version = "0.24.6", default-features=false, features = ["gif", "jpeg", "ico", "png", "pnm", "webp", "bmp"] }
iroh = { version = "0.4.1", default-features = false }
kamadak-exif = "0.5"
lettre_email = { git = "https://github.com/deltachat/lettre", branch = "master" }
@@ -63,14 +67,14 @@ num-traits = "0.2"
once_cell = "1.17.0"
percent-encoding = "2.2"
parking_lot = "0.12"
pgp = { version = "0.9", default-features = false }
pgp = { version = "0.10", default-features = false }
pretty_env_logger = { version = "0.4", optional = true }
qrcodegen = "1.7.0"
quick-xml = "0.27"
quick-xml = "0.28"
rand = "0.8"
regex = "1.7"
reqwest = { version = "0.11.14", features = ["json"] }
rusqlite = { version = "0.28", features = ["sqlcipher"] }
reqwest = { version = "0.11.16", features = ["json"] }
rusqlite = { version = "0.29", features = ["sqlcipher"] }
rust-hsluv = "0.1"
sanitize-filename = "0.4"
serde_json = "1.0"
@@ -101,7 +105,7 @@ log = "0.4"
pretty_env_logger = "0.4"
proptest = { version = "1", default-features = false, features = ["std"] }
tempfile = "3"
testdir = "0.7.2"
testdir = "0.7.3"
tokio = { version = "1", features = ["parking_lot", "rt-multi-thread", "macros"] }
[workspace]


@@ -160,7 +160,8 @@ pub unsafe extern "C" fn dc_context_open(
let ctx = &*context;
let passphrase = to_string_lossy(passphrase);
block_on(ctx.open(passphrase))
.log_err(ctx, "dc_context_open() failed")
.context("dc_context_open() failed")
.log_err(ctx)
.map(|b| b as libc::c_int)
.unwrap_or(0)
}
@@ -216,16 +217,18 @@ pub unsafe extern "C" fn dc_set_config(
if key.starts_with("ui.") {
ctx.set_ui_config(&key, value.as_deref())
.await
.with_context(|| format!("Can't set {key} to {value:?}"))
.log_err(ctx, "dc_set_config() failed")
.with_context(|| format!("dc_set_config failed: Can't set {key} to {value:?}"))
.log_err(ctx)
.is_ok() as libc::c_int
} else {
match config::Config::from_str(&key) {
Ok(key) => ctx
.set_config(key, value.as_deref())
.await
.with_context(|| format!("Can't set {key} to {value:?}"))
.log_err(ctx, "dc_set_config() failed")
.with_context(|| {
format!("dc_set_config() failed: Can't set {key} to {value:?}")
})
.log_err(ctx)
.is_ok() as libc::c_int,
Err(_) => {
warn!(ctx, "dc_set_config(): invalid key");
@@ -253,7 +256,8 @@ pub unsafe extern "C" fn dc_get_config(
if key.starts_with("ui.") {
ctx.get_ui_config(&key)
.await
.log_err(ctx, "Can't get ui-config")
.context("Can't get ui-config")
.log_err(ctx)
.unwrap_or_default()
.unwrap_or_default()
.strdup()
@@ -262,7 +266,8 @@ pub unsafe extern "C" fn dc_get_config(
Ok(key) => ctx
.get_config(key)
.await
.log_err(ctx, "Can't get config")
.context("Can't get config")
.log_err(ctx)
.unwrap_or_default()
.unwrap_or_default()
.strdup(),
@@ -414,7 +419,8 @@ pub unsafe extern "C" fn dc_get_oauth2_url(
block_on(async move {
match oauth2::get_oauth2_url(ctx, &addr, &redirect)
.await
.log_err(ctx, "dc_get_oauth2_url failed")
.context("dc_get_oauth2_url failed")
.log_err(ctx)
{
Ok(Some(res)) => res.strdup(),
Ok(None) | Err(_) => ptr::null_mut(),
@@ -423,7 +429,12 @@ pub unsafe extern "C" fn dc_get_oauth2_url(
}
fn spawn_configure(ctx: Context) {
spawn(async move { ctx.configure().await.log_err(&ctx, "Configure failed") });
spawn(async move {
ctx.configure()
.await
.context("Configure failed")
.log_err(&ctx)
});
}
#[no_mangle]
@@ -448,7 +459,8 @@ pub unsafe extern "C" fn dc_is_configured(context: *mut dc_context_t) -> libc::c
block_on(async move {
ctx.is_configured()
.await
.log_err(ctx, "failed to get configured state")
.context("failed to get configured state")
.log_err(ctx)
.unwrap_or_default() as libc::c_int
})
}
@@ -790,7 +802,8 @@ pub unsafe extern "C" fn dc_preconfigure_keypair(
key::store_self_keypair(ctx, &keypair, key::KeyPairUse::Default).await?;
Ok::<_, anyhow::Error>(1)
})
.log_err(ctx, "Failed to save keypair")
.context("Failed to save keypair")
.log_err(ctx)
.unwrap_or(0)
}
@@ -817,7 +830,8 @@ pub unsafe extern "C" fn dc_get_chatlist(
block_on(async move {
match chatlist::Chatlist::try_load(ctx, flags as usize, qs.as_deref(), qi)
.await
.log_err(ctx, "Failed to get chatlist")
.context("Failed to get chatlist")
.log_err(ctx)
{
Ok(list) => {
let ffi_list = ChatlistWrapper { context, list };
@@ -842,7 +856,8 @@ pub unsafe extern "C" fn dc_create_chat_by_contact_id(
block_on(async move {
ChatId::create_for_contact(ctx, ContactId::new(contact_id))
.await
.log_err(ctx, "Failed to create chat from contact_id")
.context("Failed to create chat from contact_id")
.log_err(ctx)
.map(|id| id.to_u32())
.unwrap_or(0)
})
@@ -862,7 +877,8 @@ pub unsafe extern "C" fn dc_get_chat_id_by_contact_id(
block_on(async move {
ChatId::lookup_by_contact(ctx, ContactId::new(contact_id))
.await
.log_err(ctx, "Failed to get chat for contact_id")
.context("Failed to get chat for contact_id")
.log_err(ctx)
.unwrap_or_default() // unwraps the Result
.map(|id| id.to_u32())
.unwrap_or(0) // unwraps the Option
@@ -1004,7 +1020,8 @@ pub unsafe extern "C" fn dc_get_msg_reactions(
let ctx = &*context;
let reactions = if let Ok(reactions) = block_on(get_msg_reactions(ctx, MsgId::new(msg_id)))
.log_err(ctx, "failed dc_get_msg_reactions() call")
.context("failed dc_get_msg_reactions() call")
.log_err(ctx)
{
reactions
} else {
@@ -1032,7 +1049,8 @@ pub unsafe extern "C" fn dc_send_webxdc_status_update(
&to_string_lossy(json),
&to_string_lossy(descr),
))
.log_err(ctx, "Failed to send webxdc update")
.context("Failed to send webxdc update")
.log_err(ctx)
.is_ok() as libc::c_int
}
@@ -1251,7 +1269,8 @@ pub unsafe extern "C" fn dc_get_fresh_msgs(
let arr = dc_array_t::from(
ctx.get_fresh_msgs()
.await
.log_err(ctx, "Failed to get fresh messages")
.context("Failed to get fresh messages")
.log_err(ctx)
.unwrap_or_default()
.iter()
.map(|msg_id| msg_id.to_u32())
@@ -1272,7 +1291,8 @@ pub unsafe extern "C" fn dc_marknoticed_chat(context: *mut dc_context_t, chat_id
block_on(async move {
chat::marknoticed_chat(ctx, ChatId::new(chat_id))
.await
.log_err(ctx, "Failed marknoticed chat")
.context("Failed marknoticed chat")
.log_err(ctx)
.unwrap_or(())
})
}
@@ -1414,7 +1434,8 @@ pub unsafe extern "C" fn dc_set_chat_visibility(
ChatId::new(chat_id)
.set_visibility(ctx, visibility)
.await
.log_err(ctx, "Failed setting chat visibility")
.context("Failed setting chat visibility")
.log_err(ctx)
.unwrap_or(())
})
}
@@ -1431,7 +1452,9 @@ pub unsafe extern "C" fn dc_delete_chat(context: *mut dc_context_t, chat_id: u32
ChatId::new(chat_id)
.delete(ctx)
.await
.ok_or_log_msg(ctx, "Failed chat delete");
.context("Failed chat delete")
.log_err(ctx)
.ok();
})
}
@@ -1447,7 +1470,9 @@ pub unsafe extern "C" fn dc_block_chat(context: *mut dc_context_t, chat_id: u32)
ChatId::new(chat_id)
.block(ctx)
.await
.ok_or_log_msg(ctx, "Failed chat block");
.context("Failed chat block")
.log_err(ctx)
.ok();
})
}
@@ -1463,7 +1488,9 @@ pub unsafe extern "C" fn dc_accept_chat(context: *mut dc_context_t, chat_id: u32
ChatId::new(chat_id)
.accept(ctx)
.await
.ok_or_log_msg(ctx, "Failed chat accept");
.context("Failed chat accept")
.log_err(ctx)
.ok();
})
}
@@ -1561,7 +1588,8 @@ pub unsafe extern "C" fn dc_create_group_chat(
block_on(async move {
chat::create_group_chat(ctx, protect, &to_string_lossy(name))
.await
.log_err(ctx, "Failed to create group chat")
.context("Failed to create group chat")
.log_err(ctx)
.map(|id| id.to_u32())
.unwrap_or(0)
})
@@ -1575,7 +1603,8 @@ pub unsafe extern "C" fn dc_create_broadcast_list(context: *mut dc_context_t) ->
}
let ctx = &*context;
block_on(chat::create_broadcast_list(ctx))
.log_err(ctx, "Failed to create broadcast list")
.context("Failed to create broadcast list")
.log_err(ctx)
.map(|id| id.to_u32())
.unwrap_or(0)
}
@@ -1597,7 +1626,8 @@ pub unsafe extern "C" fn dc_is_contact_in_chat(
ChatId::new(chat_id),
ContactId::new(contact_id),
))
.log_err(ctx, "is_contact_in_chat failed")
.context("is_contact_in_chat failed")
.log_err(ctx)
.unwrap_or_default() as libc::c_int
}
@@ -1618,7 +1648,8 @@ pub unsafe extern "C" fn dc_add_contact_to_chat(
ChatId::new(chat_id),
ContactId::new(contact_id),
))
.log_err(ctx, "Failed to add contact")
.context("Failed to add contact")
.log_err(ctx)
.is_ok() as libc::c_int
}
@@ -1639,7 +1670,8 @@ pub unsafe extern "C" fn dc_remove_contact_from_chat(
ChatId::new(chat_id),
ContactId::new(contact_id),
))
.log_err(ctx, "Failed to remove contact")
.context("Failed to remove contact")
.log_err(ctx)
.is_ok() as libc::c_int
}
@@ -1758,7 +1790,8 @@ pub unsafe extern "C" fn dc_get_chat_ephemeral_timer(
// ignored when ephemeral timer value is used to construct
// message headers.
block_on(async move { ChatId::new(chat_id).get_ephemeral_timer(ctx).await })
.log_err(ctx, "Failed to get ephemeral timer")
.context("Failed to get ephemeral timer")
.log_err(ctx)
.unwrap_or_default()
.to_u32()
}
@@ -1779,7 +1812,8 @@ pub unsafe extern "C" fn dc_set_chat_ephemeral_timer(
ChatId::new(chat_id)
.set_ephemeral_timer(ctx, EphemeralTimer::from_u32(timer))
.await
.log_err(ctx, "Failed to set ephemeral timer")
.context("Failed to set ephemeral timer")
.log_err(ctx)
.is_ok() as libc::c_int
})
}
@@ -1855,7 +1889,8 @@ pub unsafe extern "C" fn dc_delete_msgs(
let msg_ids = convert_and_prune_message_ids(msg_ids, msg_cnt);
block_on(message::delete_msgs(ctx, &msg_ids))
.log_err(ctx, "failed dc_delete_msgs() call")
.context("failed dc_delete_msgs() call")
.log_err(ctx)
.ok();
}
@@ -1919,7 +1954,8 @@ pub unsafe extern "C" fn dc_markseen_msgs(
let ctx = &*context;
block_on(message::markseen_msgs(ctx, msg_ids))
.log_err(ctx, "failed dc_markseen_msgs() call")
.context("failed dc_markseen_msgs() call")
.log_err(ctx)
.ok();
}
@@ -1961,7 +1997,8 @@ pub unsafe extern "C" fn dc_download_full_msg(context: *mut dc_context_t, msg_id
}
let ctx = &*context;
block_on(MsgId::new(msg_id).download_full(ctx))
.log_err(ctx, "Failed to download message fully.")
.context("Failed to download message fully.")
.log_err(ctx)
.ok();
}
@@ -2009,7 +2046,8 @@ pub unsafe extern "C" fn dc_create_contact(
let name = to_string_lossy(name);
block_on(Contact::create(ctx, &name, &to_string_lossy(addr)))
.log_err(ctx, "Cannot create contact")
.context("Cannot create contact")
.log_err(ctx)
.map(|id| id.to_u32())
.unwrap_or(0)
}
@@ -2086,7 +2124,8 @@ pub unsafe extern "C" fn dc_get_blocked_contacts(
Box::into_raw(Box::new(dc_array_t::from(
Contact::get_all_blocked(ctx)
.await
.log_err(ctx, "Can't get blocked contacts")
.context("Can't get blocked contacts")
.log_err(ctx)
.unwrap_or_default()
.iter()
.map(|id| id.to_u32())
@@ -2111,11 +2150,15 @@ pub unsafe extern "C" fn dc_block_contact(
if block == 0 {
Contact::unblock(ctx, contact_id)
.await
.ok_or_log_msg(ctx, "Can't unblock contact");
.context("Can't unblock contact")
.log_err(ctx)
.ok();
} else {
Contact::block(ctx, contact_id)
.await
.ok_or_log_msg(ctx, "Can't block contact");
.context("Can't block contact")
.log_err(ctx)
.ok();
}
});
}
@@ -2188,7 +2231,8 @@ fn spawn_imex(ctx: Context, what: imex::ImexMode, param1: String, passphrase: Op
spawn(async move {
imex::imex(&ctx, what, param1.as_ref(), passphrase)
.await
.log_err(&ctx, "IMEX failed")
.context("IMEX failed")
.log_err(&ctx)
});
}
@@ -2374,7 +2418,8 @@ pub unsafe extern "C" fn dc_join_securejoin(
securejoin::join_securejoin(ctx, &to_string_lossy(qr))
.await
.map(|chatid| chatid.to_u32())
.log_err(ctx, "failed dc_join_securejoin() call")
.context("failed dc_join_securejoin() call")
.log_err(ctx)
.unwrap_or_default()
})
}
@@ -2396,7 +2441,8 @@ pub unsafe extern "C" fn dc_send_locations_to_chat(
ChatId::new(chat_id),
seconds as i64,
))
.log_err(ctx, "Failed dc_send_locations_to_chat()")
.context("Failed dc_send_locations_to_chat()")
.log_err(ctx)
.ok();
}
@@ -2479,7 +2525,8 @@ pub unsafe extern "C" fn dc_delete_all_locations(context: *mut dc_context_t) {
block_on(async move {
location::delete_all(ctx)
.await
.log_err(ctx, "Failed to delete locations")
.context("Failed to delete locations")
.log_err(ctx)
.ok()
});
}
@@ -2763,7 +2810,8 @@ pub unsafe extern "C" fn dc_chatlist_get_summary(
.list
.get_summary(ctx, index, maybe_chat)
.await
.log_err(ctx, "get_summary failed")
.context("get_summary failed")
.log_err(ctx)
.unwrap_or_default();
Box::into_raw(Box::new(summary.into()))
})
@@ -2791,7 +2839,8 @@ pub unsafe extern "C" fn dc_chatlist_get_summary2(
msg_id,
None,
))
.log_err(ctx, "get_summary2 failed")
.context("get_summary2 failed")
.log_err(ctx)
.unwrap_or_default();
Box::into_raw(Box::new(summary.into()))
}
@@ -2974,7 +3023,8 @@ pub unsafe extern "C" fn dc_chat_can_send(chat: *mut dc_chat_t) -> libc::c_int {
let ffi_chat = &*chat;
let ctx = &*ffi_chat.context;
block_on(ffi_chat.chat.can_send(ctx))
.log_err(ctx, "can_send failed")
.context("can_send failed")
.log_err(ctx)
.unwrap_or_default() as libc::c_int
}
@@ -3419,7 +3469,8 @@ pub unsafe extern "C" fn dc_msg_get_summary(
let ctx = &*ffi_msg.context;
let summary = block_on(ffi_msg.message.get_summary(ctx, maybe_chat))
.log_err(ctx, "dc_msg_get_summary failed")
.context("dc_msg_get_summary failed")
.log_err(ctx)
.unwrap_or_default();
Box::into_raw(Box::new(summary.into()))
}
@@ -3437,7 +3488,8 @@ pub unsafe extern "C" fn dc_msg_get_summarytext(
let ctx = &*ffi_msg.context;
let summary = block_on(ffi_msg.message.get_summary(ctx, None))
.log_err(ctx, "dc_msg_get_summarytext failed")
.context("dc_msg_get_summarytext failed")
.log_err(ctx)
.unwrap_or_default();
match usize::try_from(approx_characters) {
Ok(chars) => summary.truncated_text(chars).strdup(),
@@ -3704,7 +3756,9 @@ pub unsafe extern "C" fn dc_msg_latefiling_mediasize(
.message
.latefiling_mediasize(ctx, width, height, duration)
})
.ok_or_log_msg(ctx, "Cannot set media size");
.context("Cannot set media size")
.log_err(ctx)
.ok();
}
#[no_mangle]
@@ -3743,7 +3797,8 @@ pub unsafe extern "C" fn dc_msg_set_quote(msg: *mut dc_msg_t, quote: *const dc_m
.message
.set_quote(&*ffi_msg.context, quote_msg)
.await
.log_err(&*ffi_msg.context, "failed to set quote")
.context("failed to set quote")
.log_err(&*ffi_msg.context)
.ok();
});
}
@@ -3774,7 +3829,8 @@ pub unsafe extern "C" fn dc_msg_get_quoted_msg(msg: *const dc_msg_t) -> *mut dc_
.message
.quoted_message(context)
.await
.log_err(context, "failed to get quoted message")
.context("failed to get quoted message")
.log_err(context)
.unwrap_or(None)
});
@@ -3797,7 +3853,8 @@ pub unsafe extern "C" fn dc_msg_get_parent(msg: *const dc_msg_t) -> *mut dc_msg_
.message
.parent(context)
.await
.log_err(context, "failed to get parent message")
.context("failed to get parent message")
.log_err(context)
.unwrap_or(None)
});
@@ -3988,7 +4045,8 @@ pub unsafe extern "C" fn dc_contact_is_verified(contact: *mut dc_contact_t) -> l
let ctx = &*ffi_contact.context;
block_on(ffi_contact.contact.is_verified(ctx))
.log_err(ctx, "is_verified failed")
.context("is_verified failed")
.log_err(ctx)
.unwrap_or_default() as libc::c_int
}
@@ -4003,7 +4061,8 @@ pub unsafe extern "C" fn dc_contact_get_verifier_addr(
let ffi_contact = &*contact;
let ctx = &*ffi_contact.context;
block_on(ffi_contact.contact.get_verifier_addr(ctx))
.log_err(ctx, "failed to get verifier for contact")
.context("failed to get verifier for contact")
.log_err(ctx)
.unwrap_or_default()
.strdup()
}
@@ -4017,7 +4076,8 @@ pub unsafe extern "C" fn dc_contact_get_verifier_id(contact: *mut dc_contact_t)
let ffi_contact = &*contact;
let ctx = &*ffi_contact.context;
let verifier_contact_id = block_on(ffi_contact.contact.get_verifier_id(ctx))
.log_err(ctx, "failed to get verifier")
.context("failed to get verifier")
.log_err(ctx)
.unwrap_or_default()
.unwrap_or_default();
@@ -4169,8 +4229,8 @@ pub unsafe extern "C" fn dc_backup_provider_new(
provider,
})
.map(|ffi_provider| Box::into_raw(Box::new(ffi_provider)))
.log_err(ctx, "BackupProvider failed")
.context("BackupProvider failed")
.log_err(ctx)
.set_last_error(ctx)
.unwrap_or(ptr::null_mut())
}
@@ -4186,8 +4246,8 @@ pub unsafe extern "C" fn dc_backup_provider_get_qr(
let ffi_provider = &*provider;
let ctx = &*ffi_provider.context;
deltachat::qr::format_backup(&ffi_provider.provider.qr())
.log_err(ctx, "BackupProvider get_qr failed")
.context("BackupProvider get_qr failed")
.log_err(ctx)
.set_last_error(ctx)
.unwrap_or_default()
.strdup()
@@ -4205,8 +4265,8 @@ pub unsafe extern "C" fn dc_backup_provider_get_qr_svg(
let ctx = &*ffi_provider.context;
let provider = &ffi_provider.provider;
block_on(generate_backup_qr(ctx, &provider.qr()))
.log_err(ctx, "BackupProvider get_qr_svg failed")
.context("BackupProvider get_qr_svg failed")
.log_err(ctx)
.set_last_error(ctx)
.unwrap_or_default()
.strdup()
@@ -4222,8 +4282,8 @@ pub unsafe extern "C" fn dc_backup_provider_wait(provider: *mut dc_backup_provid
let ctx = &*ffi_provider.context;
let provider = &mut ffi_provider.provider;
block_on(provider)
.log_err(ctx, "Failed to await BackupProvider")
.context("Failed to await BackupProvider")
.log_err(ctx)
.set_last_error(ctx)
.ok();
}
@@ -4255,16 +4315,16 @@ pub unsafe extern "C" fn dc_receive_backup(
// user from deallocating it by calling unref on the object while we are using it.
fn receive_backup(ctx: Context, qr_text: String) -> libc::c_int {
let qr = match block_on(qr::check_qr(&ctx, &qr_text))
.log_err(&ctx, "Invalid QR code")
.context("Invalid QR code")
.log_err(&ctx)
.set_last_error(&ctx)
{
Ok(qr) => qr,
Err(_) => return 0,
};
match block_on(imex::get_backup(&ctx, qr))
.log_err(&ctx, "Get backup failed")
.context("Get backup failed")
.log_err(&ctx)
.set_last_error(&ctx)
{
Ok(_) => 1,
@@ -4316,8 +4376,8 @@ where
///
/// ```no_compile
/// some_dc_rust_api_call_returning_result()
/// .log_err(&context, "My API call failed")
/// .context("My API call failed")
/// .log_err(&context)
/// .set_last_error(&context)
/// .unwrap_or_default()
/// ```
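The refactor shown throughout this diff moves the human-readable message out of `log_err` and into a preceding `.context()` call, so `log_err` only needs the context handle. A minimal, std-only sketch of that call order — `Ctx`, `ResultExt`, and the `String` error type are hypothetical stand-ins (the real code uses anyhow's `Context` trait and a crate-internal `log_err` extension):

```rust
// Stand-in for &Context: only needs a way to log.
struct Ctx;

impl Ctx {
    fn log(&self, msg: &str) {
        println!("error: {msg}");
    }
}

trait ResultExt<T> {
    /// Prefix the error with a message (mimics anyhow's `.context()`).
    fn context(self, msg: &str) -> Result<T, String>;
    /// Log the already-contextualized error, pass the Result through.
    fn log_err(self, ctx: &Ctx) -> Result<T, String>;
}

impl<T> ResultExt<T> for Result<T, String> {
    fn context(self, msg: &str) -> Result<T, String> {
        self.map_err(|e| format!("{msg}: {e}"))
    }
    fn log_err(self, ctx: &Ctx) -> Result<T, String> {
        if let Err(e) = &self {
            ctx.log(e);
        }
        self
    }
}

fn fallible() -> Result<u32, String> {
    Err("disk full".to_string())
}

fn main() {
    let ctx = Ctx;
    // New pattern: message attached via `.context()` first,
    // `log_err` takes only the context handle.
    let v = fallible()
        .context("My API call failed")
        .log_err(&ctx)
        .unwrap_or_default();
    println!("{v}");
}
```

The design win is that `log_err` no longer duplicates anyhow's message-attaching job, and the same contextualized error can flow on to `set_last_error` or the caller unchanged.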
@@ -4404,7 +4464,8 @@ pub unsafe extern "C" fn dc_provider_new_from_email_with_dns(
let socks5_enabled = block_on(async move {
ctx.get_config_bool(config::Config::Socks5Enabled)
.await
.log_err(ctx, "Can't get config")
.context("Can't get config")
.log_err(ctx)
});
match socks5_enabled {


@@ -19,21 +19,21 @@ serde = { version = "1.0", features = ["derive"] }
tempfile = "3.3.0"
log = "0.4"
async-channel = { version = "1.8.0" }
futures = { version = "0.3.26" }
serde_json = "1.0.91"
futures = { version = "0.3.28" }
serde_json = "1.0.95"
yerpc = { version = "0.4.3", features = ["anyhow_expose"] }
typescript-type-def = { version = "0.5.5", features = ["json_value"] }
tokio = { version = "1.25.0" }
tokio = { version = "1.27.0" }
sanitize-filename = "0.4"
walkdir = "2.3.2"
walkdir = "2.3.3"
base64 = "0.21"
# optional dependencies
axum = { version = "0.6.11", optional = true, features = ["ws"] }
axum = { version = "0.6.12", optional = true, features = ["ws"] }
env_logger = { version = "0.10.0", optional = true }
[dev-dependencies]
tokio = { version = "1.25.0", features = ["full", "rt-multi-thread"] }
tokio = { version = "1.27.0", features = ["full", "rt-multi-thread"] }
[features]


@@ -1082,17 +1082,17 @@ impl CommandApi {
}
/// Search messages containing the given query string.
/// Searching can be done globally (chat_id=0) or in a specified chat only (chat_id set).
/// Searching can be done globally (chat_id=None) or in a specified chat only (chat_id set).
///
/// Global chat results are typically displayed using dc_msg_get_summary(), chat
/// search results may just hilite the corresponding messages and present a
/// Global search results are typically displayed using dc_msg_get_summary(), chat
/// search results may just highlight the corresponding messages and present a
/// prev/next button.
///
/// For global search, result is limited to 1000 messages,
/// this allows incremental search done fast.
/// So, when getting exactly 1000 results, the result may be truncated;
/// the UIs may display sth. as "1000+ messages found" in this case.
/// Chat search (if a chat_id is set) is not limited.
/// For the global search, the result is limited to 1000 messages,
/// this allows an incremental search done fast.
/// So, when getting exactly 1000 messages, the result actually may be truncated;
/// the UIs may display sth. like "1000+ messages found" in this case.
/// The chat search (if chat_id is set) is not limited.
async fn search_messages(
&self,
account_id: u32,


@@ -8,10 +8,10 @@ edition = "2021"
ansi_term = "0.12.1"
anyhow = "1"
deltachat = { path = "..", features = ["internals"]}
dirs = "4"
dirs = "5"
log = "0.4.16"
pretty_env_logger = "0.4"
rusqlite = "0.28"
rusqlite = "0.29"
rustyline = "11"
tokio = { version = "1", features = ["fs", "rt-multi-thread", "macros"] }


@@ -563,7 +563,7 @@ pub async fn cmdline(context: Context, line: &str, chat_id: &mut ChatId) -> Resu
context.maybe_network().await;
}
"housekeeping" => {
sql::housekeeping(&context).await.ok_or_log(&context);
sql::housekeeping(&context).await.log_err(&context).ok();
}
"listchats" | "listarchived" | "chats" => {
let listflags = if arg0 == "listarchived" {


@@ -17,9 +17,9 @@ anyhow = "1"
env_logger = { version = "0.10.0" }
futures-lite = "1.12.0"
log = "0.4"
serde_json = "1.0.91"
serde_json = "1.0.95"
serde = { version = "1.0", features = ["derive"] }
tokio = { version = "1.25.0", features = ["io-std"] }
tokio = { version = "1.27.0", features = ["io-std"] }
yerpc = { version = "0.4.0", features = ["anyhow_expose"] }
[features]


@@ -8,5 +8,5 @@ license = "MPL-2.0"
proc-macro = true
[dependencies]
syn = "1"
syn = "2"
quote = "1"


@@ -13,34 +13,46 @@ ignore = [
# when upgrading.
# Please keep this list alphabetically sorted.
skip = [
{ name = "base16ct", version = "0.1.1" },
{ name = "base64", version = "<0.21" },
{ name = "bitflags", version = "1.3.2" },
{ name = "block-buffer", version = "<0.10" },
{ name = "clap", version = "3.2.23" },
{ name = "clap_lex", version = "0.2.4" },
{ name = "clap", version = "3.2.23" },
{ name = "convert_case", version = "0.4.0" },
{ name = "darling", version = "<0.14" },
{ name = "curve25519-dalek", version = "3.2.0" },
{ name = "darling_core", version = "<0.14" },
{ name = "darling_macro", version = "<0.14" },
{ name = "darling", version = "<0.14" },
{ name = "der", version = "0.6.1" },
{ name = "digest", version = "<0.10" },
{ name = "ed25519-dalek", version = "1.0.1" },
{ name = "ed25519", version = "1.5.3" },
{ name = "env_logger", version = "<0.10" },
{ name = "getrandom", version = "<0.2" },
{ name = "hermit-abi", version = "<0.3" },
{ name = "humantime", version = "<2.1" },
{ name = "idna", version = "<0.3" },
{ name = "nom", version = "<7.1" },
{ name = "libm", version = "0.1.4" },
{ name = "pem-rfc7468", version = "0.6.0" },
{ name = "pkcs8", version = "0.9.0" },
{ name = "quick-error", version = "<2.0" },
{ name = "rand", version = "<0.8" },
{ name = "rand_chacha", version = "<0.3" },
{ name = "rand_core", version = "<0.6" },
{ name = "rand", version = "<0.8" },
{ name = "redox_syscall", version = "0.2.16" },
{ name = "sec1", version = "0.3.0" },
{ name = "sha2", version = "<0.10" },
{ name = "signature", version = "1.6.4" },
{ name = "spin", version = "<0.9.6" },
{ name = "spki", version = "0.6.0" },
{ name = "syn", version = "1.0.109" },
{ name = "time", version = "<0.3" },
{ name = "version_check", version = "<0.9" },
{ name = "wasi", version = "<0.11" },
{ name = "windows-sys", version = "<0.45" },
{ name = "windows_aarch64_msvc", version = "<0.42" },
{ name = "windows_i686_gnu", version = "<0.42" },
{ name = "windows_i686_msvc", version = "<0.42" },
{ name = "windows-sys", version = "<0.45" },
{ name = "windows_x86_64_gnu", version = "<0.42" },
{ name = "windows_x86_64_msvc", version = "<0.42" },
]


@@ -12,17 +12,17 @@ a low-level Chat/Contact/Message API to user interfaces and bots.
Installing pre-built packages (Linux-only)
==========================================
If you have a Linux system you may try to install the ``deltachat`` binary "wheel" packages
If you have a Linux system you may install the ``deltachat`` binary "wheel" packages
without any "build-from-source" steps.
Otherwise you need to `compile the Delta Chat bindings yourself`__.
__ sourceinstall_
We recommend to first `install virtualenv <https://virtualenv.pypa.io/en/stable/installation.html>`_,
then create a fresh Python virtual environment and activate it in your shell::
We recommend to first create a fresh Python virtual environment
and activate it in your shell::
virtualenv env # or: python -m venv
source env/bin/activate
python -m venv env
source env/bin/activate
Afterwards, invoking ``python`` or ``pip install`` only
modifies files in your ``env`` directory and leaves
@@ -40,16 +40,14 @@ To verify it worked::
Running tests
=============
Recommended way to run tests is using `tox <https://tox.wiki>`_.
After successful binding installation you can install tox
and run the tests::
The recommended way to run tests is the `scripts/run-python-test.sh`
script provided in the core repository.
pip install tox
tox -e py3
This will run all "offline" tests and skip all functional
This script compiles the library in debug mode and runs the tests using `tox`_.
By default it will run all "offline" tests and skip all functional
end-to-end tests that require accounts on real e-mail servers.
.. _`tox`: https://tox.wiki
.. _livetests:
Running "live" tests with temporary accounts
@@ -61,13 +59,32 @@ Please feel free to contact us through a github issue or by e-mail and we'll sen
export DCC_NEW_TMP_EMAIL=<URL you got from us>
With this account-creation setting, pytest runs create ephemeral e-mail accounts on the http://testrun.org server. These accounts exists only for one hour and then are removed completely.
One hour is enough to invoke pytest and run all offline and online tests::
With this account-creation setting, pytest runs create ephemeral e-mail accounts on the http://testrun.org server.
These accounts are removed automatically as they expire.
After setting the variable, either rerun `scripts/run-python-test.sh`
or run offline and online tests with `tox` directly::
tox -e py3
tox -e py
Each test run creates new accounts.
Developing the bindings
-----------------------
If you want to develop or debug the bindings,
you can create a testing development environment using `tox`::
tox -c python --devenv env
. env/bin/activate
Inside this environment the bindings are installed
in editable mode (as if installed with `python -m pip install -e`)
together with the testing dependencies like `pytest` and its plugins.
You can then edit the source code in the development tree
and quickly run `pytest` manually without waiting for `tox`
to recreate the virtual environment each time.
.. _sourceinstall:
Installing bindings from source
@@ -89,20 +106,34 @@ To install the Delta Chat Python bindings make sure you have Python3 installed.
E.g. on Debian-based systems `apt install python3 python3-pip
python3-venv` should give you a usable python installation.
Ensure you are in the deltachat-core-rust/python directory, create the
virtual environment with dependencies using tox
and activate it in your shell::
First, build the core library::
cd python
tox --devenv env
cargo build --release -p deltachat_ffi --features jsonrpc
The `jsonrpc` feature is required even if not used by the bindings
because `deltachat.h` includes JSON-RPC functions unconditionally.
Create the virtual environment and activate it:
python -m venv env
source env/bin/activate
You should now be able to build the python bindings using the supplied script::
Build and install the bindings:
python3 install_python_bindings.py
export DCC_RS_DEV="$PWD"
export DCC_RS_TARGET=release
python -m pip install python
The core compilation and bindings building might take a while,
depending on the speed of your machine.
The `DCC_RS_DEV` environment variable specifies the location of
the core development tree. If this variable is not set,
the `libdeltachat` library and `deltachat.h` header are expected
to be installed system-wide.
When `DCC_RS_DEV` is set, `DCC_RS_TARGET` specifies
the build profile name to look up the artifacts
in the target directory.
In this case setting it can be skipped because
`DCC_RS_TARGET=release` is the default.
Building manylinux based wheels
===============================


@@ -24,6 +24,8 @@ def test_echo_quit_plugin(acfactory, lp):
lp.sec("creating a temp account to contact the bot")
(ac1,) = acfactory.get_online_accounts(1)
botproc.await_resync()
lp.sec("sending a message to the bot")
bot_contact = ac1.create_contact(botproc.addr)
bot_chat = bot_contact.create_chat()
@@ -40,7 +42,7 @@ def test_echo_quit_plugin(acfactory, lp):
def test_group_tracking_plugin(acfactory, lp):
lp.sec("creating one group-tracking bot and two temp accounts")
botproc = acfactory.run_bot_process(group_tracking, ffi=False)
botproc = acfactory.run_bot_process(group_tracking)
ac1, ac2 = acfactory.get_online_accounts(2)
@@ -52,6 +54,8 @@ def test_group_tracking_plugin(acfactory, lp):
ac1.add_account_plugin(FFIEventLogger(ac1))
ac2.add_account_plugin(FFIEventLogger(ac2))
botproc.await_resync()
lp.sec("creating bot test group with bot")
bot_contact = ac1.create_contact(botproc.addr)
ch = ac1.create_group_chat("bot test group")


@@ -1,30 +0,0 @@
#!/usr/bin/env python3
"""
setup a python binding development in-place install with cargo debug symbols.
"""
import os
import subprocess
import sys
if __name__ == "__main__":
target = os.environ.get("DCC_RS_TARGET")
if target is None:
os.environ["DCC_RS_TARGET"] = target = "debug"
if "DCC_RS_DEV" not in os.environ:
dn = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
os.environ["DCC_RS_DEV"] = dn
cmd = ["cargo", "build", "-p", "deltachat_ffi", "--features", "jsonrpc"]
if target == "release":
os.environ["CARGO_PROFILE_RELEASE_LTO"] = "on"
cmd.append("--release")
print("running:", " ".join(cmd))
subprocess.check_call(cmd)
subprocess.check_call("rm -rf build/ src/deltachat/*.so src/deltachat/*.dylib src/deltachat/*.dll", shell=True)
if len(sys.argv) <= 1 or sys.argv[1] != "onlybuild":
subprocess.check_call([sys.executable, "-m", "pip", "install", "-e", "."])


@@ -676,6 +676,13 @@ class BotProcess:
print("+++IGN:", line)
ignored.append(line)
def await_resync(self):
self.fnmatch_lines(
"""
*Resync: collected * message IDs in folder INBOX*
""",
)
@pytest.fixture()
def tmp_db_path(tmpdir):


@@ -567,7 +567,8 @@ def test_moved_markseen(acfactory):
with ac2.direct_imap.idle() as idle2:
ac2.start_io()
msg = ac2._evtracker.wait_next_incoming_message()
ev = ac2._evtracker.get_matching("DC_EVENT_INCOMING_MSG|DC_EVENT_MSGS_CHANGED")
msg = ac2.get_message_by_id(ev.data2)
# Accept the contact request.
msg.chat.accept()


@@ -43,7 +43,7 @@ deps =
pygments
restructuredtext_lint
commands =
black --quiet --check --diff setup.py install_python_bindings.py src/deltachat examples/ tests/
black --quiet --check --diff setup.py src/deltachat examples/ tests/
ruff src/deltachat tests/ examples/
rst-lint --encoding 'utf-8' README.rst


@@ -12,7 +12,7 @@ export DCC_RS_DEV=`pwd`
cd python
python install_python_bindings.py onlybuild
cargo build -p deltachat_ffi --features jsonrpc
# remove and inhibit writing PYC files
rm -rf tests/__pycache__
@@ -22,4 +22,4 @@ export PYTHONDONTWRITEBYTECODE=1
# run python tests (tox invokes pytest to run tests in python/tests)
#TOX_PARALLEL_NO_SPINNER=1 tox -e lint,doc
tox -e lint
tox -e doc,py3
tox -e doc,py


@@ -56,7 +56,8 @@ def update_package_json(relpath, newversion):
json_data = json.loads(f.read())
json_data["version"] = newversion
with open(p, "w") as f:
f.write(json.dumps(json_data, sort_keys=True, indent=2))
json.dump(json_data, f, sort_keys=True, indent=2)
f.write("\n")
def main():

scripts/zig-cc Executable file

@@ -0,0 +1,32 @@
#!/usr/bin/env python
import subprocess
import sys
import os
def flag_filter(flag: str) -> bool:
if flag == "-lc":
return False
if flag == "-Wl,-melf_i386":
return False
if "self-contained" in flag and "crt" in flag:
return False
return True
def main():
args = [flag for flag in sys.argv[1:] if flag_filter(flag)]
zig_target = os.environ["ZIG_TARGET"]
zig_cpu = os.environ.get("ZIG_CPU")
if zig_cpu:
zig_cpu_args = ["-mcpu=" + zig_cpu]
args = [x for x in args if not x.startswith("-march")]
else:
zig_cpu_args = []
subprocess.run(
["zig", "cc", "-target", zig_target, *zig_cpu_args, *args], check=True
)
main()


@@ -1,12 +1,15 @@
#!/bin/sh
#
# Build statically linked deltachat-rpc-server using cargo-zigbuild.
# Build statically linked deltachat-rpc-server using zig.
set -x
set -e
unset RUSTFLAGS
# Pin Rust version to avoid uncontrolled changes in the compiler and linker flags.
export RUSTUP_TOOLCHAIN=1.68.1
ZIG_VERSION=0.11.0-dev.2213+515e1c93e
# Download Zig
@@ -15,9 +18,35 @@ wget "https://ziglang.org/builds/zig-linux-x86_64-$ZIG_VERSION.tar.xz"
tar xf "zig-linux-x86_64-$ZIG_VERSION.tar.xz"
export PATH="$PWD/zig-linux-x86_64-$ZIG_VERSION:$PATH"
cargo install cargo-zigbuild
rustup target add i686-unknown-linux-musl
CC="$PWD/scripts/zig-cc" \
TARGET_CC="$PWD/scripts/zig-cc" \
CARGO_TARGET_I686_UNKNOWN_LINUX_MUSL_LINKER="$PWD/scripts/zig-cc" \
LD="$PWD/scripts/zig-cc" \
ZIG_TARGET="x86-linux-musl" \
cargo build --release --target i686-unknown-linux-musl -p deltachat-rpc-server --features vendored
for TARGET in x86_64-unknown-linux-musl aarch64-unknown-linux-musl armv7-unknown-linux-musleabihf; do
rustup target add "$TARGET"
cargo zigbuild --release --target "$TARGET" -p deltachat-rpc-server --features vendored
done
rustup target add armv7-unknown-linux-musleabihf
CC="$PWD/scripts/zig-cc" \
TARGET_CC="$PWD/scripts/zig-cc" \
CARGO_TARGET_ARMV7_UNKNOWN_LINUX_MUSLEABIHF_LINKER="$PWD/scripts/zig-cc" \
LD="$PWD/scripts/zig-cc" \
ZIG_TARGET="arm-linux-musleabihf" \
ZIG_CPU="generic+v7a+vfp3-d32+thumb2-neon" \
cargo build --release --target armv7-unknown-linux-musleabihf -p deltachat-rpc-server --features vendored
rustup target add x86_64-unknown-linux-musl
CC="$PWD/scripts/zig-cc" \
TARGET_CC="$PWD/scripts/zig-cc" \
CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_LINKER="$PWD/scripts/zig-cc" \
LD="$PWD/scripts/zig-cc" \
ZIG_TARGET="x86_64-linux-musl" \
cargo build --release --target x86_64-unknown-linux-musl -p deltachat-rpc-server --features vendored
rustup target add aarch64-unknown-linux-musl
CC="$PWD/scripts/zig-cc" \
TARGET_CC="$PWD/scripts/zig-cc" \
CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_LINKER="$PWD/scripts/zig-cc" \
LD="$PWD/scripts/zig-cc" \
ZIG_TARGET="aarch64-linux-musl" \
cargo build --release --target aarch64-unknown-linux-musl -p deltachat-rpc-server --features vendored

View File

@@ -92,7 +92,7 @@ impl<'a> BlobObject<'a> {
if attempt >= MAX_ATTEMPT {
return Err(err).context("failed to create file");
} else if attempt == 1 && !dir.exists() {
fs::create_dir_all(dir).await.ok_or_log(context);
fs::create_dir_all(dir).await.log_err(context).ok();
} else {
name = format!("{}-{}{}", stem, rand::random::<u32>(), ext);
}

View File

@@ -35,8 +35,9 @@ use crate::scheduler::InterruptInfo;
use crate::smtp::send_msg_to_smtp;
use crate::stock_str;
use crate::tools::{
create_id, create_outgoing_rfc724_mid, create_smeared_timestamp, create_smeared_timestamps,
get_abs_path, gm2local_offset, improve_single_line_input, time, IsNoneOrEmpty,
buf_compress, create_id, create_outgoing_rfc724_mid, create_smeared_timestamp,
create_smeared_timestamps, get_abs_path, gm2local_offset, improve_single_line_input, time,
IsNoneOrEmpty,
};
use crate::webxdc::WEBXDC_SUFFIX;
use crate::{location, sql};
@@ -1580,7 +1581,12 @@ impl Chat {
} else {
msg.param.get(Param::SendHtml).map(|s| s.to_string())
};
html.map(|html| new_html_mimepart(html).build().as_string())
match html {
Some(html) => Some(tokio::task::block_in_place(move || {
buf_compress(new_html_mimepart(html).build().as_string().as_bytes())
})?),
None => None,
}
} else {
None
};
@@ -1594,7 +1600,8 @@ impl Chat {
SET rfc724_mid=?, chat_id=?, from_id=?, to_id=?, timestamp=?, type=?,
state=?, txt=?, subject=?, param=?,
hidden=?, mime_in_reply_to=?, mime_references=?, mime_modified=?,
mime_headers=?, location_id=?, ephemeral_timer=?, ephemeral_timestamp=?
mime_headers=?, mime_compressed=1, location_id=?, ephemeral_timer=?,
ephemeral_timestamp=?
WHERE id=?;",
paramsv![
new_rfc724_mid,
@@ -1640,10 +1647,11 @@ impl Chat {
mime_references,
mime_modified,
mime_headers,
mime_compressed,
location_id,
ephemeral_timer,
ephemeral_timestamp)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?);",
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,1,?,?,?);",
paramsv![
new_rfc724_mid,
self.id,

View File

@@ -156,7 +156,9 @@ async fn on_configure_completed(
Some(stock_str::aeap_explanation_and_link(context, &old_addr, &new_addr).await);
chat::add_device_msg(context, None, Some(&mut msg))
.await
.ok_or_log_msg(context, "Cannot add AEAP explanation");
.context("Cannot add AEAP explanation")
.log_err(context)
.ok();
}
}
}

View File

@@ -575,7 +575,8 @@ pub(crate) async fn ephemeral_loop(context: &Context, interrupt_receiver: Receiv
delete_expired_messages(context, time())
.await
.ok_or_log(context);
.log_err(context)
.ok();
}
}

View File

@@ -14,7 +14,7 @@ use std::{
use anyhow::{bail, format_err, Context as _, Result};
use async_channel::Receiver;
use async_imap::types::{Fetch, Flag, Name, NameAttribute, UnsolicitedResponse};
use futures::StreamExt;
use futures::{StreamExt, TryStreamExt};
use num_traits::FromPrimitive;
use crate::chat::{self, ChatId, ChatIdBlocked};
@@ -516,8 +516,7 @@ impl Imap {
.uid_fetch("1:*", RFC724MID_UID)
.await
.with_context(|| format!("can't resync folder {folder}"))?;
while let Some(fetch) = list.next().await {
let fetch = fetch?;
while let Some(fetch) = list.try_next().await? {
let headers = match get_fetch_headers(&fetch) {
Ok(headers) => headers,
Err(err) => {
@@ -660,7 +659,7 @@ impl Imap {
.context("Error fetching UID")?;
let mut new_last_seen_uid = None;
while let Some(fetch) = list.next().await.transpose()? {
while let Some(fetch) = list.try_next().await? {
if fetch.message == mailbox.exists && fetch.uid.is_some() {
new_last_seen_uid = fetch.uid;
}
@@ -1198,8 +1197,11 @@ impl Imap {
.await
.context("failed to fetch flags")?;
while let Some(fetch) = list.next().await {
let fetch = fetch.context("failed to get FETCH result")?;
while let Some(fetch) = list
.try_next()
.await
.context("failed to get FETCH result")?
{
let uid = if let Some(uid) = fetch.uid {
uid
} else {
@@ -1258,8 +1260,7 @@ impl Imap {
.await
.context("IMAP Could not fetch")?;
while let Some(fetch) = list.next().await {
let msg = fetch?;
while let Some(msg) = list.try_next().await? {
match get_fetch_headers(&msg) {
Ok(headers) => {
if let Some(from) = mimeparser::get_from(&headers) {
@@ -1294,8 +1295,7 @@ impl Imap {
.context("IMAP could not fetch")?;
let mut msgs = BTreeMap::new();
while let Some(fetch) = list.next().await {
let msg = fetch?;
while let Some(msg) = list.try_next().await? {
if let Some(msg_uid) = msg.uid {
// If the mailbox is not empty, results always include
// at least one UID, even if last_seen_uid+1 is past
@@ -1332,8 +1332,7 @@ impl Imap {
.context("IMAP Could not fetch")?;
let mut msgs = BTreeMap::new();
while let Some(fetch) = list.next().await {
let msg = fetch?;
while let Some(msg) = list.try_next().await? {
if let Some(msg_uid) = msg.uid {
msgs.insert((msg.internal_date(), msg_uid), msg);
}
@@ -1680,8 +1679,7 @@ impl Imap {
let mut delimiter_is_default = true;
let mut folder_configs = BTreeMap::new();
while let Some(folder) = folders.next().await {
let folder = folder?;
while let Some(folder) = folders.try_next().await? {
info!(context, "Scanning folder: {:?}", folder);
// Update the delimiter iff there is a different one, but only once.
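The hunks above replace `next()` plus manual `?` (or `next().transpose()?`) on each stream item with `try_next()`. A std-only sketch of the same Option/Result inversion, using plain iterators (`Option::transpose` plays the role that `TryStreamExt::try_next` plays for streams; `sum_parsed` is a made-up example function, not from the codebase):

```rust
// With items of type `Result<T, E>`, `next()` yields `Option<Result<T, E>>`,
// which needs an inner `?` per item. `transpose()` flips this to
// `Result<Option<T>, E>`, so one `?` in the loop head handles errors.
fn sum_parsed(lines: &[&str]) -> Result<i64, std::num::ParseIntError> {
    let mut iter = lines.iter().map(|s| s.parse::<i64>());
    let mut sum = 0;
    while let Some(n) = iter.next().transpose()? {
        sum += n;
    }
    Ok(sum)
}

fn main() {
    assert_eq!(sum_parsed(&["1", "2", "3"]).unwrap(), 6);
    assert!(sum_parsed(&["1", "x"]).is_err());
}
```

`try_next()` bakes this flip into the stream API, which is why the diff can drop the separate `let fetch = fetch?;` lines.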

View File

@@ -70,7 +70,9 @@ impl Imap {
loop {
self.fetch_move_delete(context, folder.name(), folder_meaning)
.await
.ok_or_log_msg(context, "Can't fetch new msgs in scanned folder");
.context("Can't fetch new msgs in scanned folder")
.log_err(context)
.ok();
let session = self.session.as_mut().context("no session")?;
// If the server sent an unsolicited EXISTS during the fetch, we need to fetch again

@@ -105,7 +107,11 @@ impl Imap {
let list = session
.list(Some(""), Some("*"))
.await?
.filter_map(|f| async { f.ok_or_log_msg(context, "list_folders() can't get folder") });
.filter_map(|f| async {
f.context("list_folders() can't get folder")
.log_err(context)
.ok()
});
Ok(list.collect().await)
}
}

View File

@@ -91,7 +91,7 @@ pub async fn imex(
let cancel = context.alloc_ongoing().await?;
let res = {
let _guard = context.scheduler.pause(context.clone()).await;
let _guard = context.scheduler.pause(context.clone()).await?;
imex_inner(context, what, path, passphrase)
.race(async {
cancel.recv().await.ok();
@@ -759,7 +759,7 @@ async fn export_database(context: &Context, dest: &Path, passphrase: String) ->
.with_context(|| format!("path {} is not valid unicode", dest.display()))?;
context.sql.set_raw_config_int("backup_time", now).await?;
sql::housekeeping(context).await.ok_or_log(context);
sql::housekeeping(context).await.log_err(context).ok();
context
.sql
.call_write(|conn| {

View File

@@ -99,7 +99,7 @@ impl BackupProvider {
// Acquire global "ongoing" mutex.
let cancel_token = context.alloc_ongoing().await?;
let paused_guard = context.scheduler.pause(context.clone()).await;
let paused_guard = context.scheduler.pause(context.clone()).await?;
let context_dir = context
.get_blobdir()
.parent()
@@ -393,7 +393,7 @@ pub async fn get_backup(context: &Context, qr: Qr) -> Result<()> {
!context.is_configured().await?,
"Cannot import backups to accounts in use."
);
let _guard = context.scheduler.pause(context.clone()).await;
let _guard = context.scheduler.pause(context.clone()).await?;
// Acquire global "ongoing" mutex.
let cancel_token = context.alloc_ongoing().await?;

View File

@@ -5,7 +5,6 @@ use std::time::Duration;
use anyhow::{ensure, Context as _, Result};
use async_channel::Receiver;
use bitflags::bitflags;
use quick_xml::events::{BytesEnd, BytesStart, BytesText};
use tokio::time::timeout;
@@ -78,16 +77,15 @@ pub struct Kml {
pub curr: Location,
}
bitflags! {
#[derive(Default)]
struct KmlTag: i32 {
const UNDEFINED = 0x00;
const PLACEMARK = 0x01;
const TIMESTAMP = 0x02;
const WHEN = 0x04;
const POINT = 0x08;
const COORDINATES = 0x10;
}
#[derive(Default, Debug, Clone, PartialEq, Eq)]
enum KmlTag {
#[default]
Undefined,
Placemark,
PlacemarkTimestamp,
PlacemarkTimestampWhen,
PlacemarkPoint,
PlacemarkPointCoordinates,
}
impl Kml {
@@ -128,12 +126,14 @@ impl Kml {
}
fn text_cb(&mut self, event: &BytesText) {
if self.tag.contains(KmlTag::WHEN) || self.tag.contains(KmlTag::COORDINATES) {
if self.tag == KmlTag::PlacemarkTimestampWhen
|| self.tag == KmlTag::PlacemarkPointCoordinates
{
let val = event.unescape().unwrap_or_default();
let val = val.replace(['\n', '\r', '\t', ' '], "");
if self.tag.contains(KmlTag::WHEN) && val.len() >= 19 {
if self.tag == KmlTag::PlacemarkTimestampWhen && val.len() >= 19 {
// YYYY-MM-DDTHH:MM:SSZ
// 0 4 7 10 13 16 19
match chrono::NaiveDateTime::parse_from_str(&val, "%Y-%m-%dT%H:%M:%SZ") {
@@ -147,7 +147,7 @@ impl Kml {
self.curr.timestamp = time();
}
}
} else if self.tag.contains(KmlTag::COORDINATES) {
} else if self.tag == KmlTag::PlacemarkPointCoordinates {
let parts = val.splitn(2, ',').collect::<Vec<_>>();
if let [longitude, latitude] = &parts[..] {
self.curr.longitude = longitude.parse().unwrap_or_default();
@@ -162,17 +162,41 @@ impl Kml {
.trim()
.to_lowercase();
if tag == "placemark" {
if self.tag.contains(KmlTag::PLACEMARK)
&& 0 != self.curr.timestamp
&& 0. != self.curr.latitude
&& 0. != self.curr.longitude
{
self.locations
.push(std::mem::replace(&mut self.curr, Location::new()));
match self.tag {
KmlTag::PlacemarkTimestampWhen => {
if tag == "when" {
self.tag = KmlTag::PlacemarkTimestamp
}
}
self.tag = KmlTag::UNDEFINED;
};
KmlTag::PlacemarkTimestamp => {
if tag == "timestamp" {
self.tag = KmlTag::Placemark
}
}
KmlTag::PlacemarkPointCoordinates => {
if tag == "coordinates" {
self.tag = KmlTag::PlacemarkPoint
}
}
KmlTag::PlacemarkPoint => {
if tag == "point" {
self.tag = KmlTag::Placemark
}
}
KmlTag::Placemark => {
if tag == "placemark" {
if 0 != self.curr.timestamp
&& 0. != self.curr.latitude
&& 0. != self.curr.longitude
{
self.locations
.push(std::mem::replace(&mut self.curr, Location::new()));
}
self.tag = KmlTag::Undefined;
}
}
KmlTag::Undefined => {}
}
}
fn starttag_cb<B: std::io::BufRead>(
@@ -196,19 +220,19 @@ impl Kml {
.map(|a| a.into_owned());
}
} else if tag == "placemark" {
self.tag = KmlTag::PLACEMARK;
self.tag = KmlTag::Placemark;
self.curr.timestamp = 0;
self.curr.latitude = 0.0;
self.curr.longitude = 0.0;
self.curr.accuracy = 0.0
} else if tag == "timestamp" && self.tag.contains(KmlTag::PLACEMARK) {
self.tag = KmlTag::PLACEMARK | KmlTag::TIMESTAMP
} else if tag == "when" && self.tag.contains(KmlTag::TIMESTAMP) {
self.tag = KmlTag::PLACEMARK | KmlTag::TIMESTAMP | KmlTag::WHEN
} else if tag == "point" && self.tag.contains(KmlTag::PLACEMARK) {
self.tag = KmlTag::PLACEMARK | KmlTag::POINT
} else if tag == "coordinates" && self.tag.contains(KmlTag::POINT) {
self.tag = KmlTag::PLACEMARK | KmlTag::POINT | KmlTag::COORDINATES;
} else if tag == "timestamp" && self.tag == KmlTag::Placemark {
self.tag = KmlTag::PlacemarkTimestamp;
} else if tag == "when" && self.tag == KmlTag::PlacemarkTimestamp {
self.tag = KmlTag::PlacemarkTimestampWhen;
} else if tag == "point" && self.tag == KmlTag::Placemark {
self.tag = KmlTag::PlacemarkPoint;
} else if tag == "coordinates" && self.tag == KmlTag::PlacemarkPoint {
self.tag = KmlTag::PlacemarkPointCoordinates;
if let Some(acc) = event.attributes().find(|attr| {
attr.as_ref()
.map(|a| {
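The refactor above turns OR-able `bitflags` into a single `enum` state, so nesting like `<when>` inside `<timestamp>` inside `<placemark>` is tracked as one explicit state and illegal flag combinations become unrepresentable. A reduced, std-only sketch of the start-tag transitions (tag events simplified to plain `&str`; a subset of the real variants):

```rust
#[derive(Default, Debug, Clone, PartialEq, Eq)]
enum KmlTag {
    #[default]
    Undefined,
    Placemark,
    PlacemarkTimestamp,
    PlacemarkTimestampWhen,
}

// Each transition is only reachable from the state that represents the
// correct parent element, mirroring the `else if` chain in the diff.
fn on_start(tag: &str, state: KmlTag) -> KmlTag {
    match (tag, &state) {
        ("placemark", _) => KmlTag::Placemark,
        ("timestamp", KmlTag::Placemark) => KmlTag::PlacemarkTimestamp,
        ("when", KmlTag::PlacemarkTimestamp) => KmlTag::PlacemarkTimestampWhen,
        _ => state.clone(),
    }
}

fn main() {
    let mut s = KmlTag::default();
    for tag in ["placemark", "timestamp", "when"] {
        s = on_start(tag, s);
    }
    assert_eq!(s, KmlTag::PlacemarkTimestampWhen);
    // A stray <when> outside <timestamp> cannot reach the When state.
    assert_eq!(on_start("when", KmlTag::Undefined), KmlTag::Undefined);
}
```

With bitflags, `PLACEMARK | WHEN` without `TIMESTAMP` was a valid value the code had to guard against; the enum makes such states impossible by construction.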

View File

@@ -65,9 +65,6 @@ pub trait LogExt<T, E>
where
Self: std::marker::Sized,
{
#[track_caller]
fn log_err_inner(self, context: &Context, msg: Option<&str>) -> Result<T, E>;
/// Emits a warning if the receiver contains an Err value.
///
/// Thanks to the [track_caller](https://blog.rust-lang.org/2020/08/27/Rust-1.46.0.html#track_caller)
@@ -76,73 +73,23 @@ where
/// Unfortunately, the track_caller feature does not work on async functions (as of Rust 1.50).
/// Once it is, you can add `#[track_caller]` to helper functions that use one of the log helpers here
/// so that the location of the caller can be seen in the log. (this won't work with the macros,
/// like warn!(), since the file!() and line!() macros don't work with track_caller)
/// like warn!(), since the file!() and line!() macros don't work with track_caller)
/// See <https://github.com/rust-lang/rust/issues/78840> for progress on this.
#[track_caller]
fn log_err(self, context: &Context, msg: &str) -> Result<T, E> {
self.log_err_inner(context, Some(msg))
}
/// Emits a warning if the receiver contains an Err value and returns an [`Option<T>`].
///
/// Example:
/// ```text
/// if let Err(e) = do_something() {
/// warn!(context, "{:#}", e);
/// }
/// ```
/// is equivalent to:
/// ```text
/// do_something().ok_or_log(context);
/// ```
///
/// For a note on the `track_caller` feature, see the doc comment on `log_err()`.
#[track_caller]
fn ok_or_log(self, context: &Context) -> Option<T> {
self.log_err_inner(context, None).ok()
}
/// Like `ok_or_log()`, but you can pass an extra message that is prepended in the log.
///
/// Example:
/// ```text
/// if let Err(e) = do_something() {
/// warn!(context, "Something went wrong: {:#}", e);
/// }
/// ```
/// is equivalent to:
/// ```text
/// do_something().ok_or_log_msg(context, "Something went wrong");
/// ```
/// and is also equivalent to:
/// ```text
/// use anyhow::Context as _;
/// do_something().context("Something went wrong").ok_or_log(context);
/// ```
///
/// For a note on the `track_caller` feature, see the doc comment on `log_err()`.
#[track_caller]
fn ok_or_log_msg(self, context: &Context, msg: &'static str) -> Option<T> {
self.log_err_inner(context, Some(msg)).ok()
}
fn log_err(self, context: &Context) -> Result<T, E>;
}
impl<T, E: std::fmt::Display> LogExt<T, E> for Result<T, E> {
#[track_caller]
fn log_err_inner(self, context: &Context, msg: Option<&str>) -> Result<T, E> {
fn log_err(self, context: &Context) -> Result<T, E> {
if let Err(e) = &self {
let location = std::panic::Location::caller();
let separator = if msg.is_none() { "" } else { ": " };
let msg = msg.unwrap_or_default();
// We are using Anyhow's .context() and to show the inner error, too, we need the {:#}:
let full = format!(
"{file}:{line}: {msg}{separator}{e:#}",
"{file}:{line}: {e:#}",
file = location.file(),
line = location.line(),
msg = msg,
separator = separator,
e = e
);
// We can't use the warn!() macro here as the file!() and line!() macros
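The `LogExt` change collapses `ok_or_log()` / `ok_or_log_msg()` into a single `log_err()` that returns the `Result`, with `.context(..)` supplying the message and `.ok()` discarding the value at call sites. A minimal std-only sketch, assuming a `tag` string stands in for the real `Context` and stderr stands in for the warning log:

```rust
trait LogExt<T, E> {
    fn log_err(self, tag: &str) -> Result<T, E>;
}

impl<T, E: std::fmt::Display> LogExt<T, E> for Result<T, E> {
    // track_caller makes Location::caller() report the call site,
    // not this helper, in the emitted line.
    #[track_caller]
    fn log_err(self, tag: &str) -> Result<T, E> {
        if let Err(e) = &self {
            let loc = std::panic::Location::caller();
            eprintln!("{}:{}: {tag}: {e}", loc.file(), loc.line());
        }
        self
    }
}

fn might_fail(ok: bool) -> Result<u32, String> {
    if ok { Ok(42) } else { Err("boom".into()) }
}

fn main() {
    // Old style: might_fail(false).ok_or_log_msg(ctx, "demo");
    // New style: log, then explicitly drop the Result with .ok().
    assert_eq!(might_fail(false).log_err("demo").ok(), None);
    assert_eq!(might_fail(true).log_err("demo").ok(), Some(42));
}
```

Returning the `Result` instead of an `Option` is what lets call sites choose between `.ok()`, `?`, or `if let Ok(..)` after logging.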

View File

@@ -5,7 +5,6 @@ use std::path::{Path, PathBuf};
use anyhow::{ensure, format_err, Context as _, Result};
use deltachat_derive::{FromSql, ToSql};
use rusqlite::types::ValueRef;
use serde::{Deserialize, Serialize};
use crate::chat::{self, Chat, ChatId};
@@ -29,8 +28,8 @@ use crate::sql;
use crate::stock_str;
use crate::summary::Summary;
use crate::tools::{
create_smeared_timestamp, get_filebytes, get_filemeta, gm2local_offset, read_file, time,
timestamp_to_str, truncate,
buf_compress, buf_decompress, create_smeared_timestamp, get_filebytes, get_filemeta,
gm2local_offset, read_file, time, timestamp_to_str, truncate,
};
/// Message ID, including reserved IDs.
@@ -1350,21 +1349,52 @@ pub(crate) fn guess_msgtype_from_suffix(path: &Path) -> Option<(Viewtype, &str)>
/// e.g. because save_mime_headers is not set
/// or the message is not incoming.
pub async fn get_mime_headers(context: &Context, msg_id: MsgId) -> Result<Vec<u8>> {
let headers = context
let (headers, compressed) = context
.sql
.query_row(
"SELECT mime_headers FROM msgs WHERE id=?;",
"SELECT mime_headers, mime_compressed FROM msgs WHERE id=?",
paramsv![msg_id],
|row| {
row.get(0).or_else(|err| match row.get_ref(0)? {
ValueRef::Null => Ok(Vec::new()),
ValueRef::Text(text) => Ok(text.to_vec()),
ValueRef::Blob(blob) => Ok(blob.to_vec()),
ValueRef::Integer(_) | ValueRef::Real(_) => Err(err),
})
let headers = sql::row_get_vec(row, 0)?;
let compressed: bool = row.get(1)?;
Ok((headers, compressed))
},
)
.await?;
if compressed {
return buf_decompress(&headers);
}
let headers2 = headers.clone();
let compressed = match tokio::task::block_in_place(move || buf_compress(&headers2)) {
Err(e) => {
warn!(context, "get_mime_headers: buf_compress() failed: {}", e);
return Ok(headers);
}
Ok(o) => o,
};
let update = |conn: &mut rusqlite::Connection| {
match conn.execute(
"\
UPDATE msgs SET mime_headers=?, mime_compressed=1 \
WHERE id=? AND mime_headers!='' AND mime_compressed=0",
params![compressed, msg_id],
) {
Ok(rows_updated) => ensure!(rows_updated <= 1),
Err(e) => {
warn!(context, "get_mime_headers: UPDATE failed: {}", e);
return Err(e.into());
}
}
Ok(())
};
if let Err(e) = context.sql.call_write(update).await {
warn!(
context,
"get_mime_headers: failed to update mime_headers: {}", e
);
}
Ok(headers)
}
@@ -1413,8 +1443,6 @@ pub async fn delete_msgs(context: &Context, msg_ids: &[MsgId]) -> Result<()> {
context.emit_msgs_changed_without_ids();
// Run housekeeping to delete unused blobs.
// We need to use set_raw_config() here since with set_config() it
// wouldn't compile ("recursion in an `async fn`")
context.set_config(Config::LastHousekeeping, None).await?;
}
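The `get_mime_headers()` change adds a lazy migration: rows written before the `mime_compressed` column exist in plain form, get compressed on first read, and are written back so later reads take the fast path. A sketch of that read path with an in-memory row; `buf_compress`/`buf_decompress` here are toy reversible stand-ins, not the real helpers from `tools`:

```rust
// Stand-ins: any reversible pair works for the sketch.
fn buf_compress(buf: &[u8]) -> Vec<u8> { buf.iter().rev().copied().collect() }
fn buf_decompress(buf: &[u8]) -> Vec<u8> { buf.iter().rev().copied().collect() }

struct Row { headers: Vec<u8>, compressed: bool }

/// Returns the plain headers, upgrading the stored row in place if needed.
fn get_mime_headers(row: &mut Row) -> Vec<u8> {
    if row.compressed {
        // Fast path: already migrated rows only need decompression.
        return buf_decompress(&row.headers);
    }
    let plain = row.headers.clone();
    // Migrate: store the compressed form and flip the flag, like the
    // UPDATE ... SET mime_headers=?, mime_compressed=1 in the diff.
    row.headers = buf_compress(&plain);
    row.compressed = true;
    plain
}

fn main() {
    let mut row = Row { headers: b"Subject: hi".to_vec(), compressed: false };
    let first = get_mime_headers(&mut row);
    assert!(row.compressed);
    // Second read takes the compressed path and yields the same bytes.
    assert_eq!(get_mime_headers(&mut row), first);
    assert_eq!(first, b"Subject: hi".to_vec());
}
```

In the real code the migration failure modes are soft: if compression or the UPDATE fails, the function logs a warning and still returns the plain headers.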

View File

@@ -27,16 +27,13 @@ use crate::simplify::escape_message_footer_marks;
use crate::stock_str;
use crate::tools::IsNoneOrEmpty;
use crate::tools::{
create_outgoing_rfc724_mid, create_smeared_timestamp, get_filebytes, remove_subject_prefix,
time,
create_outgoing_rfc724_mid, create_smeared_timestamp, remove_subject_prefix, time,
};
// attachments of 25 mb brutto should work on the majority of providers
// (brutto examples: web.de=50, 1&1=40, t-online.de=32, gmail=25, posteo=50, yahoo=25, all-inkl=100).
// as an upper limit, we double the size; the core won't send messages larger than this
// to get the netto sizes, we subtract 1 mb header-overhead and the base64-overhead.
pub const RECOMMENDED_FILE_SIZE: u64 = 24 * 1024 * 1024 / 4 * 3;
const UPPER_LIMIT_FILE_SIZE: u64 = 49 * 1024 * 1024 / 4 * 3;
#[derive(Debug, Clone)]
pub enum Loaded {
@@ -1211,15 +1208,8 @@ impl<'a> MimeFactory<'a> {
// add attachment part
if self.msg.viewtype.has_file() {
if !is_file_size_okay(context, self.msg).await? {
bail!(
"Message exceeds the recommended {} MB.",
RECOMMENDED_FILE_SIZE / 1_000_000,
);
} else {
let (file_part, _) = build_body_file(context, self.msg, "").await?;
parts.push(file_part);
}
let (file_part, _) = build_body_file(context, self.msg, "").await?;
parts.push(file_part);
}
if let Some(meta_part) = meta_part {
@@ -1483,16 +1473,6 @@ fn recipients_contain_addr(recipients: &[(String, String)], addr: &str) -> bool
.any(|(_, cur)| cur.to_lowercase() == addr_lc)
}
async fn is_file_size_okay(context: &Context, msg: &Message) -> Result<bool> {
match msg.param.get_path(Param::File, context)? {
Some(path) => {
let bytes = get_filebytes(context, &path).await?;
Ok(bytes <= UPPER_LIMIT_FILE_SIZE)
}
None => Ok(false),
}
}
fn render_rfc724_mid(rfc724_mid: &str) -> String {
let rfc724_mid = rfc724_mid.trim().to_string();

View File

@@ -1283,7 +1283,7 @@ impl MimeMessage {
if !key.details.users.iter().any(|user| {
user.id
.id()
.ends_with(&(String::from("<") + &peerstate.addr + ">"))
.ends_with((String::from("<") + &peerstate.addr + ">").as_bytes())
}) {
return Ok(false);
}
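The one-line change above is needed because in the newer rPGP API `user.id.id()` yields raw bytes, so the `"<addr>"` suffix check must compare byte slices rather than `&str`. A small sketch of the comparison (helper name is made up for illustration):

```rust
// `<addr>` is built as a String and compared as bytes, mirroring
// `(String::from("<") + &peerstate.addr + ">").as_bytes()` in the diff.
fn uid_ends_with_addr(uid: &[u8], addr: &str) -> bool {
    uid.ends_with(format!("<{addr}>").as_bytes())
}

fn main() {
    assert!(uid_ends_with_addr(b"Alice <alice@example.org>", "alice@example.org"));
    assert!(!uid_ends_with_addr(b"Alice <alice@example.org>", "bob@example.org"));
}
```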

View File

@@ -10,7 +10,8 @@ use pgp::composed::{
Deserializable, KeyType as PgpKeyType, Message, SecretKeyParamsBuilder, SignedPublicKey,
SignedPublicSubKey, SignedSecretKey, StandaloneSignature, SubkeyParamsBuilder,
};
use pgp::crypto::{HashAlgorithm, SymmetricKeyAlgorithm};
use pgp::crypto::hash::HashAlgorithm;
use pgp::crypto::sym::SymmetricKeyAlgorithm;
use pgp::types::{
CompressionAlgorithm, KeyTrait, Mpi, PublicKeyTrait, SecretKeyTrait, StringToKey,
};
@@ -50,7 +51,7 @@ impl<'a> KeyTrait for SignedPublicKeyOrSubkey<'a> {
}
}
fn algorithm(&self) -> pgp::crypto::PublicKeyAlgorithm {
fn algorithm(&self) -> pgp::crypto::public_key::PublicKeyAlgorithm {
match self {
Self::Key(k) => k.algorithm(),
Self::Subkey(k) => k.algorithm(),
@@ -297,7 +298,7 @@ pub fn pk_decrypt(
let skeys: Vec<&SignedSecretKey> = private_keys_for_decryption.keys().iter().collect();
let (decryptor, _) = msg.decrypt(|| "".into(), || "".into(), &skeys[..])?;
let (decryptor, _) = msg.decrypt(|| "".into(), &skeys[..])?;
let msgs = decryptor.collect::<pgp::errors::Result<Vec<_>>>()?;
if let Some(msg) = msgs.into_iter().next() {

View File

@@ -37,7 +37,7 @@ use crate::reaction::{set_msg_reaction, Reaction};
use crate::securejoin::{self, handle_securejoin_handshake, observe_securejoin_on_other_device};
use crate::sql;
use crate::stock_str;
use crate::tools::{extract_grpid_from_rfc724_mid, smeared_time};
use crate::tools::{buf_compress, extract_grpid_from_rfc724_mid, smeared_time};
use crate::{contact, imap};
/// This is the struct that is returned after receiving one email (aka MIME message).
@@ -691,7 +691,8 @@ async fn add_parts(
} else if allow_creation {
if let Ok(chat) = ChatIdBlocked::get_for_contact(context, from_id, create_blocked)
.await
.log_err(context, "Failed to get (new) chat for contact")
.context("Failed to get (new) chat for contact")
.log_err(context)
{
chat_id = Some(chat.id);
chat_id_blocked = chat.blocked;
@@ -843,7 +844,8 @@ async fn add_parts(
// maybe an Autocrypt Setup Message
if let Ok(chat) = ChatIdBlocked::get_for_contact(context, ContactId::SELF, Blocked::Not)
.await
.log_err(context, "Failed to get (new) chat for contact")
.context("Failed to get (new) chat for contact")
.log_err(context)
{
chat_id = Some(chat.id);
chat_id_blocked = chat.blocked;
@@ -1063,11 +1065,12 @@ async fn add_parts(
let mut save_mime_modified = mime_parser.is_mime_modified;
let mime_headers = if save_mime_headers || save_mime_modified {
if mime_parser.was_encrypted() && !mime_parser.decoded_data.is_empty() {
let headers = if mime_parser.was_encrypted() && !mime_parser.decoded_data.is_empty() {
mime_parser.decoded_data.clone()
} else {
imf_raw.to_vec()
}
};
tokio::task::block_in_place(move || buf_compress(&headers))?
} else {
Vec::new()
};
@@ -1150,7 +1153,7 @@ INSERT INTO msgs
from_id, to_id, timestamp, timestamp_sent,
timestamp_rcvd, type, state, msgrmsg,
txt, subject, txt_raw, param,
bytes, mime_headers, mime_in_reply_to,
bytes, mime_headers, mime_compressed, mime_in_reply_to,
mime_references, mime_modified, error, ephemeral_timer,
ephemeral_timestamp, download_state, hop_info
)
@@ -1159,7 +1162,7 @@ INSERT INTO msgs
?, ?, ?, ?,
?, ?, ?, ?,
?, ?, ?, ?,
?, ?, ?, ?,
?, ?, ?, ?, 1,
?, ?, ?, ?,
?, ?, ?, ?
)
@@ -1168,7 +1171,8 @@ SET rfc724_mid=excluded.rfc724_mid, chat_id=excluded.chat_id,
from_id=excluded.from_id, to_id=excluded.to_id, timestamp=excluded.timestamp, timestamp_sent=excluded.timestamp_sent,
timestamp_rcvd=excluded.timestamp_rcvd, type=excluded.type, state=excluded.state, msgrmsg=excluded.msgrmsg,
txt=excluded.txt, subject=excluded.subject, txt_raw=excluded.txt_raw, param=excluded.param,
bytes=excluded.bytes, mime_headers=excluded.mime_headers, mime_in_reply_to=excluded.mime_in_reply_to,
bytes=excluded.bytes, mime_headers=excluded.mime_headers,
mime_compressed=excluded.mime_compressed, mime_in_reply_to=excluded.mime_in_reply_to,
mime_references=excluded.mime_references, mime_modified=excluded.mime_modified, error=excluded.error, ephemeral_timer=excluded.ephemeral_timer,
ephemeral_timestamp=excluded.ephemeral_timestamp, download_state=excluded.download_state, hop_info=excluded.hop_info
"#)?;

View File

@@ -1,7 +1,8 @@
use std::iter::{self, once};
use std::num::NonZeroUsize;
use std::sync::atomic::Ordering;
use anyhow::{bail, Context as _, Result};
use anyhow::{bail, Context as _, Error, Result};
use async_channel::{self as channel, Receiver, Sender};
use futures::future::try_join_all;
use futures_lite::FutureExt;
@@ -43,15 +44,18 @@ impl SchedulerState {
/// Whether the scheduler is currently running.
pub(crate) async fn is_running(&self) -> bool {
let inner = self.inner.read().await;
inner.scheduler.is_some()
matches!(*inner, InnerSchedulerState::Started(_))
}
/// Starts the scheduler if it is not yet started.
pub(crate) async fn start(&self, context: Context) {
let mut inner = self.inner.write().await;
inner.started = true;
if inner.scheduler.is_none() && !inner.paused {
Self::do_start(inner, context).await;
match *inner {
InnerSchedulerState::Started(_) => (),
InnerSchedulerState::Stopped => Self::do_start(inner, context).await,
InnerSchedulerState::Paused {
ref mut started, ..
} => *started = true,
}
}
@@ -60,7 +64,7 @@ impl SchedulerState {
info!(context, "starting IO");
let ctx = context.clone();
match Scheduler::start(context).await {
Ok(scheduler) => inner.scheduler = Some(scheduler),
Ok(scheduler) => *inner = InnerSchedulerState::Started(scheduler),
Err(err) => error!(&ctx, "Failed to start IO: {:#}", err),
}
}
@@ -68,12 +72,23 @@ impl SchedulerState {
/// Stops the scheduler if it is currently running.
pub(crate) async fn stop(&self, context: &Context) {
let mut inner = self.inner.write().await;
inner.started = false;
Self::do_stop(inner, context).await;
match *inner {
InnerSchedulerState::Started(_) => {
Self::do_stop(inner, context, InnerSchedulerState::Stopped).await
}
InnerSchedulerState::Stopped => (),
InnerSchedulerState::Paused {
ref mut started, ..
} => *started = false,
}
}
/// Stops the scheduler if it is currently running.
async fn do_stop(mut inner: RwLockWriteGuard<'_, InnerSchedulerState>, context: &Context) {
async fn do_stop(
mut inner: RwLockWriteGuard<'_, InnerSchedulerState>,
context: &Context,
new_state: InnerSchedulerState,
) {
// Sending an event wakes up event pollers (get_next_event)
// so the caller of stop_io() can arrange for proper termination.
// For this, the caller needs to instruct the event poller
@@ -83,8 +98,10 @@ impl SchedulerState {
if let Some(debug_logging) = context.debug_logging.read().await.as_ref() {
debug_logging.loop_handle.abort();
}
if let Some(scheduler) = inner.scheduler.take() {
scheduler.stop(context).await;
let prev_state = std::mem::replace(&mut *inner, new_state);
match prev_state {
InnerSchedulerState::Started(scheduler) => scheduler.stop(context).await,
InnerSchedulerState::Stopped | InnerSchedulerState::Paused { .. } => (),
}
}
@@ -96,22 +113,63 @@ impl SchedulerState {
/// If in the meantime [`SchedulerState::start`] or [`SchedulerState::stop`] is called
/// resume will do the right thing and restore the scheduler to the state requested by
/// the last call.
pub(crate) async fn pause<'a>(&'_ self, context: Context) -> IoPausedGuard {
pub(crate) async fn pause<'a>(&'_ self, context: Context) -> Result<IoPausedGuard> {
{
let mut inner = self.inner.write().await;
inner.paused = true;
Self::do_stop(inner, &context).await;
match *inner {
InnerSchedulerState::Started(_) => {
let new_state = InnerSchedulerState::Paused {
started: true,
pause_guards_count: NonZeroUsize::new(1).unwrap(),
};
Self::do_stop(inner, &context, new_state).await;
}
InnerSchedulerState::Stopped => {
*inner = InnerSchedulerState::Paused {
started: false,
pause_guards_count: NonZeroUsize::new(1).unwrap(),
};
}
InnerSchedulerState::Paused {
ref mut pause_guards_count,
..
} => {
*pause_guards_count = pause_guards_count
.checked_add(1)
.ok_or_else(|| Error::msg("Too many pause guards active"))?
}
}
}
let (tx, rx) = oneshot::channel();
tokio::spawn(async move {
rx.await.ok();
let mut inner = context.scheduler.inner.write().await;
inner.paused = false;
if inner.started && inner.scheduler.is_none() {
SchedulerState::do_start(inner, context.clone()).await;
match *inner {
InnerSchedulerState::Started(_) => {
warn!(&context, "IoPausedGuard resume: started instead of paused");
}
InnerSchedulerState::Stopped => {
warn!(&context, "IoPausedGuard resume: stopped instead of paused");
}
InnerSchedulerState::Paused {
ref started,
ref mut pause_guards_count,
} => {
if *pause_guards_count == NonZeroUsize::new(1).unwrap() {
match *started {
true => SchedulerState::do_start(inner, context.clone()).await,
false => *inner = InnerSchedulerState::Stopped,
}
} else {
let new_count = pause_guards_count.get() - 1;
// SAFETY: Value was >=2 before due to if condition
*pause_guards_count = NonZeroUsize::new(new_count).unwrap();
}
}
}
});
IoPausedGuard { sender: Some(tx) }
Ok(IoPausedGuard { sender: Some(tx) })
}
/// Restarts the scheduler, only if it is running.
@@ -126,8 +184,8 @@ impl SchedulerState {
/// Indicate that the network likely has come back.
pub(crate) async fn maybe_network(&self) {
let inner = self.inner.read().await;
let (inbox, oboxes) = match inner.scheduler {
Some(ref scheduler) => {
let (inbox, oboxes) = match *inner {
InnerSchedulerState::Started(ref scheduler) => {
scheduler.maybe_network();
let inbox = scheduler.inbox.conn_state.state.connectivity.clone();
let oboxes = scheduler
@@ -137,7 +195,7 @@ impl SchedulerState {
.collect::<Vec<_>>();
(inbox, oboxes)
}
None => return,
_ => return,
};
drop(inner);
connectivity::idle_interrupted(inbox, oboxes).await;
@@ -146,15 +204,15 @@ impl SchedulerState {
/// Indicate that the network likely is lost.
pub(crate) async fn maybe_network_lost(&self, context: &Context) {
let inner = self.inner.read().await;
let stores = match inner.scheduler {
Some(ref scheduler) => {
let stores = match *inner {
InnerSchedulerState::Started(ref scheduler) => {
scheduler.maybe_network_lost();
scheduler
.boxes()
.map(|b| b.conn_state.state.connectivity.clone())
.collect()
}
None => return,
_ => return,
};
drop(inner);
connectivity::maybe_network_lost(context, stores).await;
@@ -162,45 +220,49 @@ impl SchedulerState {
pub(crate) async fn interrupt_inbox(&self, info: InterruptInfo) {
let inner = self.inner.read().await;
if let Some(ref scheduler) = inner.scheduler {
if let InnerSchedulerState::Started(ref scheduler) = *inner {
scheduler.interrupt_inbox(info);
}
}
pub(crate) async fn interrupt_smtp(&self, info: InterruptInfo) {
let inner = self.inner.read().await;
if let Some(ref scheduler) = inner.scheduler {
if let InnerSchedulerState::Started(ref scheduler) = *inner {
scheduler.interrupt_smtp(info);
}
}
pub(crate) async fn interrupt_ephemeral_task(&self) {
let inner = self.inner.read().await;
if let Some(ref scheduler) = inner.scheduler {
if let InnerSchedulerState::Started(ref scheduler) = *inner {
scheduler.interrupt_ephemeral_task();
}
}
pub(crate) async fn interrupt_location(&self) {
let inner = self.inner.read().await;
if let Some(ref scheduler) = inner.scheduler {
if let InnerSchedulerState::Started(ref scheduler) = *inner {
scheduler.interrupt_location();
}
}
pub(crate) async fn interrupt_recently_seen(&self, contact_id: ContactId, timestamp: i64) {
let inner = self.inner.read().await;
if let Some(ref scheduler) = inner.scheduler {
if let InnerSchedulerState::Started(ref scheduler) = *inner {
scheduler.interrupt_recently_seen(contact_id, timestamp);
}
}
}
#[derive(Debug, Default)]
struct InnerSchedulerState {
scheduler: Option<Scheduler>,
started: bool,
paused: bool,
enum InnerSchedulerState {
Started(Scheduler),
#[default]
Stopped,
Paused {
started: bool,
pause_guards_count: NonZeroUsize,
},
}
/// Guard to make sure the IO Scheduler is resumed.
@@ -301,7 +363,7 @@ async fn inbox_loop(ctx: Context, started: Sender<()>, inbox_handlers: ImapConne
let next_housekeeping_time =
last_housekeeping_time.saturating_add(60 * 60 * 24);
if next_housekeeping_time <= time() {
sql::housekeeping(&ctx).await.ok_or_log(&ctx);
sql::housekeeping(&ctx).await.log_err(&ctx).ok();
}
}
Err(err) => {
@@ -410,7 +472,8 @@ async fn fetch_idle(
.store_seen_flags_on_imap(ctx)
.await
.context("store_seen_flags_on_imap")
.ok_or_log(ctx);
.log_err(ctx)
.ok();
} else {
warn!(ctx, "No session even though we just prepared it");
}
@@ -434,7 +497,8 @@ async fn fetch_idle(
delete_expired_imap_messages(ctx)
.await
.context("delete_expired_imap_messages")
.ok_or_log(ctx);
.log_err(ctx)
.ok();
// Scan additional folders only after finishing fetching the watched folder.
//
@@ -474,7 +538,8 @@ async fn fetch_idle(
.sync_seen_flags(ctx, &watch_folder)
.await
.context("sync_seen_flags")
- .ok_or_log(ctx);
+ .log_err(ctx)
+ .ok();
connection.connectivity.set_connected(ctx).await;
@@ -770,20 +835,22 @@ impl Scheduler {
pub(crate) async fn stop(self, context: &Context) {
// Send stop signals to tasks so they can shutdown cleanly.
for b in self.boxes() {
- b.conn_state.stop().await.ok_or_log(context);
+ b.conn_state.stop().await.log_err(context).ok();
}
- self.smtp.stop().await.ok_or_log(context);
+ self.smtp.stop().await.log_err(context).ok();
// Actually shutdown tasks.
let timeout_duration = std::time::Duration::from_secs(30);
for b in once(self.inbox).chain(self.oboxes.into_iter()) {
tokio::time::timeout(timeout_duration, b.handle)
.await
- .ok_or_log(context);
+ .log_err(context)
+ .ok();
}
tokio::time::timeout(timeout_duration, self.smtp_handle)
.await
- .ok_or_log(context);
+ .log_err(context)
+ .ok();
self.ephemeral_handle.abort();
self.location_handle.abort();
self.recently_seen_loop.abort();

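The recurring `ok_or_log(...)` → `log_err(...).ok()` replacements in this diff swap a log-and-discard helper for a combinator that logs the error and hands the `Result` back, so the caller decides explicitly to drop it with `.ok()`. A sketch of such an extension trait, assuming a plain `eprintln!` logger in place of the crate's `Context`-based `LogExt`:

```rust
use std::fmt::Debug;

/// Simplified stand-in for the crate's `LogExt` trait.
trait LogErr<T, E: Debug> {
    /// Log the error (if any) under `scope` and return the `Result` unchanged.
    fn log_err(self, scope: &str) -> Result<T, E>;
}

impl<T, E: Debug> LogErr<T, E> for Result<T, E> {
    fn log_err(self, scope: &str) -> Result<T, E> {
        if let Err(ref e) = self {
            eprintln!("{scope}: {e:?}");
        }
        self
    }
}
```

Usage mirrors the diff: `do_work().await.log_err(context).ok();`. Because `log_err` returns the `Result` instead of consuming it, the same call site can alternatively chain `?`, `.context(...)`, or further combinators after logging.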
View File

@@ -14,6 +14,8 @@ use crate::tools::time;
use crate::{context::Context, log::LogExt};
use crate::{stock_str, tools};
+ use super::InnerSchedulerState;
#[derive(Debug, Clone, Copy, PartialEq, Eq, EnumProperty, PartialOrd, Ord)]
pub enum Connectivity {
NotConnected = 1000,
@@ -226,12 +228,12 @@ impl Context {
/// If the connectivity changes, a DC_EVENT_CONNECTIVITY_CHANGED will be emitted.
pub async fn get_connectivity(&self) -> Connectivity {
let lock = self.scheduler.inner.read().await;
- let stores: Vec<_> = match lock.scheduler {
- Some(ref sched) => sched
+ let stores: Vec<_> = match *lock {
+ InnerSchedulerState::Started(ref sched) => sched
.boxes()
.map(|b| b.conn_state.state.connectivity.clone())
.collect(),
- None => return Connectivity::NotConnected,
+ _ => return Connectivity::NotConnected,
};
drop(lock);
@@ -309,15 +311,15 @@ impl Context {
// =============================================================================================
let lock = self.scheduler.inner.read().await;
- let (folders_states, smtp) = match lock.scheduler {
- Some(ref sched) => (
+ let (folders_states, smtp) = match *lock {
+ InnerSchedulerState::Started(ref sched) => (
sched
.boxes()
.map(|b| (b.meaning, b.conn_state.state.connectivity.clone()))
.collect::<Vec<_>>(),
sched.smtp.state.connectivity.clone(),
),
- None => {
+ _ => {
return Err(anyhow!("Not started"));
}
};
@@ -337,7 +339,7 @@ impl Context {
let mut folder_added = false;
if let Some(config) = folder.to_config().filter(|c| watched_folders.contains(c)) {
- let f = self.get_config(config).await.ok_or_log(self).flatten();
+ let f = self.get_config(config).await.log_err(self).ok().flatten();
if let Some(foldername) = f {
let detailed = &state.get_detailed().await;
@@ -396,64 +398,72 @@ impl Context {
if let Some(quota) = &*quota {
match &quota.recent {
Ok(quota) => {
- let roots_cnt = quota.len();
- for (root_name, resources) in quota {
- use async_imap::types::QuotaResourceName::*;
- for resource in resources {
- ret += "<li>";
+ if !quota.is_empty() {
+ for (root_name, resources) in quota {
+ use async_imap::types::QuotaResourceName::*;
+ for resource in resources {
+ ret += "<li>";
- // root name is empty eg. for gmail and redundant eg. for riseup.
- // therefore, use it only if there are really several roots.
- if roots_cnt > 1 && !root_name.is_empty() {
- ret +=
- &format!("<b>{}:</b> ", &*escaper::encode_minimal(root_name));
- } else {
- info!(self, "connectivity: root name hidden: \"{}\"", root_name);
- }
+ // root name is empty eg. for gmail and redundant eg. for riseup.
+ // therefore, use it only if there are really several roots.
+ if quota.len() > 1 && !root_name.is_empty() {
+ ret += &format!(
+ "<b>{}:</b> ",
+ &*escaper::encode_minimal(root_name)
+ );
+ } else {
+ info!(
+ self,
+ "connectivity: root name hidden: \"{}\"", root_name
+ );
+ }
- let messages = stock_str::messages(self).await;
- let part_of_total_used = stock_str::part_of_total_used(
- self,
- &resource.usage.to_string(),
- &resource.limit.to_string(),
- )
- .await;
- ret += &match &resource.name {
- Atom(resource_name) => {
- format!(
- "<b>{}:</b> {}",
- &*escaper::encode_minimal(resource_name),
- part_of_total_used
- )
- }
- Message => {
- format!("<b>{part_of_total_used}:</b> {messages}")
- }
- Storage => {
- // do not use a special title needed for "Storage":
- // - it is usually shown directly under the "Storage" headline
- // - by the units "1 MB of 10 MB used" there is some difference to eg. "Messages: 1 of 10 used"
- // - the string is not longer than the other strings that way (minus title, plus units) -
- // additional linebreaks on small displays are unlikely therefore
- // - most times, this is the only item anyway
- let usage = &format_size(resource.usage * 1024, BINARY);
- let limit = &format_size(resource.limit * 1024, BINARY);
- stock_str::part_of_total_used(self, usage, limit).await
- }
- };
- let percent = resource.get_usage_percentage();
- let color = if percent >= QUOTA_ERROR_THRESHOLD_PERCENTAGE {
- "red"
- } else if percent >= QUOTA_WARN_THRESHOLD_PERCENTAGE {
- "yellow"
- } else {
- "green"
- };
- ret += &format!("<div class=\"bar\"><div class=\"progress {color}\" style=\"width: {percent}%\">{percent}%</div></div>");
- ret += "</li>";
- }
- }
+ let messages = stock_str::messages(self).await;
+ let part_of_total_used = stock_str::part_of_total_used(
+ self,
+ &resource.usage.to_string(),
+ &resource.limit.to_string(),
+ )
+ .await;
+ ret += &match &resource.name {
+ Atom(resource_name) => {
+ format!(
+ "<b>{}:</b> {}",
+ &*escaper::encode_minimal(resource_name),
+ part_of_total_used
+ )
+ }
+ Message => {
+ format!("<b>{part_of_total_used}:</b> {messages}")
+ }
+ Storage => {
+ // do not use a special title needed for "Storage":
+ // - it is usually shown directly under the "Storage" headline
+ // - by the units "1 MB of 10 MB used" there is some difference to eg. "Messages: 1 of 10 used"
+ // - the string is not longer than the other strings that way (minus title, plus units) -
+ // additional linebreaks on small displays are unlikely therefore
+ // - most times, this is the only item anyway
+ let usage = &format_size(resource.usage * 1024, BINARY);
+ let limit = &format_size(resource.limit * 1024, BINARY);
+ stock_str::part_of_total_used(self, usage, limit).await
+ }
+ };
+ let percent = resource.get_usage_percentage();
+ let color = if percent >= QUOTA_ERROR_THRESHOLD_PERCENTAGE {
+ "red"
+ } else if percent >= QUOTA_WARN_THRESHOLD_PERCENTAGE {
+ "yellow"
+ } else {
+ "green"
+ };
+ ret += &format!("<div class=\"bar\"><div class=\"progress {color}\" style=\"width: {percent}%\">{percent}%</div></div>");
+ ret += "</li>";
+ }
+ }
+ } else {
+ ret += format!("<li>Warning: {domain} claims to support quota but gives no information</li>").as_str();
+ }
}
Err(e) => {
@@ -480,14 +490,14 @@ impl Context {
/// Returns true if all background work is done.
pub async fn all_work_done(&self) -> bool {
let lock = self.scheduler.inner.read().await;
- let stores: Vec<_> = match lock.scheduler {
- Some(ref sched) => sched
+ let stores: Vec<_> = match *lock {
+ InnerSchedulerState::Started(ref sched) => sched
.boxes()
.map(|b| &b.conn_state.state)
.chain(once(&sched.smtp.state))
.map(|state| state.connectivity.clone())
.collect(),
- None => return false,
+ _ => return false,
};
drop(lock);

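`Connectivity` derives `PartialOrd`/`Ord` with explicit discriminants (only `NotConnected = 1000` appears in this diff), which lets the overall connectivity across all mailbox stores be computed by simple min/max comparison. A sketch of that aggregation; the variants beyond `NotConnected`, their values, and the worst-state-wins rule are assumptions for illustration:

```rust
/// Ordered connectivity states; larger discriminant = "more connected".
/// Only `NotConnected = 1000` is confirmed by the diff, the rest is illustrative.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Connectivity {
    NotConnected = 1000,
    Connecting = 2000,
    Working = 3000,
    Connected = 4000,
}

/// The least-connected store determines the overall state;
/// no stores at all counts as not connected.
fn overall(stores: &[Connectivity]) -> Connectivity {
    stores
        .iter()
        .copied()
        .min()
        .unwrap_or(Connectivity::NotConnected)
}
```

Deriving `Ord` from declaration order is what makes this one-liner possible: adding a new state only requires placing it at the right rank in the enum.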
View File

@@ -5,7 +5,7 @@ use std::convert::TryFrom;
use std::path::{Path, PathBuf};
use anyhow::{bail, Context as _, Result};
- use rusqlite::{self, config::DbConfig, Connection, OpenFlags};
+ use rusqlite::{self, config::DbConfig, types::ValueRef, Connection, OpenFlags, Row};
use tokio::sync::{Mutex, MutexGuard, RwLock};
use crate::blob::BlobObject;
@@ -57,7 +57,7 @@ pub struct Sql {
/// Database file path
pub(crate) dbfile: PathBuf,
- /// Write transaction mutex.
+ /// Write transactions mutex.
///
/// See [`Self::write_lock`].
write_mtx: Mutex<()>,
@@ -696,6 +696,15 @@ fn new_connection(path: &Path, passphrase: &str) -> Result<Connection> {
/// Cleanup the account to restore some storage and optimize the database.
pub async fn housekeeping(context: &Context) -> Result<()> {
+ // Setting `Config::LastHousekeeping` at the beginning avoids endless loops when things do not
+ // work out for whatever reason or are interrupted by the OS.
+ if let Err(e) = context
+ .set_config(Config::LastHousekeeping, Some(&time().to_string()))
+ .await
+ {
+ warn!(context, "Can't set config: {e:#}.");
+ }
if let Err(err) = remove_unused_files(context).await {
warn!(
context,
@@ -743,13 +752,6 @@ pub async fn housekeeping(context: &Context) -> Result<()> {
}
}
- if let Err(e) = context
- .set_config(Config::LastHousekeeping, Some(&time().to_string()))
- .await
- {
- warn!(context, "Can't set config: {e:#}.");
- }
context
.sql
.execute(
@@ -757,12 +759,24 @@ pub async fn housekeeping(context: &Context) -> Result<()> {
(),
)
.await
- .ok_or_log_msg(context, "failed to remove old MDNs");
+ .context("failed to remove old MDNs")
+ .log_err(context)
+ .ok();
info!(context, "Housekeeping done.");
Ok(())
}
+ /// Get the value of a column `idx` of the `row` as `Vec<u8>`.
+ pub fn row_get_vec(row: &Row, idx: usize) -> rusqlite::Result<Vec<u8>> {
+ row.get(idx).or_else(|err| match row.get_ref(idx)? {
+ ValueRef::Null => Ok(Vec::new()),
+ ValueRef::Text(text) => Ok(text.to_vec()),
+ ValueRef::Blob(blob) => Ok(blob.to_vec()),
+ ValueRef::Integer(_) | ValueRef::Real(_) => Err(err),
+ })
+ }
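The new `row_get_vec` first tries the typed `row.get(idx)` and only on failure inspects the raw SQLite value, leniently mapping `NULL` and `TEXT` to byte vectors while letting numeric types keep the original error. The same fallback shape can be sketched without `rusqlite`, using a hypothetical value enum in place of `ValueRef`:

```rust
/// Stand-in for rusqlite's `ValueRef`.
enum Value {
    Null,
    Text(String),
    Blob(Vec<u8>),
    Integer(i64),
}

/// Decode a value as bytes, accepting NULL and TEXT as lenient fallbacks
/// while rejecting numeric types, as `row_get_vec` does.
fn get_vec(value: &Value) -> Result<Vec<u8>, String> {
    match value {
        Value::Blob(blob) => Ok(blob.clone()),
        Value::Null => Ok(Vec::new()),
        Value::Text(text) => Ok(text.clone().into_bytes()),
        Value::Integer(_) => Err("expected blob, got integer".to_string()),
    }
}
```

The leniency matters for the `mime_compressed` migration above: rows written before the schema change may hold TEXT or NULL where newer code expects a BLOB.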
/// Enumerates used files in the blobdir and removes unused ones.
pub async fn remove_unused_files(context: &Context) -> Result<()> {
let mut files_in_use = HashSet::new();
@@ -895,12 +909,14 @@ pub async fn remove_unused_files(context: &Context) -> Result<()> {
}
}
Err(err) => {
- warn!(
- context,
- "Housekeeping: Cannot read dir {}: {:#}.",
- p.display(),
- err
- );
+ if !p.ends_with(BLOBS_BACKUP_NAME) {
+ warn!(
+ context,
+ "Housekeeping: Cannot read dir {}: {:#}.",
+ p.display(),
+ err
+ );
+ }
}
}
}

View File

@@ -704,6 +704,13 @@ CREATE INDEX smtp_messageid ON imap(rfc724_mid);
// Reverted above, as it requires to load the whole DB in memory.
sql.set_db_version(99).await?;
}
+ if dbversion < 100 {
+ sql.execute_migration(
+ "ALTER TABLE msgs ADD COLUMN mime_compressed INTEGER NOT NULL DEFAULT 0",
+ 100,
+ )
+ .await?;
+ }
let new_version = sql
.get_raw_config_int(VERSION_CFG)
@@ -735,14 +742,18 @@ impl Sql {
Ok(())
}
- async fn execute_migration(&self, query: &'static str, version: i32) -> Result<()> {
- self.transaction(move |transaction| {
- // set raw config inside the transaction
- transaction.execute(
- "UPDATE config SET value=? WHERE keyname=?;",
- paramsv![format!("{version}"), VERSION_CFG],
- )?;
+ // Sets db `version` in the `transaction`.
+ fn set_db_version_trans(transaction: &mut rusqlite::Transaction, version: i32) -> Result<()> {
+ transaction.execute(
+ "UPDATE config SET value=? WHERE keyname=?;",
+ params![format!("{version}"), VERSION_CFG],
+ )?;
+ Ok(())
+ }
+ async fn execute_migration(&self, query: &str, version: i32) -> Result<()> {
+ self.transaction(move |transaction| {
+ Self::set_db_version_trans(transaction, version)?;
transaction.execute_batch(query)?;
Ok(())

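The refactored `execute_migration` bumps the stored schema version and runs the migration query inside the same transaction, so neither change can be observed without the other. A toy sketch of that coupling, using an in-memory stand-in for the database (no real transactions or SQL; the `Db` type and field names are hypothetical):

```rust
use std::collections::HashMap;

/// Toy in-memory "database": a config table plus a log of applied statements.
struct Db {
    config: HashMap<String, String>,
    statements: Vec<String>,
}

const VERSION_CFG: &str = "dbversion";

/// Mirrors `set_db_version_trans`: record the new version as part of the
/// same unit of work as the migration itself.
fn set_db_version(db: &mut Db, version: i32) {
    db.config.insert(VERSION_CFG.to_string(), version.to_string());
}

/// Mirrors `execute_migration`: bump the version, then apply the schema
/// query; in the real code both happen inside one SQLite transaction.
fn execute_migration(db: &mut Db, query: &str, version: i32) {
    set_db_version(db, version);
    db.statements.push(query.to_string());
}
```

Factoring the version update into its own helper also lets multi-statement migrations (which cannot use a single `&'static str`) reuse the same version bookkeeping, which is why the `query` parameter loosened from `&'static str` to `&str`.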
View File

@@ -5,7 +5,8 @@
use std::borrow::Cow;
use std::fmt;
- use std::io::Cursor;
+ use std::io::{Cursor, Write};
+ use std::mem;
use std::path::{Path, PathBuf};
use std::str::from_utf8;
use std::time::{Duration, SystemTime};
@@ -654,6 +655,42 @@ pub(crate) fn single_value<T>(collection: impl IntoIterator<Item = T>) -> Option
None
}
+ /// Compressor/decompressor buffer size.
+ const BROTLI_BUFSZ: usize = 4096;
+ /// Compresses `buf` to a `Vec` using `brotli`.
+ /// Note that an empty `buf` is handled as a special value that remains empty after compression;
+ /// otherwise brotli would add its metadata to it, which is undesirable because this function is
+ /// used to compress strings stored in the db, where empty strings are common. This approach is
+ /// not strictly correct, because nowhere in the brotli documentation is it said that an empty
+ /// buffer can't be the result of compressing some input, but I think this will never break.
+ pub(crate) fn buf_compress(buf: &[u8]) -> Result<Vec<u8>> {
+ if buf.is_empty() {
+ return Ok(Vec::new());
+ }
+ // Level 4 is 2x faster than level 6 (and 54x faster than 10, for comparison).
+ // With this adaptiveness, we aim to not slow down processing of
+ // single large files too much, esp. on low-budget devices.
+ // In tests (see #4129), this makes a difference without compressing much worse.
+ let q: u32 = if buf.len() > 1_000_000 { 4 } else { 6 };
+ let lgwin: u32 = 22; // log2(LZ77 window size), the default for the brotli CLI tool.
+ let mut compressor = brotli::CompressorWriter::new(Vec::new(), BROTLI_BUFSZ, q, lgwin);
+ compressor.write_all(buf)?;
+ Ok(compressor.into_inner())
+ }
+ /// Decompresses `buf` to a `Vec` using `brotli`.
+ /// See `buf_compress()` for why we don't pass an empty buffer to the brotli decompressor.
+ pub(crate) fn buf_decompress(buf: &[u8]) -> Result<Vec<u8>> {
+ if buf.is_empty() {
+ return Ok(Vec::new());
+ }
+ let mut decompressor = brotli::DecompressorWriter::new(Vec::new(), BROTLI_BUFSZ);
+ decompressor.write_all(buf)?;
+ decompressor.flush()?;
+ Ok(mem::take(decompressor.get_mut()))
+ }
#[cfg(test)]
mod tests {
#![allow(clippy::indexing_slicing)]