Message.set_text() and Message.get_text() are modified accordingly
to accept String and return String.
Messages which previously contained None text
are now represented as messages with empty text.
Use Message.set_text("".to_string())
instead of Message.set_text(None).
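A small before/after fragment for illustration (assuming `msg` is a mutable `Message`):

```rust
// Before: text was optional.
// msg.set_text(Some("hello".to_string()));
// msg.set_text(None);

// After: text is always a String; "no text" is represented by the empty string.
msg.set_text("hello".to_string());
msg.set_text("".to_string()); // instead of set_text(None)
let text: String = msg.get_text();
```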
Moved the custom ToSql trait, which includes Send + Sync bounds, from lib.rs to sql.rs.
Replaced most params! and paramsv! macro usage with tuples.
Replaced paramsv! and params_iterv! with params_slice!,
because there is no need to construct a vector.
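A hedged sketch of a typical call site after the change; the statement and identifiers are made up for illustration, and `sql` stands for the crate's async SQL wrapper:

```rust
// Before: the macro built a Vec of parameters.
// sql.execute("UPDATE msgs SET txt=? WHERE id=?", paramsv![text, msg_id]).await?;

// After: a plain tuple implements rusqlite's Params, so no Vec is constructed.
sql.execute("UPDATE msgs SET txt=? WHERE id=?", (text, msg_id)).await?;
```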
This removes the message that needed to be supplied to LogExt::log_err
calls. This was from a time before we adopted anyhow; now we are
better off using anyhow::Context::context for the message: it is more
consistent, composes better and is less custom.
The benefit of the composition can be seen in the FFI calls which need
to both log the error as well as return it to the caller via
the set_last_error mechanism.
It also removes the LogExt::ok_or_log_msg function for the same reason:
the message is obsoleted by anyhow's context.
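A small before/after fragment; `do_something` is only a placeholder here:

```rust
use anyhow::Context as _;

// Before: the message was passed to the logging helper.
// do_something(context).await.log_err(context, "Failed to do something");

// After: the message is attached with anyhow's context, so it both shows up in
// the log and travels with the error returned to the caller (e.g. over FFI via
// the set_last_error mechanism).
do_something(context)
    .await
    .context("Failed to do something")?;
```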
The fact that `PRAGMA incremental_vacuum` may return a zero-column
SQLITE_ROW result is documented in `sqlite3_data_count()` documentation:
<https://www.sqlite.org/c3ref/data_count.html>
Previously the incremental vacuum itself worked, but a successful run
still resulted in a "Failed to run incremental vacuum" log message.
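A sketch of the accommodating query using rusqlite directly (`conn` is an open `rusqlite::Connection`; the real code goes through the crate's SQL wrapper):

```rust
// `PRAGMA incremental_vacuum` may return zero-column SQLITE_ROW results,
// so step through and discard any rows instead of treating them as an error.
let mut stmt = conn.prepare("PRAGMA incremental_vacuum")?;
let mut rows = stmt.query([])?;
while let Some(_row) = rows.next()? {
    // Nothing to read from the row; stepping the statement does the vacuum work.
}
```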
We do not make all transactions
[IMMEDIATE](https://www.sqlite.org/lang_transaction.html#deferred_immediate_and_exclusive_transactions)
so that we get more parallelism: read transactions at least can be made DEFERRED and run in
parallel without any drawbacks. But if we also make write transactions DEFERRED without any
external locking, they are upgraded from read to write transactions on the first write
statement. This has some drawbacks:
- If there are other write transactions, we block the thread and the db connection until
the upgrade succeeds. If a reader comes along meanwhile, it has to take the next, less used
connection with a worse per-connection page cache.
- If a transaction is blocked for more than busy_timeout, it fails with SQLITE_BUSY.
- Configuring busy_timeout is not the best way to manage transaction timeouts; we would
prefer timeouts to be integrated with Rust/tokio async. Moreover, SQLite implements the
waiting using sleeps.
- If, upon a successful upgrade to a write transaction, the db has been modified by another
one, the transaction has to be rolled back and retried. That is extra work in terms of
CPU/battery.
- Maybe minor, but we lose some fairness in servicing write transactions, i.e. we service
them in the order of the first write statement, not in the order they come.
The only pro of making write transactions DEFERRED w/o the external locking is some
parallelism between them. We also have the option to make write transactions IMMEDIATE, again
w/o the external locking, but then most of the cons above still apply. Instead, if we
perform all write transactions under an async mutex, the only con is losing some
parallelism for write transactions.
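A minimal sketch of that approach, assuming rusqlite and a tokio mutex; the real code additionally uses a connection pool and runs the blocking part through the SQL wrapper:

```rust
use rusqlite::{Connection, Transaction, TransactionBehavior};
use tokio::sync::Mutex;

/// Serialize all write transactions with an async mutex so that SQLite itself
/// never has to block on another writer or fail with SQLITE_BUSY.
async fn write_transaction<T>(
    write_mutex: &Mutex<()>,
    conn: &mut Connection,
    f: impl FnOnce(&Transaction<'_>) -> rusqlite::Result<T>,
) -> rusqlite::Result<T> {
    let _guard = write_mutex.lock().await;
    // IMMEDIATE: take the write lock up front instead of on the first write statement.
    let tx = conn.transaction_with_behavior(TransactionBehavior::Immediate)?;
    let res = f(&tx)?;
    tx.commit()?;
    Ok(res)
}
```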
The .call() interface is safer because it ensures
that blocking operations on the SQL connection
are called within tokio::task::block_in_place().
Previously some code called blocking operations
in async context, e.g. add_parts() in the receive_imf module.
The underlying implementation of .call()
can later be replaced with an implementation
that does not require block_in_place(),
e.g. a worker pool,
without changing the code using the .call() interface.
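Roughly the shape of the interface, as a simplified sketch (the real Sql type manages a pool of connections rather than a single one):

```rust
use anyhow::Result;
use rusqlite::Connection;

struct Sql {
    // A single connection keeps the sketch short; the real type holds a pool.
    conn: std::sync::Mutex<Connection>,
}

impl Sql {
    /// Runs blocking SQL code from async context. The closure gets a plain
    /// synchronous connection and is executed inside block_in_place(), so the
    /// blocking work does not stall other tasks on the same executor thread.
    pub async fn call<T>(&self, f: impl FnOnce(&mut Connection) -> Result<T>) -> Result<T> {
        tokio::task::block_in_place(|| {
            let mut conn = self.conn.lock().unwrap();
            f(&mut conn)
        })
    }
}
```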
With the introduction of transactions in Contact::add_or_lookup(),
python tests sometimes fail to create contacts with the following error:
Cannot create contact: add_or_lookup: database is locked: Error code 5: The database file is locked
`PRAGMA busy_timeout=60000` does not affect
this case as the error is returned before 60 seconds pass.
DEFERRED transactions with write operations need
to be retried from scratch
if they are rolled back
due to a write operation on another connection.
Using IMMEDIATE transactions for writing
is an attempt to fix this problem
without a retry loop.
If we later need DEFERRED transactions,
e.g. for reading a snapshot without locking the database,
we may introduce another function for this.
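At the SQL level the change amounts to the following; a rough sketch using rusqlite's execute_batch, with error handling and the surrounding wrapper omitted:

```rust
// BEGIN DEFERRED takes the write lock lazily, on the first write statement;
// if another connection wrote in the meantime, the whole transaction may have
// to be rolled back and retried from scratch.
//
// BEGIN IMMEDIATE takes the write lock up front, so no retry loop is needed.
conn.execute_batch("BEGIN IMMEDIATE")?;
// ... write statements ...
conn.execute_batch("COMMIT")?;
```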
This especially speeds up receive_imf a bit when we recreate the member list (recreate_member_list == true).
It's a preparation for https://github.com/deltachat/deltachat-core-rust/issues/3768, which will be a one-line fix but will recreate the member list more often, so we want to optimize this case a bit.
It also adds a benchmark for this case. It's not that easy to make the benchmark non-flaky, but by closing all other programs and locking the CPU to 1.5GHz it worked. The optimized version is consistently a few percent faster than ./without-optim:
```
Receive messages/Receive 100 Chat-Group-Member-{Added|Removed} messages
time: [52.257 ms 52.569 ms 52.941 ms]
change: [-3.5301% -2.6181% -1.6697%] (p = 0.00 < 0.05)
Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
4 (4.00%) high mild
3 (3.00%) high severe
```
The MarkseenMsgOnImap job, which was responsible for marking messages as
seen on IMAP and sending MDNs, has been removed.
Messages waiting to be marked as seen are now stored in a
single-column imap_markseen table consisting of foreign keys pointing
to corresponding imap table records.
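A hypothetical sketch of that table's schema; the column name and the ON DELETE behaviour are assumptions, the text above only states that it is a single column of foreign keys into the imap table:

```rust
// Hypothetical schema, shown via rusqlite; exact names and constraints may differ.
conn.execute_batch(
    "CREATE TABLE imap_markseen (
         id INTEGER PRIMARY KEY
         REFERENCES imap(id) ON DELETE CASCADE
     )",
)?;
```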
Messages are marked as seen in batches in the inbox loop. UIDs are
grouped by folders to reduce the number of requests, including folder
selection requests. UID grouping logic has been factored out of
move_delete_messages into a UidGrouper iterator to avoid code duplication.
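The grouping idea, as an illustrative sketch (the real UidGrouper is an iterator and its exact types differ):

```rust
use std::collections::BTreeMap;

// Group (folder, uid) rows into one UID list per folder, so that marking as
// seen needs one folder selection and one STORE batch per folder instead of
// one request per message.
fn group_by_folder(rows: Vec<(String, u32)>) -> BTreeMap<String, Vec<u32>> {
    let mut grouped: BTreeMap<String, Vec<u32>> = BTreeMap::new();
    for (folder, uid) in rows {
        grouped.entry(folder).or_default().push(uid);
    }
    grouped
}
```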
Messages are marked as seen right before fetching from the inbox
folder. This ensures that even if new messages arrive in the inbox while
the connection has another folder selected to mark messages there, all
messages are fetched before going IDLE. Ideally marking messages as
seen should be done after fetching and moving, as it is a low-priority
task, but this requires skipping IDLE if UIDNEXT has advanced since
the previous time the inbox was selected. This is outside the scope of
this change.
MDNs are now queued independently of marking the messages as seen.
SendMdn job is created directly rather than after marking the message
as seen on IMAP. Previously sending MDNs was done in MarkseenMsgOnImap
to avoid duplicate MDN sending by setting the $MDNSent flag together with
the \Seen flag and skipping MDN sending if the flag is already set. This
is not the case anymore as $MDNSent flag support has been removed in
9c077c98cd, and duplicate MDN sending in
the multi-device case is avoided by synchronizing the Seen status since
833e5f46cc, as long as the server
supports the CONDSTORE extension.
* Do ephemeral deletion in background loop
1. in start_io start ephemeral async task, in stop_io cancel ephemeral async task
2. start ephemeral async task which loops like this (see the sketch after this list):
- wait until next time a message deletion is needed or an interrupt occurs (see 3.)
- perform delete_expired_messages including sending MSGS_CHANGED events
3. on new messages (incoming or outgoing) with ephemeral timer:
- interrupt ephemeral async task
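A rough sketch of the loop described in step 2; `next_expiration_in` and `delete_expired_messages` are stubs standing in for the real functions:

```rust
use std::time::Duration;
use tokio::sync::mpsc::Receiver;
use tokio::time::sleep;

// Stubs so the sketch compiles; the real functions query the database and
// emit MSGS_CHANGED events.
async fn next_expiration_in() -> Option<Duration> { None }
async fn delete_expired_messages() -> anyhow::Result<()> { Ok(()) }

/// Step 2: the background task. Step 3 interrupts it by sending into the channel.
async fn ephemeral_loop(mut interrupt_receiver: Receiver<()>) {
    loop {
        // Wait until the next expected deletion, or an arbitrary long fallback.
        let wait = next_expiration_in().await.unwrap_or(Duration::from_secs(86400));
        tokio::select! {
            _ = sleep(wait) => {}
            _ = interrupt_receiver.recv() => {} // a new ephemeral message arrived
        }
        // Do not exit the task on SQL errors; log and keep looping.
        if let Err(err) = delete_expired_messages().await {
            eprintln!("delete_expired_messages failed: {err:#}");
        }
    }
}
```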
* Changelog
* Fix and improve test
* no return value needed
* address @link2xt review comments
* slight normalization: have only one place where we wait for interrupt_receiver
* simplify sql statement -- and don't exit the ephemeral_task if there is an sql problem but rather wait
* Remove now-unused `ephemeral_task` JoinHandle
The JoinHandle is now inside the Scheduler.
* fix clippy
* Revert accidental move of the line
* Add log
Co-authored-by: holger krekel <holger@merlinux.eu>
Co-authored-by: link2xt <link2xt@testrun.org>
* add simple backup export/import test
this test fails on current master
until the context is recreated.
* avoid config_cache races
adds the needed SQL statements to config_cache locking.
otherwise, another thread may alter the database,
e.g. between the SELECT and the config_cache update,
resulting in the wrong value being written to config_cache.
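An illustrative sketch of the locking pattern; names are made up, and the stub below stands in for the actual SQL query:

```rust
use std::collections::HashMap;
use tokio::sync::RwLock;

// The point: the SELECT and the cache update happen under one write lock, so
// another task cannot change the value in the database in between and leave a
// stale entry in the cache.
struct Config {
    cache: RwLock<HashMap<String, Option<String>>>,
}

impl Config {
    async fn get_config(&self, key: &str) -> anyhow::Result<Option<String>> {
        let mut cache = self.cache.write().await; // held across SELECT + insert
        if let Some(value) = cache.get(key) {
            return Ok(value.clone());
        }
        let value = select_config_from_db(key).await?;
        cache.insert(key.to_string(), value.clone());
        Ok(value)
    }
}

// Stub standing in for something like "SELECT value FROM config WHERE keyname=?".
async fn select_config_from_db(_key: &str) -> anyhow::Result<Option<String>> {
    Ok(None)
}
```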
* also update config_cache on initializing tables
VERSION_CFG is also set later; however,
not doing it here would result in bugs when we change DBVERSION at some point.
as this alters only VERSION_CFG and that is executed sequentially anyway,
race conditions between SQL and config_cache
do not seem to be an issue in this case.
* clear config_cache after backup import
import replaces the whole database,
so config_cache needs to be invalidated as well.
we do that before import,
so in case a backup is imported only partly,
the cache does not add additional problems.
* update CHANGELOG