This fixes things for Gmail, for example. Previously, `Imap::fetch_move_delete()` was called
before looking for the Trash folder and returned an error because of that, failing the whole
`fetch_idle()`, which in turn prevented Trash from being configured.
- Always emit `ContactsChanged` from `contact::update_last_seen()` if a contact was seen recently,
for simplicity and for symmetry with `RecentlySeenLoop::run()`, which may also emit several events
for a single contact.
- Fix the sleep time calculation in `RecentlySeenLoop::run()` -- `now` must be updated on every
iteration. Previously the initial value was reused every time, which led to progressively longer
sleeps (see the sketch after this list).
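A minimal sketch of the corrected pattern, with illustrative names (not the actual
`RecentlySeenLoop` code): the clock must be re-read inside the loop, otherwise every sleep is
computed against the stale start value.

```rust
use std::time::{Duration, SystemTime};

/// Illustrative loop: `now` is refreshed on every iteration. Reusing a
/// value captured once before the loop makes each computed sleep
/// progressively longer than intended.
async fn run_loop(wakeups: Vec<SystemTime>) {
    for wakeup in wakeups {
        let now = SystemTime::now(); // must be re-read each iteration
        let sleep_for = wakeup.duration_since(now).unwrap_or(Duration::ZERO);
        tokio::time::sleep(sleep_for).await;
        // ... emit `ContactsChanged` for the contact, etc. ...
    }
}
```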
Connection establishment now happens in only one place in each IMAP loop and is limited by the
ratelimit. Backoff was removed from fake_idle, as it does not establish connections anymore. If
the connection fails, fake_idle returns an error; we then drop the connection and go back to the
beginning of the IMAP loop.
Backoff may still be nice to have to delay retries in case of persistent connection failures, so
that we don't immediately hit the ratelimit if the network is unusable and returns an immediate
error on each connection attempt (e.g. an ICMP network-unreachable error), but adding backoff for
connection failures is out of scope for this change. A connection failure may be a connection
timeout (currently 1 minute), in which case it does not make sense to fake-idle for another minute
immediately after. However, the failure may be immediate if the port is closed and the server
refuses the connection every time. To prevent a busy loop in this case, we apply the ratelimit to
connection attempts rather than login attempts, as sketched below.
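A rough sketch of the shape this gives the IMAP loop. All types and function bodies here are
illustrative stubs, not the real scheduler or imap code; the point is that the single connect
call is gated by the ratelimit, so a server that refuses connections immediately cannot drive a
busy loop.

```rust
use std::time::Duration;

/// Hypothetical stand-ins for illustration; the real types live in the
/// imap and scheduler modules.
struct Session;
struct Ratelimit;

impl Ratelimit {
    /// Sleeps until another connection attempt is allowed.
    async fn wait(&self) {
        tokio::time::sleep(Duration::from_secs(1)).await // stub
    }
}

async fn connect() -> Result<Session, ()> {
    Err(()) // stub: pretend the server refuses the connection
}

async fn fake_idle(_session: &mut Session) -> Result<(), ()> {
    Ok(()) // stub
}

async fn imap_loop(ratelimit: &Ratelimit) {
    loop {
        // The single place a connection is established, gated by the
        // ratelimit, so immediate failures (e.g. connection refused)
        // cannot cause a busy loop.
        ratelimit.wait().await;
        let mut session = match connect().await {
            Ok(session) => session,
            Err(_) => continue,
        };
        // fake_idle() no longer connects; on connection failure it
        // returns an error, we drop the session and restart the loop.
        if fake_idle(&mut session).await.is_err() {
            continue;
        }
        // ... fetch, idle, etc. ...
    }
}
```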
This partially reverts ccec26ffa7
If a time value doesn't need to be sent to another host, saved to the database, or otherwise used
across program restarts, a monotonically nondecreasing clock (`Instant`) should be used. But since
`Instant` may use `libc::clock_gettime(CLOCK_MONOTONIC)`, e.g. on Android, and therefore does not
advance while the device is in deep sleep, get rid of `Instant` in favor of `SystemTime`, and add
`tools::Time` as an alias for it with an appropriate comment, so that it's clear why `Instant`
isn't used in those places and to protect against unwanted usages of `Instant` in the future. This
also makes it easier to switch to another clock implementation if we find one.
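The alias itself is tiny; a sketch of roughly what `tools::Time` could look like (the comment
wording is illustrative):

```rust
/// A clock for time values that never leave the host and don't need to
/// survive restarts.
///
/// `Instant` is deliberately not used: on some platforms (e.g. Android)
/// it is backed by `CLOCK_MONOTONIC`, which does not advance while the
/// device is in deep sleep. Keeping the alias in one place also makes
/// it easy to switch to another clock implementation later.
pub type Time = std::time::SystemTime;
```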
Restart the IO scheduler if needed to make the new config value effective (currently for
`MvboxMove`, `OnlyFetchMvbox` and `SentboxWatch`). Also add `set_config_internal()`, which does
not affect the running IO scheduler. The reason is that `Scheduler::start()` itself calls
`set_config()` -- not for the mentioned keys, but still -- and Rust also complains about recursive
async calls.
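A sketch of the resulting split, with simplified types and signatures (the scheduler-restart
helper name is an assumption): the public `set_config()` restarts IO for the affected keys, while
`set_config_internal()` only writes the value and is safe to call from `Scheduler::start()`.

```rust
// Simplified sketch; the real `Context`, `Config`, and scheduler
// handling differ.
enum Config {
    MvboxMove,
    OnlyFetchMvbox,
    SentboxWatch,
    Addr, // ... and many other keys ...
}

struct Context;

impl Context {
    async fn set_config(&self, key: Config, value: Option<&str>) {
        self.set_config_internal(&key, value).await;
        // Only these keys change what the running IO scheduler watches.
        if matches!(
            key,
            Config::MvboxMove | Config::OnlyFetchMvbox | Config::SentboxWatch
        ) {
            self.restart_io_if_running().await;
        }
    }

    /// Writes the value without touching the IO scheduler. Called by
    /// `Scheduler::start()` itself; going through `set_config()` there
    /// would recurse, which async Rust cannot express directly anyway.
    async fn set_config_internal(&self, _key: &Config, _value: Option<&str>) {
        // ... persist the value to the config table ...
    }

    async fn restart_io_if_running(&self) {
        // ... stop and restart the scheduler if it was running ...
    }
}
```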
Previously the message was removed from the `download` table, but the message bubble was stuck in
the InProgress state. Now the download state is updated by the caller, so it cannot be
accidentally skipped.
Send `EventType::ConnectivityChanged` when using the context methods
`start_io` and `stop_io`.
Closes #5097
---------
Co-authored-by: Septias <scoreplayer2000@gmail.com>
We do not want all_work_done() to return true immediately after calling start_io(), but only when
the connection goes idle. The "Connected" state is set immediately after connecting to the server,
but it does not mean there is nothing to do. This change makes all_work_done() return false in the
Connected state and introduces a new Idle connectivity state that is set only right before the
connection actually goes idle; in the Idle state all_work_done() returns true. From the user's
point of view, the old Connected state and the new Idle state look the same.
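A sketch of the resulting states (simplified; the real connectivity module tracks more detail
than this):

```rust
/// Simplified connectivity states as described above.
#[derive(Clone, Copy, PartialEq)]
enum Connectivity {
    NotConnected,
    Connecting,
    /// Connected to the server, but possibly still busy fetching.
    Connected,
    /// Set only right before the connection actually goes idle.
    Idle,
}

fn all_work_done(state: Connectivity) -> bool {
    // `Connected` no longer counts as done; only `Idle` does.
    state == Connectivity::Idle
}
```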
Don't grow the timeout if interrupted early and not slept enough (see the sketch after this
list). Also:
- Don't grow the timeout too fast, only 1.5--2 times (randomly) per iteration.
- Don't interrupt if rate-limited.
- Reset the timeout if rate-limited. A rate limit isn't an error, so we can start from 30 seconds
again if an error happens afterwards.
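A sketch of the policy these rules add up to (constants and names are illustrative, assuming a
30-second initial timeout as mentioned):

```rust
use rand::Rng;

const INITIAL_TIMEOUT_SECS: u64 = 30;

/// Compute the next retry timeout from the rules above.
fn next_timeout(current_secs: u64, slept_enough: bool, rate_limited: bool) -> u64 {
    if rate_limited {
        // A rate limit isn't an error: start from 30 seconds again.
        return INITIAL_TIMEOUT_SECS;
    }
    if !slept_enough {
        // Interrupted early: don't grow the timeout.
        return current_secs;
    }
    // Grow slowly, by a random factor between 1.5 and 2 per iteration.
    let factor = rand::thread_rng().gen_range(1.5..2.0);
    (current_secs as f64 * factor) as u64
}
```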
Android calls get_connectivity_html() every time connectivity changes, which in turn interrupts
the IMAP loop and triggers a change from the "not connected" to the "connecting" state. To avoid
such an infinite loop of IMAP interrupts when there is no connectivity, update the quota only when
the IMAP loop is interrupted for other reasons. This happens anyway when a message is received or
maybe_network is called. Also remove outdated comments about the `Action::UpdateRecentQuota` job,
which does not exist anymore.
This patch adds new C APIs, dc_get_next_msgs() and dc_wait_next_msgs(), and their JSON-RPC
counterparts, get_next_msgs() and wait_next_msgs(). A new configuration, "last_msg_id", tracks
the last message ID processed by the bot. get_next_msgs() returns the IDs of messages above
"last_msg_id". wait_next_msgs() waits for a new-message notification and then calls
get_next_msgs(). wait_next_msgs() can be used to build a separate message-processing loop
independent of the event loop (see the sketch after the API lists below). The async Python API
get_fresh_messages_in_arrival_order() is deprecated in favor of get_next_messages().
Introduced Python APIs:
- Account.wait_next_incoming_message()
- Message.is_from_self()
- Message.is_from_device()
Introduced Rust APIs:
- Context::set_config_u32()
- Context::get_config_u32()
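Put together, a bot's processing loop might look roughly like this. This is a sketch assuming the
Rust counterparts mirror the C APIs; `handle_message()` is a placeholder, and the exact signatures
and the responsibility for advancing "last_msg_id" may differ.

```rust
use anyhow::Result;
use deltachat::config::Config;
use deltachat::context::Context;
use deltachat::message::MsgId;

async fn handle_message(_context: &Context, _msg_id: MsgId) -> Result<()> {
    Ok(()) // placeholder for the bot's actual logic
}

/// A message-processing loop independent of the event loop.
async fn bot_loop(context: &Context) -> Result<()> {
    loop {
        // Waits for a new-message notification, then returns the IDs
        // of all messages above the stored "last_msg_id".
        let msg_ids = context.wait_next_msgs().await?;
        for msg_id in msg_ids {
            handle_message(context, msg_id).await?;
            // Advance "last_msg_id" so the message isn't returned again.
            context
                .set_config_u32(Config::LastMsgId, msg_id.to_u32())
                .await?;
        }
    }
}
```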
This further reduces the cognitive overload of having many ways to do the same thing; the same
result is easily achieved using composition. Follow-up to 82ace72527.
This replaces the mechanism by which the IoPauseGuard makes sure the IO scheduler is resumed: it
really is a drop guard now, sending a single message on drop. This means it no longer has to hold
on to anything like the context, which makes it a lot easier to use. The trade-off is that a
long-running task is spawned when the guard is created; this task needs to receive the message
from the drop guard in order for the scheduler to resume.
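The underlying pattern, sketched with tokio primitives (an illustration of the mechanism, not the
actual IoPauseGuard code):

```rust
use tokio::sync::mpsc;

/// Illustrative drop guard: it holds only a channel sender, not the
/// context, and sends a single message when dropped.
struct IoPauseGuard {
    resume: Option<mpsc::Sender<()>>,
}

impl Drop for IoPauseGuard {
    fn drop(&mut self) {
        if let Some(tx) = self.resume.take() {
            // `try_send` because `drop` cannot be async; the spawned
            // task below is always ready to receive the one message.
            let _ = tx.try_send(());
        }
    }
}

fn pause_io() -> IoPauseGuard {
    let (tx, mut rx) = mpsc::channel::<()>(1);
    // Long-running task spawned when the guard is created; it owns
    // whatever is needed to resume the scheduler and waits for the
    // guard's drop message.
    tokio::spawn(async move {
        // ... stop the scheduler here ...
        let _ = rx.recv().await;
        // ... resume the scheduler here ...
    });
    IoPauseGuard { resume: Some(tx) }
}
```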
This makes the BackupProvider automatically invoke pause-io while it is needed. This required
making the guard independent of the Context lifetime, which is a bit sad.
To handle backups, the UIs have to make sure they stop the IO scheduler and don't accidentally
restart it while working on the backup. Since they have to call start_io from a bunch of
locations, this can be a bit difficult to manage. This introduces a mechanism for the core to
pause IO for some time, which is used by the imex function. It interacts well with other calls to
dc_start_io() and dc_stop_io(), making sure that once resumed, the scheduler is running or not
according to the latest calls to them. This was a little more invasive than hoped due to the
scheduler; the additional abstraction of the scheduler on the context seems a nice improvement,
though.