Author: WofWca
Commit: 7a7f040200

perf: JSON-RPC: faster eventLoop: request buffer
Before this commit we would not start another `getNextEvent()`
until we had received a response to the previous one.
This commit introduces a pool of `getNextEvent()` requests
so that the server always has a pending request
that it can readily respond to.
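The pooled loop amounts to a ring buffer of in-flight requests that is refilled as responses are consumed. Here is a minimal self-contained sketch of the pattern; `makeEventSource` and `pooledEventLoop` are illustrative stand-ins, not Delta Chat's actual API:

```typescript
type CoreEvent = { id: number };

// Stand-in event source: resolves each request with the next event id.
// In the real client this role is played by `RawClient.getNextEvent()`.
function makeEventSource() {
  let next = 0;
  return {
    getNextEvent(): Promise<CoreEvent> {
      return Promise.resolve({ id: next++ });
    },
  };
}

// Keep `poolSize` requests in flight at all times; consume responses
// round-robin, refilling each slot immediately after awaiting it so the
// server is never left without a pending request.
async function pooledEventLoop(
  source: { getNextEvent(): Promise<CoreEvent> },
  poolSize: number,
  totalEvents: number,
  onEvent: (e: CoreEvent) => void,
): Promise<void> {
  const promises: Promise<CoreEvent>[] = [];
  for (let i = 0; i < poolSize; i++) {
    promises.push(source.getNextEvent());
  }
  let currInd = 0;
  for (let handled = 0; handled < totalEvents; handled++) {
    const event = await promises[currInd];
    // Refill this slot before moving on, keeping the pool full.
    promises[currInd] = source.getNextEvent();
    currInd = (currInd + 1) % poolSize;
    onEvent(event);
  }
}
```

Because slots are both filled and drained in round-robin order, events are still delivered in the order the source produced them.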

Measurements

I have tested this on Delta Chat desktop.
This seems to provide a measurable speedup in handling event bursts.

I called `console.time()` when we start
a `BackendRemote.rpc.maybeNetwork()` request
(which we do every time the main window is focused),
and `console.timeEnd()` when we receive the first
"IDLE entering wait-on-remote state" entry, inside of
[`ipcBackend.on('json-rpc-message'`](3846aef67c/packages/target-electron/runtime-electron/runtime.ts (L52)).
For these measurements I also disabled request-response logging
with `config['log-debug'] = false`.
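Roughly, the measurement looks like the following sketch. The `rpc` and `onLogMessage` parameters stand in for Delta Chat desktop's `BackendRemote.rpc` and the `ipcBackend.on('json-rpc-message', ...)` handler; they are hypothetical stubs here, not the real APIs:

```typescript
// Hypothetical helper: measure the time from starting `maybeNetwork()`
// until the first "IDLE entering wait-on-remote state" log entry arrives.
function measureMaybeNetworkToIdle(
  rpc: { maybeNetwork(): Promise<void> },
  onLogMessage: (handler: (msg: string) => void) => void,
): Promise<number> {
  return new Promise((resolve) => {
    const start = Date.now();
    let done = false;
    onLogMessage((msg) => {
      // Resolve only on the first matching log line.
      if (!done && msg.includes("IDLE entering wait-on-remote state")) {
        done = true;
        resolve(Date.now() - start);
      }
    });
    void rpc.maybeNetwork();
  });
}
```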

Each such `maybeNetwork()` resulted in ~1000 `getNextEvent()` responses.

With the original event loop (without a pool), the average time
over 150 measurements was 1152.28 ms.
With the new event loop and a pool of 20,
the average over 150 measurements was 774.58 ms.

That is 67.22% of the original average duration,
i.e. a ~33% reduction.
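As a quick sanity check on that percentage:

```typescript
// Ratio of the pooled average to the original average, from the
// measurements above.
const originalMs = 1152.28;
const pooledMs = 774.58;
const ratioPercent = (pooledMs / originalMs) * 100;
console.log(ratioPercent.toFixed(2) + "%"); // → 67.22%
```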

Related:
- https://github.com/deltachat/deltachat-desktop/issues/5282
Date: 2026-02-01 15:19:07 +04:00


@@ -40,20 +40,51 @@ export class BaseDeltaChat<
      * and emitting the respective events on this class.
      */
     startEventLoop: boolean,
+    options?: {
+      /**
+       * @see {@linkcode BaseDeltaChat.eventLoop}.
+       *
+       * Has no effect if {@linkcode startEventLoop} === false.
+       */
+      eventLoopRequestPoolSize?: number;
+    },
   ) {
     super();
     this.rpc = new RawClient(this.transport);
     if (startEventLoop) {
-      this.eventTask = this.eventLoop();
+      this.eventTask = this.eventLoop({
+        eventLoopRequestPoolSize: options?.eventLoopRequestPoolSize,
+      });
     }
   }
   /**
    * @see the constructor's `startEventLoop`
    */
-  async eventLoop(): Promise<void> {
+  async eventLoop(options?: {
+    /**
+     * How many {@linkcode RawClient.getNextEvent} to constantly keep open.
+     * Having a value > 1 improves performance
+     * when dealing with bursts of events.
+     *
+     * Must be >= 1.
+     *
+     * @default 20
+     */
+    eventLoopRequestPoolSize?: number;
+  }): Promise<void> {
+    const promises: ReturnType<typeof this.rpc.getNextEvent>[] = [];
+    for (let i = 0; i < (options?.eventLoopRequestPoolSize ?? 20); i++) {
+      promises.push(this.rpc.getNextEvent());
+    }
+    const bufferLength = promises.length;
+    let currInd = 0;
     while (true) {
-      const event = await this.rpc.getNextEvent();
+      const event = await promises[currInd];
+      promises[currInd] = this.rpc.getNextEvent();
+      currInd = (currInd + 1) % bufferLength;
       //@ts-ignore
       this.emit(event.event.kind, event.contextId, event.event);
       this.emit("ALL", event.contextId, event.event);