Caches up every N promises, to race-resolve them and emit unordered results.
It improves performance when handling multiple lengthy asynchronous operations,
by letting you process results in the order in which they resolve, rather than
the order in which those operations are created.
Passing in cacheSize < 2 deactivates caching, and it then works like wait.
import {pipeAsync, map, waitRace} from 'iter-ops';

const i = pipeAsync(
    [1, 2, 3, 4, 5],
    map(a => Promise.resolve(a * 10)), // replace with async processing
    waitRace(3) // cache & wait for up to 3 values at a time
);
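The result is an async iterable that can be consumed with for await; the output below is only an illustration, since the actual order depends on when each promise resolves:

for await (const a of i) {
    console.log(a); //=> e.g. 10, 30, 20, 50, 40 - unordered race-resolution
}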
Parameter cacheSize is the maximum number of promises to be cached up for concurrent resolution racing. A larger cache size results in better concurrency. Setting it to less than 2 deactivates caching completely, and instead applies the same logic as operator wait.
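As a sketch of that degenerate case, passing a cacheSize of 1 disables caching, so each promise is awaited individually and results keep their source order:

import {pipeAsync, map, waitRace} from 'iter-ops';

const ordered = pipeAsync(
    [1, 2, 3],
    map(a => Promise.resolve(a * 10)),
    waitRace(1) // cacheSize < 2: behaves like wait(), keeping source order
);

for await (const a of ordered) {
    console.log(a); //=> 10, 20, 30 (in order)
}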
This operator can handle a combination of promises and simple values, with the latter emitted immediately, as they appear.
When results need to be linked to the source, you can simply remap the operations, as shown in the following example:
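A minimal sketch of such remapping, where getValues is a hypothetical async call used only for illustration, and each result r is wrapped together with its source value s:

import {pipeAsync, map, waitRace} from 'iter-ops';

// hypothetical async service call, for illustration only
declare function getValues(s: number): Promise<string[]>;

const i = pipeAsync(
    [1, 2, 3],
    map(s => getValues(s).then(r => ({s, r}))), // remap "s" into {s, r}
    waitRace(2) // cache & wait for up to 2 values at a time
);

for await (const a of i) {
    console.log(a); //=> {s, r} objects, each linked to its source value
}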