Especially on slow connections, it would be nice to have fewer DL jobs in parallel. On the other hand, on faster connections (and when there are lots of small packages to download), it probably makes sense to have more parallel downloads.
I just spent way too much time thinking about this and coming up with a scheme for how this could be done. It's based 100% on my intuition, with no hard data or prior art behind it, but here goes 😄
- Start downloading with 3 parallel jobs
- Every time a download is finished, or every 100 ms after the last measurement (a rough code sketch of this loop follows below):
  - measure (calculate) the average response body transmission speed over the last 5 s (summed across all jobs from that time)
  - based on the measured tx speed, calculate a desired number of jobs
  - if the desired number of jobs is higher than the current number of jobs, start new jobs until the desired number is reached or the end of the download queue is reached
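To make that loop concrete, here's a minimal Python sketch of roughly what I have in mind (all names are made up for illustration; `jobs_for_speed` is the speed-to-jobs mapping from the table below, sketched after it):

```python
import time
from collections import deque

WINDOW = 5.0        # average the transmission speed over the last 5 s
MIN_INTERVAL = 0.1  # re-evaluate at least every 100 ms
INITIAL_JOBS = 3    # start downloading with 3 parallel jobs

class ThroughputMeter:
    """Sums response body bytes across all jobs and averages them over WINDOW."""

    def __init__(self):
        self.samples = deque()  # (timestamp, byte_count) pairs, newest last

    def record(self, byte_count):
        # Each job calls this whenever it receives a chunk of the response body.
        self.samples.append((time.monotonic(), byte_count))

    def speed_kbps(self):
        # Drop samples that fell out of the 5 s window, then average the rest.
        now = time.monotonic()
        while self.samples and now - self.samples[0][0] > WINDOW:
            self.samples.popleft()
        return sum(count for _, count in self.samples) / WINDOW / 1000

def rescale(meter, queue, running_jobs, start_job):
    """Run when a job finishes, or when MIN_INTERVAL has passed since the last check.

    The download loop itself (not shown) would begin with INITIAL_JOBS jobs
    and call this from its event handling.
    """
    desired = jobs_for_speed(meter.speed_kbps())  # mapping from the table below
    while len(running_jobs) < desired and queue:
        running_jobs.append(start_job(queue.pop()))
```

Note this only ever scales up; if the measured speed drops, the job count falls back naturally as downloads finish and aren't replaced.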
For the tx speed to desired number of jobs mapping, playing around with numbers until it looked nice led me to the following scheme:
| tx speed (kb/s) | jobs | bandwidth per job (tx speed / jobs, kb/s) |
| --- | --- | --- |
| < 384 | 1 | 0 - 384 |
| ≥ 384 | 2 | 192 - 256 |
| ≥ 512 | 3 | 170 - 227 |
| ≥ 682 | 4 | 170 - 227 |
| ≥ 910 | 5 | 182 - 242 |
| ≥ 1213 | 6 | 202 - 269 |
| ≥ 1618 | 7 | 213 - 308 |
| ≥ 2157 | 8 | 269 - 359 |
| ≥ 2876 | 9 | 319 - 426 |
| ≥ 3835 | 10 | 383 - 511 |
| ≥ 5114 | 11 | 464 - 619 |
| ≥ 6816 | 12 | 568 - 757 |
| ≥ 9091 | 13 | 699 - 932 |
| ≥ 12122 | 14 | 865 - 1154 |
| ≥ 16163 | 15 | 1077 - 1436 |
| ≥ 21551 | 16 | 1346+ |
The tx speed threshold increases ~1.333333-fold per row. A logarithmic scale like that felt better than a flat "+N kb/s means +1 job"; in particular, 1.2 felt too "fast" (too many job bumps as tx speed increases), while sqrt(2) felt too slow.
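The mapping itself would be trivial; a sketch with the thresholds from the table hard-coded (again Python, names made up):

```python
# Minimum measured tx speed (kb/s) required for 2, 3, ..., 16 jobs,
# taken directly from the table above; below 384 kb/s we stay at 1 job.
THRESHOLDS_KBPS = [
    384, 512, 682, 910, 1213, 1618, 2157, 2876,
    3835, 5114, 6816, 9091, 12122, 16163, 21551,
]

def jobs_for_speed(tx_speed_kbps):
    """Map the measured transmission speed to the desired number of parallel jobs."""
    jobs = 1
    for threshold in THRESHOLDS_KBPS:
        if tx_speed_kbps < threshold:
            break
        jobs += 1
    return jobs  # 1..16
```

The thresholds could also be generated instead of hard-coded (each one ~4/3 of the previous, starting at 384), which would make the starting point and the growth factor easy to tweak.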
All of the numbers above (not just inside the table) could of course be tweaked.
If I'm not the only one who thinks this looks good, I could probably spend some time on an implementation soon-ish (no promises though).