For chunked uploads (and direct-to-S3 / Azure), chunkConcurrency controls how many parts stream at once. Higher values fill the pipe; lower values are kinder to shared infrastructure. This demo lets you change the value between uploads and watch the effect on throughput.
What you'll observe:
- 1 = strictly sequential (slowest, kindest to throttled servers).
- 4 = a sensible default for most connections.
- 8+ = best on high-bandwidth links, but watch for rate-limit 429s.
Configuration
AjaxUploader.create('#uploader', {
  chunked: true,
  chunkSize: 5 * 1024 * 1024, // 5 MiB per chunk
  chunkConcurrency: 4         // default 1; tune to taste
});
How parallel chunks work
- The file is sliced into N chunks (totalChunks = ceil(size / chunkSize)).
- A worker pool (size = chunkConcurrency) pulls chunks off the queue and uploads them independently.
- Per-chunk progress feeds into a shared chunkBytes map; the displayed progress is the sum across all chunks.
- Each chunk has its own retry budget with exponential backoff; a failed chunk is retried without stalling the others.
- On completion, the server assembles the chunks (for chunked) or the client commits the block list / part list (for s3 / azure).
Works for every multipart strategy
The same chunkConcurrency option applies to chunked, s3, and azure strategies. tus is protocol-sequential (one PATCH at a time) and ignores the setting.
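For concreteness, a hedged configuration sketch. The `strategy` option name and endpoint wiring here are assumptions made for illustration; only `chunked`, `chunkSize`, and `chunkConcurrency` come from the examples above.

```javascript
// Same knob regardless of where the parts land; 'strategy' is assumed here.
AjaxUploader.create('#uploader', {
  strategy: 's3',             // or 'chunked' / 'azure'
  chunked: true,
  chunkSize: 5 * 1024 * 1024,
  chunkConcurrency: 8         // parts upload in parallel; the part list is committed at the end
});

// With a tus strategy, chunkConcurrency would simply be ignored:
// the protocol sends one PATCH at a time against a single upload offset.
```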