Parallel chunk concurrency

Tune parallel chunk concurrency live

For chunked uploads (and the direct-to-S3 / Azure strategies), chunkConcurrency controls how many parts stream at once. Higher values fill the pipe; lower values are kinder to shared infrastructure. This demo lets you change the value between uploads and watch the effect on throughput.
Changes apply on the next upload. Range 1–8; in production, tune against your bucket's 5xx rate.

Configuration
AjaxUploader.create('#uploader', {
    chunked: true,
    chunkSize: 5 * 1024 * 1024,
    chunkConcurrency: 4               // default 1; tune to taste
});
How parallel chunks work
  • The file is sliced into N chunks (totalChunks = ceil(size / chunkSize)).
  • A worker pool (size = chunkConcurrency) pulls chunks off the queue and uploads them independently.
  • Per-chunk progress feeds into a shared chunkBytes map; the displayed progress is the sum across all chunks.
  • Each chunk has its own retry budget with exponential backoff; a failed chunk is retried without stalling the others.
  • On completion, the server assembles chunks (for chunked) or the client commits the block list / part list (for s3 / azure).
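The pool described above can be sketched in plain JavaScript. This is illustrative only: uploadInParallel, the mocked uploadChunk callback, and the retry budget of 3 are assumptions for the sketch, not the library's actual internals.

```javascript
// Worker-pool sketch of parallel chunk uploads. `uploadChunk` stands in
// for the real network call; it receives (index, start, end).
async function uploadInParallel(fileSize, chunkSize, concurrency, uploadChunk) {
  const totalChunks = Math.ceil(fileSize / chunkSize);
  const queue = Array.from({ length: totalChunks }, (_, i) => i);
  const chunkBytes = new Map();            // shared per-chunk byte counts

  async function worker() {
    while (queue.length) {
      const index = queue.shift();         // pull the next chunk off the queue
      const start = index * chunkSize;
      const end = Math.min(start + chunkSize, fileSize);
      // Each chunk has its own retry budget with exponential backoff;
      // a failure here never stalls the other workers.
      for (let attempt = 0; ; attempt++) {
        try {
          await uploadChunk(index, start, end);
          chunkBytes.set(index, end - start);
          break;
        } catch (err) {
          if (attempt >= 3) throw err;     // budget exhausted
          await new Promise(r => setTimeout(r, 2 ** attempt * 100));
        }
      }
    }
  }

  // Pool of `concurrency` workers draining the same queue.
  await Promise.all(Array.from({ length: concurrency }, worker));

  // Displayed progress is the sum across all chunks.
  return [...chunkBytes.values()].reduce((a, b) => a + b, 0) / fileSize;
}
```

Note that a pool larger than the number of chunks simply leaves the extra workers idle; with a 12 MiB file and 5 MiB chunks, only three of four workers get a chunk.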
Works for every multipart strategy

The same chunkConcurrency option applies to the chunked, s3, and azure strategies. tus is protocol-sequential (one PATCH at a time), so it ignores the setting.
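For the s3 strategy, the client-side commit is S3's CompleteMultipartUpload request. A sketch of building its body (the helper name is hypothetical; the XML shape and the ascending-PartNumber requirement come from S3's API): parts may finish uploading in any order, which is exactly what makes parallel parts safe, but the commit must list them in order.

```javascript
// Build a CompleteMultipartUpload body from parts collected as each
// parallel upload finished. Upload order doesn't matter; the committed
// PartNumber order must be ascending.
function completeMultipartBody(parts) {
  // parts: [{ partNumber, etag }]
  const entries = parts
    .slice()
    .sort((a, b) => a.partNumber - b.partNumber)
    .map(p =>
      `<Part><PartNumber>${p.partNumber}</PartNumber><ETag>${p.etag}</ETag></Part>`)
    .join('');
  return `<CompleteMultipartUpload>${entries}</CompleteMultipartUpload>`;
}
```

Azure's block-list commit (Put Block List) follows the same pattern: parallel Put Block calls, then one ordered commit.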