The third member of the cloud-direct trio (alongside S3 and Azure). The browser uploads each chunk straight to a GCS resumable session URL via Content-Range headers. Bytes never traverse IIS — the WebForms handler only signs the initiate. Survives reloads via IndexedDB: the resumed task issues a zero-byte PUT to the session URL, reads Range: bytes=0-N from GCS's 308 response, and continues from byte N+1.
Resume behaviour: reload during an upload → the session URL is restored from IndexedDB → the client probes GCS with Content-Range: bytes */<total> → GCS returns 308 Resume Incomplete with the byte offset it has → uploading continues. No bytes re-sent.
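The offset arithmetic in that probe is the easiest part to get wrong: GCS's Range: bytes=0-N header reports the last byte it has committed, so the next chunk starts at N + 1, and a missing Range header means nothing was committed. A minimal sketch of the parsing (the function name is illustrative, not part of the AjaxUploader API):

```javascript
// Given the Range header from a 308 probe response (or null when GCS has
// committed nothing), return the byte offset the next chunk should start at.
function resumeOffset(rangeHeader) {
  if (!rangeHeader) return 0; // no Range header: start from byte 0
  const m = /^bytes=0-(\d+)$/.exec(rangeHeader.trim());
  if (!m) throw new Error('unexpected Range header: ' + rangeHeader);
  return Number(m[1]) + 1; // N is the last committed byte, so resume at N + 1
}
```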
Client config
AjaxUploader.create('#uploader', {
  uploadUrl: '/ajaxupload.axd/upload',
  strategy: 'gcs',
  chunkSize: 8 * 1024 * 1024, // default 8 MiB; rounded to 256 KiB grain
  persistState: true,
  persistAdapter: 'indexeddb'
});
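The 256 KiB grain matters because GCS rejects any non-final chunk whose size is not a multiple of 256 KiB. A sketch of the rounding, assuming a round-down-with-floor policy (illustrative, not the library's internal code):

```javascript
const GRAIN = 256 * 1024; // GCS requires non-final chunks in 256 KiB multiples

// Round a requested chunk size down to the nearest 256 KiB multiple,
// with a floor of one grain so very small values stay valid.
function roundChunkSize(requested) {
  return Math.max(GRAIN, Math.floor(requested / GRAIN) * GRAIN);
}
```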
Wire protocol
POST /ajaxupload.axd/gcs/initiate → server creates a GCS resumable session, returns { sessionUrl, key }
PUT <sessionUrl> → chunk bytes with Content-Range: bytes <start>-<end>/<total> (browser-direct)
308 Resume Incomplete on every chunk except the last (with Range: bytes=0-N, where N is the last byte GCS has committed)
200/201 OK on the final chunk — body is the GCS object metadata
POST /ajaxupload.axd/gcs/finalize → optional server-side bookkeeping
POST /ajaxupload.axd/gcs/abort on cancel / fatal error
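The Content-Range values in step 2 follow a strict format, and the final chunk is simply the one whose end byte is total - 1. A browser-side sketch of the chunk loop, assuming fetch and hypothetical helper names (this is not the library's code):

```javascript
// Content-Range value for a chunk spanning bytes [start, end] of `total`.
// GCS answers 308 for every chunk until end === total - 1.
function contentRange(start, end, total) {
  return `bytes ${start}-${end}/${total}`;
}

// A resume probe sends no body and declares the range as unknown.
function probeRange(total) {
  return `bytes */${total}`;
}

// PUT each chunk straight to the GCS session URL.
async function uploadChunks(sessionUrl, blob, chunkSize) {
  let offset = 0;
  while (offset < blob.size) {
    const end = Math.min(offset + chunkSize, blob.size) - 1;
    const res = await fetch(sessionUrl, {
      method: 'PUT',
      headers: { 'Content-Range': contentRange(offset, end, blob.size) },
      body: blob.slice(offset, end + 1),
    });
    // 308 = chunk accepted, more expected; 200/201 = final chunk committed.
    if (res.status !== 308 && !res.ok) {
      throw new Error('upload failed: ' + res.status);
    }
    offset = end + 1;
  }
}
```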
Server-side (Global.asax.cs)
Implement IGcsSigner against the official Google.Cloud.Storage.V1 SDK or sign manually with a service account. Same shape as IS3Signer / IAzureSigner:
using System;
using System.Configuration;
using AjaxUploader.Providers;

protected void Application_Start(object sender, EventArgs e)
{
    Gcs.Signer = new MyGcsSigner(
        bucketName: "your-bucket",
        serviceAccountKeyJson: ConfigurationManager.AppSettings["GCS:ServiceAccountKey"]);
}
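Whatever the signer looks like internally, the initiate step boils down to one authenticated POST against the GCS JSON API; the resumable session URL comes back in the Location header. A Node-flavoured sketch, assuming an OAuth bearer token is already available (the function names here are hypothetical):

```javascript
// Build the JSON API endpoint for starting a resumable upload.
function initiateUrl(bucket, objectName) {
  return 'https://storage.googleapis.com/upload/storage/v1/b/' +
    encodeURIComponent(bucket) +
    '/o?uploadType=resumable&name=' + encodeURIComponent(objectName);
}

// Create a resumable session and return the URL the browser will PUT to.
async function initiateResumable(bucket, objectName, contentType, accessToken) {
  const res = await fetch(initiateUrl(bucket, objectName), {
    method: 'POST',
    headers: {
      Authorization: 'Bearer ' + accessToken,
      'X-Upload-Content-Type': contentType, // declares the final object's type
    },
  });
  if (!res.ok) throw new Error('initiate failed: ' + res.status);
  return res.headers.get('location'); // the resumable session URL
}
```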
Bucket CORS
The browser PUTs directly to GCS, so the bucket must allow cross-origin PUT and expose the Range response header (used to read resume offsets):
[{
  "origin": ["https://ajaxuploader.com"],
  "method": ["PUT", "OPTIONS"],
  "responseHeader": ["Range", "Content-Range", "ETag"],
  "maxAgeSeconds": 3600
}]
Apply with gsutil cors set cors.json gs://your-bucket.
When to pick GCS over S3
- Cheaper egress to Google services (BigQuery, Vertex AI, Cloud Run) — same-region transfer is free.
- Simpler protocol — one resumable session per file vs. S3's multipart sequence (initiate / sign N parts / complete).
- Object versioning + soft delete are first-class on GCS buckets.
- S3-compatible APIs need not apply — for MinIO / R2 / B2, use strategy: 's3' instead.