useMultipartUpload

Upload large files in chunks with configurable part size, concurrency, progress tracking, and abort support.

useMultipartUpload splits a file into parts and uploads them in parallel using S3 multipart upload. It handles presigning, concurrency, progress aggregation, and automatic abort on failure.

Basic Usage

components/large-upload.tsx

import { storageClient } from "@/lib/storage-client";

const { useMultipartUpload } = storageClient;

function LargeUploadForm() {
  const { state, upload, abort, reset } = useMultipartUpload({
    onProgress: (progress) => {
      console.log(`${Math.round(progress * 100)}%`);
    },
    onSuccess: (result) => {
      console.log("Uploaded:", result.key, `(${result.totalParts} parts)`);
    },
  });

  const handleSubmit = async (file: File) => {
    await upload(file, { userId: "u_1" });
  };

  return (
    <div>
      {state.isLoading && (
        <>
          <p>Uploading: {Math.round(state.progress * 100)}%</p>
          <button onClick={abort}>Cancel</button>
        </>
      )}
      {state.status === "error" && <p>Error: {state.error?.message}</p>}
      {state.status === "success" && <p>Done: {state.data?.key}</p>}
    </div>
  );
}

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| partSize | number | 10 * 1024 * 1024 (10 MB) | Size of each upload part in bytes |
| concurrency | number | 4 | Number of parts uploaded in parallel |
| onProgress | (progress: number) => void | | Called with aggregated progress from 0 to 1 |
| onSuccess | (result: MultipartUploadResult) => void | | Called when the upload completes |
| onError | (error: StorageError) => void | | Called when the upload fails |
| throwOnError | boolean | | Override the client-level throwOnError setting |

State

| Field | Type | Description |
| --- | --- | --- |
| status | "idle" \| "loading" \| "success" \| "error" | Current upload status |
| isLoading | boolean | true while uploading |
| progress | number | Aggregated progress from 0 to 1 across all parts |
| data | MultipartUploadResult \| null | Upload result on success |
| error | StorageError \| null | Error details on failure |

Methods

| Method | Signature | Description |
| --- | --- | --- |
| upload | (file: File, metadata: T) => Promise\<void\> | Start the multipart upload. Metadata is typed from your server schema. |
| abort | () => void | Cancel the in-progress upload |
| reset | () => void | Reset state back to idle |

Upload Result

On success, state.data contains:

type MultipartUploadResult = {
  key: string;        // Generated S3 key
  uploadId: string;   // S3 multipart upload ID
  totalParts: number; // Number of parts uploaded
};

How It Works

  1. Validates the file locally (size, name, type).
  2. Calls /multipart/create to initiate the upload and receive a key and uploadId.
  3. Splits the file into parts based on partSize.
  4. Presigns parts in batches of 10 via /multipart/presign-parts.
  5. Uploads parts in parallel (controlled by concurrency).
  6. Tracks per-part progress and aggregates it into a single 0–1 value.
  7. Calls /multipart/complete with all part ETags to finalize the upload.

If any step fails, the upload is automatically aborted via /multipart/abort.
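Steps 3 and 5 above can be sketched as follows. This is an illustrative model, not the hook's internal implementation; the names splitIntoParts and runWithConcurrency are assumptions made for the example.

```typescript
// Sketch of step 3: split a file size into 1-indexed byte ranges.
type PartRange = { partNumber: number; start: number; end: number };

function splitIntoParts(fileSize: number, partSize: number): PartRange[] {
  const parts: PartRange[] = [];
  for (let start = 0, n = 1; start < fileSize; start += partSize, n++) {
    parts.push({
      partNumber: n,
      start,
      end: Math.min(start + partSize, fileSize), // last part may be smaller
    });
  }
  return parts;
}

// Sketch of step 5: run tasks with at most `limit` in flight at once.
// Each worker pulls the next task index until the queue is exhausted.
async function runWithConcurrency<T>(
  tasks: (() => Promise<T>)[],
  limit: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker)
  );
  return results;
}
```

With this shape, the hook would build one upload task per part (a PUT to that part's presigned URL) and hand the task list to the concurrency runner.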

Tuning Part Size and Concurrency

Adjust partSize and concurrency based on your use case:

const { state, upload, abort } = useMultipartUpload({
  partSize: 5 * 1024 * 1024,  // 5 MB parts
  concurrency: 6,              // 6 parallel uploads
});

Smaller parts improve progress granularity and allow faster retries on failure. Higher concurrency increases throughput but uses more bandwidth. The defaults (10 MB parts, 4 concurrent) work well for most cases.
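As a rough guide, the part count is what drives progress granularity. The helper below is illustrative arithmetic, not part of the library; note that S3 itself requires parts of at least 5 MB (except the last) and allows at most 10,000 parts per upload, which bounds how small partSize can usefully go.

```typescript
const MB = 1024 * 1024;

// Number of parts a file will be split into for a given part size.
function partCount(fileSize: number, partSize: number): number {
  return Math.ceil(fileSize / partSize);
}

// A 1 GB file with the default 10 MB parts:
partCount(1024 * MB, 10 * MB); // 103 parts (~1% progress steps)

// The same file with 5 MB parts doubles the granularity:
partCount(1024 * MB, 5 * MB); // 205 parts
```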

Abort and Cleanup

Call abort() to cancel an in-progress upload. The hook also aborts automatically when the component unmounts, preventing orphaned uploads.

When an upload is aborted or fails, the client sends a /multipart/abort request to clean up incomplete S3 state.
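Conceptually, this kind of cancellation can be modeled with a single AbortController whose signal is shared by every in-flight part request. The sketch below shows the mechanism only; it is an assumption about the shape of the internals, not the hook's actual code.

```typescript
// One controller governs all part uploads for a given file.
const controller = new AbortController();

// Each part PUT shares the same signal, so a single abort()
// rejects every pending fetch with an AbortError at once.
async function uploadPart(presignedUrl: string, body: Blob): Promise<Response> {
  return fetch(presignedUrl, { method: "PUT", body, signal: controller.signal });
}

// Cancelling the upload:
controller.abort();
controller.signal.aborted; // true
```

After the in-flight requests settle, the client can issue the /multipart/abort request so S3 discards the incomplete parts.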

Next Steps