chore: setup changesets for automated versioning and npm publishing (#128)
Conversation
…4brym#128) Replaces tag-based publishing with changesets workflow. Adds publint for package validation, changelog-github plugin for PR-linked changelogs, and npm trusted publishing via OIDC provenance. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: use authenticated fetch for context menu downloads

When basic auth is enabled, the downloadObject method in FileContextMenu created a direct <a> link to the API endpoint. Browser-initiated downloads via <a> links do not include custom Authorization headers, causing 401 errors (especially in Chrome). This changes the download to use apiHandler.downloadFile(), which goes through the Axios instance with proper auth headers already configured, then creates a temporary blob URL for the browser download. Fixes G4brym#63

* Add changeset for basic auth download fix

Co-authored-by: Debian <openclaw@openclaw.1.1.1.1>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
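The blob-URL download flow described in this fix can be sketched as follows. This is a simplified sketch, not the actual dashboard code: `fetchFile` stands in for `apiHandler.downloadFile`, and the anchor factory is injected so the logic can run outside a browser.

```typescript
// Sketch of the authenticated download flow (assumed names): fetch bytes
// through the auth-configured HTTP client, then trigger the browser download
// via a temporary blob: URL instead of a bare <a href> to the API endpoint.
type FetchFile = (bucket: string, key: string) => Promise<Blob>;

interface AnchorLike {
  href: string;
  download: string;
  click(): void;
}

async function downloadWithAuth(
  fetchFile: FetchFile,
  bucket: string,
  key: string,
  // In the dashboard this would be () => document.createElement("a");
  // injected here so the flow is testable without a DOM.
  makeAnchor: () => AnchorLike,
): Promise<string> {
  const blob = await fetchFile(bucket, key); // Authorization header rides along
  const url = URL.createObjectURL(blob); // temporary blob: URL
  const a = makeAnchor();
  a.href = url;
  a.download = key.split("/").pop() ?? key; // leaf name becomes the filename
  a.click();
  URL.revokeObjectURL(url); // release the blob once the download has started
  return a.download;
}
```

Because the bytes arrive via the API client, the same request path (and auth headers) used by the rest of the dashboard applies to downloads as well.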
Ensures every PR includes a changeset file, preventing merges that skip versioning. Runs `changeset status --since=origin/main` on all PRs. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Updates the engines field in the dashboard package.json from "^20 || ^18 || ^16 || ^14.19" to "^24 || ^22 || ^20 || ^18", fixing the publish CI which uses Node 24. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add missing publint script to package.json

The release workflow runs `pnpm --filter r2-explorer publint` but the package had no "publint" script defined, causing CI to fail with ERR_PNPM_RECURSIVE_RUN_NO_SCRIPT.

* chore: add empty changeset

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* test: add comprehensive dashboard UI testing infrastructure

Add Vitest component tests (110 tests across 13 files) covering utilities, Pinia stores, Vue components, and router configuration. Add Playwright E2E test scaffolding and integrate dashboard tests into the CI pipeline.

* chore: add empty changeset for testing infrastructure

* ci: add Playwright E2E tests to GitHub Actions

Add a separate CI job that builds the dashboard, installs Chromium, and runs Playwright E2E tests against a local wrangler dev server. Includes a dedicated E2E wrangler config without auth for CI compatibility.

* fix: make E2E tests work by using worker dev directory for deps

Move the E2E worker entry and wrangler config to packages/worker/dev/ so worker dependencies (hono, chanfana, zod) resolve correctly. Fix the app-loads test to use getByRole for strict element matching. Remove login E2E tests since auth is not configured in the E2E environment.

* test: add comprehensive E2E tests covering all dashboard features

Add 35 Playwright E2E tests across 10 spec files covering:
- File operations: create folder/file, delete, rename
- File preview: text, JSON, markdown with filename header
- File browsing: breadcrumbs, table columns, sorting
- Navigation: folder drill-down, breadcrumb back, sidebar
- Context menu: file vs folder menu items, open actions
- Search: prefix search and clear
- Metadata: open dialog, add custom metadata
- Share links: create dialog, generate link, manage shares
- Email: list with sender/subject, detail view, navigation

Also adds API helpers (helpers.ts) for seeding test data via the worker upload/folder/delete endpoints.

* docs: add testing requirements to CLAUDE.md

Require E2E test coverage for any UI changes and document test commands for component and E2E tests.

* test: expand E2E coverage with upload, download, preview, email, and share link tests

Add 11 new E2E tests covering file upload/download, CSV/HTML preview, file edit/save/cancel, email body content, mark read/unread, share link options (expiration/password/max downloads), the manage dialog, and revocation. Fix strict mode violations in the app-loads and share-links tests. Set workers to 1 to prevent parallel race conditions on the shared R2 bucket.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…m#134)

* chore: include CHANGELOG.md and markdown files in docs package

Add a "files" field to docs package.json to ensure CHANGELOG.md and all .md files are included in the published package.

* fix: include CHANGELOG.md in worker package, revert docs change

Add CHANGELOG.md to the worker package "files" field so it's included in the published npm package. Revert the mistaken docs package change.

* chore: copy CHANGELOG.md and docs markdown files into worker package

Update the worker build script to copy CHANGELOG.md from root and all docs markdown files into a docs/ directory, so they are bundled in the published npm package. Add the generated paths to .gitignore.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This reverts commit 4ac7229.
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Mark docs-new and r2-explorer-dev-worker as private to prevent changesets from attempting to publish them to npm. Also add r2-explorer-dev-worker to the changeset ignore list. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…G4brym#144) The `c.header("Access-Control-Allow-Credentials", "asads")` line in the ListObjects handler sets an invalid value for the header (valid values are "true" or the header should be omitted). This appears to be a debug leftover. CORS is already properly handled by the hono cors() middleware when enabled via config. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…#149) Wrap base64 + JSON decoding of customMetadata and httpMetadata query parameters in try-catch blocks for PutObject and CreateUpload endpoints. Previously, malformed input would crash with an unhandled exception returning a 500 error. Now returns a descriptive 400 error instead. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
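The guarded decode can be sketched like this (hypothetical helper name; in the worker the catch would throw an `HTTPException(400, …)` rather than a plain `Error`):

```typescript
// Sketch (assumed helper name): decode a base64-encoded JSON query parameter,
// mapping any malformed input to a descriptive error instead of an unhandled
// exception that would surface as a 500.
function decodeMetadataParam(
  name: string,
  encoded: string,
): Record<string, string> {
  try {
    // Same base64 -> UTF-8 -> JSON decoding chain the endpoints use.
    return JSON.parse(decodeURIComponent(escape(atob(encoded))));
  } catch {
    throw new Error(`Invalid ${name}: expected base64-encoded JSON`);
  }
}
```

A single try/catch covers all three failure modes: invalid base64 (`atob` throws), bytes that are not UTF-8 (`decodeURIComponent` throws), and broken JSON (`JSON.parse` throws).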
If no R2 bucket is configured via emailRouting.targetBucket and no bucket binding exists in the environment, the email handler would crash with a TypeError when calling bucket.put() on undefined. This adds a guard that throws a descriptive error message instead. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
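A minimal sketch of such a guard (names and the fallback-to-first-binding behavior are assumptions, not the project's verbatim code):

```typescript
// Sketch (assumed shapes): resolve the bucket that should receive inbound
// email, failing with a descriptive error instead of letting a later
// bucket.put() call crash with a TypeError on undefined.
function resolveEmailBucket<T>(
  bindings: Record<string, T>,
  targetBucket?: string,
): T {
  const bucket = targetBucket
    ? bindings[targetBucket]
    : Object.values(bindings)[0];
  if (bucket === undefined) {
    throw new Error(
      "Email received but no R2 bucket is available: configure emailRouting.targetBucket or add a bucket binding",
    );
  }
  return bucket;
}
```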
…G4brym#147)

- Fix OpenAPI summary in HeadObject from "Get Object" to "Head Object"
- Add missing third-level fallback for base64 key decoding in HeadObject and PutMetadata to match GetObject's decoding logic, preventing potential failures with certain encoded file keys

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
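The three-level fallback can be sketched as follows (a sketch of the decoding pattern, not the exact worker code):

```typescript
// Sketch: try strict base64 + UTF-8 decoding, then plain base64, then fall
// back to treating the value as an already-decoded key. This mirrors the
// three-level pattern the PR copies from GetObject.
function decodeKeyWithFallback(encoded: string): string {
  try {
    return decodeURIComponent(escape(atob(encoded))); // base64 -> UTF-8 text
  } catch {
    try {
      return atob(encoded); // valid base64, but the bytes are not UTF-8
    } catch {
      return encoded; // not base64 at all: use the raw value
    }
  }
}
```

The second level only fires when `atob` succeeds but the decoded bytes are not a valid UTF-8 sequence; the third fires when the value is not base64 at all.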
…eck (G4brym#146) The download counter was incremented before verifying the shared file still exists in the bucket. If the file was deleted, accessing the share link would increment the counter and eventually exhaust the download limit without any actual downloads. Move file retrieval before counter increment so the count only increases on successful downloads. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
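The corrected ordering can be sketched with simplified types (this is an illustration of the check order, not the actual worker handler):

```typescript
// Sketch: fetch the shared object first, and only persist an incremented
// counter once the object is known to exist, so deleted files can no longer
// burn down the download limit.
interface ShareMetadata {
  key: string;
  currentDownloads: number;
  maxDownloads?: number;
}

interface BucketLike {
  get(key: string): Promise<unknown | null>;
}

async function serveSharedFile(
  bucket: BucketLike,
  meta: ShareMetadata,
  save: (m: ShareMetadata) => Promise<void>,
): Promise<{ status: number; body?: unknown }> {
  if (meta.maxDownloads !== undefined && meta.currentDownloads >= meta.maxDownloads) {
    return { status: 410 }; // link exhausted
  }
  const file = await bucket.get(meta.key);
  if (file === null) {
    // File was deleted after the link was created: do NOT consume a download.
    return { status: 404 };
  }
  meta.currentDownloads += 1; // counted only after a successful retrieval
  await save(meta);
  return { status: 200, body: file };
}
```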
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 6 to 7.

- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](actions/upload-artifact@v6...v7)

updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

Bumps [actions/setup-node](https://github.com/actions/setup-node) from 4 to 6.

- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](actions/setup-node@v4...v6)

updated-dependencies:
- dependency-name: actions/setup-node
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 6.

- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](actions/checkout@v4...v6)

updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Fix Content-Disposition header injection in GetObject

The filename in the Content-Disposition header was interpolated directly without sanitization. Filenames containing double quotes could break header parsing or enable header injection. This fix:

- Strips non-ASCII characters and replaces double quotes in the ASCII `filename` parameter for compatibility
- Adds an RFC 5987 `filename*` parameter with proper UTF-8 percent encoding, matching the approach used in getShareLink.ts

* Fix CI: update test for new Content-Disposition format and add changeset

- Update the GetObject test to expect the sanitized filename with the RFC 5987 `filename*` parameter added by the header injection fix
- Add the required changeset for the patch release

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
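The header construction can be sketched like this. It is one plausible sanitization under the rules described above; the exact replacement characters are an assumption, not the project's verbatim code:

```typescript
// Sketch: keep the plain `filename` parameter strictly ASCII and quote-free,
// and carry the exact original name in the RFC 5987 `filename*` parameter.
function buildContentDisposition(filename: string): string {
  const ascii = filename
    .replace(/"/g, "'") // double quotes would break or inject into the header
    .replace(/[^\x20-\x7e]/g, ""); // drop non-ASCII for the legacy parameter
  const utf8 = encodeURIComponent(filename); // RFC 5987 percent-encoded UTF-8
  return `attachment; filename="${ascii}"; filename*=UTF-8''${utf8}`;
}
```

Modern browsers prefer `filename*` and so still see the original UTF-8 name; the sanitized `filename` is only a fallback for older clients.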
- Add a ShareMetadata interface to types.d.ts for share link metadata
- Replace the `any` type in getShareLink.ts with ShareMetadata
- Type parsed JSON in listShares.ts with ShareMetadata
- Type the share metadata object in createShareLink.ts with ShareMetadata
- Fix readOnlyMiddleware to use Hono's Next type instead of CallableFunction
Add a "Duplicate" context menu option that copies files and folders within the same bucket, appending " (copy)" to the name while preserving all metadata.

- New CopyObject backend endpoint (POST /api/buckets/:bucket/copy)
- Frontend duplicate logic with progress tracking for folder duplication
- Backend integration tests (6 tests) and an E2E test for file duplication

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
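The " (copy)" naming rule might look like the following sketch. The helper name is taken from the PR's sequence diagram, but the edge-case handling here is an assumption:

```typescript
// Sketch of generateCopyName: insert " (copy)" before a file's extension,
// or before a folder's trailing slash, leaving the parent path untouched.
function generateCopyName(key: string, isFolder: boolean): string {
  if (isFolder) {
    const trimmed = key.endsWith("/") ? key.slice(0, -1) : key;
    return `${trimmed} (copy)/`;
  }
  const dot = key.lastIndexOf(".");
  const slash = key.lastIndexOf("/");
  if (dot > slash) {
    // "docs/report.pdf" -> "docs/report (copy).pdf"
    return `${key.slice(0, dot)} (copy)${key.slice(dot)}`;
  }
  return `${key} (copy)`; // extension-less file
}
```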
…tants

Replaces the minimal 20-line CLAUDE.md with a full reference covering repository structure, all development commands, package details, API routes, build pipeline, code style conventions, important file locations, and share link implementation notes.

https://claude.ai/code/session_018aqegB3K211sW1c1xf8Eux
Reviewer's Guide

Replaces tag-triggered npm publishing with a Changesets-based release workflow (including publint validation and GitHub releases), adds comprehensive dashboard unit/E2E testing, and introduces several backend/frontend features and robustness fixes such as object copy/duplicate, safer metadata parsing, improved share link typing/behavior, and more resilient key decoding and email handling.

Sequence diagram for file and folder duplicate/copy operations

sequenceDiagram
actor User
participant FilesFolderPage
participant FileContextMenu
participant FileOptions
participant apiHandler
participant WorkerAPI
participant R2Bucket
User->>FileContextMenu: Right-click row
User->>FileContextMenu: Click Duplicate
FileContextMenu-->>FilesFolderPage: emit duplicateObject(row)
FilesFolderPage-->>FileOptions: call duplicateObject(row)
alt row.type == folder
FileOptions->>FileOptions: generateCopyName(row.key, true)
FileOptions->>apiHandler: fetchFile(selectedBucket, row.key, "")
apiHandler->>WorkerAPI: GET /api/buckets/:bucket?prefix=...
WorkerAPI->>R2Bucket: list(prefix)
R2Bucket-->>WorkerAPI: folderContents
WorkerAPI-->>apiHandler: folderContents
apiHandler-->>FileOptions: folderContents
FileOptions->>apiHandler: createFolder(destPrefix, selectedBucket)
apiHandler->>WorkerAPI: POST /api/buckets/:bucket/folder
WorkerAPI->>R2Bucket: put(destPrefix)
R2Bucket-->>WorkerAPI: ok
loop each innerFile
FileOptions->>FileOptions: newKey = innerFile.key.replace(sourcePrefix, destPrefix)
FileOptions->>apiHandler: copyObject(selectedBucket, innerFile.key, newKey)
apiHandler->>WorkerAPI: POST /api/buckets/:bucket/copy { sourceKey, destinationKey }
WorkerAPI->>R2Bucket: get(sourceKey)
R2Bucket-->>WorkerAPI: object
WorkerAPI->>R2Bucket: put(destinationKey, body, metadata)
R2Bucket-->>WorkerAPI: ok
WorkerAPI-->>apiHandler: ok
apiHandler-->>FileOptions: ok
FileOptions->>FileOptions: update progress notification
end
FileOptions->>FileOptions: show "Folder duplicated" notification
else row.type == file
FileOptions->>FileOptions: destKey = generateCopyName(row.key, false)
FileOptions->>apiHandler: copyObject(selectedBucket, row.key, destKey)
apiHandler->>WorkerAPI: POST /api/buckets/:bucket/copy { sourceKey, destinationKey }
WorkerAPI->>R2Bucket: get(sourceKey)
R2Bucket-->>WorkerAPI: object
WorkerAPI->>R2Bucket: put(destKey, body, metadata)
R2Bucket-->>WorkerAPI: ok
WorkerAPI-->>apiHandler: ok
apiHandler-->>FileOptions: ok
FileOptions->>FileOptions: show "File duplicated" notification
end
FileOptions->>FilesFolderPage: $bus.emit(fetchFiles)
FilesFolderPage->>WorkerAPI: refresh listing
WorkerAPI-->>FilesFolderPage: updated objects
FilesFolderPage-->>User: updated file list with copy
Sequence diagram for safer object download and share link access

sequenceDiagram
actor User
participant Dashboard
participant apiHandler
participant WorkerAPI
participant R2Bucket
User->>Dashboard: Click Download in context menu
Dashboard->>apiHandler: downloadFile(bucket, key, { downloadType: objectUrl })
apiHandler->>WorkerAPI: GET /api/buckets/:bucket/:key
WorkerAPI->>R2Bucket: get(filePath)
R2Bucket-->>WorkerAPI: object
WorkerAPI->>WorkerAPI: build headers
WorkerAPI->>WorkerAPI: sanitize asciiFileName
WorkerAPI->>WorkerAPI: set Content-Disposition with ascii and UTF-8 filename*
WorkerAPI-->>apiHandler: Response with body and headers
apiHandler-->>Dashboard: data
Dashboard->>Dashboard: create Blob and object URL
Dashboard->>Dashboard: trigger anchor click
Dashboard-->>User: File download starts
Note over User,WorkerAPI: Public share link access
User->>WorkerAPI: GET /share/:shareId
WorkerAPI->>R2Bucket: search .r2-explorer/sharable-links/:shareId.json across buckets
R2Bucket-->>WorkerAPI: share metadata object
WorkerAPI->>WorkerAPI: parse JSON as ShareMetadata
WorkerAPI->>R2Bucket: get(shareMetadata.key)
alt file exists
R2Bucket-->>WorkerAPI: file
WorkerAPI->>WorkerAPI: increment currentDownloads
WorkerAPI->>R2Bucket: put updated ShareMetadata
WorkerAPI-->>User: file response
else file missing
R2Bucket-->>WorkerAPI: null
WorkerAPI-->>User: 404 Shared file not found
end
Class diagram for updated bucket operations and share metadata types

classDiagram
class OpenAPIRoute {
<<abstract>>
+schema any
+getValidatedData(schemaType)
+handle(c)
}
class ShareMetadata {
+string bucket
+string key
+number expiresAt
+string passwordHash
+number maxDownloads
+number currentDownloads
+string createdBy
+number createdAt
}
class CreateShareLink {
+schema
+handle(c: AppContext)
}
class GetShareLink {
+schema
+handle(c: AppContext)
}
class ListShares {
+schema
+handle(c: AppContext)
}
class CopyObject {
+schema
+handle(c: AppContext)
}
class GetObject {
+schema
+handle(c: AppContext)
}
class readOnlyMiddleware {
+readOnlyMiddleware(c: AppContext, next: Next)
}
OpenAPIRoute <|-- CreateShareLink
OpenAPIRoute <|-- GetShareLink
OpenAPIRoute <|-- ListShares
OpenAPIRoute <|-- CopyObject
OpenAPIRoute <|-- GetObject
CreateShareLink --> ShareMetadata : creates
GetShareLink --> ShareMetadata : parses and updates
ListShares --> ShareMetadata : parses list
class AppContext {
+env Record~string, R2Bucket~
+req any
+header(name: string, value: string)
+get(key: string) any
}
class R2Bucket {
+get(key: string) Promise~R2Object~
+put(key: string, body: any, options: R2PutOptions) Promise~R2Object~
+list(options: any) Promise~R2Objects~
+head(key: string) Promise~R2Object~
+createMultipartUpload(key: string, options: any)
}
CopyObject --> AppContext : uses
CopyObject --> R2Bucket : get, put
GetObject --> R2Bucket : get
class R2Object {
+any body
+number size
+string httpEtag
+Record~string, string~ customMetadata
+R2HTTPMetadata httpMetadata
+writeHttpMetadata(headers: Headers)
}
R2Bucket --> R2Object
class R2HTTPMetadata {
+string contentType
+string contentLanguage
+string contentEncoding
+string cacheControl
}
class Next {
+call()
}
readOnlyMiddleware --> AppContext : reads config
readOnlyMiddleware --> Next : invokes
Hey - I've found 5 issues, and left some high level feedback:
- In the new CopyObject route, the base64 decoding of `sourceKey`/`destinationKey` will throw on malformed input and surface as a 500; consider wrapping the decode in a try/catch and returning a 400 with a clear error message (similar to the custom/http metadata handlers), and returning a JSON response instead of the raw `bucket.put` result for consistency with other bucket endpoints.
- The `duplicateObject` folder logic in `FileOptions.vue` performs a sequential `copyObject` for every item without any per-file error handling or cancellation; for large folders this could be slow and brittle, so it may be worth adding basic error reporting (e.g., stop and notify on first failure) or batching/parallelizing copies while keeping the progress UI accurate.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In the new CopyObject route, the base64 decoding of `sourceKey`/`destinationKey` will throw on malformed input and surface as a 500; consider wrapping the decode in a try/catch and returning a 400 with a clear error message (similar to the custom/http metadata handlers) and returning a JSON response instead of the raw `bucket.put` result for consistency with other bucket endpoints.
- The `duplicateObject` folder logic in `FileOptions.vue` performs a sequential `copyObject` for every item without any per-file error handling or cancellation; for large folders this could be slow and brittle, so it may be worth adding basic error reporting (e.g., stop and notify on first failure) or batching/parallelizing copies while keeping the progress UI accurate.
## Individual Comments
### Comment 1
<location path="packages/dashboard/e2e/helpers.ts" line_range="7-8" />
<code_context>
+const BUCKET = "MY_BUCKET";
+
+/** Base64-encode a key the same way the dashboard does. */
+function encodeKey(key: string): string {
+ return btoa(unescape(encodeURIComponent(key)));
+}
+
</code_context>
<issue_to_address>
**issue (bug_risk):** encodeKey relies on browser-only globals (btoa/unescape) in a Node/Playwright context
In the Playwright test runner (Node), `btoa` and `unescape` are undefined, so this will throw at runtime in e2e tests. Please switch to a Node-safe base64 implementation, e.g.:
```ts
function encodeKey(key: string): string {
return Buffer.from(key, 'utf8').toString('base64');
}
```
Apply the same change to other `btoa`/`unescape` usages in this file (such as when encoding custom metadata in `seedEmail`).
</issue_to_address>
### Comment 2
<location path="packages/dashboard/e2e/helpers.ts" line_range="117-118" />
<code_context>
+ };
+
+ const encodedKey = encodeKey(key);
+ const encodedMeta = btoa(
+ unescape(encodeURIComponent(JSON.stringify(customMeta))),
+ );
+
</code_context>
<issue_to_address>
**issue (bug_risk):** Encoding customMetadata also uses browser-only globals and may break in Node
As with `encodeKey`, this `btoa` + `unescape` usage will fail in the Node-based Playwright runner. Consider a Node-safe alternative:
```ts
const encodedMeta = Buffer
.from(JSON.stringify(customMeta), 'utf8')
.toString('base64');
```
This preserves behavior without relying on browser globals.
</issue_to_address>
### Comment 3
<location path="packages/worker/src/modules/buckets/copyObject.ts" line_range="42-45" />
<code_context>
+ });
+ }
+
+ const sourceKey = decodeURIComponent(escape(atob(data.body.sourceKey)));
+ const destinationKey = decodeURIComponent(
+ escape(atob(data.body.destinationKey)),
+ );
</code_context>
<issue_to_address>
**suggestion:** Decoding of sourceKey/destinationKey lacks error handling for malformed base64/encoding
If these values aren’t valid base64 or in the expected format, `atob`/`decodeURIComponent` will throw and currently cause a 500. To align with `putObject`/`createUpload`, consider wrapping the decoding in a try/catch and returning a 400 with a clear error message (e.g. `Invalid sourceKey: expected base64-encoded key`).
```suggestion
let sourceKey: string;
try {
sourceKey = decodeURIComponent(escape(atob(data.body.sourceKey)));
} catch {
throw new HTTPException(400, {
message: "Invalid sourceKey: expected base64-encoded key",
});
}
let destinationKey: string;
try {
destinationKey = decodeURIComponent(
escape(atob(data.body.destinationKey)),
);
} catch {
throw new HTTPException(400, {
message: "Invalid destinationKey: expected base64-encoded key",
});
}
```
</issue_to_address>
### Comment 4
<location path="packages/worker/tests/integration/multipart.test.ts" line_range="132-128" />
<code_context>
+ expect(body).toContain("Invalid customMetadata");
+ });
+
+ it("should return 400 for malformed httpMetadata", async () => {
+ if (!MY_TEST_BUCKET_1)
+ throw new Error("MY_TEST_BUCKET_1 binding not available");
+
+ const objectKey = "bad-metadata.txt";
+ const base64ObjectKey = btoa(objectKey);
+ const blobBody = new Blob(["content"], {
+ type: "application/octet-stream",
+ });
+
+ const badHttpMetadata = btoa("{broken json");
+
+ const request = createTestRequest(
+ `/api/buckets/MY_TEST_BUCKET_1/upload?key=${encodeURIComponent(base64ObjectKey)}&httpMetadata=${encodeURIComponent(badHttpMetadata)}`,
+ "POST",
+ blobBody,
+ { "Content-Type": "application/octet-stream" },
+ );
+
+ const response = await app.fetch(request, env, createExecutionContext());
+ expect(response.status).toBe(400);
+ const body = await response.text();
+ expect(body).toContain("Invalid httpMetadata");
+ });
</code_context>
<issue_to_address>
**suggestion (testing):** Add a multipart test for non–base64 httpMetadata to complete malformed-input coverage.
Similar to `customMetadata`, please add a multipart test where `httpMetadata` is not base64 at all, so multipart coverage includes both invalid base64 and invalid JSON cases and fully exercises the new validation logic for malformed inputs.
Suggested implementation:
```typescript
expect(body).toContain("Invalid customMetadata");
});
it("should return 400 for non-base64 httpMetadata in multipart upload", async () => {
if (!MY_TEST_BUCKET_1)
throw new Error("MY_TEST_BUCKET_1 binding not available");
const objectKey = "bad-multipart-http-metadata.txt";
const formData = new FormData();
formData.append("key", objectKey);
formData.append(
"file",
new Blob(["content"], {
type: "application/octet-stream",
}),
objectKey,
);
// httpMetadata is not base64 at all
formData.append("httpMetadata", "not-base64-http-metadata");
const request = createTestRequest(
`/api/buckets/MY_TEST_BUCKET_1/upload`,
"POST",
formData,
);
const response = await app.fetch(request, env, createExecutionContext());
expect(response.status).toBe(400);
const body = await response.text();
expect(body).toContain("Invalid httpMetadata");
});
it("should return 400 for malformed customMetadata", async () => {
```
1. Ensure that `FormData` and `Blob` are available in the test environment. If not globally available, you may need to import or polyfill them at the top of the test file (for example via `undici` or `@cloudflare/workers-types` setup).
2. If `createTestRequest` requires explicit headers for multipart requests in your codebase, you may need to add a `headers` argument with the appropriate `Content-Type` including the multipart boundary, or adapt the helper to accept a `FormData` body and set headers automatically (if it doesn’t already).
3. If there is an existing helper for building multipart upload requests (e.g. `createMultipartUploadRequest`), prefer using that helper instead of constructing the `FormData` directly, mirroring how other multipart tests are written in this file.
</issue_to_address>
### Comment 5
<location path="packages/worker/tests/integration/object.test.ts" line_range="150-151" />
<code_context>
expect(response.headers.has("etag")).toBe(true);
- // Default R2 GetObject includes Content-Disposition: attachment; filename="key"
- expect(response.headers.get("content-disposition")).toBe(`attachment; filename="${TEST_OBJECT_KEY}"`);
+ // Default R2 GetObject includes Content-Disposition with sanitized filename and RFC 5987 filename*
+ expect(response.headers.get("content-disposition")).toBe(`attachment; filename="${TEST_OBJECT_KEY}"; filename*=UTF-8''${encodeURIComponent(TEST_OBJECT_KEY)}`);
const body = await response.text();
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding a dedicated test for non-ASCII and quote-containing filenames to validate sanitization logic.
Since the code now sanitizes non-ASCII and double-quote characters, please add a test where the key includes such characters (e.g. `file-ä".txt`) and assert that:
- `filename` shows the sanitized form (non-ASCII replaced, quotes normalized), and
- `filename*` preserves the original name via UTF-8 encoding.
This will ensure the new behavior is covered and validated.
Suggested implementation:
```typescript
expect(response.headers.get("content-type")).toBe(TEST_OBJECT_CONTENT_TYPE);
expect(response.headers.get("content-length")).toBe(TEST_OBJECT_CONTENT.length.toString());
expect(response.headers.has("etag")).toBe(true);
// Default R2 GetObject includes Content-Disposition with sanitized filename and RFC 5987 filename*
expect(response.headers.get("content-disposition")).toBe(`attachment; filename="${TEST_OBJECT_KEY}"; filename*=UTF-8''${encodeURIComponent(TEST_OBJECT_KEY)}`);
const body = await response.text();
expect(body).toBe(TEST_OBJECT_CONTENT);
```
Please add the following new test case in the same `describe` block that contains the existing R2 `GetObject` test (typically close to the test that uses `TEST_OBJECT_KEY`):
```ts
it("returns sanitized Content-Disposition for non-ASCII and quoted filenames", async () => {
const SPECIAL_KEY = 'file-ä".txt';
const SPECIAL_CONTENT = "special content";
const SPECIAL_CONTENT_TYPE = "text/plain";
// Arrange: upload an object with non-ASCII and quote-containing key
// Adjust this part to match your existing helpers for putting objects into R2.
// For example, if you already have a helper like `putObject(TEST_OBJECT_KEY, TEST_OBJECT_CONTENT, TEST_OBJECT_CONTENT_TYPE)`,
// call it here instead of inlining the logic.
await fetch(`http://127.0.0.1:${port}/r2/${encodeURIComponent(SPECIAL_KEY)}`, {
method: "PUT",
headers: {
"content-type": SPECIAL_CONTENT_TYPE,
"content-length": SPECIAL_CONTENT.length.toString(),
},
body: SPECIAL_CONTENT,
});
// Act: retrieve the object
const response = await fetch(`http://127.0.0.1:${port}/r2/${encodeURIComponent(SPECIAL_KEY)}`);
expect(response.status).toBe(200);
expect(response.headers.get("content-type")).toBe(SPECIAL_CONTENT_TYPE);
const contentDisposition = response.headers.get("content-disposition");
expect(contentDisposition).not.toBeNull();
// Example header value:
// attachment; filename="file-a'.txt"; filename*=UTF-8''file-%C3%A4%22.txt
// We don't assert the exact sanitized string to avoid coupling to the implementation,
// but we do assert on the properties we care about.
// Extract filename and filename* components
const filenameMatch = contentDisposition!.match(/filename="([^"]+)"/);
const filenameStarMatch = contentDisposition!.match(/filename\*\=UTF-8''([^;]+)$/);
expect(filenameMatch).not.toBeNull();
expect(filenameStarMatch).not.toBeNull();
const sanitizedFilename = filenameMatch![1];
const encodedFilenameStar = filenameStarMatch![1];
// Assert: `filename` is sanitized:
// - non-ASCII characters are removed/replaced
// - double quotes are normalized/removed
expect(sanitizedFilename).not.toContain("ä");
expect(sanitizedFilename).not.toContain('"');
// It should still generally resemble the original name (same base/extension)
expect(sanitizedFilename).toContain("file");
expect(sanitizedFilename).toContain(".txt");
// Assert: `filename*` preserves the original filename (RFC 5987, UTF-8 encoded)
expect(encodedFilenameStar).toBe(encodeURIComponent(SPECIAL_KEY));
expect(decodeURIComponent(encodedFilenameStar)).toBe(SPECIAL_KEY);
const body = await response.text();
expect(body).toBe(SPECIAL_CONTENT);
});
```
You may need to:
1. Replace the inline `fetch(..., { method: "PUT", ... })` used for uploading with the existing test helper or route you use elsewhere in this file to create the R2 object (e.g. a helper that seeds R2 with `TEST_OBJECT_KEY`).
2. Use the correct base URL/path (`/r2/...` above is an example; adjust to whatever path your existing tests use to interact with R2 objects).
3. If this file already defines utilities for parsing `Content-Disposition`, you can reuse them instead of doing the regex parsing inline here.
</issue_to_address>
| function encodeKey(key: string): string { | ||
| return btoa(unescape(encodeURIComponent(key))); |
There was a problem hiding this comment.
issue (bug_risk): encodeKey relies on browser-only globals (btoa/unescape) in a Node/Playwright context
In the Playwright test runner (Node), btoa and unescape are undefined, so this will throw at runtime in e2e tests. Please switch to a Node-safe base64 implementation, e.g.:
function encodeKey(key: string): string {
return Buffer.from(key, 'utf8').toString('base64');
}Apply the same change to other btoa/unescape usages in this file (such as when encoding custom metadata in seedEmail).
| const encodedMeta = btoa( | ||
| unescape(encodeURIComponent(JSON.stringify(customMeta))), |
There was a problem hiding this comment.
issue (bug_risk): Encoding customMetadata also uses browser-only globals and may break in Node
As with encodeKey, this btoa + unescape usage will fail in the Node-based Playwright runner. Consider a Node-safe alternative:
const encodedMeta = Buffer
.from(JSON.stringify(customMeta), 'utf8')
.toString('base64');This preserves behavior without relying on browser globals.
```ts
const sourceKey = decodeURIComponent(escape(atob(data.body.sourceKey)));
const destinationKey = decodeURIComponent(
  escape(atob(data.body.destinationKey)),
);
```

suggestion: Decoding of `sourceKey`/`destinationKey` lacks error handling for malformed base64/encoding

If these values aren't valid base64 or in the expected format, `atob`/`decodeURIComponent` will throw and currently cause a 500. To align with `putObject`/`createUpload`, consider wrapping the decoding in a try/catch and returning a 400 with a clear error message (e.g. `Invalid sourceKey: expected base64-encoded key`):

```ts
let sourceKey: string;
try {
  sourceKey = decodeURIComponent(escape(atob(data.body.sourceKey)));
} catch {
  throw new HTTPException(400, {
    message: "Invalid sourceKey: expected base64-encoded key",
  });
}
let destinationKey: string;
try {
  destinationKey = decodeURIComponent(
    escape(atob(data.body.destinationKey)),
  );
} catch {
  throw new HTTPException(400, {
    message: "Invalid destinationKey: expected base64-encoded key",
  });
}
```
```ts
const response = await app.fetch(request, env, createExecutionContext());
expect(response.status).toBe(400);
const body = await response.text();
```

suggestion (testing): Add a multipart test for non-base64 `httpMetadata` to complete malformed-input coverage.

Similar to `customMetadata`, please add a multipart test where `httpMetadata` is not base64 at all, so multipart coverage includes both invalid base64 and invalid JSON cases and fully exercises the new validation logic for malformed inputs.

Suggested implementation:

```ts
  expect(body).toContain("Invalid customMetadata");
});

it("should return 400 for non-base64 httpMetadata in multipart upload", async () => {
  if (!MY_TEST_BUCKET_1)
    throw new Error("MY_TEST_BUCKET_1 binding not available");

  const objectKey = "bad-multipart-http-metadata.txt";
  const formData = new FormData();
  formData.append("key", objectKey);
  formData.append(
    "file",
    new Blob(["content"], {
      type: "application/octet-stream",
    }),
    objectKey,
  );
  // httpMetadata is not base64 at all
  formData.append("httpMetadata", "not-base64-http-metadata");

  const request = createTestRequest(
    `/api/buckets/MY_TEST_BUCKET_1/upload`,
    "POST",
    formData,
  );
  const response = await app.fetch(request, env, createExecutionContext());
  expect(response.status).toBe(400);
  const body = await response.text();
  expect(body).toContain("Invalid httpMetadata");
});

it("should return 400 for malformed customMetadata", async () => {
```

Notes:
- Ensure that `FormData` and `Blob` are available in the test environment. If they are not globally available, you may need to import or polyfill them at the top of the test file (for example via `undici` or the `@cloudflare/workers-types` setup).
- If `createTestRequest` requires explicit headers for multipart requests in your codebase, you may need to add a `headers` argument with the appropriate `Content-Type` including the multipart boundary, or adapt the helper to accept a `FormData` body and set headers automatically (if it doesn't already).
- If there is an existing helper for building multipart upload requests (e.g. `createMultipartUploadRequest`), prefer using that helper instead of constructing the `FormData` directly, mirroring how other multipart tests are written in this file.
```ts
// Default R2 GetObject includes Content-Disposition with sanitized filename and RFC 5987 filename*
expect(response.headers.get("content-disposition")).toBe(`attachment; filename="${TEST_OBJECT_KEY}"; filename*=UTF-8''${encodeURIComponent(TEST_OBJECT_KEY)}`);
```

suggestion (testing): Consider adding a dedicated test for non-ASCII and quote-containing filenames to validate the sanitization logic.

Since the code now sanitizes non-ASCII and double-quote characters, please add a test where the key includes such characters (e.g. `file-ä".txt`) and assert that:
- `filename` shows the sanitized form (non-ASCII replaced, quotes normalized), and
- `filename*` preserves the original name via UTF-8 encoding.

This will ensure the new behavior is covered and validated.

Suggested implementation:

```ts
expect(response.headers.get("content-type")).toBe(TEST_OBJECT_CONTENT_TYPE);
expect(response.headers.get("content-length")).toBe(TEST_OBJECT_CONTENT.length.toString());
expect(response.headers.has("etag")).toBe(true);
// Default R2 GetObject includes Content-Disposition with sanitized filename and RFC 5987 filename*
expect(response.headers.get("content-disposition")).toBe(`attachment; filename="${TEST_OBJECT_KEY}"; filename*=UTF-8''${encodeURIComponent(TEST_OBJECT_KEY)}`);
const body = await response.text();
expect(body).toBe(TEST_OBJECT_CONTENT);
```

Please add the following new test case in the same describe block that contains the existing R2 GetObject test (typically close to the test that uses `TEST_OBJECT_KEY`):
```ts
it("returns sanitized Content-Disposition for non-ASCII and quoted filenames", async () => {
  const SPECIAL_KEY = 'file-ä".txt';
  const SPECIAL_CONTENT = "special content";
  const SPECIAL_CONTENT_TYPE = "text/plain";

  // Arrange: upload an object with a non-ASCII and quote-containing key.
  // Adjust this part to match your existing helpers for putting objects into R2.
  // For example, if you already have a helper like
  // `putObject(TEST_OBJECT_KEY, TEST_OBJECT_CONTENT, TEST_OBJECT_CONTENT_TYPE)`,
  // call it here instead of inlining the logic.
  await fetch(`http://127.0.0.1:${port}/r2/${encodeURIComponent(SPECIAL_KEY)}`, {
    method: "PUT",
    headers: {
      "content-type": SPECIAL_CONTENT_TYPE,
      "content-length": SPECIAL_CONTENT.length.toString(),
    },
    body: SPECIAL_CONTENT,
  });

  // Act: retrieve the object
  const response = await fetch(`http://127.0.0.1:${port}/r2/${encodeURIComponent(SPECIAL_KEY)}`);
  expect(response.status).toBe(200);
  expect(response.headers.get("content-type")).toBe(SPECIAL_CONTENT_TYPE);

  const contentDisposition = response.headers.get("content-disposition");
  expect(contentDisposition).not.toBeNull();

  // Example header value:
  //   attachment; filename="file-a'.txt"; filename*=UTF-8''file-%C3%A4%22.txt
  // We don't assert the exact sanitized string to avoid coupling to the
  // implementation, but we do assert on the properties we care about.

  // Extract the filename and filename* components
  const filenameMatch = contentDisposition!.match(/filename="([^"]+)"/);
  const filenameStarMatch = contentDisposition!.match(/filename\*=UTF-8''([^;]+)$/);
  expect(filenameMatch).not.toBeNull();
  expect(filenameStarMatch).not.toBeNull();

  const sanitizedFilename = filenameMatch![1];
  const encodedFilenameStar = filenameStarMatch![1];

  // Assert: `filename` is sanitized:
  //   - non-ASCII characters are removed/replaced
  //   - double quotes are normalized/removed
  expect(sanitizedFilename).not.toContain("ä");
  expect(sanitizedFilename).not.toContain('"');
  // It should still generally resemble the original name (same base/extension)
  expect(sanitizedFilename).toContain("file");
  expect(sanitizedFilename).toContain(".txt");

  // Assert: `filename*` preserves the original filename (RFC 5987, UTF-8 encoded)
  expect(encodedFilenameStar).toBe(encodeURIComponent(SPECIAL_KEY));
  expect(decodeURIComponent(encodedFilenameStar)).toBe(SPECIAL_KEY);

  const body = await response.text();
  expect(body).toBe(SPECIAL_CONTENT);
});
```

You may need to:
- Replace the inline `fetch(..., { method: "PUT", ... })` used for uploading with the existing test helper or route you use elsewhere in this file to create the R2 object (e.g. a helper that seeds R2 with `TEST_OBJECT_KEY`).
- Use the correct base URL/path (`/r2/...` above is an example; adjust to whatever path your existing tests use to interact with R2 objects).
- If this file already defines utilities for parsing `Content-Disposition`, you can reuse them instead of doing the regex parsing inline here.
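For context on what the test is exercising, the header construction itself can be sketched as a small helper. This is a hypothetical implementation (the name `buildContentDisposition` and the exact replacement rules are assumptions, not the repository's code), shaped to match the `attachment; filename="…"; filename*=UTF-8''…` format the test expects:

```typescript
// Hypothetical builder for a Content-Disposition header with an ASCII
// fallback `filename` plus an RFC 5987 `filename*` parameter.
// The sanitization rules below are illustrative assumptions.
function buildContentDisposition(filename: string): string {
  const ascii = filename
    .replace(/[^\x20-\x7e]/g, "_") // replace non-ASCII chars with a placeholder
    .replace(/"/g, "'");           // normalize double quotes in the quoted form
  const encoded = encodeURIComponent(filename); // UTF-8 percent-encoding
  return `attachment; filename="${ascii}"; filename*=UTF-8''${encoded}`;
}
```

For `file-ä".txt` this yields `attachment; filename="file-_'.txt"; filename*=UTF-8''file-%C3%A4%22.txt`, which the regex assertions in the suggested test would accept. Note that a strict RFC 5987 encoder also percent-encodes characters such as `'` and `*`, which `encodeURIComponent` leaves untouched.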
Replaces tag-based publishing with changesets workflow. Adds publint
for package validation, changelog-github plugin for PR-linked changelogs,
and npm trusted publishing via OIDC provenance.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Summary by Sourcery
Introduce object duplication capabilities in the dashboard and backend, strengthen metadata and share-link handling in the worker API, and expand automated testing and release workflows with Changesets-driven publishing.