Address resource loading with GCS and restore domain signup restrictions #972
base: main
Conversation
Walkthrough
Adds build-time environment variables and Docker build args in CI. Refactors API routes to always generate signed S3 URLs via bucket providers. Updates the frontend video components to fetch playlists via the internal API. Refines S3 utils method definitions. Restores domain-restricted signup in the NextAuth signIn callback.
Sequence Diagram(s)
sequenceDiagram
autonumber
actor U as Client
participant API as /api/video/playlistUrl
participant DB as Database
participant BP as BucketProvider
participant S3 as S3
U->>API: GET playlistUrl?videoId
API->>DB: select video leftJoin bucket by video.bucket
DB-->>API: { video, bucket }
alt Video not found
API-->>U: 404
else Video COMPLETE
alt video.public == false
API->>API: getCurrentUser() and validate ownership
opt Not owner
API-->>U: 401
end
end
API->>BP: createBucketProvider(bucket)
API->>BP: getSignedObjectUrl(ownerId/videoId/output/...m3u8)
BP->>S3: Generate presigned URL
S3-->>BP: signed URL
BP-->>API: signed URL
API-->>U: 200 { playlistOne: signedURL, playlistTwo: null }
else Non-COMPLETE
API-->>U: 200 fallback URLs
end
sequenceDiagram
autonumber
actor O as OAuth/Email User
participant NA as NextAuth signIn callback
participant DB as Database
participant ENV as ServerEnv
O->>NA: signIn({ user, email, credentials })
NA->>ENV: read CAP_ALLOWED_SIGNUP_DOMAINS
alt No domains configured
NA-->>O: allow (true)
else Domains configured
NA->>DB: find user by email
DB-->>NA: existingUser | null
alt existingUser
NA-->>O: allow (true)
else new user
NA->>NA: isEmailAllowedForSignup(email)
alt allowed
NA-->>O: allow (true)
else not allowed
NA-->>O: deny (false)
end
end
end
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/web/app/api/screenshot/route.ts (1)
71-72: Use trusted ownerId from the DB for the S3 prefix, not the user-supplied userId.
Building the key prefix from the request userId can cause false 404s and makes access control fragile. Use video.ownerId.
- const screenshotPrefix = `${userId}/${videoId}/`;
+ const screenshotPrefix = `${video.ownerId}/${videoId}/`;
🧹 Nitpick comments (12)
apps/web/app/api/video/playlistUrl/route.ts (1)
21-27: userId query param is unused; either validate it or remove it.
Currently ignored but still required by clients. Either validate that it matches video.ownerId or drop it from the contract to avoid confusion. Example validation:
- const userId = searchParams.get("userId") || "";
+ const userId = searchParams.get("userId") || "";
  ...
- if (!userId || !videoId) {
+ if (!userId || !videoId) {
  ...
  }
+ if (userId && userId !== video.ownerId) {
+   return new Response(JSON.stringify({ error: true, message: "Invalid user/video pair" }), { status: 401, headers: getHeaders(origin) });
+ }
apps/web/app/api/screenshot/route.ts (1)
74-81: Prefer deterministic selection when multiple PNGs exist.
find returns the first match; S3 listing order isn't guaranteed. Sort by LastModified desc and pick the newest.
- const screenshot = objects.Contents?.find((object) =>
-   object.Key?.endsWith(".png"),
- );
+ const screenshot = objects.Contents
+   ?.filter((o) => o.Key?.endsWith(".png"))
+   .sort((a, b) => {
+     const ta = a.LastModified ? new Date(a.LastModified).getTime() : 0;
+     const tb = b.LastModified ? new Date(b.LastModified).getTime() : 0;
+     return tb - ta;
+   })[0];
packages/database/auth/auth-options.tsx (1)
193-223: Solid restoration of domain-restricted signup; consider a couple of refinements.
- Lowercase the email before DB/domain checks to avoid collation surprises.
- Allow invited users regardless of domain (optional).
- const userEmail =
+ let userEmail =
    user?.email ||
    (typeof email === "string"
      ? email
      : typeof credentials?.email === "string"
        ? credentials.email
        : null);
- if (!userEmail || typeof userEmail !== "string") return true;
+ if (!userEmail || typeof userEmail !== "string") return true;
+ userEmail = userEmail.toLowerCase();
  const [existingUser] = await db()
    .select()
    .from(users)
    .where(eq(users.email, userEmail))
    .limit(1);
  // Only apply domain restrictions for new users, existing ones can always sign in
  if (
    !existingUser &&
    !isEmailAllowedForSignup(userEmail, allowedDomains)
  ) {
    console.warn(`Signup blocked for email domain: ${userEmail}`);
    return false;
  }
If you want to allow invites across domains, check organization_invites.invitedEmail = userEmail before blocking.
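A minimal sketch of that optional carve-out, slotted in after the existingUser lookup inside the signIn callback; organizationInvites and its invitedEmail column are assumed names taken from the suggestion above, not verified against the schema:
const [invite] = await db()
  .select()
  .from(organizationInvites)
  .where(eq(organizationInvites.invitedEmail, userEmail))
  .limit(1);

// Block only brand-new, uninvited users whose domain is not on the allow-list.
if (
  !existingUser &&
  !invite &&
  !isEmailAllowedForSignup(userEmail, allowedDomains)
) {
  console.warn(`Signup blocked for email domain: ${userEmail}`);
  return false;
}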
.github/workflows/docker-build-web.yml (1)
41-51: Docs suggestion: add a Storage (GCS via S3) guide.
Create docs/storage/gcs.md and link it from the README. Include HMAC key creation steps, the required envs (CAP_AWS_* and endpoints), and notes on path-style addressing and CORS.
I can draft this guide with screenshots and a checklist.
apps/web/app/api/thumbnail/route.ts (5)
38-46: Use 404 (Not Found) instead of 401 when the video doesn't exist.
401 implies an authentication problem and usually requires WWW-Authenticate; this is a resource-existence case.
- status: 401,
+ status: 404,
48-57: Return 404 (Not Found) for a missing resource, not 401.
- status: 401,
+ status: 404,
29-36: Optional: verify the video belongs to the supplied userId.
Prevents a mismatched userId/videoId pair from probing other tenants' buckets. If thumbnails are intended to be public, skip this; otherwise constrain by owner.
- import { eq } from "drizzle-orm";
+ import { and, eq } from "drizzle-orm";
  ...
- .where(eq(videos.id, videoId));
+ .where(and(eq(videos.id, videoId), eq(videos.ownerId, userId)));
62-71: Type the S3 objects instead of any.
This avoids accidental property typos and improves IDE help.
+ import type { _Object as S3Object } from "@aws-sdk/client-s3";
  ...
- const contents = listResponse.Contents || [];
- const thumbnailKey = contents.find((item: any) =>
+ const contents = (listResponse.Contents || []) as S3Object[];
+ const thumbnailKey = contents.find((item) =>
    item.Key?.endsWith("screen-capture.jpg"),
  )?.Key;
10-15: Add an OPTIONS handler for CORS preflight.
You already set CORS headers; answering OPTIONS avoids a 404 on preflight for cross-origin embeds.
Add outside the changed block:
export async function OPTIONS(request: NextRequest) {
  const origin = request.headers.get("origin") as string;
  return new Response(null, { status: 204, headers: getHeaders(origin) });
}
apps/web/app/s/[videoId]/_components/ShareVideo.tsx (1)
128-142: DRY: extract videoSrc resolution into a helper shared with Embed.
Both Share and Embed duplicate this branching; a small util reduces drift.
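One possible shape for such a helper, shown as a sketch; resolveVideoSrc, the desktopMP4 source type, and the exact query params are assumptions drawn from the snippets in this review rather than the components' actual code:
type PlayableVideo = {
  id: string;
  source: { type: string };
};

// Hypothetical shared helper: both ShareVideo and EmbedVideo would call this,
// so a new source type only needs handling in one place.
export function resolveVideoSrc(video: PlayableVideo, desktopMp4Url?: string): string {
  // Desktop MP4 uploads play from their direct URL when available; every other
  // source type goes through the internal signed-playlist endpoint.
  if (video.source.type === "desktopMP4" && desktopMp4Url) {
    return desktopMp4Url;
  }
  return `/api/playlist?videoId=${video.id}&videoType=video`;
}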
apps/web/app/embed/[videoId]/_components/EmbedVideo.tsx (2)
184-190: Fix event listener cleanup to prevent leaks.
Handlers added inline can't be removed; store references and remove those.
- player.addEventListener("play", () => listener(true));
- player.addEventListener("pause", () => listener(false));
+ const handlePlay = () => listener(true);
+ const handlePause = () => listener(false);
+ player.addEventListener("play", handlePlay);
+ player.addEventListener("pause", handlePause);
  return () => {
-   player.removeEventListener("play", () => listener(true));
-   player.removeEventListener("pause", () => listener(false));
+   player.removeEventListener("play", handlePlay);
+   player.removeEventListener("pause", handlePause);
    player.removeEventListener("loadedmetadata", handleLoadedMetadata);
  };
152-166: Consider consolidating videoSrc logic with ShareVideo.
A shared helper avoids future divergence (e.g., adding a new source type).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (10)
- .github/workflows/docker-build-web.yml (2 hunks)
- apps/web/app/api/screenshot/route.ts (1 hunks)
- apps/web/app/api/thumbnail/route.ts (1 hunks)
- apps/web/app/api/video/playlistUrl/route.ts (2 hunks)
- apps/web/app/embed/[videoId]/_components/EmbedVideo.tsx (1 hunks)
- apps/web/app/s/[videoId]/_components/ShareVideo.tsx (1 hunks)
- apps/web/utils/s3.ts (1 hunks)
- packages/database/auth/auth-options.tsx (2 hunks)
- packages/database/migrations/meta/0008_snapshot.json (1 hunks)
- packages/database/migrations/meta/_journal.json (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
apps/web/app/api/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
When HTTP routes are necessary, implement them under app/api/*, configure CORS correctly, and set precise revalidation
Files:
apps/web/app/api/screenshot/route.ts
apps/web/app/api/video/playlistUrl/route.ts
apps/web/app/api/thumbnail/route.ts
apps/web/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for client-side server state and data fetching in the web app
Mutations should call Server Actions and perform precise cache updates with setQueryData/setQueriesData, avoiding broad invalidations
Prefer Server Components for initial data and pass initialData to client components for React Query hydration
Files:
apps/web/app/api/screenshot/route.ts
apps/web/app/api/video/playlistUrl/route.ts
apps/web/app/s/[videoId]/_components/ShareVideo.tsx
apps/web/app/api/thumbnail/route.ts
apps/web/app/embed/[videoId]/_components/EmbedVideo.tsx
apps/web/utils/s3.ts
{apps/web,packages/ui}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
{apps/web,packages/ui}/**/*.{ts,tsx}: Use Tailwind CSS exclusively for styling in the web app and shared React UI components
Component naming: React components in PascalCase; hooks in camelCase starting with 'use'
Files:
apps/web/app/api/screenshot/route.ts
apps/web/app/api/video/playlistUrl/route.ts
apps/web/app/s/[videoId]/_components/ShareVideo.tsx
apps/web/app/api/thumbnail/route.ts
apps/web/app/embed/[videoId]/_components/EmbedVideo.tsx
apps/web/utils/s3.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use strict TypeScript and avoid any; prefer shared types from packages
Files:
apps/web/app/api/screenshot/route.ts
apps/web/app/api/video/playlistUrl/route.ts
apps/web/app/s/[videoId]/_components/ShareVideo.tsx
packages/database/auth/auth-options.tsx
apps/web/app/api/thumbnail/route.ts
apps/web/app/embed/[videoId]/_components/EmbedVideo.tsx
apps/web/utils/s3.ts
🧬 Code graph analysis (3)
apps/web/app/api/video/playlistUrl/route.ts (3)
- packages/database/index.ts (1): db (30-35)
- packages/database/schema.ts (1): s3Buckets (362-372)
- apps/web/utils/s3.ts (1): createBucketProvider (374-397)
packages/database/auth/auth-options.tsx (4)
- packages/env/server.ts (1): serverEnv (80-84)
- packages/database/index.ts (1): db (30-35)
- packages/database/schema.ts (1): users (45-87)
- packages/database/auth/domain-utils.ts (1): isEmailAllowedForSignup (3-19)
apps/web/app/api/thumbnail/route.ts (1)
- apps/web/utils/helpers.ts (1): getHeaders (18-26)
🪛 GitHub Actions: Validate Migrations
packages/database/migrations/meta/_journal.json
[error] 2-2: Migration journal version cannot be changed (was: $BASE_VERSION, now: $CURRENT_VERSION)
🪛 Checkov (3.2.334)
.github/workflows/docker-build-web.yml
[MEDIUM] 42-43: Basic Auth Credentials
(CKV_SECRET_4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Clippy
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
🔇 Additional comments (8)
packages/database/migrations/meta/_journal.json (1)
62-62: Migration meta file is up-to-date; no action needed.
Verified no diffs against main for packages/database/migrations/meta/_journal.json (version 5 matches).
packages/database/migrations/meta/0008_snapshot.json (1)
1748-1748: Ignore snapshot revert — no actual changes detected.
packages/database/migrations/meta/0008_snapshot.json is identical to main; there is no trailing newline or schema change to revert. Likely an incorrect or invalid review comment.
apps/web/app/api/video/playlistUrl/route.ts (1)
69-71: Signed URL cacheability: align TTL with response caching.
Ensure the bucketProvider.getSignedObjectUrl TTL is >= any CDN/browser caching implied by CACHE_CONTROL_HEADERS; otherwise clients may cache expired URLs. What is the signed URL expiration used here? If it's short (e.g., 60s), consider reducing the cache max-age or adding a query-param nonce with a short max-age.
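For illustration, a sketch of keeping the two values in step, assuming the provider wraps the AWS SDK v3 presigner (the repo's actual getSignedObjectUrl and CACHE_CONTROL_HEADERS may differ):
import { GetObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const SIGNED_URL_TTL_SECONDS = 900; // 15 minutes

async function signedPlaylistResponse(client: S3Client, bucket: string, key: string): Promise<Response> {
  // Presign with an explicit TTL...
  const url = await getSignedUrl(
    client,
    new GetObjectCommand({ Bucket: bucket, Key: key }),
    { expiresIn: SIGNED_URL_TTL_SECONDS },
  );
  // ...and keep any browser/CDN caching comfortably shorter than that TTL.
  return new Response(JSON.stringify({ playlistOne: url, playlistTwo: null }), {
    status: 200,
    headers: { "Cache-Control": `public, max-age=${SIGNED_URL_TTL_SECONDS - 60}` },
  });
}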
apps/web/app/api/screenshot/route.ts (1)
91-91: LGTM: unified on signed URLs.
Switching to bucketProvider.getSignedObjectUrl removes brittle public URL paths and fixes 403s.
packages/database/auth/auth-options.tsx (1)
17-17: LGTM: brings back the domain restriction utilities.
The import is correct and scoped.
apps/web/utils/s3.ts (1)
264-292: LGTM: async provider methods bring consistency and correct typing.
The headObject, putObject, and copyObject async forms match the interface and remove implicit anys.
apps/web/app/s/[videoId]/_components/ShareVideo.tsx (1)
139-142: LGTM: unify non-desktop sources to the internal signed playlist URL.
Aligns the Share flow with API-based signed URLs and removes public S3 coupling.
apps/web/app/embed/[videoId]/_components/EmbedVideo.tsx (1)
163-166: LGTM: route all non-desktop sources via /api/playlist?videoType=video.
Removes environment/public S3 dependencies and uses signed URLs.
# Add required build-time environment variables
echo "DATABASE_URL=mysql://build:build@build:3306/build" >> .env
echo "CAP_AWS_BUCKET=build-bucket" >> .env
echo "CAP_AWS_REGION=us-east-1" >> .env
echo "CAP_AWS_ACCESS_KEY=build-access-key" >> .env
echo "CAP_AWS_SECRET_KEY=build-secret-key" >> .env
echo "NEXTAUTH_SECRET=build-nextauth-secret-placeholder-32-chars" >> .env
echo "NEXTAUTH_URL=https://build.placeholder.com" >> .env
echo "CAP_AWS_ENDPOINT=https://build.placeholder.com" >> .env
echo "S3_PUBLIC_ENDPOINT=https://build.placeholder.com" >> .env
echo "S3_INTERNAL_ENDPOINT=https://build.placeholder.com" >> .env
🛠️ Refactor suggestion
Don’t write secrets to .env and avoid baking them into images; use GitHub Secrets (and BuildKit secrets) instead.
- Avoid committing/echoing credentials (DB URL, AWS keys, NEXTAUTH_SECRET) to a file and passing them as ARGs (persistable in layers).
- Pass non-secret build-time values as build-args; provide secrets via ${{ secrets.* }} (masked) and preferably via the BuildKit secrets: input with matching Dockerfile mounts.
Minimal safer change (no Dockerfile edits): move secrets from .env to masked build-args and stop writing them to .env:
- name: Create .env file
run: |
echo "WEB_URL=http://localhost:3000" > .env
echo "NEXT_PUBLIC_DOCKER_BUILD=true" >> .env
echo "NEXT_PUBLIC_CAP_AWS_BUCKET=capso" >> .env
echo "NEXT_PUBLIC_CAP_AWS_REGION=us-east-1" >> .env
- # Add required build-time environment variables
- echo "DATABASE_URL=mysql://build:build@build:3306/build" >> .env
- echo "CAP_AWS_BUCKET=build-bucket" >> .env
- echo "CAP_AWS_REGION=us-east-1" >> .env
- echo "CAP_AWS_ACCESS_KEY=build-access-key" >> .env
- echo "CAP_AWS_SECRET_KEY=build-secret-key" >> .env
- echo "NEXTAUTH_SECRET=build-nextauth-secret-placeholder-32-chars" >> .env
- echo "NEXTAUTH_URL=https://build.placeholder.com" >> .env
- echo "CAP_AWS_ENDPOINT=https://build.placeholder.com" >> .env
- echo "S3_PUBLIC_ENDPOINT=https://build.placeholder.com" >> .env
- echo "S3_INTERNAL_ENDPOINT=https://build.placeholder.com" >> .env
+ # Keep secrets out of .env in CI—provide via masked build-args below
And mask build-args with repo/environment secrets:
build-args: |
- DATABASE_URL=mysql://build:build@build:3306/build
- CAP_AWS_BUCKET=build-bucket
- CAP_AWS_REGION=us-east-1
- CAP_AWS_ACCESS_KEY=build-access-key
- CAP_AWS_SECRET_KEY=build-secret-key
- NEXTAUTH_SECRET=build-nextauth-secret-placeholder-32-chars
- NEXTAUTH_URL=https://build.placeholder.com
- CAP_AWS_ENDPOINT=https://build.placeholder.com
- S3_PUBLIC_ENDPOINT=https://build.placeholder.com
- S3_INTERNAL_ENDPOINT=https://build.placeholder.com
+ DATABASE_URL=${{ secrets.BUILD_DATABASE_URL }}
+ CAP_AWS_BUCKET=${{ secrets.CAP_AWS_BUCKET }}
+ CAP_AWS_REGION=${{ secrets.CAP_AWS_REGION }}
+ CAP_AWS_ACCESS_KEY=${{ secrets.CAP_AWS_ACCESS_KEY }}
+ CAP_AWS_SECRET_KEY=${{ secrets.CAP_AWS_SECRET_KEY }}
+ NEXTAUTH_SECRET=${{ secrets.NEXTAUTH_SECRET }}
+ NEXTAUTH_URL=${{ secrets.NEXTAUTH_URL }}
+ CAP_AWS_ENDPOINT=${{ secrets.CAP_AWS_ENDPOINT }}
+ S3_PUBLIC_ENDPOINT=${{ secrets.S3_PUBLIC_ENDPOINT }}
+ S3_INTERNAL_ENDPOINT=${{ secrets.S3_INTERNAL_ENDPOINT }}
If you can adjust the Dockerfile, prefer BuildKit secrets:
- build-args: |
+ build-args: |
CAP_AWS_BUCKET=${{ secrets.CAP_AWS_BUCKET }}
CAP_AWS_REGION=${{ secrets.CAP_AWS_REGION }}
CAP_AWS_ENDPOINT=${{ secrets.CAP_AWS_ENDPOINT }}
S3_PUBLIC_ENDPOINT=${{ secrets.S3_PUBLIC_ENDPOINT }}
S3_INTERNAL_ENDPOINT=${{ secrets.S3_INTERNAL_ENDPOINT }}
+ secrets: |
+ DATABASE_URL=${{ secrets.BUILD_DATABASE_URL }}
+ CAP_AWS_ACCESS_KEY=${{ secrets.CAP_AWS_ACCESS_KEY }}
+ CAP_AWS_SECRET_KEY=${{ secrets.CAP_AWS_SECRET_KEY }}
+ NEXTAUTH_SECRET=${{ secrets.NEXTAUTH_SECRET }}
+ NEXTAUTH_URL=${{ secrets.NEXTAUTH_URL }}
Note: Checkov flagged basic auth in lines 42–43; changes above address it.
I can send a Dockerfile patch using RUN --mount=type=secret,id=... if you want.
Also applies to: 72-83
🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 42-43: Basic Auth Credentials
(CKV_SECRET_4)
Actionable comments posted: 0
♻️ Duplicate comments (2)
.github/workflows/docker-build-web.yml (2)
41-51: Stop writing secrets to .env in CI; use masked build-args or BuildKit secrets.
These lines embed DB creds, AWS/GCS HMAC keys, and NEXTAUTH_SECRET into the image build context and CI logs/layers. Prior comments already flagged this; the issue is still present.
Apply (minimal safer change):
- name: Create .env file
  run: |
    echo "WEB_URL=http://localhost:3000" > .env
    echo "NEXT_PUBLIC_DOCKER_BUILD=true" >> .env
    echo "NEXT_PUBLIC_CAP_AWS_BUCKET=capso" >> .env
    echo "NEXT_PUBLIC_CAP_AWS_REGION=us-east-1" >> .env
-   # Add required build-time environment variables
-   echo "DATABASE_URL=mysql://build:build@build:3306/build" >> .env
-   echo "CAP_AWS_BUCKET=build-bucket" >> .env
-   echo "CAP_AWS_REGION=us-east-1" >> .env
-   echo "CAP_AWS_ACCESS_KEY=build-access-key" >> .env
-   echo "CAP_AWS_SECRET_KEY=build-secret-key" >> .env
-   echo "NEXTAUTH_SECRET=build-nextauth-secret-placeholder-32-chars" >> .env
-   echo "NEXTAUTH_URL=https://build.placeholder.com" >> .env
-   echo "CAP_AWS_ENDPOINT=https://build.placeholder.com" >> .env
-   echo "S3_PUBLIC_ENDPOINT=https://build.placeholder.com" >> .env
-   echo "S3_INTERNAL_ENDPOINT=https://build.placeholder.com" >> .env
+   # Keep secrets out of .env in CI—provide via masked build-args/secrets below
Checkov's “Basic Auth Credentials” finding (lines 42–43) is addressed by this removal.
71-81: Do not bake secrets in via build-args; pass them as masked secrets (BuildKit).
Move sensitive values (DB URL, access key, secret key, NEXTAUTH_SECRET) to the action's “secrets” input and keep only non-sensitive args as build-args.
  with:
    context: .
    file: apps/web/Dockerfile
    platforms: linux/${{ matrix.platform }}
    push: true
    outputs: type=image,name=ghcr.io/${{ github.repository_owner }}/cap-web,push-by-digest=true
    cache-from: type=gha,scope=buildx-${{ matrix.platform }}
    cache-to: type=gha,mode=max,scope=buildx-${{ matrix.platform }}
-   build-args: |
-     DATABASE_URL=mysql://build:build@build:3306/build
-     CAP_AWS_BUCKET=build-bucket
-     CAP_AWS_REGION=us-east-1
-     CAP_AWS_ACCESS_KEY=build-access-key
-     CAP_AWS_SECRET_KEY=build-secret-key
-     NEXTAUTH_SECRET=build-nextauth-secret-placeholder-32-chars
-     NEXTAUTH_URL=https://build.placeholder.com
-     CAP_AWS_ENDPOINT=https://build.placeholder.com
-     S3_PUBLIC_ENDPOINT=https://build.placeholder.com
-     S3_INTERNAL_ENDPOINT=https://build.placeholder.com
+   build-args: |
+     # non-sensitive
+     CAP_AWS_BUCKET=${{ secrets.CAP_AWS_BUCKET }}
+     CAP_AWS_REGION=${{ secrets.CAP_AWS_REGION }}
+     CAP_AWS_ENDPOINT=${{ secrets.CAP_AWS_ENDPOINT }}
+     S3_PUBLIC_ENDPOINT=${{ secrets.S3_PUBLIC_ENDPOINT }}
+     S3_INTERNAL_ENDPOINT=${{ secrets.S3_INTERNAL_ENDPOINT }}
+     NEXTAUTH_URL=${{ secrets.NEXTAUTH_URL }}
+   secrets: |
+     DATABASE_URL=${{ secrets.BUILD_DATABASE_URL }}
+     CAP_AWS_ACCESS_KEY=${{ secrets.CAP_AWS_ACCESS_KEY }}
+     CAP_AWS_SECRET_KEY=${{ secrets.CAP_AWS_SECRET_KEY }}
+     NEXTAUTH_SECRET=${{ secrets.NEXTAUTH_SECRET }}
If Dockerfile changes are possible, mount them with RUN --mount=type=secret,id=... and read from /run/secrets/*.
🧹 Nitpick comments (4)
apps/web/app/api/video/playlistUrl/route.ts (4)
40-44: Query by id with leftJoin is good; consider limit(1).
Adds a small safeguard and communicates intent to the planner; negligible perf impact.
- .leftJoin(s3Buckets, eq(videos.bucket, s3Buckets.id))
- .where(eq(videos.id, videoId));
+ .leftJoin(s3Buckets, eq(videos.bucket, s3Buckets.id))
+ .where(eq(videos.id, videoId))
+ .limit(1);
56-65: Use 404 for a missing video (and 400 for missing params earlier).
“Not found” should be 404, not 401. Also, the earlier “!userId || !videoId” case should be 400, since it is a bad request rather than an auth failure.
- {
-   status: 404,
-   headers: getHeaders(origin),
- },
+ { status: 404, headers: getHeaders(origin) },
Optional follow-up: drop the userId requirement entirely (unused) or verify it matches video.ownerId.
70-79: Return 403 for non-public unauthorized access (not 401).
401 implies missing/invalid authentication; here the user is authenticated but forbidden if they are not the owner.
- return new Response(
-   JSON.stringify({ error: true, message: "Video is not public" }),
-   { status: 401, headers: getHeaders(origin) },
- );
+ return new Response(
+   JSON.stringify({ error: true, message: "Forbidden" }),
+   { status: 403, headers: getHeaders(origin) },
+ );
81-84
: Consider explicit signed URL TTL and centralize playlistKey.
- If supported, pass a short expiresIn (e.g., 5–15m) when generating the URL.
- Extract playlist key construction to a shared helper to avoid drift across routes.
Would you like me to propose a helper (e.g., buildPlaylistKey(video)) and wire an expiresIn option if available in createBucketProvider?
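For reference, a rough shape for that helper; buildPlaylistKey is hypothetical, and the playlist filename below is a placeholder since the exact key layout currently lives in the individual routes:
type VideoRecord = { id: string; ownerId: string };

// Hypothetical helper: one place to build the output playlist key so the
// playlistUrl, thumbnail, and screenshot routes cannot drift apart.
export function buildPlaylistKey(video: VideoRecord): string {
  // "playlist.m3u8" is a placeholder; use whatever filename the routes emit today.
  return `${video.ownerId}/${video.id}/output/playlist.m3u8`;
}

// Usage sketch (an expiresIn option applies only if createBucketProvider supports one):
// const playlistUrl = await bucketProvider.getSignedObjectUrl(buildPlaylistKey(video));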
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- .github/workflows/docker-build-web.yml (2 hunks)
- apps/web/app/api/video/playlistUrl/route.ts (4 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
apps/web/app/api/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
When HTTP routes are necessary, implement them under app/api/*, configure CORS correctly, and set precise revalidation
Files:
apps/web/app/api/video/playlistUrl/route.ts
apps/web/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for client-side server state and data fetching in the web app
Mutations should call Server Actions and perform precise cache updates with setQueryData/setQueriesData, avoiding broad invalidations
Prefer Server Components for initial data and pass initialData to client components for React Query hydration
Files:
apps/web/app/api/video/playlistUrl/route.ts
{apps/web,packages/ui}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
{apps/web,packages/ui}/**/*.{ts,tsx}: Use Tailwind CSS exclusively for styling in the web app and shared React UI components
Component naming: React components in PascalCase; hooks in camelCase starting with 'use'
Files:
apps/web/app/api/video/playlistUrl/route.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use strict TypeScript and avoid any; prefer shared types from packages
Files:
apps/web/app/api/video/playlistUrl/route.ts
🧬 Code graph analysis (1)
apps/web/app/api/video/playlistUrl/route.ts (3)
- packages/database/schema.ts (2): videos (231-277), s3Buckets (362-372)
- apps/web/utils/helpers.ts (1): getHeaders (18-26)
- apps/web/utils/s3.ts (1): createBucketProvider (374-397)
🪛 Checkov (3.2.334)
.github/workflows/docker-build-web.yml
[MEDIUM] 42-43: Basic Auth Credentials
(CKV_SECRET_4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
🔇 Additional comments (1)
apps/web/app/api/video/playlistUrl/route.ts (1)
2-3: LGTM: moved to the per-bucket provider with a session import.
The imports align with the new signed-URL flow and bucket-aware lookups.
Also applies to: 8-9
- CAP_AWS_ACCESS_KEY: GCS HMAC Access Key ID
- CAP_AWS_SECRET_KEY: GCS HMAC Secret for the key
- CAP_AWS_BUCKET: set to the bucket name
- CAP_AWS_REGION: anything should work; GCS doesn't care, but the S3 library might need it to be non-empty
- CAP_AWS_ENDPOINT: set to https://storage.googleapis.com
- S3_PUBLIC_ENDPOINT: set to https://storage.googleapis.com
- S3_INTERNAL_ENDPOINT: set to https://storage.googleapis.com
CAP_ALLOWED_SIGNUP_DOMAINS was not removed in the original PR, so I presumed the signIn callback was removed by accident. Please correct me if otherwise.
P.S. I'm not sure where to put the GCS settings guide. If it's needed, let me know where a good place to document it would be.
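For context, the domain restriction conceptually reduces to a check like the sketch below; this is an illustration, not the actual packages/database/auth/domain-utils.ts implementation:
// Illustrative only: compare the email's domain against a comma-separated
// allow-list such as CAP_ALLOWED_SIGNUP_DOMAINS="example.com,acme.dev".
function isEmailAllowedForSignupSketch(email: string, allowedDomains?: string | null): boolean {
  // No configured domains means signups are unrestricted.
  if (!allowedDomains || allowedDomains.trim() === "") return true;
  const domain = email.split("@")[1]?.toLowerCase();
  if (!domain) return false;
  return allowedDomains
    .split(",")
    .map((d) => d.trim().toLowerCase())
    .filter(Boolean)
    .includes(domain);
}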