feat: Add toggle button to switch between dark and light mode #228

Open

RiH-137 wants to merge 1 commit into shopstr-eng:main from RiH-137:feature/dark-theme-animation

Conversation

@RiH-137 RiH-137 commented Feb 14, 2026

No description provided.

Copilot AI review requested due to automatic review settings February 14, 2026 15:25
Copilot AI left a comment

Pull request overview

Adds persistence for failed Nostr relay publishes by storing the failed event payload alongside relay lists, enabling later retrieval/retry from the DB.

Changes:

  • Extend trackFailedRelayPublish to optionally send the full NostrEvent payload.
  • Add failed_relay_publishes table (and indexes) including event_data.
  • Update DB API endpoints to store and fetch failed publishes using event_data in the tracking table.
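
To make the first change concrete, here is a minimal sketch of how an extended client helper might build its request body so the event payload stays optional for existing callers. The function name `buildTrackFailedPublishBody` and the exact field names are assumptions for illustration, not the PR's actual code.

```typescript
// Hypothetical sketch: the shape of a Nostr event as commonly defined.
interface NostrEvent {
  id: string;
  pubkey: string;
  created_at: number;
  kind: number;
  tags: string[][];
  content: string;
  sig: string;
}

// Builds the POST body for the track-failed-publish endpoint. The `event`
// parameter is optional, so callers that predate the change keep working.
function buildTrackFailedPublishBody(
  eventId: string,
  relays: string[],
  event?: NostrEvent
): { eventId: string; relays: string[]; event?: NostrEvent } {
  // Spread only includes the `event` key when a payload was provided.
  return { eventId, relays, ...(event ? { event } : {}) };
}
```

Keeping the new parameter optional (rather than overloading or forking the helper) is what lets the change touch seven files without breaking existing call sites.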

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| utils/nostr/nostr-helper-functions.ts | Passes full event objects into failed-publish tracking. |
| utils/db/db-service.ts | Creates failed_relay_publishes table and adds a migration guard for event_data. |
| utils/db/db-client.ts | Extends client helper to POST optional event payload to the API. |
| pages/api/db/track-failed-publish.ts | Persists event_data for failed publishes (and upserts it). |
| pages/api/db/get-failed-publishes.ts | Fetches retry candidates directly from failed_relay_publishes.event_data and parses JSON. |
| pages/api/db/clear-failed-publish.ts | Ensures the failed-publish table exists before clearing/updating. |
| db/schema.sql | Adds the failed_relay_publishes table and indexes to the schema. |
Comments suppressed due to low confidence (1)

pages/api/db/clear-failed-publish.ts:1

  • This handler repeats schema creation at request time. To avoid runtime DDL overhead and potential locking, move table creation to migrations/DB init and remove it from this request path.
```typescript
import type { NextApiRequest, NextApiResponse } from "next";
```


Comment on lines 26 to +38
```typescript
await client.query(`
  CREATE TABLE IF NOT EXISTS failed_relay_publishes (
    event_id TEXT PRIMARY KEY,
    relays TEXT NOT NULL,
    event_data TEXT,
    created_at BIGINT NOT NULL,
    retry_count INTEGER DEFAULT 0
  )
`);

await client.query(`
  ALTER TABLE failed_relay_publishes
  ADD COLUMN IF NOT EXISTS event_data TEXT
`);
```

Copilot AI Feb 14, 2026


Running DDL (CREATE TABLE / ALTER TABLE) on every API request adds avoidable latency and can create lock contention under load. Prefer performing this schema setup in your centralized migration/initialization path (e.g., initializeTables / schema.sql), and remove per-request DDL from the handler (or guard it behind a one-time startup init).
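
One common way to realize the "one-time startup init" the comment suggests is a module-level memoized promise, so concurrent requests share a single DDL run per process. This is a sketch under assumptions: `QueryFn` stands in for a query runner such as pg's `client.query`, and the function name is hypothetical.

```typescript
// A query runner abstraction; in practice this would be pg's client.query.
type QueryFn = (sql: string) => Promise<unknown>;

// Module-level cache: the DDL promise is created at most once per process.
let initPromise: Promise<void> | null = null;

function ensureFailedPublishTable(query: QueryFn): Promise<void> {
  // Subsequent callers (including concurrent ones) reuse the same promise,
  // so the CREATE TABLE statement never runs per-request.
  if (initPromise) return initPromise;
  initPromise = (async () => {
    await query(`
      CREATE TABLE IF NOT EXISTS failed_relay_publishes (
        event_id TEXT PRIMARY KEY,
        relays TEXT NOT NULL,
        event_data TEXT,
        created_at BIGINT NOT NULL,
        retry_count INTEGER DEFAULT 0
      )
    `);
  })();
  return initPromise;
}
```

Handlers would `await ensureFailedPublishTable(client.query)` instead of issuing DDL inline; a failed init leaves the rejected promise cached, which a fuller version would want to reset.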

Comment on lines +18 to +32
```typescript
await client.query(`
  CREATE TABLE IF NOT EXISTS failed_relay_publishes (
    event_id TEXT PRIMARY KEY,
    relays TEXT NOT NULL,
    event_data TEXT,
    created_at BIGINT NOT NULL,
    retry_count INTEGER DEFAULT 0
  )
`);

await client.query(`
  ALTER TABLE failed_relay_publishes
  ADD COLUMN IF NOT EXISTS event_data TEXT
`);
```


Copilot AI Feb 14, 2026


This endpoint also performs per-request DDL, which is expensive and can block concurrent reads/writes. Since the PR already adds the table/indexes via db-service and schema.sql, consider removing these queries here and relying on migrations/initialization instead.

Suggested change (remove these per-request DDL statements from the handler):

```typescript
await client.query(`
  CREATE TABLE IF NOT EXISTS failed_relay_publishes (
    event_id TEXT PRIMARY KEY,
    relays TEXT NOT NULL,
    event_data TEXT,
    created_at BIGINT NOT NULL,
    retry_count INTEGER DEFAULT 0
  )
`);
await client.query(`
  ALTER TABLE failed_relay_publishes
  ADD COLUMN IF NOT EXISTS event_data TEXT
`);
```

Comment on lines +42 to 54
```typescript
await client.query(
  `INSERT INTO failed_relay_publishes (event_id, relays, event_data, created_at, retry_count)
   VALUES ($1, $2, $3, $4, 0)
   ON CONFLICT (event_id) DO UPDATE SET
     relays = EXCLUDED.relays,
     event_data = EXCLUDED.event_data,
     created_at = EXCLUDED.created_at`,
  [
    eventId,
    JSON.stringify(relays),
    event ? JSON.stringify(event) : null,
    Math.floor(Date.now() / 1000),
  ]
);
```

Copilot AI Feb 14, 2026


The API accepts and stores arbitrary event JSON from the client without any size/shape validation. This can lead to DB bloat or request/DB DoS if very large payloads are posted. Consider validating that event matches the expected Nostr event shape and enforcing a reasonable size limit (e.g., reject if serialized length exceeds a threshold, and/or store only the minimal fields required for retry).
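
A validation guard along the lines the comment suggests could look like this. This is a minimal sketch, not the PR's code: the function name, the field checks (the standard Nostr event fields per NIP-01), and the 64 KiB threshold are all illustrative assumptions.

```typescript
// Illustrative size cap; the actual threshold would be a project decision.
const MAX_EVENT_BYTES = 64 * 1024;

// Returns true only if `event` looks like a Nostr event and serializes
// under the size limit; the handler would reject the request otherwise.
function isStorableNostrEvent(event: unknown): boolean {
  if (typeof event !== "object" || event === null) return false;
  const e = event as Record<string, unknown>;
  const shapeOk =
    typeof e.id === "string" &&
    typeof e.pubkey === "string" &&
    typeof e.sig === "string" &&
    typeof e.kind === "number" &&
    typeof e.created_at === "number" &&
    typeof e.content === "string" &&
    Array.isArray(e.tags);
  if (!shapeOk) return false;
  // Cheap size check on the serialized form before it reaches the DB.
  return JSON.stringify(event).length <= MAX_EVENT_BYTES;
}
```

The handler would call this before the INSERT and return a 400 on failure, which bounds both request size and row size in one place.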

Comment on lines +226 to +231
```sql
IF NOT EXISTS (
  SELECT 1 FROM information_schema.columns
  WHERE table_name = 'failed_relay_publishes' AND column_name = 'event_data'
) THEN
  ALTER TABLE failed_relay_publishes ADD COLUMN event_data TEXT;
END IF;
```

Copilot AI Feb 14, 2026


The information_schema.columns check doesn’t scope by schema. If multiple schemas exist (or if the app uses a non-public schema), this can produce false positives/negatives. Consider filtering by table_schema (e.g., current_schema() or a configured schema name) to make the migration deterministic.
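
A schema-scoped version of that guard might look like the following. This is a sketch assuming PostgreSQL and the default search path; a deployment pinned to a named schema would substitute that name for `current_schema()`.

```sql
-- Scope the column check to the active schema so the migration stays
-- deterministic when multiple schemas contain a table with this name.
DO $$
BEGIN
  IF NOT EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_schema = current_schema()
      AND table_name = 'failed_relay_publishes'
      AND column_name = 'event_data'
  ) THEN
    ALTER TABLE failed_relay_publishes ADD COLUMN event_data TEXT;
  END IF;
END $$;
```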

Comment on lines +44 to +56
```typescript
.map((row: any) => {
  try {
    return {
      eventId: row.event_id,
      relays: JSON.parse(row.relays),
      event: JSON.parse(row.event_data),
      retryCount: row.retry_count,
    };
  } catch {
    return null;
  }
})
.filter(Boolean);
```

Copilot AI Feb 14, 2026


The JSON parse failures are swallowed silently, which can make diagnosing bad data difficult (and can lead to events being skipped without visibility). Consider at least logging the event_id (and possibly whether relays vs event_data failed) when the catch triggers, or returning a structured error entry for observability.
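
Extracted as a standalone function with the logging the comment asks for, the row-parsing step might look like this. The row interface and function name are assumptions for illustration; the field mapping mirrors the handler's.

```typescript
// Assumed shape of a row from the failed_relay_publishes table.
interface FailedPublishRow {
  event_id: string;
  relays: string;
  event_data: string;
  retry_count: number;
}

interface RetryCandidate {
  eventId: string;
  relays: string[];
  event: unknown;
  retryCount: number;
}

// Parses one row; on bad JSON it logs the event_id before skipping,
// so unparseable rows are visible instead of silently dropped.
function parseFailedPublishRow(row: FailedPublishRow): RetryCandidate | null {
  try {
    return {
      eventId: row.event_id,
      relays: JSON.parse(row.relays),
      event: JSON.parse(row.event_data),
      retryCount: row.retry_count,
    };
  } catch (err) {
    // Log enough context to locate the bad row without dumping the payload.
    console.error(`Skipping failed publish ${row.event_id}: unparseable JSON`, err);
    return null;
  }
}
```

The handler would then be `rows.map(parseFailedPublishRow).filter(Boolean)`, with every skipped row leaving a trace in the logs.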

@calvadev (Collaborator)

This looks good, just address the copilot comments to move forward!
