feat(zoho-books-ingestion): zoho-desk-ingestion #1198

MohdShoaib-18169 wants to merge 2 commits into main
Conversation
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds a full Zoho Desk integration: frontend routes and sidebar, OAuth connector creation, admin/manual sync endpoints, a Zoho API client, queue-based ticket and attachment ingestion into Vespa (with OCR/spreadsheet parsing), AI prompt/context updates for tickets, new DB schemas and summary-generation pipelines, and supporting scripts/config.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User (Admin/User)
    participant FE as Frontend
    participant OAuth as Zoho OAuth
    participant API as Backend API
    participant DB as Database
    participant Queue as PgBoss
    participant Worker as Worker
    participant Vespa as Vespa
    User->>FE: Click "Zoho Desk" / open integration
    FE->>FE: Show status / OAuth modal
    User->>OAuth: Authorize, Zoho returns code/refresh token
    FE->>API: POST /admin/connector/create (refreshToken)
    API->>API: Verify token (ZohoDeskClient)
    API->>DB: insert connector / update oauthCredentials
    DB-->>API: connectorId
    API-->>FE: Success
    User->>FE: Click "Start Sync" (admin)
    FE->>API: POST /zoho-desk/start_sync
    API->>Queue: enqueue SyncZohoDeskQueue
    API-->>FE: Accepted
    Queue->>Worker: Sync job picked up
    Worker->>DB: fetch Zoho connectors
    Worker->>ZohoAPI: list tickets (batch)
    ZohoAPI-->>Worker: ticket lists
    Worker->>Queue: enqueue ProcessZohoDeskTicketQueue per ticket
    Worker->>Queue: enqueue ProcessZohoDeskAttachmentQueue for attachments
    Worker->>Vespa: upsert zoho_ticket document (transformer)
    Vespa-->>Worker: OK
    Worker->>Worker: process attachment jobs -> download attachment -> OCR/parse
    Worker->>Vespa: update attachment OCR text
    Vespa-->>Worker: OK
    Worker->>DB: update ingestion state, lastModifiedTime, sync history
    Worker-->>Queue: enqueue summary jobs (SUMMARY_QUEUE_NAME)
```
```mermaid
sequenceDiagram
    participant SummaryEnqueue as Producer
    participant SummaryQueue as SUMMARY_QUEUE
    participant SummaryWorker as SummaryWorker
    participant DB as DB
    participant Vespa as Vespa
    participant LLM as LLM Provider
    SummaryEnqueue->>SummaryQueue: enqueue individual/aggregate/whole jobs
    SummaryQueue->>SummaryWorker: worker picks job
    SummaryWorker->>DB: insert/update summary records
    SummaryWorker->>LLM: generate summary (generateText / provider)
    LLM-->>SummaryWorker: summary text
    SummaryWorker->>DB: update summary record (completed)
    SummaryWorker->>Vespa: update whole-resolution summary field when ready
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120+ minutes

Areas requiring extra attention:
Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Summary of Changes

Hello @MohdShoaib-18169, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request delivers a complete integration with Zoho Desk, allowing the system to ingest and make support tickets searchable. It establishes distinct administrative and user-facing interfaces for managing the integration, supported by new API endpoints and a resilient, queue-driven backend synchronization process. The changes also expand the AI's contextual understanding to incorporate Zoho Desk ticket data, significantly improving search relevance and the generation of intelligent follow-up questions.

Highlights
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes
Code Review
This pull request introduces a new integration for Zoho Desk, including frontend components for both admin and user configuration, and a comprehensive backend implementation for data ingestion, processing, and storage. The backend work is particularly thorough, with a robust API client, queue-based processing, and incremental sync logic. My review focuses on improving code maintainability, type safety, and fixing a critical issue with a disabled worker. Key suggestions include removing code duplication in the frontend, improving type safety with useQuery, replacing console.log with structured logging, and enabling the attachment processing worker.
server/queue/index.ts (outdated)
```typescript
//Zoho Desk Attachment Processing Worker - processes OCR for attachments
// await boss.work(
//   ProcessZohoDeskAttachmentQueue,
//   { batchSize: 5 },
//   async (jobs) => {
//     await Promise.all(
//       jobs.map(async (job) => {
//         const startTime = Date.now()
//         try {
//           await processAttachmentJob(job as PgBoss.Job<AttachmentJob>)
//           const endTime = Date.now()
//           syncJobSuccess.inc(
//             {
//               sync_job_name: ProcessZohoDeskAttachmentQueue,
//               sync_job_auth_type: AuthType.OAuth,
//             },
//             1,
//           )
//           syncJobDuration.observe(
//             {
//               sync_job_name: ProcessZohoDeskAttachmentQueue,
//               sync_job_auth_type: AuthType.OAuth,
//             },
//             endTime - startTime,
//           )
//         } catch (error) {
//           const errorMessage = getErrorMessage(error)
//           Logger.error(
//             error,
//             `Error processing Zoho Desk attachment ${errorMessage} ${(error as Error).stack}`,
//           )
//           syncJobError.inc(
//             {
//               sync_job_name: ProcessZohoDeskAttachmentQueue,
//               sync_job_auth_type: AuthType.OAuth,
//               sync_job_error_type: `${errorMessage}`,
//             },
//             1,
//           )
//         }
//       }),
//     )
//   },
// )
```
The worker for ProcessZohoDeskAttachmentQueue is commented out. This is a critical issue as it means that attachment processing jobs, which are queued by processTicketJob, will never be executed. As a result, attachments from Zoho Desk tickets will not be processed for OCR or indexed. This code must be uncommented for the feature to function as intended.
```typescript
await boss.work(
  ProcessZohoDeskAttachmentQueue,
  { batchSize: 5 },
  async (jobs) => {
    await Promise.all(
      jobs.map(async (job) => {
        const startTime = Date.now()
        try {
          await processAttachmentJob(job as PgBoss.Job<AttachmentJob>)
          const endTime = Date.now()
          syncJobSuccess.inc(
            {
              sync_job_name: ProcessZohoDeskAttachmentQueue,
              sync_job_auth_type: AuthType.OAuth,
            },
            1,
          )
          syncJobDuration.observe(
            {
              sync_job_name: ProcessZohoDeskAttachmentQueue,
              sync_job_auth_type: AuthType.OAuth,
            },
            endTime - startTime,
          )
        } catch (error) {
          const errorMessage = getErrorMessage(error)
          Logger.error(
            error,
            `Error processing Zoho Desk attachment ${errorMessage} ${(error as Error).stack}`,
          )
          syncJobError.inc(
            {
              sync_job_name: ProcessZohoDeskAttachmentQueue,
              sync_job_auth_type: AuthType.OAuth,
              sync_job_error_type: `${errorMessage}`,
            },
            1,
          )
        }
      }),
    )
  },
)
```

```typescript
beforeLoad: async ({ params }) => {
  return params
},
loader: async (params) => {
  return params
},
```
```typescript
console.log("\n🔑 ZOHO TOKEN REFRESH: Calling Zoho OAuth API")
console.log(` Endpoint: https://${this.config.accountsDomain}/oauth/v2/token`)
console.log(` Timeout: 30 seconds`)
console.log(` Client ID: ${this.config.clientId?.substring(0, 20)}...`)
console.log("")

const response = await this.accountsClient.post<ZohoTokenResponse>(
  "/token",
  querystring.stringify(tokenData),
)

console.log("\n✅ ZOHO TOKEN REFRESH: Received response from Zoho")
console.log(` Status: ${response.status} ${response.statusText}`)
console.log(` Has access_token: ${!!response.data?.access_token}`)
console.log("")
```
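The banner-style console.log calls above are what the review asks to replace with structured logging. As a hedged sketch (the repo's real getLogger/Subsystem API may differ; the serializer below is a stand-in, and the field names are assumptions), the same information can be emitted as one machine-parseable line per event:

```typescript
// Minimal stand-in for a structured logger; the repo's getLogger/Subsystem
// helpers are assumed to expose something similar.
type LogFields = Record<string, unknown>

// Builds one JSON line per event instead of a multi-line console banner.
function serializeLog(
  level: string,
  subsystem: string,
  msg: string,
  fields: LogFields,
): string {
  return JSON.stringify({ level, subsystem, msg, ...fields })
}

const Logger = {
  info: (fields: LogFields, msg: string) =>
    console.log(serializeLog("info", "ZohoDesk", msg, fields)),
}

// Equivalent of the token-refresh banner, without decorative padding lines:
Logger.info(
  { endpoint: "/oauth/v2/token", timeoutMs: 30_000 },
  "Refreshing Zoho access token",
)
```

Structured lines like this can be filtered by level and subsystem in log tooling, which raw banners cannot.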
```typescript
const [, setZohoDeskStatus] = useState("")
const [isStartingSyncLoading, setIsStartingSyncLoading] = useState(false)

const { isPending, data, refetch } = useQuery<any[]>({
```
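One hedged way to tighten `useQuery<any[]>`, sketched here without the React Query import: declare the connector shape and validate at the fetch boundary, so the hook can be typed as `useQuery<Connector[]>` and downstream code stops seeing `any`. The `Connector` fields below are assumptions, not the repo's actual type:

```typescript
// Assumed shape of a connector row returned by the admin API (hypothetical).
interface Connector {
  id: string
  app: string
  status?: string
}

// Runtime guard so a malformed API payload fails fast instead of leaking `any`.
function isConnector(v: unknown): v is Connector {
  if (typeof v !== "object" || v === null) return false
  const c = v as Record<string, unknown>
  return typeof c.id === "string" && typeof c.app === "string"
}

// Intended for use inside queryFn, e.g.
//   const data = parseConnectors(await res.json())
function parseConnectors(data: unknown): Connector[] {
  if (!Array.isArray(data)) return []
  return data.filter(isConnector)
}

console.log(parseConnectors([{ id: "1", app: "zoho-desk" }, { bad: true }]))
```

With the guard in place, the generic on `useQuery` documents the real shape instead of suppressing the type checker.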
```typescript
beforeLoad: async ({ params, context }) => {
  return params
},
loader: async (params) => {
  return params
},
```
```tsx
<li
  className={`group flex justify-between items-center ${location.pathname.includes("/integrations/zoho-desk") ? "bg-[#EBEFF2] dark:bg-slate-700" : ""} hover:bg-[#EBEFF2] dark:hover:bg-slate-700 rounded-[6px] pt-[8px] pb-[8px] ml-[8px] mr-[8px] cursor-pointer`}
  onClick={() => {
    router.navigate({
      to:
        role === UserRole.SuperAdmin || role === UserRole.Admin
          ? "/admin/integrations/zoho-desk"
          : "/integrations/zoho-desk",
    })
  }}
>
  <span className="text-[14px] font-semibold text-blue-600 dark:text-blue-400 ml-[8px]">Z</span>
  <span className="text-[14px] dark:text-gray-200 pl-[10px] pr-[10px] truncate cursor-pointer flex-grow max-w-[250px]">
    Zoho Desk
  </span>
</li>
```
There is significant code duplication among the <li> elements for each integration. This makes the component harder to maintain. Consider creating a reusable IntegrationSidebarItem component to encapsulate the shared logic and styling. This would make the code cleaner and easier to update in the future.
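To make the duplication concrete: the only parts that vary per tile are the label, the initial, and the slug, while the role-based route choice is pure logic. A hedged sketch of the data-driven shape such an IntegrationSidebarItem could consume (the names `IntegrationItem` and `integrationRoute` are hypothetical, and the JSX wrapper is omitted):

```typescript
// Hypothetical per-tile config; each duplicated <li> becomes one entry here.
interface IntegrationItem {
  label: string   // e.g. "Zoho Desk"
  initial: string // e.g. "Z"
  slug: string    // e.g. "zoho-desk"
}

type Role = "SuperAdmin" | "Admin" | "User"

// Pure helper extracted from the inline ternary: admins get the /admin route.
function integrationRoute(slug: string, role: Role): string {
  return role === "SuperAdmin" || role === "Admin"
    ? `/admin/integrations/${slug}`
    : `/integrations/${slug}`
}

const items: IntegrationItem[] = [
  { label: "Zoho Desk", initial: "Z", slug: "zoho-desk" },
]

// A shared IntegrationSidebarItem component would map over `items`
// and call integrationRoute(item.slug, role) in its onClick handler.
console.log(integrationRoute(items[0].slug, "Admin"))
```

Adding the next integration then means appending one object to `items` instead of copying a 15-line JSX block.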
```typescript
// TODO: Fix enrichment - these API endpoints don't exist in Zoho Desk API v1
// Commenting out for now to avoid errors
/*
// Fetch createdBy agent info if it's just an ID
if (fullTicket.createdBy && typeof fullTicket.createdBy === 'string') {
  const createdByAgent = await client.fetchAgentById(fullTicket.createdBy)
  if (createdByAgent) {
    fullTicket.createdBy = createdByAgent as any
    Logger.info("✅ Enriched ticket with createdBy agent info", {
      ticketId,
      createdByEmail: createdByAgent.email,
    })
  }
}

// Fetch modifiedBy agent info if it's just an ID
if (fullTicket.modifiedBy && typeof fullTicket.modifiedBy === 'string') {
  const modifiedByAgent = await client.fetchAgentById(fullTicket.modifiedBy)
  if (modifiedByAgent) {
    fullTicket.modifiedBy = modifiedByAgent as any
    Logger.info("✅ Enriched ticket with modifiedBy agent info", {
      ticketId,
      modifiedByEmail: modifiedByAgent.email,
    })
  }
}

// Fetch account info if we only have accountId
if (fullTicket.accountId && !fullTicket.account) {
  const account = await client.fetchAccountById(fullTicket.accountId)
  if (account) {
    fullTicket.account = account as any
    Logger.info("✅ Enriched ticket with account info", {
      ticketId,
      accountName: account.accountName,
    })
  }
}

// Fetch product info if we only have productId
if (fullTicket.productId && !fullTicket.product) {
  const product = await client.fetchProductById(fullTicket.productId)
  if (product) {
    fullTicket.product = product as any
    Logger.info("✅ Enriched ticket with product info", {
      ticketId,
      productName: product.productName,
    })
  }
}

// Fetch team info if we only have teamId
if (fullTicket.teamId && !fullTicket.team) {
  const team = await client.fetchTeamById(fullTicket.teamId)
  if (team) {
    fullTicket.team = team as any
    Logger.info("✅ Enriched ticket with team info", {
      ticketId,
      teamName: team.name,
    })
  }
}
*/
```
```typescript
console.log("\n📥 RAW ZOHO API RESPONSE - TICKET")
console.log("=".repeat(80))
console.log(JSON.stringify(fullTicket, null, 2))
console.log("=".repeat(80))
console.log("")
```
```typescript
console.log("\n🆕 FIRST-TIME SYNC - Fetching from past year")
console.log(` Start Date: ${lastModifiedTime}`)
console.log(` End Date: ${now}`)
console.log(` Connector ID: ${connectorId}`)
```
```typescript
  agentWhiteList,
}: IntegrationProps) => {
  const navigate = useNavigate()
  const [, setZohoDeskStatus] = useState("")
```
The state variable zohoDeskStatus is initialized but its value is never read. The setter setZohoDeskStatus is used, but since the value is not used for rendering or logic, this useState call is unnecessary and can be removed to clean up the code.
```typescript
const [isStartingSyncLoading, setIsStartingSyncLoading] = useState(false)
```
Actionable comments posted: 7
🧹 Nitpick comments (3)
server/shared/types.ts (1)

51-96: Switch to z.enum for ZohoDeskEntity.

z.nativeEnum is deprecated in Zod v4; using z.enum(ZohoDeskEntity) keeps us on the happy path and aligns with our migration guidance.

```diff
-export const ZohoDeskEntitySchema = z.nativeEnum(ZohoDeskEntity)
+export const ZohoDeskEntitySchema = z.enum(ZohoDeskEntity)
```

Based on learnings.
server/search/vespaService.ts (1)

21-46: Export the Zoho schema literal for reuse.

We already have multiple scripts reaching for "zoho_ticket"; exporting this constant lets them import it and drop duplicated strings/casts.

```diff
-const zohoTicketSchema = "zoho_ticket" as const
+export const zohoTicketSchema = "zoho_ticket" as const
```

server/types.ts (1)

249-252: Use enum constant instead of raw literal.

We already import Apps, so we can reference the enum directly instead of duplicating the raw string. That keeps the schema in sync if the enum ever changes.

```diff
-  app: z.literal("zoho-desk" as const).optional(),
+  app: z.literal(Apps.ZohoDesk).optional(),
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (24)
frontend/src/components/IntegrationsSidebar.tsx (1 hunks)
frontend/src/routes/_authenticated/admin/integrations/zoho-desk.tsx (1 hunks)
frontend/src/routes/_authenticated/integrations/zoho-desk.tsx (1 hunks)
server/ai/prompts.ts (16 hunks)
server/ai/types.ts (2 hunks)
server/api/admin.ts (5 hunks)
server/api/oauth.ts (3 hunks)
server/check-ticket.ts (1 hunks)
server/config.ts (2 hunks)
server/db/schema/connectors.ts (2 hunks)
server/db/schema/ingestions.ts (2 hunks)
server/download-attachment.ts (1 hunks)
server/integrations/zoho/client.ts (1 hunks)
server/integrations/zoho/queue.ts (1 hunks)
server/integrations/zoho/sync.ts (1 hunks)
server/integrations/zoho/transformer.ts (1 hunks)
server/integrations/zoho/types.ts (1 hunks)
server/queue/index.ts (5 hunks)
server/search/vespaService.ts (2 hunks)
server/server.ts (5 hunks)
server/shared/types.ts (3 hunks)
server/sync-server.ts (5 hunks)
server/types.ts (2 hunks)
server/vespa/services.xml (1 hunks)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-07-04T07:00:46.837Z
Learnt from: naSim087
Repo: xynehq/xyne PR: 638
File: server/integrations/slack/index.ts:793-824
Timestamp: 2025-07-04T07:00:46.837Z
Learning: In server/integrations/slack/index.ts, the dual IngestionState instances follow a specific pattern: `ingestionState` stores the current state that gets updated during processing, while `ingestionOldState` stores the old/previously saved state used for resumption logic when restarting ingestion.
Applied to files:
server/db/schema/connectors.ts
server/db/schema/ingestions.ts
📚 Learning: 2025-06-10T05:40:04.427Z
Learnt from: naSim087
Repo: xynehq/xyne PR: 525
File: frontend/src/routes/_authenticated/admin/integrations/slack.tsx:141-148
Timestamp: 2025-06-10T05:40:04.427Z
Learning: In frontend/src/routes/_authenticated/admin/integrations/slack.tsx, the ConnectAction enum and related connectAction state (lines 141-148, 469-471) are intentionally kept for future development, even though they appear unused in the current implementation.
Applied to files:
frontend/src/routes/_authenticated/integrations/zoho-desk.tsx
frontend/src/routes/_authenticated/admin/integrations/zoho-desk.tsx
📚 Learning: 2025-10-27T09:26:06.403Z
Learnt from: Sithaarth24
Repo: xynehq/xyne PR: 1127
File: server/db/schema/userWorkflowPermissions.ts:94-94
Timestamp: 2025-10-27T09:26:06.403Z
Learning: In Zod v4 and later, z.nativeEnum() is deprecated. Use z.enum() directly with TypeScript enums instead. The syntax z.enum(MyEnum) is now valid for native TypeScript enums.
Applied to files:
server/shared/types.ts
📚 Learning: 2025-08-11T14:10:56.008Z
Learnt from: naSim087
Repo: xynehq/xyne PR: 725
File: server/server.ts:784-811
Timestamp: 2025-08-11T14:10:56.008Z
Learning: In the xyne application (server/server.ts), the following endpoints are intentionally accessible to regular authenticated users (not just admins):
- POST /oauth/create - allows users to create OAuth providers
- POST /slack/ingest_more_channel - allows users to ingest Slack channels
- POST /slack/start_ingestion - allows users to start Slack ingestion
- DELETE /oauth/connector/delete - allows users to delete OAuth connectors
- POST /connector/update_status - allows users to update connector status
- GET /connectors/all - allows users to view all connectors
- GET /oauth/global-slack-provider - allows users to check for global Slack provider
These endpoints enable self-service integration management for users.
Applied to files:
server/server.ts
server/api/admin.ts
📚 Learning: 2025-05-28T10:47:41.020Z
Learnt from: naSim087
Repo: xynehq/xyne PR: 484
File: server/integrations/google/sync.ts:222-222
Timestamp: 2025-05-28T10:47:41.020Z
Learning: The functions `handleGoogleDriveChange` and `getDriveChanges` in `server/integrations/google/sync.ts` are intentionally exported for future changes, even though they are not currently being imported by other modules.
Applied to files:
server/queue/index.ts
server/integrations/zoho/sync.ts
server/sync-server.ts
📚 Learning: 2025-06-06T08:12:29.547Z
Learnt from: naSim087
Repo: xynehq/xyne PR: 506
File: server/scripts/clear-vespa-data.ts:290-290
Timestamp: 2025-06-06T08:12:29.547Z
Learning: Test files and utility scripts in the scripts directory may have more relaxed type safety standards compared to production code, and concerns about type casting patterns like "as any as VespaSchema" may be less critical in these contexts.
Applied to files:
server/search/vespaService.ts
🧬 Code graph analysis (15)
frontend/src/routes/_authenticated/integrations/zoho-desk.tsx (4)
  frontend/src/routes/_authenticated/admin/integrations/zoho-desk.tsx (2): getConnectors (31-45), Route (397-418)
  frontend/src/api.ts (1): api (5-5)
  frontend/src/oauth/index.ts (1): OAuthModal (3-126)
  frontend/src/components/IntegrationsSidebar.tsx (1): IntegrationsSidebar (10-125)

server/types.ts (1)
  server/api/admin.ts (1): CreateZohoDeskConnector (515-612)

server/download-attachment.ts (2)
  server/integrations/zoho/client.ts (2): downloadAttachment (605-636), ZohoDeskClient (20-899)
  server/db/schema/connectors.ts (1): connectors (60-109)

server/integrations/zoho/transformer.ts (3)
  server/logger/index.ts (2): getLogger (36-93), Subsystem (15-15)
  server/integrations/zoho/types.ts (3): ZohoTicket (75-164), ZohoThread (48-72), ZohoAttachment (13-28)
  server/shared/fileUtils.ts (1): getFileType (10-32)

server/check-ticket.ts (1)
  server/search/vespa.ts (1): GetDocument (22-22)

server/integrations/zoho/client.ts (3)
  server/logger/index.ts (2): getLogger (36-93), Subsystem (15-15)
  server/integrations/zoho/types.ts (10): ZohoDeskConfig (192-199), ZohoErrorResponse (185-189), ZohoTokenResponse (5-10), ZohoIngestionOptions (202-207), ZohoTicketListResponse (176-176), ZohoTicket (75-164), ZohoThreadListResponse (179-179), ZohoThread (48-72), ZohoCommentListResponse (182-182), ZohoAttachment (13-28)
  server/utils.ts (1): delay (137-138)

server/server.ts (3)
  server/types.ts (2): createZohoDeskConnectorSchema (248-252), CreateZohoDeskConnector (254-254)
  server/api/admin.ts (1): CreateZohoDeskConnector (515-612)
  server/api/oauth.ts (1): OAuthCallback (43-345)

server/integrations/zoho/queue.ts (10)
  server/logger/index.ts (2): getLogger (36-93), Subsystem (15-15)
  server/search/vespa.ts (3): GetDocument (22-22), UpdateDocument (24-24), insert (21-21)
  server/sheetChunk.ts (1): chunkSheetWithHeaders (408-465)
  server/integrations/zoho/client.ts (1): ZohoDeskClient (20-899)
  server/integrations/zoho/transformer.ts (4): transformZohoTicketToVespa (108-215), VespaZohoTicket (78-84), VespaZohoTicketBase (9-75), VespaAttachmentType (86-94)
  server/shared/types.ts (1): Apps (40-40)
  server/queue/index.ts (2): boss (45-45), ProcessZohoDeskAttachmentQueue (58-58)
  server/lib/chunkByOCR.ts (1): chunkByOCRFromBuffer (666-753)
  server/db/ingestion.ts (3): getIngestionById (110-121), updateIngestionMetadata (90-105), updateIngestionStatus (52-85)
  server/db/schema/connectors.ts (2): SelectConnector (157-157), connectors (60-109)

server/queue/index.ts (5)
  server/queue/boss.ts (1): boss (4-7)
  server/integrations/zoho/sync.ts (1): handleZohoDeskSync (42-105)
  server/metrics/sync/sync-metrics.ts (3): syncJobSuccess (13-17), syncJobDuration (4-9), syncJobError (34-38)
  server/utils.ts (1): getErrorMessage (103-106)
  server/integrations/zoho/queue.ts (2): processTicketJob (177-606), TicketJob (153-159)

server/api/admin.ts (7)
  server/db/schema/oauthProviders.ts (1): SelectOAuthProvider (72-72)
  server/db/oauthProvider.ts (1): getOAuthProvider (36-51)
  server/db/connector.ts (2): db (3-3), insertConnector (42-91)
  server/db/user.ts (1): getUserByEmail (148-157)
  server/integrations/zoho/client.ts (1): ZohoDeskClient (20-899)
  server/utils.ts (1): getErrorMessage (103-106)
  server/queue/index.ts (1): SyncZohoDeskQueue (56-56)

server/integrations/zoho/sync.ts (6)
  server/logger/index.ts (2): getLogger (36-93), Subsystem (15-15)
  server/db/schema/connectors.ts (3): SelectConnector (157-157), ZohoDeskOAuthIngestionState (153-155), connectors (60-109)
  server/db/ingestion.ts (3): createIngestion (15-24), updateIngestionStatus (52-85), updateIngestionMetadata (90-105)
  server/queue/index.ts (2): boss (45-45), ProcessZohoDeskTicketQueue (57-57)
  server/db/syncHistory.ts (1): insertSyncHistory (10-30)
  server/integrations/zoho/queue.ts (1): TicketJob (153-159)

frontend/src/routes/_authenticated/admin/integrations/zoho-desk.tsx (4)
  frontend/src/routes/_authenticated/integrations/zoho-desk.tsx (3): IntegrationProps (22-26), getConnectors (29-39), Route (216-235)
  frontend/src/api.ts (1): api (5-5)
  frontend/src/components/Sidebar.tsx (1): Sidebar (40-390)
  frontend/src/components/IntegrationsSidebar.tsx (1): IntegrationsSidebar (10-125)

server/sync-server.ts (2)
  server/types.ts (1): startZohoDeskSyncSchema (580-582)
  server/api/admin.ts (1): StartZohoDeskSyncApi (2473-2514)

server/ai/prompts.ts (2)
  server/shared/types.ts (2): Apps (40-40), GooglePeopleEntity (33-33)
  server/ai/context.ts (1): userContext (1122-1141)

server/api/oauth.ts (4)
  server/db/oauthProvider.ts (1): getOAuthProvider (36-51)
  server/db/connector.ts (3): db (3-3), insertConnector (42-91), updateConnector (479-496)
  server/integrations/zoho/client.ts (1): ZohoDeskClient (20-899)
  server/db/schema/connectors.ts (1): SelectConnector (157-157)
🔇 Additional comments (6)

server/check-ticket.ts (1)

3-21: LGTM: handy Vespa smoke test.

Nice little helper to confirm Zoho tickets are making it into Vespa; no blockers from my side.

frontend/src/components/IntegrationsSidebar.tsx (1)

74-89: LGTM: Zoho Desk entry wired like the other SaaS tiles.

The role-based routing mirrors Google/Microsoft and keeps the UX consistent.

server/vespa/services.xml (1)

71-71: LGTM: zoho_ticket registered with the content cluster.

With the document type in place, Vespa can accept Zoho tickets immediately.

server/db/schema/connectors.ts (1)

129-156: LGTM: Zoho Desk ingestion state covered.

Typing the new state and folding it into the discriminated union keeps connector resumes predictable.

server/server.ts (1)

1238-1253: LGTM: Zoho Desk admin endpoints are properly scoped.

Living under /admin with the existing role middleware and validators keeps the new connector flow constrained to admins.

server/integrations/zoho/queue.ts (1)

233-378: Stop dumping full Zoho payloads to logs.

These console.log(JSON.stringify(...)) blocks emit entire tickets, threads, comments, attachments, etc. into stdout. That's a ton of customer PII/content and, in production, a serious compliance/privacy problem. Please remove (or strictly gate behind an opt-in debug flag that never runs in production) all of these raw dumps, including the similar blocks later in the file for threads/comments/attachments.

```diff
-console.log("\n📥 RAW ZOHO API RESPONSE - TICKET")
-console.log("=".repeat(80))
-console.log(JSON.stringify(fullTicket, null, 2))
-console.log("=".repeat(80))
-console.log("")
+// Optionally add a debug log with sanitized metadata if needed
```

⛔ Skipped due to learnings
Learnt from: Aman-Asrani-Juspay
Repo: xynehq/xyne PR: 777
File: server/api/agent/workflowAgentUtils.ts:95-103
Timestamp: 2025-09-11T08:30:51.431Z
Learning: The development team prefers to keep detailed logging (including user emails, attachment IDs, and content previews) during integration phases for debugging purposes, even if it includes potentially sensitive information. They prioritize debugging capability over log privacy during development.
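Even with the team's preference for verbose logging during development, the reviewer's "opt-in debug flag that never runs in production" suggestion is easy to satisfy. A hedged sketch (the helper and flag wiring are hypothetical; the enabled check is passed in explicitly so it can be bound to an environment variable that defaults to off):

```typescript
// Hypothetical gate for raw-payload debug dumps. In real code, `enabled`
// would come from something like process.env.ZOHO_DEBUG_PAYLOADS === "true",
// which is never set in production.
function debugDump(enabled: boolean, label: string, payload: unknown): string | null {
  if (!enabled) return null // default path: nothing reaches stdout
  const line = `${label}: ${JSON.stringify(payload)}`
  console.log(line)
  return line
}

// Off by default: the full ticket never hits the logs.
debugDump(false, "RAW ZOHO TICKET", { id: "123" })
```

This keeps the debugging capability one environment variable away while making "silent by default" the guaranteed production behavior.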
```typescript
url.searchParams.set("redirect_uri", `${config.host}/callback`)
url.searchParams.set("access_type", "offline")
url.searchParams.set("state", newState)
```
Fix Zoho OAuth redirect path.
We serve the OAuth callback at /oauth/callback, but this branch registers Zoho with ${config.host}/callback. Zoho will redirect to /callback, miss our handler, and the flow aborts. Please align the redirect URI with the actual callback route here (and in the token exchange code) to keep the OAuth loop intact.
```diff
-url.searchParams.set("redirect_uri", `${config.host}/callback`)
+url.searchParams.set("redirect_uri", `${config.host}/oauth/callback`)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-url.searchParams.set("redirect_uri", `${config.host}/callback`)
+url.searchParams.set("redirect_uri", `${config.host}/oauth/callback`)
 url.searchParams.set("access_type", "offline")
 url.searchParams.set("state", newState)
```
🤖 Prompt for AI Agents
In server/api/admin.ts around lines 339 to 341, the Zoho OAuth redirect_uri is
set to `${config.host}/callback` but the app expects `/oauth/callback`; update
this value to `${config.host}/oauth/callback` and make the corresponding change
in the token-exchange/request handling code so both the authorization request
and the token exchange use the same `/oauth/callback` path.
```typescript
  }).toString(),
})
```
Use the correct Zoho redirect URI.
The Zoho token exchange must echo the same redirect URI we registered (/oauth/callback). Sending /callback causes Zoho to reject the code exchange with an invalid redirect error. Please update the URI to /oauth/callback to restore the flow.
```diff
-redirect_uri: `${config.host}/callback`,
+redirect_uri: `${config.host}/oauth/callback`,
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In server/api/oauth.ts around lines 167 to 168, the token exchange is sending
the wrong redirect URI (/callback) which Zoho rejects; replace the redirect
parameter value with the registered path /oauth/callback so the exchange echoes
the exact redirect URI used during authorization, update the code that builds
the token request body or URL to use "/oauth/callback" and ensure any
URL-encoding preserves that exact string.
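Since the authorization URL and the token exchange must echo byte-identical redirect URIs, one hedged way to prevent this class of drift is a single shared constant. `OAUTH_CALLBACK_PATH` and `zohoRedirectUri` are hypothetical names, not existing code in the repo:

```typescript
// Single source of truth for the registered Zoho redirect path.
const OAUTH_CALLBACK_PATH = "/oauth/callback"

// Used by both the authorization-URL builder and the token-exchange body,
// so the two call sites can never disagree on the path.
function zohoRedirectUri(host: string): string {
  return `${host}${OAUTH_CALLBACK_PATH}`
}

console.log(zohoRedirectUri("https://app.example.com"))
// → https://app.example.com/oauth/callback
```

Both the `url.searchParams.set("redirect_uri", ...)` call and the token request body would then reference the same helper instead of interpolating the path inline.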
```typescript
const results = await db
  .select()
  .from(connectors)
  .where(eq(connectors.app, "zoho-desk"))
  .limit(1)

const connector = results[0]
if (!connector) {
  console.log("No Zoho Desk connector found")
  return
}

const credentials = JSON.parse(connector.credentials as string)

const client = new ZohoDeskClient({
  orgId: credentials.orgId || "",
  clientId: credentials.clientId,
  clientSecret: credentials.clientSecret,
  refreshToken: credentials.refreshToken,
})

const url = "https://desk.zoho.com/api/v1/tickets/458844000263860317/threads/458844000264135441/attachments/458844000264135446/content"

console.log("Downloading attachment from Zoho...")
console.log("URL:", url)

const buffer = await client.downloadAttachmentFromUrl(url)

console.log("Downloaded successfully!")
console.log("Size:", buffer.length, "bytes")
console.log("First 100 bytes as text:", buffer.toString('utf8', 0, Math.min(100, buffer.length)))

const fs = await import('fs/promises')
await fs.writeFile('/tmp/zoho-attachment.png', buffer)
```
Handle connectors that only have OAuth credentials.
Zoho Desk connectors produced by the OAuth callback typically leave credentials null and store everything in oauthCredentials. In that case JSON.parse(connector.credentials as string) throws before we even hit the API, so the script can’t be used with the very connectors it’s meant to debug. Please guard against null and fall back to whichever column is populated (and bail cleanly if neither is).
```diff
-import { connectors } from "@/db/schema"
-import { eq } from "drizzle-orm"
+import { connectors } from "@/db/schema"
+import { eq } from "drizzle-orm"
+import { Apps } from "@/shared/types"
 ...
-  .where(eq(connectors.app, "zoho-desk"))
+  .where(eq(connectors.app, Apps.ZohoDesk))
 ...
-  const credentials = JSON.parse(connector.credentials as string)
+  const rawCredentials =
+    connector.credentials ?? connector.oauthCredentials
+  if (!rawCredentials) {
+    console.log("Connector has no stored Zoho Desk credentials")
+    return
+  }
+  const credentials = JSON.parse(rawCredentials)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const results = await db
  .select()
  .from(connectors)
  .where(eq(connectors.app, "zoho-desk"))
  .limit(1)
const connector = results[0]
if (!connector) {
  console.log("No Zoho Desk connector found")
  return
}
const credentials = JSON.parse(connector.credentials as string)
const client = new ZohoDeskClient({
  orgId: credentials.orgId || "",
  clientId: credentials.clientId,
  clientSecret: credentials.clientSecret,
  refreshToken: credentials.refreshToken,
})
const url = "https://desk.zoho.com/api/v1/tickets/458844000263860317/threads/458844000264135441/attachments/458844000264135446/content"
console.log("Downloading attachment from Zoho...")
console.log("URL:", url)
const buffer = await client.downloadAttachmentFromUrl(url)
console.log("Downloaded successfully!")
console.log("Size:", buffer.length, "bytes")
console.log("First 100 bytes as text:", buffer.toString('utf8', 0, Math.min(100, buffer.length)))
const fs = await import('fs/promises')
await fs.writeFile('/tmp/zoho-attachment.png', buffer)
```
```typescript
const results = await db
  .select()
  .from(connectors)
  .where(eq(connectors.app, Apps.ZohoDesk))
  .limit(1)
const connector = results[0]
if (!connector) {
  console.log("No Zoho Desk connector found")
  return
}
const rawCredentials =
  connector.credentials ?? connector.oauthCredentials
if (!rawCredentials) {
  console.log("Connector has no stored Zoho Desk credentials")
  return
}
const credentials = JSON.parse(rawCredentials)
const client = new ZohoDeskClient({
  orgId: credentials.orgId || "",
  clientId: credentials.clientId,
  clientSecret: credentials.clientSecret,
  refreshToken: credentials.refreshToken,
})
const url = "https://desk.zoho.com/api/v1/tickets/458844000263860317/threads/458844000264135441/attachments/458844000264135446/content"
console.log("Downloading attachment from Zoho...")
console.log("URL:", url)
const buffer = await client.downloadAttachmentFromUrl(url)
console.log("Downloaded successfully!")
console.log("Size:", buffer.length, "bytes")
console.log("First 100 bytes as text:", buffer.toString('utf8', 0, Math.min(100, buffer.length)))
const fs = await import('fs/promises')
await fs.writeFile('/tmp/zoho-attachment.png', buffer)
```
🤖 Prompt for AI Agents
In server/download-attachment.ts around lines 7 to 40, the code blindly
JSON.parse(connector.credentials) which throws when connectors use
oauthCredentials instead; change it to: check if connector.credentials is
non-null and non-empty then parse it, else check connector.oauthCredentials and
parse that, and if neither exist log a clear message and return; ensure parsing
is wrapped in try/catch to log parse errors and return, and then construct
ZohoDeskClient using fields from the parsed object (with orgId defaulting to
""), so the script works for both OAuth and non‑OAuth connectors and fails
cleanly when credentials are missing or malformed.
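The guarded flow described above can be sketched as a small standalone helper. The `ConnectorRow` shape and `parseConnectorCredentials` name are hypothetical illustrations (not code from this PR); only the `credentials`/`oauthCredentials` column names come from the connector row under review:

```typescript
// Hypothetical helper: pick whichever credentials column is populated
// and parse it defensively, returning null instead of throwing.
interface ConnectorRow {
  credentials: string | null
  oauthCredentials: string | null
}

function parseConnectorCredentials(
  connector: ConnectorRow,
): Record<string, string> | null {
  // Prefer the plain credentials column, fall back to oauthCredentials
  const raw = connector.credentials ?? connector.oauthCredentials
  if (!raw) return null
  try {
    return JSON.parse(raw)
  } catch {
    // Malformed JSON: return null so callers can bail cleanly
    return null
  }
}
```

Callers can then check for `null` once and log a single clear message, which covers both the missing-credentials and malformed-JSON cases the prompt calls out.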
```typescript
private static globalAccessToken: string | null = null
private static globalTokenExpiresAt: number | null = null
private static ongoingRefresh: Promise<string> | null = null

constructor(config: ZohoDeskConfig) {
  this.config = {
    apiDomain: "desk.zoho.com",
    accountsDomain: "accounts.zoho.com",
    ...config,
  }

  // API client for Zoho Desk API
  this.apiClient = axios.create({
    baseURL: `https://${this.config.apiDomain}/api/v1`,
    timeout: 30000,
    headers: {
      "Content-Type": "application/json",
    },
  })

  // Accounts client for OAuth
  this.accountsClient = axios.create({
    baseURL: `https://${this.config.accountsDomain}/oauth/v2`,
    timeout: 30000, // 30 seconds for OAuth token refresh (Zoho can be slow)
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
    },
  })

  // Add response interceptor for error handling
  this.apiClient.interceptors.response.use(
    (response) => response,
    (error: AxiosError<ZohoErrorResponse>) => {
      if (error.response) {
        logger.error("Zoho API error", {
          status: error.response.status,
          errorCode: error.response.data?.errorCode,
          message: error.response.data?.message,
          url: error.config?.url,
        })
      }
      throw error
    },
  )
}

/**
 * Refresh the access token using the refresh token
 * Uses a global mutex to prevent concurrent refresh requests across all instances
 */
async refreshAccessToken(): Promise<string> {
  // If there's an ongoing refresh, wait for it instead of starting a new one
  if (ZohoDeskClient.ongoingRefresh) {
    logger.info("⏳ Token refresh API call already in progress, waiting for it to complete (no duplicate API call)")
    try {
      const token = await ZohoDeskClient.ongoingRefresh
      this.accessToken = token
      this.tokenExpiresAt = ZohoDeskClient.globalTokenExpiresAt
      logger.info("✅ Reused token from ongoing refresh (no API call made)", {
        tokenLength: token.length,
      })
      return token
    } catch (error) {
      // If ongoing refresh failed, we'll try again
      logger.warn("⚠️ Ongoing refresh failed, will retry", {
        error: error instanceof Error ? error.message : String(error),
      })
    }
  }

  // Check if we have a valid global token
  if (
    ZohoDeskClient.globalAccessToken &&
    ZohoDeskClient.globalTokenExpiresAt &&
    ZohoDeskClient.globalTokenExpiresAt > Date.now()
  ) {
    logger.debug("✅ Reusing valid global access token (no API call needed)", {
      expiresIn: Math.floor((ZohoDeskClient.globalTokenExpiresAt - Date.now()) / 1000) + "s",
      tokenLength: ZohoDeskClient.globalAccessToken.length,
    })
    this.accessToken = ZohoDeskClient.globalAccessToken
    this.tokenExpiresAt = ZohoDeskClient.globalTokenExpiresAt
    return ZohoDeskClient.globalAccessToken
  }

  // Start a new refresh and store it globally
  logger.info("🔄 No valid global token found - calling Zoho token API", {
    hasRefreshToken: !!this.config.refreshToken,
    hasClientId: !!this.config.clientId,
    hasClientSecret: !!this.config.clientSecret,
    refreshTokenLength: this.config.refreshToken?.length,
  })

  const refreshPromise = (async () => {
    try {
      const tokenData = {
        grant_type: "refresh_token",
        client_id: this.config.clientId,
        client_secret: this.config.clientSecret,
        refresh_token: this.config.refreshToken,
      }

      console.log("\n🔑 ZOHO TOKEN REFRESH: Calling Zoho OAuth API")
      console.log(`  Endpoint: https://${this.config.accountsDomain}/oauth/v2/token`)
      console.log(`  Timeout: 30 seconds`)
      console.log(`  Client ID: ${this.config.clientId?.substring(0, 20)}...`)
      console.log("")

      const response = await this.accountsClient.post<ZohoTokenResponse>(
        "/token",
        querystring.stringify(tokenData),
      )

      console.log("\n✅ ZOHO TOKEN REFRESH: Received response from Zoho")
      console.log(`  Status: ${response.status} ${response.statusText}`)
      console.log(`  Has access_token: ${!!response.data?.access_token}`)
      console.log("")

      // Log full response to see what Zoho actually returns
      logger.info("📥 Received Zoho token response", {
        status: response.status,
        statusText: response.statusText,
        hasData: !!response.data,
        responseDataType: typeof response.data,
        responseData: response.data, // Log FULL response
      })

      // Check for error in response
      if ((response.data as any)?.error) {
        logger.error("❌ Zoho API returned error in response!", {
          error: (response.data as any).error,
          errorDescription: (response.data as any).error_description,
          fullResponse: response.data,
        })
        throw new Error(`Zoho API error: ${(response.data as any).error} - ${(response.data as any).error_description || 'No description'}`)
      }

      if (!response.data?.access_token) {
        logger.error("❌ Zoho API returned NO access_token!", {
          responseData: JSON.stringify(response.data),
          responseKeys: Object.keys(response.data || {}),
          hasAccessToken: !!response.data?.access_token,
        })
        throw new Error("Zoho API did not return an access_token")
      }

      const accessToken = response.data.access_token
      const expiresIn = response.data.expires_in || 3600
      const expiresAt = Date.now() + expiresIn * 1000

      // Update global token
      ZohoDeskClient.globalAccessToken = accessToken
      ZohoDeskClient.globalTokenExpiresAt = expiresAt

      // Update instance token
      this.accessToken = accessToken
      this.tokenExpiresAt = expiresAt

      logger.info("✅ Zoho access token refreshed via API and stored globally for reuse", {
        expiresIn: expiresIn + "s",
        tokenLength: accessToken.length,
        message: "All subsequent requests will reuse this token without API calls",
      })

      return accessToken
    } catch (error) {
      console.log("\n❌ ZOHO TOKEN REFRESH: Failed to refresh access token")
      console.log("=".repeat(80))
      console.log(`  Error Type: ${error instanceof Error ? error.constructor.name : typeof error}`)
      console.log(`  Error Message: ${error instanceof Error ? error.message : String(error)}`)
      if (error instanceof Error && error.stack) {
        console.log(`  Stack Trace:\n${error.stack}`)
      }
      console.log("=".repeat(80))
      console.log("")

      logger.error("❌ Failed to refresh Zoho access token", {
        errorMessage: error instanceof Error ? error.message : String(error),
        errorStack: error instanceof Error ? error.stack : undefined,
      })
      throw new Error(`Failed to refresh Zoho access token: ${error}`)
    } finally {
      // Clear the ongoing refresh promise
      ZohoDeskClient.ongoingRefresh = null
    }
  })()

  // Store the ongoing refresh
  ZohoDeskClient.ongoingRefresh = refreshPromise
```
Don't share refresh results across connectors.
The static globalAccessToken/globalTokenExpiresAt/ongoingRefresh cache is shared by every ZohoDeskClient instance. As soon as connector B runs after connector A, it will reuse connector A’s token (because the cached entry still looks valid) and every API call will 401. Those retries never get to the real refresh for connector B, so the sync fails permanently. Please scope the cache by credential (e.g. refresh token + org) or drop the static cache entirely. For example:
```diff
- private static globalAccessToken: string | null = null
- private static globalTokenExpiresAt: number | null = null
- private static ongoingRefresh: Promise<string> | null = null
+ private static tokenCache = new Map<
+   string,
+   { accessToken: string; expiresAt: number }
+ >()
+ private static refreshPromises = new Map<string, Promise<string>>()
@@
- if (ZohoDeskClient.ongoingRefresh) {
+ const cacheKey = `${this.config.orgId}:${this.config.refreshToken}`
+ const pending = ZohoDeskClient.refreshPromises.get(cacheKey)
+ if (pending) {
@@
- if (
-   ZohoDeskClient.globalAccessToken &&
-   ZohoDeskClient.globalTokenExpiresAt &&
-   ZohoDeskClient.globalTokenExpiresAt > Date.now()
- ) {
+ const cached = ZohoDeskClient.tokenCache.get(cacheKey)
+ if (cached && cached.expiresAt > Date.now()) {
@@
- this.accessToken = ZohoDeskClient.globalAccessToken
- this.tokenExpiresAt = ZohoDeskClient.globalTokenExpiresAt
- return ZohoDeskClient.globalAccessToken
+ this.accessToken = cached.accessToken
+ this.tokenExpiresAt = cached.expiresAt
+ return cached.accessToken
@@
- ZohoDeskClient.ongoingRefresh = null
+ ZohoDeskClient.refreshPromises.delete(cacheKey)
@@
- ZohoDeskClient.globalAccessToken = accessToken
- ZohoDeskClient.globalTokenExpiresAt = expiresAt
- ZohoDeskClient.ongoingRefresh = refreshPromise
+ ZohoDeskClient.tokenCache.set(cacheKey, { accessToken, expiresAt })
+ ZohoDeskClient.refreshPromises.set(cacheKey, refreshPromise)
```

Committable suggestion skipped: line range outside the PR's diff.
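The credential-scoped caching this comment suggests can be shown in isolation. This is a minimal sketch, not the PR's client: `TokenCache` is a hypothetical class, and `fetchToken` stands in for the real Zoho OAuth call. The only assumption carried over from the review is the cache key of `orgId` plus refresh token:

```typescript
// Sketch: a token cache keyed per connector, with in-flight refresh
// deduplication. Tokens are never shared across different credentials.
type TokenEntry = { accessToken: string; expiresAt: number }

class TokenCache {
  private cache = new Map<string, TokenEntry>()
  private pending = new Map<string, Promise<string>>()

  async get(
    orgId: string,
    refreshToken: string,
    fetchToken: () => Promise<{ accessToken: string; expiresIn: number }>,
  ): Promise<string> {
    const key = `${orgId}:${refreshToken}`

    // Valid cached token for this connector: reuse it, no API call
    const cached = this.cache.get(key)
    if (cached && cached.expiresAt > Date.now()) return cached.accessToken

    // Reuse an in-flight refresh, but only for this exact connector
    const inFlight = this.pending.get(key)
    if (inFlight) return inFlight

    const refresh = (async () => {
      try {
        const { accessToken, expiresIn } = await fetchToken()
        this.cache.set(key, {
          accessToken,
          expiresAt: Date.now() + expiresIn * 1000,
        })
        return accessToken
      } finally {
        // Always clear the pending slot so failures can be retried
        this.pending.delete(key)
      }
    })()
    this.pending.set(key, refresh)
    return refresh
  }
}
```

Because the maps are keyed by credential, connector B never observes connector A's token as "valid", which is exactly the 401 loop the comment describes.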
```typescript
const newLastModifiedTime = new Date().toISOString()
const newState: ZohoDeskOAuthIngestionState = {
  app: Apps.ZohoDesk,
  authType: AuthType.OAuth,
  lastModifiedTime: newLastModifiedTime,
  lastUpdated: newLastModifiedTime,
}
```
Don't advance lastModifiedTime beyond what we actually synced.
Persisting lastModifiedTime as new Date().toISOString() means any ticket updated after the last fetch but before this assignment will be skipped forever on the next run. We need to store the highest modifiedTime we actually enqueued (and only fall back to now when no tickets were found) so incremental syncs remain lossless. One way is to have syncTickets return the max modifiedTime it saw:
```diff
-    await syncTickets(
+    const latestModifiedTime = await syncTickets(
@@
-    const newLastModifiedTime = new Date().toISOString()
+    const nowIso = new Date().toISOString()
+    const newLastModifiedTime = latestModifiedTime ?? nowIso
```

and inside `syncTickets` track and return that value:
```diff
-async function syncTickets(
+async function syncTickets(
@@
-): Promise<void> {
+): Promise<string | undefined> {
   let from = 1
   const limit = 100
   let hasMore = true
+  let latestModifiedTimeSeen = lastModifiedTime
@@
-    for (const ticket of tickets) {
+    for (const ticket of tickets) {
       try {
@@
+        if (
+          ticket.modifiedTime &&
+          (!latestModifiedTimeSeen ||
+            ticket.modifiedTime > latestModifiedTimeSeen)
+        ) {
+          latestModifiedTimeSeen = ticket.modifiedTime
+        }
@@
-  Logger.info("✅ TICKET SYNC COMPLETE", {
+  Logger.info("✅ TICKET SYNC COMPLETE", {
@@
-  })
+  })
+  return latestModifiedTimeSeen
 }
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In server/integrations/zoho/sync.ts around lines 247-253, the code currently
sets lastModifiedTime to new Date().toISOString(), which can skip records
updated between the fetch and this assignment; change the flow so that
syncTickets returns (or exposes) the maximum modifiedTime it actually processed,
then set newState.lastModifiedTime and newState.lastUpdated to that max value;
if syncTickets processed no tickets, fall back to now; ensure comparison/parsing
uses the same timestamp format and that the returned max is a string (ISO) so
the persisted state reflects the highest synced modifiedTime only.
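The cursor rule described above can be sketched as a pure function. The `TicketLike` shape and `nextSyncCursor` name are hypothetical illustrations; the sketch assumes ISO-8601 timestamps, which order correctly under plain string comparison:

```typescript
// Sketch: return the highest modifiedTime actually processed, falling
// back to "now" only when no tickets carried a timestamp at all.
interface TicketLike { modifiedTime?: string }

function nextSyncCursor(
  tickets: TicketLike[],
  previousCursor: string | undefined,
  nowIso: string,
): string {
  let latest = previousCursor
  for (const ticket of tickets) {
    // ISO-8601 strings compare lexicographically in chronological order
    if (ticket.modifiedTime && (!latest || ticket.modifiedTime > latest)) {
      latest = ticket.modifiedTime
    }
  }
  return latest ?? nowIso
}
```

Persisting this value instead of `new Date().toISOString()` means a ticket modified between the fetch and the state write is picked up again on the next incremental run rather than skipped forever.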
Actionable comments posted: 10
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
server/db/connector.ts (1)
503-538: Incomplete cascade deletion may leave orphaned records.
`deleteConnector` deletes ingestions but does not delete `syncJobs` or `oauthProviders` that may reference this connector. In contrast, `deleteOauthConnector` handles all three. This inconsistency could leave orphaned foreign key references depending on how connectors are created. If
`deleteConnector` is meant for non-OAuth connectors that don't have `syncJobs` or `oauthProviders`, consider adding a clarifying comment. Otherwise, align the cascade logic:

```diff
   const internalConnectorId = connector[0].id

   // Delete ingestions first (they reference connectors via foreign key)
   const { ingestions } = await import("./schema")
   await trx
     .delete(ingestions)
     .where(eq(ingestions.connectorId, internalConnectorId))
   Logger.debug(`Deleted ingestions for connector: ${connectorId}`)

+  // Delete sync jobs if any exist
+  await trx.delete(syncJobs).where(eq(syncJobs.connectorId, internalConnectorId))
+  Logger.debug(`Deleted sync jobs for connector: ${connectorId}`)
+
+  // Delete OAuth providers if any exist
+  await trx
+    .delete(oauthProviders)
+    .where(eq(oauthProviders.connectorId, internalConnectorId))
+  Logger.debug(`Deleted OAuth providers for connector: ${connectorId}`)
+
   // Now delete the connector itself
```

server/api/oauth.ts (1)
300-338: Zoho Desk OAuth flow doesn't trigger ingestion.

After creating the Zoho Desk connector (lines 253-279), the code falls through to the `else` branch (lines 323-338) which enqueues a generic SaaS job. However, Zoho Desk should likely use the `SyncZohoDeskQueue` instead for proper ingestion handling.

```diff
 } else if (app === Apps.Slack) {
   const abortController = new AbortController()
   globalAbortControllers.set(`${connector.id}`, abortController)
+} else if (app === Apps.ZohoDesk) {
+  // Zoho Desk uses a separate sync queue
+  Logger.info("✅ ZOHO CALLBACK: Connector created, sync can be triggered manually", {
+    connectorId: connector.id,
+    email,
+  })
 } else {
   const SaasJobPayload: SaaSOAuthJob = {
```

server/api/chat/chat.ts (1)
elsebranch (lines 323-338) which enqueues a generic SaaS job. However, Zoho Desk should likely use theSyncZohoDeskQueueinstead for proper ingestion handling.} else if (app === Apps.Slack) { const abortController = new AbortController() globalAbortControllers.set(`${connector.id}`, abortController) + } else if (app === Apps.ZohoDesk) { + // Zoho Desk uses a separate sync queue + Logger.info("✅ ZOHO CALLBACK: Connector created, sync can be triggered manually", { + connectorId: connector.id, + email, + }) } else { const SaasJobPayload: SaaSOAuthJob = {server/api/chat/chat.ts (1)
1126-1157: Zoho JSON context: guard against mixed result types to avoid odd JSON shapesWhen any result has
sddocname === "zoho_ticket", you treat allresultsas Zoho tickets and spreadanswerContextMapinto an object. If some entries are non-Zoho docs andanswerContextMapreturns a string, object-spread will create numeric keys from characters, producing surprising JSON. Consider restricting this branch to Zoho-only sets or filtering tofields.sddocname === "zoho_ticket"before buildingticketObjects.- const hasZohoTickets = results?.some( - (r) => (r as VespaSearchResults).fields?.sddocname === "zoho_ticket" - ) + const zohoResults = + results?.filter( + (r) => (r as VespaSearchResults).fields?.sddocname === "zoho_ticket", + ) || [] + const hasZohoTickets = zohoResults.length > 0 @@ - const ticketObjects = await Promise.all( - results.map(async (v, idx) => { + const ticketObjects = await Promise.all( + zohoResults.map(async (v, idx) => {
♻️ Duplicate comments (13)
server/vespa/schemas/chat_user.sd (1)
73-78: Embedding dimension standardization.The embedding dimension has been updated from
DIMSto768, consistent with the broader changes across Vespa schemas in this PR. This change aligns with the event schema and other schema updates.Also applies to: 130-132
server/vespa/schemas/mail_attachment.sd (1)
66-72: Embedding dimension standardization.The embedding dimensions have been updated to
768, consistent with the broader schema updates across this PR.Also applies to: 95-99
server/vespa/schemas/datasource_file.sd (1)
83-91: Embedding dimension standardization for text and image embeddings.Both
chunk_embeddingsandimage_chunk_embeddingshave been updated to use768dimensions, aligning with the standardization across all Vespa schemas in this PR.Also applies to: 107-111
server/api/admin.ts (2)
342-342: Fix Zoho OAuth redirect path.

The redirect URI is set to `/callback` but the OAuth callback handler is registered at `/oauth/callback`. This mismatch will cause Zoho to reject the authorization flow.

```diff
- url.searchParams.set("redirect_uri", `${config.host}/callback`)
+ url.searchParams.set("redirect_uri", `${config.host}/oauth/callback`)
```
/callbackbut the OAuth callback handler is registered at/oauth/callback. This mismatch will cause Zoho to reject the authorization flow.- url.searchParams.set("redirect_uri", `${config.host}/callback`) + url.searchParams.set("redirect_uri", `${config.host}/oauth/callback`)
530-531: Avoid `@ts-ignore` by properly typing the request validation.

The `@ts-ignore` directive suppresses type-checking which could hide real issues. Consider defining a proper Zod schema for the request body and using `zValidator` to get proper type inference.

```diff
+const createZohoDeskConnectorSchema = z.object({
+  refreshToken: z.string(),
+})
+
 export const CreateZohoDeskConnector = async (c: Context) => {
   // ...
-  // @ts-ignore
-  const form = c.req.valid("json")
+  const form = c.req.valid("json") as z.infer<typeof createZohoDeskConnectorSchema>
```

server/api/oauth.ts (1)
@ts-ignoredirective suppresses type-checking which could hide real issues. Consider defining a proper Zod schema for the request body and usingzValidatorto get proper type inference.+const createZohoDeskConnectorSchema = z.object({ + refreshToken: z.string(), +}) + export const CreateZohoDeskConnector = async (c: Context) => { // ... - // @ts-ignore - const form = c.req.valid("json") + const form = c.req.valid("json") as z.infer<typeof createZohoDeskConnectorSchema>server/api/oauth.ts (1)
180-180: Use the correct Zoho redirect URI.

The redirect URI must match exactly what was used in the authorization request. This uses `/callback` but should be `/oauth/callback` to match the callback handler.

```diff
- redirect_uri: `${config.host}/callback`,
+ redirect_uri: `${config.host}/oauth/callback`,
```

server/integrations/zoho/sync.ts (1)
/callbackbut should be/oauth/callbackto match the callback handler.- redirect_uri: `${config.host}/callback`, + redirect_uri: `${config.host}/oauth/callback`,server/integrations/zoho/sync.ts (1)
253-268: Don’t advance `lastModifiedTime` to `now`; track and persist the max ticket `modifiedTime` actually synced
lastModifiedTimetonew Date().toISOString()aftersyncTickets, andsyncTicketsdoesn’t return a cursor. This can permanently skip tickets whosemodifiedTimefalls between the previouslastModifiedTimeand when this assignment happens (especially for long-running syncs). Instead, havesyncTicketstrack and return the highestmodifiedTimeit actually enqueued, and persist that; fall back tonowonly if no tickets were processed.- Logger.info("🎫 SYNC CONNECTOR: Starting ticket sync", { connectorId }) - await syncTickets(client, connector, lastModifiedTime, metrics, ingestion) - Logger.info("✅ SYNC CONNECTOR: Ticket sync completed", { + Logger.info("🎫 SYNC CONNECTOR: Starting ticket sync", { connectorId }) + const latestModifiedTime = await syncTickets( + client, + connector, + lastModifiedTime, + metrics, + ingestion, + ) + Logger.info("✅ SYNC CONNECTOR: Ticket sync completed", { connectorId, ticketsFetched: metrics.ticketsFetched, }) // 7. Update connector state with new timestamp - const newLastModifiedTime = new Date().toISOString() + const nowIso = new Date().toISOString() + const newLastModifiedTime = latestModifiedTime ?? 
nowIso const newState: ZohoDeskOAuthIngestionState = { app: Apps.ZohoDesk, authType: AuthType.OAuth, lastModifiedTime: newLastModifiedTime, - lastUpdated: newLastModifiedTime, + lastUpdated: newLastModifiedTime, }-async function syncTickets( +async function syncTickets( client: ZohoDeskClient, connector: SelectConnector, lastModifiedTime: string | undefined, metrics: ZohoSyncMetrics, ingestion: any, -): Promise<void> { +): Promise<string | undefined> { let from = 1 const limit = 100 let hasMore = true + let latestModifiedTimeSeen = lastModifiedTime @@ - for (const ticket of tickets) { + for (const ticket of tickets) { try { @@ - await boss.send(ProcessZohoDeskTicketQueue, ticketJob, { + await boss.send(ProcessZohoDeskTicketQueue, ticketJob, { retryLimit: 2, expireInHours: 23, singletonKey: `zoho-ticket-${ticket.id}`, }) @@ - metrics.ticketsFetched++ + metrics.ticketsFetched++ queuedInBatch++ + + if ( + ticket.modifiedTime && + (!latestModifiedTimeSeen || + ticket.modifiedTime > latestModifiedTimeSeen) + ) { + latestModifiedTimeSeen = ticket.modifiedTime + } @@ console.log("\n✅ TICKET SYNC COMPLETE") @@ Logger.info("✅ TICKET SYNC COMPLETE", { @@ }) + return latestModifiedTimeSeen }Also applies to: 403-628
server/integrations/zoho/client.ts (3)
144-152: Replace console.log debug traces with structured logger once integration stabilizesThe class currently uses many
console.logstatements (token refresh, ticket/thread/comment fetches, attachment flows). For production this is noisy and bypasses the central logger; it also makes it harder to filter logs per connector or subsystem.Once you’re done debugging, consider either removing these logs or converting them to
logger.debug/infowith structured fields.Also applies to: 159-163, 290-332, 541-567, 792-804
33-37: Global token cache is shared across all connectors (still a critical bug)The static
globalAccessToken/globalTokenExpiresAt/ongoingRefreshcache is process‑wide. Any ZohoDeskClient instance will happily reuse a “valid” global token, regardless of which connector/org/refresh token it belongs to. Once connector A refreshes, connector B will start using A’s token and 401 endlessly; attachment workers in particular never hit the 401‑retry path.Scope the cache by connector identity (e.g.
${orgId}:${refreshToken}) or drop the static cache entirely.- // Global token refresh mutex to prevent concurrent refresh requests - private static globalAccessToken: string | null = null - private static globalTokenExpiresAt: number | null = null - private static ongoingRefresh: Promise<string> | null = null + // Token cache and refresh promises scoped by connector (orgId + refreshToken) + private static tokenCache = new Map< + string, + { accessToken: string; expiresAt: number } + >() + private static refreshPromises = new Map<string, Promise<string>>() @@ async refreshAccessToken(): Promise<string> { - // If there's an ongoing refresh, wait for it instead of starting a new one - if (ZohoDeskClient.ongoingRefresh) { + const cacheKey = `${this.config.orgId}:${this.config.refreshToken}` + + const pending = ZohoDeskClient.refreshPromises.get(cacheKey) + if (pending) { … - const token = await ZohoDeskClient.ongoingRefresh - this.accessToken = token - this.tokenExpiresAt = ZohoDeskClient.globalTokenExpiresAt + const token = await pending + const cached = ZohoDeskClient.tokenCache.get(cacheKey) + this.accessToken = token + this.tokenExpiresAt = cached?.expiresAt ?? 
null @@ - // Check if we have a valid global token - if ( - ZohoDeskClient.globalAccessToken && - ZohoDeskClient.globalTokenExpiresAt && - ZohoDeskClient.globalTokenExpiresAt > Date.now() - ) { + const cached = ZohoDeskClient.tokenCache.get(cacheKey) + if (cached && cached.expiresAt > Date.now()) { @@ - this.accessToken = ZohoDeskClient.globalAccessToken - this.tokenExpiresAt = ZohoDeskClient.globalTokenExpiresAt - return ZohoDeskClient.globalAccessToken + this.accessToken = cached.accessToken + this.tokenExpiresAt = cached.expiresAt + return cached.accessToken @@ - const refreshPromise = (async () => { + const refreshPromise = (async () => { @@ - // Update global token - ZohoDeskClient.globalAccessToken = accessToken - ZohoDeskClient.globalTokenExpiresAt = expiresAt + // Cache token for this connector + ZohoDeskClient.tokenCache.set(cacheKey, { accessToken, expiresAt }) @@ - // Clear the ongoing refresh promise - ZohoDeskClient.ongoingRefresh = null + ZohoDeskClient.refreshPromises.delete(cacheKey) @@ - // Store the ongoing refresh - ZohoDeskClient.ongoingRefresh = refreshPromise + ZohoDeskClient.refreshPromises.set(cacheKey, refreshPromise)Also applies to: 84-125
164-171: Stop logging full OAuth token responses (access_token is a secret)
logger.info("📥 Received Zoho token response", { … responseData: response.data })writes the entire OAuth payload, includingaccess_token, into structured logs. That’s a live credential; logs become a secret store.Log only non‑sensitive metadata (status, scope, etc.) and explicitly omit/redact
access_token,refresh_token, and similar fields.- logger.info("📥 Received Zoho token response", { - status: response.status, - statusText: response.statusText, - hasData: !!response.data, - responseDataType: typeof response.data, - responseData: response.data, // Log FULL response - }) + const { access_token, refresh_token, id_token, ...rest } = + (response.data as any) || {} + logger.info("📥 Received Zoho token response", { + status: response.status, + statusText: response.statusText, + hasData: !!response.data, + responseDataType: typeof response.data, + scope: (response.data as any)?.scope, + // Never log raw tokens + tokensRedacted: Boolean(access_token || refresh_token || id_token), + responseMeta: Object.keys(rest || {}), + })server/ai/prompts.ts (1)
579-695: Ticket context block duplication across promptsThe “Ticket Context Format” and Zoho Desk ticket instructions are duplicated almost verbatim across
baselinePrompt,baselinePromptJson, andbaselineReasoningPromptJson. This increases maintenance overhead and risks the prompts drifting out of sync over time.If you keep these long‑form instructions, consider extracting them into a shared constant and interpolating into each prompt.
Also applies to: 637-694, 810-822
server/integrations/zoho/queue.ts (2)
584-588: Trim or convert console.log traces before shipping to productionThis worker file has extensive
console.logoutput (raw tickets/threads/comments/attachments, full ticket structures, OCR previews, error stacks). It’s very useful while bringing up the integration, but will be noisy and hard to control in production.Consider either:
- Guarding these logs behind a debug flag, or
- Converting key ones to
Logger.debug/infowith structured fields and dropping the large JSON/preview dumps.Also applies to: 265-270, 373-400, 416-421, 746-807, 870-877, 889-923, 957-975, 996-1008, 1017-1056
218-227: Remove logging of raw connector credentials (secrets in logs)These logs currently include
credentialsValue(sliced or full) both in the “Connector loaded” info log and in the JSON parse error log. That leaks OAuth client secrets/refresh tokens into log storage.Drop the credential payload entirely from logs; log only non‑sensitive metadata like
connectorId, boolean flags, and types.- Logger.info("📋 Connector loaded from database", { - connectorId, - hasCredentials: !!connector.credentials, - credentialsType: typeof connector.credentials, - credentialsValue: - typeof connector.credentials === "string" - ? connector.credentials.substring(0, 100) - : connector.credentials, - }) + Logger.info("📋 Connector loaded from database", { + connectorId, + hasCredentials: !!connector.credentials, + credentialsType: typeof connector.credentials, + }) @@ - Logger.error("❌ Failed to parse credentials JSON", { - error: error instanceof Error ? error.message : String(error), - credentialsValue: connector.credentials, - }) + Logger.error("❌ Failed to parse credentials JSON", { + error: error instanceof Error ? error.message : String(error), + hasCredentials: Boolean(connector.credentials), + })Also applies to: 239-244
🟠 Major comments (15)
frontend/vite.config.ts-113-113 (1)
113-113: Remove hardcoded ngrok domain before merge.

The hardcoded ngrok domain `'1d3f98c817b9.ngrok-free.app'` should not be committed to the codebase. Ngrok URLs are temporary, session-specific, and developer-specific. This will break for other developers and in different environments. Use an environment variable instead:
'1d3f98c817b9.ngrok-free.app'should not be committed to the codebase. Ngrok URLs are temporary, session-specific, and developer-specific. This will break for other developers and in different environments.Use an environment variable instead:
- allowedHosts: ['1d3f98c817b9.ngrok-free.app'], + allowedHosts: env.VITE_ALLOWED_HOSTS ? env.VITE_ALLOWED_HOSTS.split(',') : undefined,Then developers can configure their local ngrok domain in
.envfiles:VITE_ALLOWED_HOSTS=your-ngrok-domain.ngrok-free.appAlternatively, if this was added only for testing, remove this line entirely before merging.
server/zoho-api-14-170 (1)
14-170: Remove or anonymize PII from sample API responses.

The sample responses contain real personal information including:
- Email addresses (e.g., asingh20@univoedtech.com, subham.maity@juspay.in)
+91 9444 96 8290)- Full names and job titles
This poses compliance/privacy risks (GDPR, CCPA) and should be replaced with fictional data.
Consider using placeholder data like:
user@example.comfor emails+91 XXXX XX XXXXfor phone numbers- Generic names like "John Doe", "Jane Smith"
scripts/zoho-refresh-token.js-69-77 (1)
69-77: Avoid logging sensitive token data.Logging the full access token to console is a security risk. Consider logging only a truncated version or removing the token logging entirely.
console.log("✅ Token refresh successful!") console.log("\n📋 New Token Details:") console.log("=====================================") - console.log("Access Token:", data.access_token) + console.log("Access Token:", data.access_token ? `${data.access_token.substring(0, 20)}...` : "N/A") console.log("Token Type:", data.token_type || "Bearer") console.log("Expires In:", data.expires_in || 3600, "seconds") console.log("Scope:", data.scope || CONFIG.SCOPE) console.log("Generated At:", new Date().toISOString()) console.log("=====================================\n")server/search/vespa.ts-166-166 (1)
166-166: Remove commented-out exports or document why they're retained.Lines 166 and 181 contain commented-out exports. If these functions are no longer needed, remove them entirely. If they're temporarily disabled for a specific reason, add a comment explaining why.
- //export const getThreadItems = vespa.getThreadItems.bind(vespa) export const SearchVespaThreads = vespa.SearchVespaThreads.bind(vespa) // DataSource operations export const insertDataSource = vespa.insertDataSource.bind(vespa)- //export const getSlackUserDetails = vespa.getSlackUserDetails.bind(vespa) // Utility operations export const getTimestamp = vespa.getTimestamp.bind(vespa)Also applies to: 181-181
server/queue/index.ts-531-533 (1)
531-533: Add error handling forstartSummaryWorker()initialization.While
startSummaryWorker()handles errors within job processing via try-catch, the function lacks error handling forboss.work()initialization failures. The call at line 531 should be wrapped in try-catch to prevent initialization errors from crashing the queue setup process, similar to the error handling pattern used for other workers ininitWorkers().server/api/admin.ts-587-605 (1)
587-605: Remove hardcoded department ID before production. This hardcoded department ID (`458844000213531089`) will break for other Zoho Desk organizations and prevents proper multi-tenant support. The TODO comment acknowledges this is a workaround, but it should be addressed before merging. Consider one of these approaches:
- Enable the `Desk.settings.READ` scope to fetch actual department info
- Make the department ID configurable via environment variable
- Allow admins to provide department IDs during connector creation
```diff
+// TODO: Replace hardcoded department with proper fetching once OAuth scope is configured
+// For now, allow department to be passed in the request body
+const form = c.req.valid("json") as { refreshToken: string; departmentId?: string }
+const departmentId = form.departmentId || config.ZohoDefaultDepartmentId
+
 const userInfo = {
   email: sub,
-  associatedDepartmentIds: ["458844000213531089"],
+  associatedDepartmentIds: departmentId ? [departmentId] : [],
   associatedDepartments: [
     {
-      id: "458844000213531089",
-      name: "Credit",
+      id: departmentId || "",
+      name: "Default",
     },
   ],
 }
```

Committable suggestion skipped: line range outside the PR's diff.
server/workers/summary-worker.ts-446-496 (1)
446-496: Use `Promise.allSettled` with per-job error handling instead of `Promise.all`. The current code will mark the entire batch as failed if any single job throws. When using `Promise.all`, the first rejection propagates to pg-boss's batch handler, causing all jobs in the batch to be retried together rather than independently. Replace `Promise.all` with `Promise.allSettled` and explicitly handle each job's outcome, using `boss.complete()` for success and `boss.fail()` for errors, to ensure individual job failures are isolated.
server/api/chat/chat.ts-3436-3467 (1)
3436-3467: Zoho Desk `departmentIds` in metadata path: same missing-connector concern as the iterative path. Here too, `userDepartmentIds` remains `undefined` if there's no Zoho OAuth connector or `oauthCredentials` parsing fails, yet you pass it down into `getItems` via `departmentIds`. Ensure the downstream logic treats "no departmentIds" as "no access" for Zoho tickets, not "all departments"; otherwise, you may expose tickets to users without an associated Zoho connector.

```diff
-  if (apps?.includes(Apps.ZohoDesk)) {
+  if (apps?.includes(Apps.ZohoDesk)) {
@@
-    if (zohoConnector?.oauthCredentials) {
-      const credentials = JSON.parse(zohoConnector.oauthCredentials)
-      if (credentials.departmentIds && credentials.departmentIds.length > 0) {
-        userDepartmentIds = credentials.departmentIds // Store ALL department IDs
+    if (zohoConnector?.oauthCredentials) {
+      const credentials = JSON.parse(zohoConnector.oauthCredentials)
+      if (credentials.departmentIds && credentials.departmentIds.length > 0) {
+        userDepartmentIds = credentials.departmentIds // Store ALL department IDs
@@
-      loggerWithChild({ email: email }).error(
-        `[SearchWithFilters] ❌ Error fetching Zoho departmentIds: ${error}`,
-      )
+      loggerWithChild({ email: email }).error(
+        `[SearchWithFilters] ❌ Error fetching Zoho departmentIds: ${error}`,
+      )
+      userDepartmentIds = [] // defensive: no access on failure
```

server/api/chat/chat.ts-1411-1520 (1)
1411-1520: Zoho Desk permission filtering: define behaviour when no connector/departmentIds are found. You now fetch a Zoho Desk OAuth connector and derive `userDepartmentIds`, but if there's no connector or `oauthCredentials` parsing fails, you log a warning and proceed with `departmentIds` undefined. Depending on how `searchVespa`/`searchVespaAgent` interpret a missing `departmentIds`, this could still return all Zoho tickets instead of none. It's safer to explicitly treat "no connector / no departmentIds" as "no Zoho access" (e.g., drop `Apps.ZohoDesk` from the apps list or set `departmentIds: []` so the backend returns zero Zoho results).

```diff
-  if (agentAppEnums.includes(Apps.ZohoDesk) || classification.filters.apps?.includes(Apps.ZohoDesk)) {
+  if (
+    agentAppEnums.includes(Apps.ZohoDesk) ||
+    classification.filters.apps?.includes(Apps.ZohoDesk)
+  ) {
@@
-  } else {
-    loggerWithChild({ email: email }).warn(
-      `[Iterative RAG] ⚠️ No Zoho Desk connector found for user ${email}`
-    )
-  }
+  } else {
+    loggerWithChild({ email: email }).warn(
+      `[Iterative RAG] ⚠️ No Zoho Desk connector found for user ${email} — Zoho results will be suppressed`,
+    )
+    userDepartmentIds = [] // explicit "no access"
+  }
@@
-  if (ticketParticipants && Object.keys(ticketParticipants).length > 0 || timestampField) {
+  if (
+    userDepartmentIds &&
+    userDepartmentIds.length > 0 &&
+    (ticketParticipants && Object.keys(ticketParticipants).length > 0 || timestampField)
+  ) {
```
619-647: Summary jobs are enqueued twice for existing tickets. For existing tickets you:
- Enqueue summary jobs inside the `if (existingTicket)` branch (lines ~621–639), and
- Then unconditionally enqueue summary jobs again after the insert/update block (lines ~660–677).

This will double-enqueue all thread/comment summary jobs on updates. Keep only one of these calls (likely the unconditional one after the upsert) to avoid duplicate work and potentially racy updates.

```diff
-  if (existingTicket) {
-    …
-    // Enqueue summary generation jobs for updated ticket
-    // Re-generate summaries when ticket is updated (new threads/comments may have been added)
-    try {
-      Logger.info("🔄 Enqueueing summary generation jobs for updated ticket", {
-        ticketId,
-        threadCount: vespaTicket.threads?.length || 0,
-        commentCount: vespaTicket.comments?.length || 0,
-      })
-
-      await enqueueSummaryJobs(
-        fullTicket,
-        vespaTicket.threads || [],
-        vespaTicket.comments || [],
-      )
-      …
-    } catch (error) {
-      …
-    }
-  } else {
+  if (existingTicket) { … } else { … }
```

Also applies to: 657-684
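One way to guarantee a single enqueue per upsert is to route both branches through one post-upsert call; a minimal sketch (function names are stand-ins, not the actual implementation):

```typescript
// Sketch: perform the insert-or-update first, then enqueue summary jobs
// exactly once, regardless of whether the ticket already existed.
type UpsertFn = (existing: boolean) => Promise<void>
type EnqueueFn = (ticketId: string) => Promise<void>

async function upsertTicketAndEnqueue(
  existing: boolean,
  ticketId: string,
  upsert: UpsertFn,
  enqueueSummaryJobs: EnqueueFn,
): Promise<void> {
  await upsert(existing) // branch on insert vs. update inside
  await enqueueSummaryJobs(ticketId) // single enqueue point for both branches
}
```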
server/ai/prompts.ts-1933-2051 (1)
1933-2051: Resolve conflicting guidance in ticketPromptJson about filtering vs. "format ALL tickets". In `ticketPromptJson`:
- Lines 1966–1969: "Focus ONLY on ticket items that directly match the query criteria … If no relevant tickets … return null."
- Lines 2045–2049: "Format ALL tickets found in the Retrieved Context – do not apply additional filtering … If there is even one ticket, format and return them as specified."

This is contradictory and can produce inconsistent behavior depending on which rule the model latches onto. Decide whether the search layer is already filtering (so the model should format all items) or whether the model must do an extra relevance pass, and then remove or adjust the conflicting block.
server/db/summaries.ts-108-122 (1)
108-122: Add a unique constraint on `(ticketId, summaryType)` to enforce idempotency. `insertAggregateSummary` uses `.onConflictDoNothing()`, but the table lacks a unique constraint on `(ticketId, summaryType)`. Without it, multiple rows can be inserted for the same ticket+type pair, violating the 1-row invariant and causing `getAggregateSummary` to silently return only the first result. This creates a silent failure mode that could lead to race conditions or inconsistent state. Add a unique constraint on `(ticketId, summaryType)` in the table schema.
zohoSchema.md-1-437 (1)
1-437: Update zohoSchema.md to match the current Vespa schema. The documentation file is severely outdated. The actual `server/vespa/schemas/zoho_ticket.sd` and TypeScript transformer models are already aligned—the issue is that this .md does not reflect the current schema.

Missing or incorrect fields in documentation:
- Uses `field email` instead of the correct `field contactEmail`
- `attachmentType` struct is incomplete; the actual schema includes `attachmentId`, `attachmentUrl`, `processingStatus`, `fileType`, and `size`
- Missing root-level field `ticketAttachments: array<attachmentType>`
- Missing flattened search arrays: `threadMessages`, `commentMessages`, `attachmentTexts`
- Struct is named `threadSchema` in the .md but `messageSchema` in the actual schema; missing `createdTime` field
- Missing system fields: `workspaceExternalId`, `app`, `entity`, `permissions`, `docId`, `channel`, `sharedDepartments`
- Missing custom fields: `merchantId`, `productDetails`, `firstResponseTime`, `resolutionTime`, `onHoldStartTime`, `onHoldEndTime`, `escalatedEndDate`

Embedding dimensions:
- Documentation shows `tensor<bfloat16>(x[384])` but the actual schema uses `tensor<bfloat16>(v[768])`

Rank profiles:
- The `hybrid` and `initial` rank profiles differ from the actual schema implementation

Sync this .md with the actual `server/vespa/schemas/zoho_ticket.sd` file. The transformer TypeScript models are already correct.
server/integrations/zoho/queue.ts-931-938 (1)
931-938: Recompute derived attachment text fields when OCR completes. After OCR, `updateAttachmentInTicket` updates only the per-attachment `attachmentDetail` and `processingStatus` fields. However, the flattened arrays `attachmentTexts`, `threadMessages`, and `commentMessages` are defined in the Vespa schema and populated during initial transformation by extracting text from attachment structures. Since these derived fields are not recomputed after OCR updates, they remain stale and will not include the newly extracted OCR text when sent to Vespa via `UpdateDocument`. This will cause BM25 search to miss the new content.

Rebuild these derived fields from the updated attachment structures before calling `UpdateDocument` to ensure search indexes include all OCR text:
- `attachmentTexts`: flatten `attachmentDetail` from ticket, thread, and comment attachments
- `threadMessages` and `commentMessages`: if they depend on attachment text, update accordingly
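The rebuild step could look roughly like this; the field names (`attachmentDetail`, `ticketAttachments`, `threads`, `comments`) follow the schema described in this review and are assumptions about the actual types:

```typescript
// Sketch: recompute the flattened attachmentTexts array after OCR so the
// BM25-indexed field reflects newly extracted text before UpdateDocument.
type AttachmentLike = { attachmentDetail?: string }
type TicketLike = {
  ticketAttachments?: AttachmentLike[]
  threads?: { attachments?: AttachmentLike[] }[]
  comments?: { attachments?: AttachmentLike[] }[]
}

function rebuildAttachmentTexts(ticket: TicketLike): string[] {
  const texts: string[] = []
  const collect = (atts?: AttachmentLike[]) => {
    for (const a of atts ?? []) {
      const detail = a.attachmentDetail
      if (detail && detail.trim()) texts.push(detail) // skip empty/whitespace-only
    }
  }
  collect(ticket.ticketAttachments)
  for (const t of ticket.threads ?? []) collect(t.attachments)
  for (const c of ticket.comments ?? []) collect(c.attachments)
  return texts
}
```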
654-715: Refactor attachment downloads to use makeRequest for consistent retry/refresh behavior. `downloadAttachment` and `downloadAttachmentFromUrl` bypass `makeRequest` and call `apiClient.get` directly. They lack the 401-refresh, 429-backoff, and 5xx-retry logic that `makeRequest` provides (lines 268+). If a transient 429, 5xx, or 401 error occurs during an attachment download, the request fails immediately with no recovery attempt.

Extend `makeRequest` to accept an optional `responseType` parameter, then refactor both methods to use it. Axios preserves `error.response.status` and headers even with `responseType: "arraybuffer"`, so the existing retry logic will work without modification.
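The shared retry path can be illustrated with a transport-agnostic wrapper; the status-code policy mirrors the review's description, while the `Transport` type and names are assumptions (the real code would delegate to axios with a `responseType` option):

```typescript
// Sketch: one request path for JSON and binary downloads. Injecting the
// transport keeps the 429/5xx retry policy identical for both, which is
// the property the review asks for.
type ResponseType = "json" | "arraybuffer"
type Transport = (
  url: string,
  responseType: ResponseType,
) => Promise<{ status: number; data: unknown }>

async function makeRequest(
  transport: Transport,
  url: string,
  responseType: ResponseType = "json",
  maxRetries = 3,
): Promise<unknown> {
  let lastStatus = 0
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await transport(url, responseType)
    if (res.status < 400) return res.data
    lastStatus = res.status
    // Retry only transient failures: rate limits (429) and server errors (5xx)
    if (res.status !== 429 && res.status < 500) break
  }
  throw new Error(`request failed with status ${lastStatus}`)
}
```

A `downloadAttachment` would then call `makeRequest(transport, url, "arraybuffer")` and inherit the same recovery behavior as JSON requests.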
🟡 Minor comments (5)
scripts/zoho-api-data.yaml-1-283 (1)
1-283: Clarify the purpose and fix syntax issues in this YAML file. This file has several issues:

1. Invalid YAML syntax: the static analysis correctly identifies syntax errors (e.g., line 10 with a backtick character). The file won't parse as valid YAML.
2. Inconsistent comment styles: the file mixes `##`, `#`, and triple-backtick fences for comments, which is confusing and breaks YAML parsing.
3. Draft/placeholder content: it contains notes like "## Need to extract from department Api or can have static mapping", suggesting this is a work-in-progress design document rather than a final schema.

Recommendation:

If this is documentation/design notes:
- Rename to `.md` (Markdown) or `.txt`
- Add a header explaining it's a draft schema design
- Clean up the comment syntax

If this is intended as executable YAML:
- Fix all syntax errors
- Use only `#` for comments
- Remove backticks and placeholder text
- Complete all "Need to extract..." sections

Based on learnings from static analysis tools, the file currently cannot be parsed as valid YAML.
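For the executable-YAML route, a minimal illustration of the comment cleanup (the keys and values here are invented for illustration, reusing the draft note quoted above):

```yaml
# Zoho API data mapping (draft)
# Only '#' comments are valid YAML; '##' markers and backtick fences must go
departments:
  # Need to extract from department API, or keep a static mapping
  use_static_mapping: true
```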
server/services/summaryService.ts-13-13 (1)
13-13: Fix inaccurate comment about the model used. The comment says "Uses GPT-4o" but the code actually uses `Models.Vertex_Gemini_2_5_Flash`.

```diff
 /**
  * Generate summary for a single thread or comment
  * Minimum 80-100 words to capture sufficient detail
- * Uses GPT-4o for high quality summaries
+ * Uses Vertex Gemini 2.5 Flash for high quality summaries
  */
```

server/queue/summary-generation.ts-3-3 (1)
3-3: Use the alias import path for consistency. The import uses `./boss` (a relative path) while other parts of the codebase use `@/queue/boss` (an alias import). Change line 3 to:

```ts
import { boss } from "@/queue/boss"
```

This aligns with the import convention used in `server/integrations/zoho/sync.ts` and `server/integrations/zoho/queue.ts`.
server/workers/summary-worker.ts-296-306 (1)
296-306: Add null safety checks for Vespa ticket properties. The code accesses `vespaTicket` properties without verifying they exist after the null check at lines 298-300. If the document exists but has missing fields, this could cause runtime errors.

```diff
 const vespaTicket = await GetDocument("zoho_ticket" as any, ticketId)
 if (!vespaTicket) {
   throw new Error(`Ticket not found in Vespa: ${ticketId}`)
 }
-console.log(`\n✅ FETCHED TICKET FROM VESPA`)
-console.log(`Ticket Number: ${vespaTicket.ticketNumber}`)
-console.log(`Subject: ${vespaTicket.subject}`)
-console.log(`Status: ${vespaTicket.status}`)
-console.log(`\n`)
+logger.debug(`Fetched ticket from Vespa`, {
+  ticketId,
+  ticketNumber: vespaTicket.ticketNumber,
+  subject: vespaTicket.subject,
+  status: vespaTicket.status,
+})
 const ticketData = {
   ticketNumber: vespaTicket.ticketNumber || "",
   subject: vespaTicket.subject || "",
```

Committable suggestion skipped: line range outside the PR's diff.
server/vespa/schemas/zoho_ticket.sd-493-509 (1)
493-509: `hybrid` rank-profile does not include BM25 despite the comment. The comment on line 493 says "combines BM25 + vector similarity", but the first-phase expression only includes embedding closeness terms—no BM25 component. This is misleading and may not behave as expected.

If hybrid search is intended, the expression should include a BM25 term:

```diff
 rank-profile hybrid {
   inputs {
     query(q) tensor<bfloat16>(v[768])
     query(alpha) double
   }
   constants {
     THREAD_EMB_WEIGHT: 0.5
     DESCRIPTION_EMB_WEIGHT: 0.2
     SUBJECT_EMB_WEIGHT: 0.1
     COMMENT_EMB_WEIGHT: 0.1
     RESOLUTION_EMB_WEIGHT: 0.1
   }
   first-phase {
-    expression: query(alpha) * ((THREAD_EMB_WEIGHT * closeness(field, threadSummary_embedding)) + (DESCRIPTION_EMB_WEIGHT * closeness(field, description_embedding)) + (COMMENT_EMB_WEIGHT * closeness(field, commentSummary_embedding)) + (SUBJECT_EMB_WEIGHT * closeness(field, subject_embedding)) + (RESOLUTION_EMB_WEIGHT * closeness(field, wholeResolutionSummary_embedding)))
+    expression: (query(alpha) * ((THREAD_EMB_WEIGHT * closeness(field, threadSummary_embedding)) + (DESCRIPTION_EMB_WEIGHT * closeness(field, description_embedding)) + (COMMENT_EMB_WEIGHT * closeness(field, commentSummary_embedding)) + (SUBJECT_EMB_WEIGHT * closeness(field, subject_embedding)) + (RESOLUTION_EMB_WEIGHT * closeness(field, wholeResolutionSummary_embedding)))) + ((1 - query(alpha)) * (bm25(subject) + bm25(description) + bm25(threadSummary) + bm25(commentSummary) + bm25(wholeResolutionSummary)))
   }
 }
```

Alternatively, if semantic-only ranking is intended, update the comment to reflect that.
```tsx
function ChatPageWrapper() {
  const { user, workspace, agentWhiteList } = Route.useRouteContext()
  return (
    <ChatPage
      user={user}
      workspace={workspace}
      agentWhiteList={agentWhiteList}
    />
  )
}
```
Missing DocumentOperationsProvider wrapper will cause runtime error.
The ChatPage component uses the useDocumentOperations() hook (line 233 in chat.tsx), which requires being wrapped in DocumentOperationsProvider. The parent route at /_authenticated/chat wraps ChatPage with this provider, but this wrapper does not.
When users navigate directly to /chat/$chatId, the useDocumentOperations() call will fail because there's no provider in the component tree.
Apply this diff to add the missing provider:
```diff
+import { DocumentOperationsProvider } from "@/contexts/DocumentOperationsContext"
+
 function ChatPageWrapper() {
   const { user, workspace, agentWhiteList } = Route.useRouteContext()
   return (
-    <ChatPage
-      user={user}
-      workspace={workspace}
-      agentWhiteList={agentWhiteList}
-    />
+    <DocumentOperationsProvider>
+      <ChatPage
+        user={user}
+        workspace={workspace}
+        agentWhiteList={agentWhiteList}
+      />
+    </DocumentOperationsProvider>
   )
 }
```
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In frontend/src/routes/_authenticated/chat.$chatId.tsx around lines 6 to 15, the
ChatPage is rendered without the required DocumentOperationsProvider which
causes useDocumentOperations() to throw when navigating directly to this route;
import DocumentOperationsProvider from its module and wrap the ChatPage return
JSX with <DocumentOperationsProvider> so the provider is in the tree (preserve
passing user, workspace, agentWhiteList props unchanged), ensuring the provider
encloses ChatPage before export.
```diff
@@ -1,5 +1,6 @@
 {
   "dependencies": {
+    "node-fetch": "^3.3.2",
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n package.json
```

Repository: xynehq/xyne
Length of output: 176
🌐 Web query:
node-fetch v3 ESM only CommonJS compatibility
💡 Result:
Short answer: node-fetch v3 is ESM-only — you cannot require() it. Options: migrate to ESM, load it dynamically from CommonJS, use a CommonJS wrapper package, or stay on node-fetch v2. [1][2]
How to apply each option (concise):
- Migrate to ESM (preferred long-term): convert files to .mjs or set "type":"module" and use `import fetch from 'node-fetch'`. [1][6]
- Dynamic import from CommonJS (works without changing project type):
  `const fetch = (...args) => import('node-fetch').then(({ default: fetch }) => fetch(...args));`
  (or `await (await import('node-fetch')).default(url)`). [1][4]
- Use a maintained CommonJS wrapper package if you must stay CJS (examples: node-fetch-cjs / node-fetch-commonjs). Install and require that wrapper. [3][4]
- Pin to v2 (CommonJS) if you cannot migrate: `npm install node-fetch@2` — v2 will continue to receive critical fixes. [1][2]
Sources: GitHub README (node-fetch), npm package page, Stack Overflow discussion, node-fetch-cjs wrapper docs. [1][2][4][3]
🌐 Web query:
Node.js native fetch API version support
💡 Result:
Short answer
- Experimental global fetch first appeared in Node.js v17.5/v17.6 (as an experimental API). [1][2]
- It was enabled by default (still marked experimental) in Node.js v18.0.0. You can disable it with --no-experimental-fetch. [2][3]
Sources
- Node.js globals history (shows added in v17.5/v17.6).
- Node.js v18 release notes (fetch experimental; enabled by default in v18).
Verify ESM compatibility or consider native fetch.
node-fetch v3.x is ESM-only and cannot be used with require(). The project's package.json has no "type":"module" field, indicating a CommonJS project by default. Options: (1) migrate the project to ESM with "type":"module", (2) use dynamic import if staying CommonJS, (3) use a CommonJS wrapper package, or (4) pin to node-fetch@2.x for full CommonJS compatibility. If targeting Node.js 18+, consider using the native global fetch API instead to eliminate this dependency.
🤖 Prompt for AI Agents
In package.json around line 3 the project currently depends on "node-fetch":
"^3.3.2" which is ESM-only and will fail under CommonJS; either convert the repo
to ESM by adding "type":"module" and ensure all imports use import syntax, or if
staying CommonJS replace this dependency with a CommonJS-compatible option: pin
to node-fetch@2.x, install a CommonJS wrapper package, or remove the dependency
and use the built-in global fetch (Node 18+) after verifying runtime supports
it; alternatively, use dynamic import() of node-fetch v3 where the call site is
CommonJS and handle the returned module.default. Ensure package.json and import
sites are updated consistently for the chosen approach.
```js
const CONFIG = {
  CLIENT_ID: "1000.HRFLWOF4DAFL4SK4IZ3UKXCI6CRQJV",
  CLIENT_SECRET: "55898a4588dd60641f5bf5a00575cca7f82e580989",
  REFRESH_TOKEN:
    "1000.1c79e331da83dfa2a8244ef1f0a3bfde.9716adf4510ba82244f7ad5078fefe90",
  ACCOUNTS_URL: "https://accounts.zoho.com",
  SCOPE: "Desk.tickets.READ",
}
```
Critical: Remove hardcoded OAuth credentials from source code.
This file contains hardcoded Zoho OAuth credentials (CLIENT_ID, CLIENT_SECRET, REFRESH_TOKEN) which is a serious security vulnerability. These credentials should never be committed to version control as they could be used to access Zoho APIs with your organization's permissions.
Load credentials from environment variables instead:
```diff
-const CONFIG = {
-  CLIENT_ID: "1000.HRFLWOF4DAFL4SK4IZ3UKXCI6CRQJV",
-  CLIENT_SECRET: "55898a4588dd60641f5bf5a00575cca7f82e580989",
-  REFRESH_TOKEN:
-    "1000.1c79e331da83dfa2a8244ef1f0a3bfde.9716adf4510ba82244f7ad5078fefe90",
-  ACCOUNTS_URL: "https://accounts.zoho.com",
-  SCOPE: "Desk.tickets.READ",
-}
+const CONFIG = {
+  CLIENT_ID: process.env.ZOHO_CLIENT_ID,
+  CLIENT_SECRET: process.env.ZOHO_CLIENT_SECRET,
+  REFRESH_TOKEN: process.env.ZOHO_REFRESH_TOKEN,
+  ACCOUNTS_URL: process.env.ZOHO_ACCOUNTS_URL || "https://accounts.zoho.com",
+  SCOPE: "Desk.tickets.READ",
+}
+
+if (!CONFIG.CLIENT_ID || !CONFIG.CLIENT_SECRET || !CONFIG.REFRESH_TOKEN) {
+  console.error("❌ Missing required environment variables: ZOHO_CLIENT_ID, ZOHO_CLIENT_SECRET, ZOHO_REFRESH_TOKEN")
+  process.exit(1)
+}
```

Additionally, rotate these credentials immediately as they are now exposed in the git history.
🧰 Tools
🪛 Gitleaks (8.30.0)
[high] 15-15: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
```diff
 //getThreadItems,
 searchVespaAgent,
-getSlackUserDetails,
+//getSlackUserDetails,
```
Commented imports are still in use - this will cause runtime errors.
getThreadItems is used at lines 801, 1033, 1434, and 1643, and getSlackUserDetails is used at line 1225. Commenting out these imports will cause ReferenceError at runtime when any Slack-related tool is invoked.
```diff
-//getThreadItems,
+getThreadItems,
 searchVespaAgent,
-//getSlackUserDetails,
+getSlackUserDetails,
```
🤖 Prompt for AI Agents
In server/api/chat/tools.ts around lines 16 to 18, the imports for
getThreadItems and getSlackUserDetails were commented out but those functions
are referenced elsewhere (getThreadItems at lines 801, 1033, 1434, 1643;
getSlackUserDetails at line 1225), causing ReferenceError at runtime; restore
the imports by uncommenting them (or if you intentionally removed the
implementations, replace those usages with safe guards or remove/replace calls),
ensure the imported names match the exported identifiers and run tests to
confirm Slack-related tools no longer throw at runtime.
```ts
// Fetch user's Zoho departmentId for permission filtering if querying Zoho Desk
let userDepartmentId: string | undefined = undefined
const appsArray = Array.isArray(app) ? app : app ? [app] : []
if (appsArray.includes(Apps.ZohoDesk) || (agentAppEnums && agentAppEnums.includes(Apps.ZohoDesk))) {
  try {
    const zohoConnector = await getConnectorByAppAndEmailId(
      db,
      Apps.ZohoDesk,
      AuthType.OAuth,
      email,
    )

    if (zohoConnector?.oauthCredentials) {
      try {
        const credentials = JSON.parse(zohoConnector.oauthCredentials)
        if (credentials.departmentIds && credentials.departmentIds.length > 0) {
          userDepartmentId = credentials.departmentIds[0]
          Logger.info(`[executeVespaSearch] Found user's departmentId for permissions: ${userDepartmentId}`, { email })
        }
      } catch (parseError) {
        Logger.warn("[executeVespaSearch] Could not parse oauthCredentials", { email, parseError })
      }
    } else {
      Logger.warn("[executeVespaSearch] Zoho connector found but no oauthCredentials", { email })
    }
  } catch (error) {
    Logger.error("[executeVespaSearch] Could not fetch Zoho connector details", { email, error })
  }
}
```
🧩 Analysis chain

🏁 Script executed:

```shell
rg -n "departmentIds" --type ts -C3
```

Repository: xynehq/xyne
Length of output: 16575

🏁 Script executed:

```shell
# Check which functions call tools.ts executeVespaSearch and which use chat.ts functions
rg -n "executeVespaSearch" --type ts
```

Repository: xynehq/xyne
Length of output: 768

🏁 Script executed:

```shell
# Check the context around line 597 in tools.ts
sed -n '590,610p' server/api/chat/tools.ts
```

Repository: xynehq/xyne
Length of output: 777

🏁 Script executed:

```shell
# Find the function containing line 597 in tools.ts
sed -n '500,600p' server/api/chat/tools.ts | head -120
```

Repository: xynehq/xyne
Length of output: 3315

🏁 Script executed:

```shell
# Check if searchGlobal is imported or used elsewhere
rg -n "searchGlobal\|from.*tools" --type ts | grep -E "(import.*searchGlobal|searchGlobal)" | head -20
```

Repository: xynehq/xyne
Length of output: 37

🏁 Script executed:

```shell
# Check what is exported from tools.ts
rg -n "export" server/api/chat/tools.ts | head -20
```

Repository: xynehq/xyne
Length of output: 814

🏁 Script executed:

```shell
# Check what's in agentTools and search for its usage
rg -n "agentTools|import.*from.*tools" --type ts | head -30
```

Repository: xynehq/xyne
Length of output: 466
Fix departmentIds filtering in executeVespaSearch: currently uses only first department ID instead of all departments.
The code at lines 351-352 extracts only credentials.departmentIds[0], but this function is used by the active searchGlobal agent tool (exported through agentTools). In contrast, other search paths in server/api/chat/chat.ts (lines 1459, 3456) correctly store and use the entire departmentIds array for complete permission filtering.
Users with access to multiple Zoho departments will only have tickets from the first department included in search results. Update line 352 to:
```ts
userDepartmentId = credentials.departmentIds
```

and adjust the parameter type from `string | undefined` to `string[] | undefined` at line 337, then pass it correctly to search functions.
🤖 Prompt for AI Agents
In server/api/chat/tools.ts around lines 336 to 364, the code only assigns the
first department ID (credentials.departmentIds[0]) to userDepartmentId which
limits permission filtering to a single department; change the userDepartmentId
declaration type from string | undefined to string[] | undefined, assign the
entire credentials.departmentIds array (userDepartmentId =
credentials.departmentIds), and update any calls that consume userDepartmentId
so they accept and pass an array of department IDs (string[]) to the downstream
search/permission functions instead of a single string.
```ts
const result = await db.execute(
  `SELECT state, COUNT(*) as count FROM pgboss.job WHERE name = '${queueName}' GROUP BY state`
)
```
SQL injection vulnerability: use parameterized queries.
The SQL query uses string interpolation with the queueName variable, which creates a SQL injection risk. Even though queueName is currently hardcoded, this pattern is unsafe and should be avoided.
Apply this diff to use parameterized queries:
```diff
-const result = await db.execute(
-  `SELECT state, COUNT(*) as count FROM pgboss.job WHERE name = '${queueName}' GROUP BY state`
-)
+const result = await db.execute(
+  `SELECT state, COUNT(*) as count FROM pgboss.job WHERE name = $1 GROUP BY state`,
+  [queueName]
+)
```
🤖 Prompt for AI Agents
In server/check-attachment-queue.ts around lines 10-12, the SQL currently
interpolates queueName directly into the query creating a SQL injection risk;
change it to a parameterized query using a placeholder (e.g. $1) and pass
queueName as a parameter to db.execute (e.g. db.execute('SELECT state, COUNT(*)
as count FROM pgboss.job WHERE name = $1 GROUP BY state', [queueName])); ensure
you remove the template literal, keep the rest of the SQL intact, and pass the
params array to db.execute so the DB client handles proper escaping.
```ts
const totalResult = await db.execute(
  `SELECT COUNT(*) as total FROM pgboss.job WHERE name = '${queueName}'`
)
```
SQL injection vulnerability: use parameterized queries.
Same issue as above—the SQL query uses string interpolation instead of parameterized queries.
Apply this diff:
```diff
-const totalResult = await db.execute(
-  `SELECT COUNT(*) as total FROM pgboss.job WHERE name = '${queueName}'`
-)
+const totalResult = await db.execute(
+  `SELECT COUNT(*) as total FROM pgboss.job WHERE name = $1`,
+  [queueName]
+)
```
🤖 Prompt for AI Agents
In server/check-attachment-queue.ts around lines 24 to 26, the SQL uses string
interpolation which creates an SQL injection risk; change the call to use a
parameterized query (e.g. use a positional placeholder like $1 or ? according to
our db.execute API) and pass queueName as a separate parameter array/argument
instead of embedding it into the SQL string; ensure you also remove any
surrounding quotes from the interpolated value so the query relies solely on the
parameter.
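The two review comments above recommend the same fix. A minimal self-contained sketch (the `buildCountQuery` helper is illustrative, not code from the PR) shows why passing the value in a parameter array keeps hostile input inert — the queue name never becomes part of the SQL text, so the driver can escape it safely:

```typescript
// A parameterized query travels as separate text + values; only the
// placeholder ($1) appears in the SQL string.
interface ParamQuery {
  text: string;
  values: string[];
}

function buildCountQuery(queueName: string): ParamQuery {
  // queueName never touches the SQL string; it rides in `values`.
  return {
    text: "SELECT COUNT(*) as total FROM pgboss.job WHERE name = $1",
    values: [queueName],
  };
}

const malicious = "x'; DROP TABLE pgboss.job; --";
const q = buildCountQuery(malicious);
console.log(q.text.includes(malicious)); // false: payload stays out of the SQL
console.log(q.values[0] === malicious);  // true: delivered as inert data
```

With a real client the shape is the same idea: the SQL text and the bound values are shipped separately, so the server never parses attacker-controlled characters as SQL.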
const result = await db.execute(
  `DELETE FROM pgboss.job WHERE name = '${queueName}'`
)
SQL injection vulnerability in DELETE statement.
The DELETE query uses string interpolation, creating a SQL injection vulnerability. This is particularly critical for a DELETE operation as it could lead to unintended data loss.
Apply this diff to use parameterized queries:
- const result = await db.execute(
- `DELETE FROM pgboss.job WHERE name = '${queueName}'`
- )
+ const result = await db.execute(
+ `DELETE FROM pgboss.job WHERE name = $1`,
+ [queueName]
+ )

📝 Committable suggestion
const result = await db.execute(
  `DELETE FROM pgboss.job WHERE name = $1`,
  [queueName]
)
🤖 Prompt for AI Agents
In server/clear-attachment-queue.ts around lines 10 to 12 the DELETE uses string
interpolation and is vulnerable to SQL injection; change it to a parameterized
query by replacing the interpolated SQL with a prepared statement that uses a
placeholder for queueName and pass queueName as a bound parameter to db.execute
(or the DB client's parameter array/object) so the value is escaped safely and
cannot alter the SQL.
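The extra severity for the DELETE case is easy to demonstrate. The sketch below (illustrative only, not code from the PR) shows how a crafted queue name rewrites the interpolated statement so the WHERE clause matches every row, wiping the whole `pgboss.job` table instead of one queue:

```typescript
// Illustration: string interpolation lets the input rewrite the statement.
function interpolated(queueName: string): string {
  return `DELETE FROM pgboss.job WHERE name = '${queueName}'`;
}

const payload = "x' OR '1'='1";
const sql = interpolated(payload);
// The tautology makes the WHERE clause match every row:
console.log(sql); // DELETE FROM pgboss.job WHERE name = 'x' OR '1'='1'
```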
<allow until="2025-12-03">indexing-change</allow>
<allow until='2025-12-03'>field-type-change</allow>
<allow until='2025-12-03'>global-document-change</allow>
<allow until='2025-12-03'>schema-removal</allow>
Validation overrides have already expired.
The override dates are set to 2025-12-03, but today is December 08, 2025. These overrides are no longer active and Vespa will enforce validation rules, potentially blocking schema changes in this PR.
Update to a future date:
<validation-overrides>
- <allow until="2025-12-03">indexing-change</allow>
- <allow until='2025-12-03'>field-type-change</allow>
- <allow until='2025-12-03'>global-document-change</allow>
- <allow until='2025-12-03'>schema-removal</allow>
+ <allow until="2025-12-31">indexing-change</allow>
+ <allow until="2025-12-31">field-type-change</allow>
+ <allow until="2025-12-31">global-document-change</allow>
+ <allow until="2025-12-31">schema-removal</allow>
</validation-overrides>

🤖 Prompt for AI Agents
In server/vespa/validation-overrides.xml around lines 2 to 5 the override
expiration dates are past (2025-12-03) so they are no longer effective; update
each <allow ...> until="..." attribute to a future ISO date (e.g., 2026-12-03 or
another agreed-upon date) so the overrides remain active, keep quoting
consistent (prefer double quotes) across all entries, and ensure the same new
date is applied to each of the four allow entries.
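A small check like the following could catch stale overrides before deployment. This is a sketch, not part of the PR: `expiredOverrides` and its regex are illustrative, and it assumes the file uses the simple `<allow until="...">name</allow>` shape shown above (either quote style):

```typescript
// Flag <allow until="..."> entries whose expiry date has already passed.
function expiredOverrides(xml: string, today: Date): string[] {
  const out: string[] = [];
  const re = /<allow\s+until=["']([\d-]+)["']>([^<]+)<\/allow>/g;
  for (const m of xml.matchAll(re)) {
    if (new Date(m[1]) < today) out.push(m[2]); // m[1]=date, m[2]=rule name
  }
  return out;
}

const xml = `<allow until="2025-12-03">indexing-change</allow>`;
console.log(expiredOverrides(xml, new Date("2025-12-08"))); // [ 'indexing-change' ]
```

Running it in CI against `server/vespa/validation-overrides.xml` would surface the exact failure CodeRabbit flags here: all four overrides expired on 2025-12-03.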
curl --location 'https://desk.zoho.com/api/v1/tickets?limit=2' \
--header 'Authorization: Zoho-oauthtoken 1000.c1657cc447093669bf3b0b054224520a.66acd53e027a04348f1fedd4aeaae0aa' \
--header 'Content-Type: application/json' \
--header 'Cookie: JSESSIONID=AA23B4A9295E12B36A4B423612C79190; crmcsr=6b324ec5-a3b4-4a08-a221-04cabc79296f; zalb_2d3a2b5a21=92a1dcb3f315402f61d63a903b2bbee0; zalb_9a26c99460=84b5a076da4ea1ba0b64a38c8fd681c6; zd_group_name=eb67dbd99ad7832e0112ab054f03c8d26ee1d8d2ad1f38e0c618fd47e081ab69'
Critical: Remove real OAuth tokens from documentation.
This documentation file contains real OAuth access tokens in the curl examples. These tokens should be replaced with placeholder values to prevent unauthorized API access.
Replace with placeholder tokens throughout the file:
-curl --location 'https://desk.zoho.com/api/v1/tickets?limit=2' \
---header 'Authorization: Zoho-oauthtoken 1000.c1657cc447093669bf3b0b054224520a.66acd53e027a04348f1fedd4aeaae0aa' \
+curl --location 'https://desk.zoho.com/api/v1/tickets?limit=2' \
+--header 'Authorization: Zoho-oauthtoken <YOUR_ACCESS_TOKEN>' \

Apply similar changes to all curl examples in this file (lines 113, 359, 461, 519, 602).
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In server/zoho-api around lines 3 to 7 (and also update examples at lines 113,
359, 461, 519, 602), the curl examples contain real OAuth access tokens and
sensitive cookie/session values; replace any real tokens, API keys, session IDs
or cookies with generic placeholders such as YOUR_ZOHO_OAUTH_TOKEN,
YOUR_SESSION_ID, and YOUR_COOKIE_VALUES, ensuring spacing/format stays valid
(e.g., Authorization: Zoho-oauthtoken YOUR_ZOHO_OAUTH_TOKEN) and update all
other curl examples in this file to use the same placeholders.
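Beyond redacting this file, a lightweight scan could stop the same leak from recurring. The sketch below is illustrative (not part of the PR); the regex assumes Zoho access tokens keep the `1000.<32 hex>.<32 hex>` shape seen in the examples above, which is an observed pattern, not a documented guarantee:

```typescript
// Pre-commit style scan: find strings shaped like Zoho OAuth access tokens.
const ZOHO_TOKEN = /1000\.[0-9a-f]{32}\.[0-9a-f]{32}/g;

function findTokens(text: string): string[] {
  return text.match(ZOHO_TOKEN) ?? [];
}

// Synthetic token (not a real credential) built to match the pattern:
const sample =
  "Authorization: Zoho-oauthtoken 1000." + "ab".repeat(16) + "." + "cd".repeat(16);
console.log(findTokens(sample).length); // 1
```

Wiring this (or an off-the-shelf secret scanner) into CI would flag documentation like `server/zoho-api` before real tokens land in the repository; note that the already-committed token should also be revoked, since removing it from the file does not remove it from git history.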
Description
Testing
Additional Notes
Summary by CodeRabbit
New Features
Background Services