Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough

Adds a shared prompts package and registry, introduces per-bot profile configuration and overlay composition, replaces hard-coded persona defaults with parameterized prompts, updates alias/mention detection to use profile aliases, and propagates profile overlays across backend and Discord prompt flows with tests and docs.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Discord Client
    participant Event as MessageCreate Handler
    participant Alias as MentionAliases
    participant Filter as CatchupFilter
    participant Processor as MessageProcessor
    participant Registry as Prompt Registry
    participant Overlay as ProfileOverlayComposer
    participant AI as OpenAI Service
    Client->>Event: message received
    Event->>Alias: resolveBotMentionAliases(profile, botUsername)
    Alias-->>Event: aliases[]
    Event->>Filter: evaluate relevance (content, aliases)
    Filter-->>Event: score / decision
    Event->>Processor: build reflect/context (include trigger)
    Processor->>Registry: renderPrompt(key, {botProfileDisplayName})
    Registry-->>Processor: basePrompt
    Processor->>Overlay: composePromptWithProfileOverlay(basePrompt, profile, usage)
    Overlay-->>Processor: composedPrompt
    Processor->>AI: send messages (system: composedPrompt, user: ...)
    AI-->>Processor: response
    Processor-->>Client: send reply
```

```mermaid
sequenceDiagram
    participant Init as App Start
    participant Env as Env Loader
    participant Profile as readBotProfileConfig
    participant File as File System
    participant RegistryFactory as createPromptRegistry
    participant Prompts as `@footnote/prompts`
    participant Runtime as runtimeConfig
    Init->>Env: read env (BOT_PROFILE_*, PROMPT_CONFIG_PATH)
    Env->>Profile: parse values
    alt overlay path provided
        Profile->>File: read overlay file
        File-->>Profile: overlayText
    else inline overlay
        Profile->>Profile: use inline overlay text
    end
    Profile-->>Runtime: BotProfileConfig
    Runtime->>RegistryFactory: createPromptRegistry(promptConfigPath)
    RegistryFactory->>Prompts: load defaults + apply overrides
    Prompts-->>RegistryFactory: PromptRegistry instance
    RegistryFactory-->>Runtime: promptRegistry
    Runtime-->>Init: ready
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 checks passed
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/backend/src/services/prompts/promptRegistry.ts`:
- Around line 55-62: wrappedRegistry.renderPrompt merges
DEFAULT_BACKEND_PROMPT_VARIABLES with caller-provided PromptVariables, but
spreading the caller object lets explicit undefined values overwrite defaults
(e.g., botProfileDisplayName becoming undefined). Fix by filtering out
undefined-valued keys from the incoming variables before merging (or merge by
iterating keys and only assigning when value !== undefined) so
DEFAULT_BACKEND_PROMPT_VARIABLES remains for any undefined fields; update
wrappedRegistry.renderPrompt to pass the cleaned variables object into
registry.renderPrompt while referencing DEFAULT_BACKEND_PROMPT_VARIABLES,
PromptVariables, wrappedRegistry.renderPrompt, and registry.renderPrompt.
In `@packages/prompts/src/defaults.yaml`:
- Around line 231-268: The reflect chat system prompt (chat.system.template)
lacks explicit refusal/safety instructions; update the template used by
reflect.chat.system (the chat.system.template block) to include clear refusal
language mirroring discord.chat.system: instruct the model to refuse requests
that are illegal, harmful, or unethical, provide brief safe alternatives or
offer to provide general information, and include escalation/clarification
behavior for ambiguous requests; keep the rest of the CITATION/RESPONSE_METADATA
rules intact and ensure the new lines are unambiguous about refusal and safe
guidance.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ce0b04b5-e23c-4a4d-9c4d-b9790bbf794d
📒 Files selected for processing (4)
- deploy/Dockerfile.backend
- deploy/Dockerfile.bot
- packages/backend/src/services/prompts/promptRegistry.ts
- packages/prompts/src/defaults.yaml
```typescript
public renderPrompt(
  key: PromptKey,
  variables: PromptVariables = {}
): RenderedPrompt {
  const definition = this.getPrompt(key);
  const content = interpolateTemplate(definition.template, variables);
  return {
    content,
    description: definition.description,
    cache: definition.cache,
  };
}

/**
 * Indicates whether a prompt is defined. Useful for lightweight startup
 * assertions without forcing interpolation.
 */
public hasPrompt(key: PromptKey): boolean {
  return Boolean(this.prompts[key]);
}

/**
 * Ensures that each requested key has a corresponding definition. This is
 * handy for startup checks so operators immediately know if their overrides
 * omitted any high-severity prompts.
 */
public assertKeys(keys: PromptKey[]): void {
  for (const key of keys) {
    if (!this.hasPrompt(key)) {
      throw new Error(`Missing prompt definition for key: ${key}`);
    }
  }
}

/**
 * Loads and flattens a YAML prompt file into the internal map representation.
 */
private loadPromptFile(filePath: string, optional: boolean): PromptMap {
  const resolvedPath = path.isAbsolute(filePath)
    ? filePath
    : path.resolve(filePath);

  if (!fs.existsSync(resolvedPath)) {
    if (optional) {
      return {};
    }
    throw new Error(
      `Prompt configuration file not found: ${resolvedPath}`
    );
  }

  const fileContents = fs.readFileSync(resolvedPath, 'utf-8');
  const parsed = yaml.load(fileContents);
  if (!parsed || typeof parsed !== 'object') {
    throw new Error(
      `Prompt configuration did not parse to an object: ${resolvedPath}`
    );
  }

  return this.flattenPromptTree(parsed as Record<string, unknown>);
}

/**
 * Recursively walks a nested object structure, producing dot-delimited keys
 * that match the PromptKey union.
 */
private flattenPromptTree(
  tree: Record<string, unknown>,
  prefix = ''
): PromptMap {
  const result: PromptMap = {};

  for (const [segment, value] of Object.entries(tree)) {
    const key = prefix ? `${prefix}.${segment}` : segment;

    if (value && typeof value === 'object' && !Array.isArray(value)) {
      const candidate = value as Record<string, unknown>;
      const template = candidate.template ?? candidate.prompt;

      if (typeof template === 'string' && isPromptKey(key)) {
        result[key] = {
          template,
          description:
            typeof candidate.description === 'string'
              ? candidate.description
              : undefined,
          cache:
            typeof candidate.cache === 'object' &&
            candidate.cache !== null
              ? (candidate.cache as PromptCachePolicy)
              : undefined,
        };
        continue;
      }

      Object.assign(result, this.flattenPromptTree(candidate, key));
    }
  }

  return result;
}
}

/**
 * Runtime guard used while flattening the YAML tree.
 */
const isPromptKey = (value: string): value is PromptKey =>
  KNOWN_PROMPT_KEYS.has(value as PromptKey);

/**
 * Holds onto the active registry instance so that callers can use the
 * functional `renderPrompt` helper without threading references everywhere.
 */
let activePromptRegistry: PromptRegistry | null = null;

/**
 * Registers the singleton prompt registry for downstream helpers. Typically
 * invoked from the Discord bot's environment bootstrap after loading overrides.
 */
export const setActivePromptRegistry = (registry: PromptRegistry): void => {
  activePromptRegistry = registry;
};

wrappedRegistry.renderPrompt = (
  key: PromptKey,
  variables: PromptVariables = {}
): RenderedPrompt =>
  registry.renderPrompt(key, {
    ...DEFAULT_BACKEND_PROMPT_VARIABLES,
    ...variables,
  });
```
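The `interpolateTemplate` helper that `renderPrompt` delegates to is imported from elsewhere in the package and is not shown in this diff. A minimal sketch of the behavior the registry appears to assume — simple `{{name}}` placeholder substitution, with unknown placeholders left intact. This is an illustration, not the project's actual implementation:

```typescript
// Hypothetical stand-in for the real interpolateTemplate used by the
// registry. Assumes {{name}} placeholders; unknown placeholders are
// preserved rather than replaced with an empty string.
const interpolateTemplate = (
  template: string,
  variables: Record<string, string | undefined>
): string =>
  template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    variables[name] !== undefined ? String(variables[name]) : match
  );

const rendered = interpolateTemplate(
  'You are {{botProfileDisplayName}}, an AI assistant.',
  { botProfileDisplayName: 'Footnote' }
);
// rendered === 'You are Footnote, an AI assistant.'
```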
There was a problem hiding this comment.
Preserve backend defaults when callers pass undefined variables.
PromptVariables explicitly allows undefined, so ...variables can overwrite botProfileDisplayName: 'Footnote' with undefined. In this PR that means a partially populated bot profile can render prompts with a blank or unresolved display name instead of falling back cleanly.
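The pitfall is standard JavaScript spread semantics: spreading an object copies keys whose values are `undefined`, so an explicitly-undefined field clobbers the default it shadows. A minimal sketch — the `PromptVariables` shape here is simplified for illustration:

```typescript
// Simplified stand-in for the real PromptVariables type.
type PromptVariables = Record<string, string | undefined>;

const defaults: PromptVariables = { botProfileDisplayName: 'Footnote' };

// A partially populated bot profile can yield an explicitly-undefined field.
const caller: PromptVariables = { botProfileDisplayName: undefined };

// Naive merge: the undefined value wins, because spread copies the key.
const naive = { ...defaults, ...caller };
// naive.botProfileDisplayName === undefined

// Safe merge: only assign values that are actually defined.
const resolved: PromptVariables = { ...defaults };
for (const [name, value] of Object.entries(caller)) {
  if (value !== undefined) {
    resolved[name] = value;
  }
}
// resolved.botProfileDisplayName === 'Footnote'
```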
🔧 Suggested fix

```diff
- wrappedRegistry.renderPrompt = (
-   key: PromptKey,
-   variables: PromptVariables = {}
- ): RenderedPrompt =>
-   registry.renderPrompt(key, {
-     ...DEFAULT_BACKEND_PROMPT_VARIABLES,
-     ...variables,
-   });
+ wrappedRegistry.renderPrompt = (
+   key: PromptKey,
+   variables: PromptVariables = {}
+ ): RenderedPrompt => {
+   const resolvedVariables: PromptVariables = {
+     ...DEFAULT_BACKEND_PROMPT_VARIABLES,
+   };
+
+   for (const [name, value] of Object.entries(variables)) {
+     if (value !== undefined) {
+       resolvedVariables[name] = value;
+     }
+   }
+
+   return registry.renderPrompt(key, resolvedVariables);
+ };
```
```yaml
chat:
  system:
    description: >-
      Canonical reflect system prompt for web and other non-Discord message surfaces.
    template: |-
      You are {{botProfileDisplayName}}, an AI assistant from the Footnote project. You help people think through tough questions while staying honest and fair. You explore multiple ethical perspectives, trace your sources, and show how you reach your conclusions. Be helpful, thoughtful, and transparent in your responses.

      CITATION STYLE
      - Place citation links immediately after the specific clause or sentence they support.
      - Use numeric inline markdown links in the response text: [1](https://example.com/source), [2](https://example.com/source).
      - Reuse a citation number when referencing the same source again in the same response.

      RESPONSE METADATA PAYLOAD
      After your conversational reply, leave a blank line and append a single JSON object on its own line prefixed with <RESPONSE_METADATA>.
      This metadata records provenance and TRACE chips for downstream systems.

      Required fields:
      - provenance: one of "Retrieved", "Inferred", or "Speculative"
      - tradeoffCount: integer >= 0 capturing how many value tradeoffs you surfaced (use 0 if none)
      - citations: array of {"title": string, "url": fully-qualified URL, "snippet"?: string} objects (use [] if none)

      Optional fields:
      - evidenceScore: integer 1..5 when you can assess source support strength
      - freshnessScore: integer 1..5 when you can assess recency reliability
      - For inferred/speculative answers with limited source grounding, omitting evidenceScore/freshnessScore is expected
      - If you performed web search or used retrieved external sources, include both evidenceScore and freshnessScore
      - Do not emit strings for these scores; emit JSON integers only

      Example:
      <RESPONSE_METADATA>{"provenance":"Retrieved","tradeoffCount":1,"citations":[{"title":"Example","url":"https://example.com"}],"evidenceScore":4,"freshnessScore":4}

      Guidelines:
      - Emit valid, minified JSON (no comments, no code fences, no trailing text)
      - Always include the <RESPONSE_METADATA> block after every response
      - Omit optional fields if you cannot assess them confidently
      - Keep citations array order aligned with first appearance of inline citation markers ([1], [2], ...)
      - Use "Inferred" for reasoning-based answers, "Retrieved" for fact-based, "Speculative" for uncertain answers
  planner:
```
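A downstream consumer of this template has to split the conversational reply from the trailing metadata line. A hedged sketch of how that extraction might look — the field names come from the template above, but the parsing approach itself is an assumption, not the project's actual code:

```typescript
// Shape mandated by the RESPONSE_METADATA contract in the prompt above.
interface ResponseMetadata {
  provenance: 'Retrieved' | 'Inferred' | 'Speculative';
  tradeoffCount: number;
  citations: Array<{ title: string; url: string; snippet?: string }>;
  evidenceScore?: number;
  freshnessScore?: number;
}

// Hypothetical helper: find the marker, parse everything after it as JSON,
// and return null when the block is missing or malformed.
const extractMetadata = (reply: string): ResponseMetadata | null => {
  const marker = '<RESPONSE_METADATA>';
  const index = reply.lastIndexOf(marker);
  if (index === -1) return null;
  try {
    return JSON.parse(reply.slice(index + marker.length)) as ResponseMetadata;
  } catch {
    return null;
  }
};

const reply =
  'Here is my answer. [1](https://example.com)\n\n' +
  '<RESPONSE_METADATA>{"provenance":"Retrieved","tradeoffCount":0,' +
  '"citations":[{"title":"Example","url":"https://example.com"}]}';
const meta = extractMetadata(reply);
// meta?.provenance === 'Retrieved'
```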
Restore explicit refusal/safety instructions in reflect.chat.system.
This new canonical prompt now covers web and other non-Discord surfaces, but unlike discord.chat.system it never tells the model to refuse harmful, illegal, or unethical requests. That leaves the message model relying only on planner routing for safety, which is a weaker guarantee on the new surface.
🔧 Minimal prompt hardening

```diff
  You are {{botProfileDisplayName}}, an AI assistant from the Footnote project. You help people think through tough questions while staying honest and fair. You explore multiple ethical perspectives, trace your sources, and show how you reach your conclusions. Be helpful, thoughtful, and transparent in your responses.
+
+ SAFETY BOUNDARIES
+ - Never provide harmful, illegal, or unethical instructions.
+ - If asked for such content, respond with a brief, polite refusal.
+ - Be clear about uncertainty and avoid overstating support.
```
Establishes a system for users to easily adjust the behavior/personality of their bot(s) through env strings/prompt files.
This also allows for spinning up multiple Discord bots on the same backend.