Merged
Commits
69 commits
9d633db
Trigger Build
kieran-wilkinson-4 Nov 21, 2025
fee0224
feat: creates new prompt resources
bencegadanyi1-nhs Nov 24, 2025
772cb23
feat: makes new prompt name available as env var in lambda
bencegadanyi1-nhs Nov 24, 2025
f383c60
Merge branch 'main' into AEA-5919-Kieran
bencegadanyi1-nhs Nov 24, 2025
f4a79ef
feat: Update prompt to use xml formatting
kieran-wilkinson-4 Nov 24, 2025
13fe3b1
feat: Add RAG environment variables and update test cases accordingly
kieran-wilkinson-4 Nov 24, 2025
7ebfc67
refactor: adds chat type prompt parsing
bencegadanyi1-nhs Nov 25, 2025
f4dd61f
fix: adds chat type prompt parsing corrected to response syntax
bencegadanyi1-nhs Nov 25, 2025
b43b70e
chore: adds debug logging
bencegadanyi1-nhs Nov 25, 2025
219f773
chore: hard codes inference config for test purposes
bencegadanyi1-nhs Nov 25, 2025
ef932e0
feat: Add inference configuration
kieran-wilkinson-4 Nov 25, 2025
e1274e9
feat: Update system prompt to acknowledge implicit and explicit sources
kieran-wilkinson-4 Nov 25, 2025
65a163a
feat: Update system prompt to acknowledge implicit and explicit sources
kieran-wilkinson-4 Nov 26, 2025
e39b31b
feat: Set Inference Confic in CDK
kieran-wilkinson-4 Nov 26, 2025
49f12d0
feat: Use Markdown in Prompt Response
kieran-wilkinson-4 Nov 26, 2025
3b1c3d5
feat: Remove Inference from Env Config
kieran-wilkinson-4 Nov 26, 2025
e5ad207
feat: Get Prompts from File
kieran-wilkinson-4 Nov 26, 2025
5dd4dc0
feat: Get Prompts from File via Directory
kieran-wilkinson-4 Nov 26, 2025
25129f5
feat: Move Prompt into CDK Project
kieran-wilkinson-4 Nov 26, 2025
3bc4690
feat: Move Inference Config to Prompt Settings
kieran-wilkinson-4 Nov 26, 2025
248c93e
feat: Pass Consistant Default Inference Values Through App
kieran-wilkinson-4 Nov 27, 2025
c944a39
trigger build
kieran-wilkinson-4 Nov 27, 2025
f376d04
feat: Add test coverage around chat prompt template
kieran-wilkinson-4 Nov 27, 2025
11ad828
feat: Add tests for edge cases and reformulation
kieran-wilkinson-4 Nov 27, 2025
d3763d8
feat: Add dataclass for bedrock config
kieran-wilkinson-4 Nov 28, 2025
f597132
feat: Use slack actions for citations
kieran-wilkinson-4 Dec 2, 2025
8a336d6
feat: Update slack actions and models
kieran-wilkinson-4 Dec 2, 2025
0f3a951
feat: Remove JSON from slack block builder
kieran-wilkinson-4 Dec 2, 2025
5a32c4f
feat: Use first reference for each citation
kieran-wilkinson-4 Dec 2, 2025
ee9c5af
feat: Log citation if missing
kieran-wilkinson-4 Dec 2, 2025
2a98a90
Merge branch 'main' into AEA-5919-Add-Citation-Buttons
kieran-wilkinson-4 Dec 2, 2025
f1067de
feat: Log bedrock response and add feedback back
kieran-wilkinson-4 Dec 2, 2025
df127d9
feat: Update retrieval configuration
kieran-wilkinson-4 Dec 2, 2025
78aff9f
feat: Update prompt request parameters
kieran-wilkinson-4 Dec 3, 2025
66f1d4a
fix: Add kb_response back in
kieran-wilkinson-4 Dec 4, 2025
eff156b
fix: fix title in citations
kieran-wilkinson-4 Dec 4, 2025
56caac0
feat: pull citations from body
kieran-wilkinson-4 Dec 4, 2025
ac0245a
fix: sort tests for citations
kieran-wilkinson-4 Dec 4, 2025
926a340
feat: Update citation button values
kieran-wilkinson-4 Dec 4, 2025
7ad19e6
feat: Update inline citations and citation block
kieran-wilkinson-4 Dec 5, 2025
f824d2e
feat: fix unit tests
kieran-wilkinson-4 Dec 5, 2025
9bf1fea
feat: add cite action to handlers
kieran-wilkinson-4 Dec 5, 2025
bbda9f9
feat: Add additional logs
kieran-wilkinson-4 Dec 5, 2025
0c185ed
feat: fix citation regex
kieran-wilkinson-4 Dec 5, 2025
a2ba172
feat: use citation dictionary
kieran-wilkinson-4 Dec 5, 2025
6b743b7
feat: use citation dictionary
kieran-wilkinson-4 Dec 5, 2025
3accd7f
feat: use citation dictionary
kieran-wilkinson-4 Dec 5, 2025
7058ead
feat: use citation dictionary
kieran-wilkinson-4 Dec 5, 2025
8cf7a4e
feat: use multiple citation actions
kieran-wilkinson-4 Dec 8, 2025
20b7c7d
feat: update styling and try use markdown instead of mrkdwn
kieran-wilkinson-4 Dec 8, 2025
1f56e86
feat: roll back to mrkdwn
kieran-wilkinson-4 Dec 8, 2025
c9b06cb
feat: Update system and user prompts
kieran-wilkinson-4 Dec 8, 2025
26de83e
feat: Update styling and formatting
kieran-wilkinson-4 Dec 8, 2025
79a6120
feat: Update button styling
kieran-wilkinson-4 Dec 8, 2025
7e5fa9b
feat: Make sure citations use correct name
kieran-wilkinson-4 Dec 8, 2025
1deb45e
feat: Rollback formatting prompting
kieran-wilkinson-4 Dec 9, 2025
a6bc780
feat: Update citation prompting and update action
kieran-wilkinson-4 Dec 9, 2025
caa1d48
feat: Update citation prompting and update action
kieran-wilkinson-4 Dec 9, 2025
00ea6ef
feat: Update citation prompting and update action
kieran-wilkinson-4 Dec 9, 2025
f52df16
feat: Increase max tokens
kieran-wilkinson-4 Dec 9, 2025
b9b2020
feat: Split button context by double vertical bar
kieran-wilkinson-4 Dec 9, 2025
6957177
feat: Split button context by double vertical bar
kieran-wilkinson-4 Dec 10, 2025
b93eb60
feat: rebuild
kieran-wilkinson-4 Dec 10, 2025
f5bbdb5
feat: rebuild
kieran-wilkinson-4 Dec 10, 2025
4250251
feat: update system prompt to latest changes
kieran-wilkinson-4 Dec 10, 2025
13fd0aa
Merge branch 'main' into AEA-5919-Add-Citation-Buttons
kieran-wilkinson-4 Dec 10, 2025
9116819
feat: Update unit tests
kieran-wilkinson-4 Dec 10, 2025
a2faa0c
feat: reduce method complexity when creating slack response
kieran-wilkinson-4 Dec 10, 2025
f3ecf6d
feat: test logging coverage
kieran-wilkinson-4 Dec 10, 2025
96 changes: 49 additions & 47 deletions packages/cdk/prompts/systemPrompt.txt
Original file line number Diff line number Diff line change
@@ -1,47 +1,49 @@
<SystemInstructions>
You are an AI assistant designed to provide helpful information and guidance related to healthcare systems,
data integration and user setup.

<Requirements>
1. Break down the question(s) based on the context
2. Examine the information provided in the question(s) or requirement(s).
3. Refer to your knowledge base to find relevant details, specifications, and useful references/ links.
4. The knowledge base is your source of truth before anything else
5. Acknowledge explicit and implicit evidence
5a. If no explicit evidence is available, state implicit evidence with a caveat
6. Provide critical thinking before replying to make the direction actionable and authoritative
7. Provide a clear and comprehensive answer by drawing inferences,
making logical connections from the available information, comparing previous messages,
and providing users with link and/ or references to follow.
8. Be clear in answers, direct actions are preferred (eg., "Check Postcode" > "Refer to documentation")
</Requirements>

<Constraints>
1. Quotes should be italic
2. Document titles and document section names should be bold
3. If there is a single question, or the user is asking for direction, do not list items
4. If the query has multiple questions *and* the answer includes multiple answers for multiple questions
(as lists or bullet), the list items must be formatted as \`*<question>*
- <answer(s)>\`.
4a. If there are multiple questions in the query, shorten the question to less than 50 characters
</Constraints>

<Output>
- Use Markdown, avoid XML
- Structured, informative, and tailored to the specific context of the question.
- Provide evidence to support results
- Acknowledging any assumptions or limitations in your knowledge or understanding.
- Text structure should be in Markdown
</Output>

<Tone>
Professional, helpful, authoritative.
</Tone>

<Examples>
<Example1>
Q: Should alerts be automated?
A: *Section 1.14.1* mentions handling rejected prescriptions, which implies automation.
</Example1>
</Examples>
</SystemInstructions>
You are an AI assistant designed to provide guidance and references from your knowledge base to help users make decisions when onboarding. It is *VERY* important you return *ALL* references, for user examination.

# Response
## Response Structure
- *Summary*: 100 characters maximum, capturing core answer
- *Answer* (use "mrkdwn") (< 800 characters)
- Page break (use `------`)
- \[Bibliography\]

## Formatting ("mrkdwn")
a. *Bold* for:
- Headings, subheadings: *Answer:*, *Bibliography:*
- Source names: *NHS England*, *EPS*
b. _Italic_ for:
- Citations, references, document titles
c. Block Quotes for:
- Direct quotes >1 sentence
- Technical specifications, parameters
- Examples
d. `Inline code` for:
- System names, field names: `PrescriptionID`
- Short technical terms: `HL7 FHIR`
e. Links:
- Do not provide links

# Thinking
## Question Handling
- Detect whether the query contains one or multiple questions
- Split complex queries into individual sub-questions
- Identify question type: factual, procedural, diagnostic, troubleshooting, or clarification-seeking
- For multi-question queries: number sub-questions clearly (Q1, Q2, etc)

## RAG & Knowledge Base Integration
- Relevance threshold handling:
- Score > 0.85 (High confidence)
- Score 0.70 - 0.85 (Medium confidence)
- Score < 0.70 (Low confidence)

## Corrections
- Change _National Health Service Digital (NHSD)_ references to _National Health Service England (NHSE)_

# Bibliography
## Format
<cit>source number||summary title||link||filename||text snippet||reasoning</cit>\n

## Requirements
- Return **ALL** retrieved documents, their name and a text snippet, from "CONTEXT"
- Get full text references from search results for Bibliography
- Title should be less than 50 characters
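The `<cit>` format above delimits six fields with double vertical bars (matching the "Split button context by double vertical bar" commits). As a minimal sketch of how a downstream consumer such as the Slack block builder might parse these tags — the helper name and field names are assumptions for illustration, not code from this PR:

```python
import re

# Matches each <cit>...</cit> tag emitted by the model in the Bibliography.
CIT_PATTERN = re.compile(r"<cit>(.*?)</cit>", re.DOTALL)

# Assumed field order, mirroring the format line in the system prompt:
# source number||summary title||link||filename||text snippet||reasoning
FIELDS = ("source_number", "title", "link", "filename", "snippet", "reasoning")


def parse_citations(text: str) -> list[dict]:
    """Split each <cit> tag on '||' into a dict of named fields."""
    citations = []
    for match in CIT_PATTERN.finditer(text):
        parts = match.group(1).split("||")
        if len(parts) != len(FIELDS):
            continue  # skip malformed tags rather than raise
        citations.append(dict(zip(FIELDS, (p.strip() for p in parts))))
    return citations
```

A tag with the wrong number of fields is skipped here; the PR's own approach ("Log citation if missing") suggests logging would be the production choice.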
11 changes: 5 additions & 6 deletions packages/cdk/prompts/userPrompt.txt
@@ -1,7 +1,6 @@
- Using your knowledge around the National Health Service (NHS), Electronic Prescription Service (EPS) and the Fast Healthcare Interoperability Resources' (FHIR) onboarding, Supplier Conformance Assessment List (SCAL), APIs, developer guides and error resolution; please answer the following question and cite direct quotes and document sections.
- If my query is asking for instructions (i.e., "How to...", "How do I...") provide step by steps instructions
- Do not provide general advice or external instructions
# QUERY
{{user_query}}

<SearchResults>$search_results$</SearchResults>

<UserQuery>{{user_query}}</UserQuery>`
# CONTEXT
## Results $search_results$
## LIST ALL RESULTS IN TABLE
10 changes: 6 additions & 4 deletions packages/cdk/resources/BedrockPromptResources.ts
@@ -18,12 +18,14 @@ export class BedrockPromptResources extends Construct {
constructor(scope: Construct, id: string, props: BedrockPromptResourcesProps) {
super(scope, id)

const claudeHaikuModel = BedrockFoundationModel.ANTHROPIC_CLAUDE_HAIKU_V1_0
const claudeSonnetModel = BedrockFoundationModel.ANTHROPIC_CLAUDE_SONNET_V1_0
// Nova Pro is recommended for text generation tasks requiring high accuracy and complex understanding.
const novaProModel = BedrockFoundationModel.AMAZON_NOVA_PRO_V1
// Nova Lite is recommended for tasks
const novaLiteModel = BedrockFoundationModel.AMAZON_NOVA_LITE_V1

const queryReformulationPromptVariant = PromptVariant.text({
variantName: "default",
model: claudeHaikuModel,
model: novaLiteModel,
promptVariables: ["topic"],
promptText: props.settings.reformulationPrompt.text
})
@@ -37,7 +39,7 @@

const ragResponsePromptVariant = PromptVariant.chat({
variantName: "default",
model: claudeSonnetModel,
model: novaProModel,
promptVariables: ["query", "search_results"],
system: props.settings.systemPrompt.text,
messages: [props.settings.userPrompt]
2 changes: 1 addition & 1 deletion packages/cdk/resources/BedrockPromptSettings.ts
@@ -35,7 +35,7 @@ export class BedrockPromptSettings extends Construct {
this.inferenceConfig = {
temperature: 0,
topP: 1,
maxTokens: 512,
maxTokens: 1500,
stopSequences: [
"Human:"
]
5 changes: 5 additions & 0 deletions packages/slackBotFunction/app/services/ai_processor.py
@@ -21,6 +21,11 @@ def process_ai_query(user_query: str, session_id: str | None = None) -> AIProces
# session_id enables conversation continuity across multiple queries
kb_response = query_bedrock(reformulated_query, session_id)

logger.info(
"response from bedrock",
extra={"response_text": kb_response},
)

return {
"text": kb_response["output"]["text"],
"session_id": kb_response.get("sessionId"),
15 changes: 13 additions & 2 deletions packages/slackBotFunction/app/services/bedrock.py
@@ -25,7 +25,7 @@ def query_bedrock(user_query: str, session_id: str = None) -> RetrieveAndGenerat
inference_config = prompt_template.get("inference_config")

if not inference_config:
default_values = {"temperature": 0, "maxTokens": 512, "topP": 1}
default_values = {"temperature": 0, "maxTokens": 1500, "topP": 1}
inference_config = default_values
logger.warning(
"No inference configuration found in prompt template; using default values",
@@ -43,6 +43,7 @@
"knowledgeBaseConfiguration": {
"knowledgeBaseId": config.KNOWLEDGEBASE_ID,
"modelArn": config.RAG_MODEL_ID,
"retrievalConfiguration": {"vectorSearchConfiguration": {"numberOfResults": 5}},
"generationConfiguration": {
"guardrailConfiguration": {
"guardrailId": config.GUARD_RAIL_ID,
@@ -57,6 +58,16 @@
}
},
},
"orchestrationConfiguration": {
"inferenceConfig": {
"textInferenceConfig": {
**inference_config,
"stopSequences": [
"Human:",
],
}
},
},
},
},
}
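The new `orchestrationConfiguration` spreads the loaded `inference_config` and appends a fixed `Human:` stop sequence. A minimal sketch of that merge, extracted as a hypothetical helper (not part of the PR; the values shown are the PR's defaults, while real values come from the loaded prompt template):

```python
def build_text_inference_config(inference_config: dict) -> dict:
    # Spread the prompt's inference settings, then add the fixed stop
    # sequence used by the retrieve_and_generate request above.
    return {**inference_config, "stopSequences": ["Human:"]}


config = build_text_inference_config({"temperature": 0, "maxTokens": 1500, "topP": 1})
```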
@@ -79,7 +90,7 @@ def query_bedrock(user_query: str, session_id: str = None) -> RetrieveAndGenerat
response = client.retrieve_and_generate(**request_params)
logger.info(
"Got Bedrock response",
extra={"session_id": response.get("sessionId"), "has_citations": len(response.get("citations", [])) > 0},
extra={"session_id": response.get("sessionId")},
)
return response

2 changes: 1 addition & 1 deletion packages/slackBotFunction/app/services/prompt_loader.py
@@ -106,7 +106,7 @@ def load_prompt(prompt_name: str, prompt_version: str = None) -> dict:
actual_version = response.get("version", "DRAFT")

# Extract inference configuration with defaults
default_inference = {"temperature": 0, "topP": 1, "maxTokens": 512}
default_inference = {"temperature": 0, "topP": 1, "maxTokens": 1500}
raw_inference = response["variants"][0].get("inferenceConfiguration", {})
raw_text_config = raw_inference.get("textInferenceConfiguration", {})
inference_config = {**default_inference, **raw_text_config}
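The merge on the last line relies on dict-unpacking precedence: later keys win, so any value the prompt variant supplies overrides the default while missing keys fall back. A small illustration with a hypothetical variant value:

```python
# Defaults from the PR; raw_text_config stands in for a prompt variant's
# textInferenceConfiguration (the maxTokens value here is hypothetical).
default_inference = {"temperature": 0, "topP": 1, "maxTokens": 1500}
raw_text_config = {"maxTokens": 800}

# Later keys win: the variant's maxTokens overrides the default.
inference_config = {**default_inference, **raw_text_config}
```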