
Conversation

@promisingcoder commented Jun 10, 2025

/claim #34

What type of PR is this? (check all applicable)

  • Refactor
  • Feature
  • Bug Fix
  • Optimization
  • Documentation Update

Description

Related Tickets & Documents

  • Related Issue #
  • Closes #

QA Instructions, Screenshots, Recordings

Please replace this line with instructions on how to test your changes, and add screenshots or recordings if possible.

Added/updated tests?

We encourage keeping test coverage at 80% or above.

  • Yes
  • No, and this is why: please explain why tests are not included

Community Support

@promisingcoder (Author)

(screenshots attached)

@promisingcoder (Author)

@adnan-cto @mubashir-oss
Please provide a review so I know whether anything needs editing or whether this matches what you want; I can make changes as needed.

@promisingcoder (Author)

video.mp4

- Increase chunks processed in synthesis from 3 to 5
- Expand chunk text from 200 to 500 chars for better context
- Add sophisticated regex-based section filtering for negation queries
- Implement query-type specific decomposition strategies
- Double retrieved chunks for complex queries
- Perform preliminary search before decomposing complex queries
- Pass relevant document excerpts to the LLM when creating sub-queries
- Add context-based prompting to generate specific, concrete sub-queries
- Eliminate placeholder text ([topic], [concept]) by using actual document terminology
- Add detailed examples and guidelines for context-aware query generation
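
To make the decomposition change concrete, here is a minimal sketch of the idea; the names `classify_query` and `plan_retrieval` are illustrative placeholders, not the actual code in this PR:

```python
# Sketch only: query-type specific decomposition strategies with a doubled
# retrieval budget for complex queries. All names here are hypothetical.
import re

DEFAULT_TOP_K = 5          # chunks passed to synthesis (was 3)
CHUNK_EXCERPT_CHARS = 500  # excerpt length per chunk (was 200)

def classify_query(query: str) -> str:
    """Very rough heuristic classification of the incoming query."""
    q = query.lower()
    if re.search(r"\b(not|without|except|exclude|never)\b", q):
        return "negation"
    if re.search(r"\b(and|compare|both|versus|vs\.?)\b", q) or q.count("?") > 1:
        return "complex"
    return "simple"

def plan_retrieval(query: str) -> dict:
    """Pick a decomposition strategy and retrieval budget per query type."""
    query_type = classify_query(query)
    top_k = DEFAULT_TOP_K * 2 if query_type == "complex" else DEFAULT_TOP_K
    strategy = {
        "simple": "single_pass",
        "complex": "decompose_into_sub_queries",
        "negation": "filter_sections_then_answer",
    }[query_type]
    return {
        "query_type": query_type,
        "strategy": strategy,
        "top_k": top_k,
        "excerpt_chars": CHUNK_EXCERPT_CHARS,
    }

if __name__ == "__main__":
    print(plan_retrieval("Compare the refund policy and the warranty terms"))
    # -> {'query_type': 'complex', 'strategy': 'decompose_into_sub_queries',
    #     'top_k': 10, 'excerpt_chars': 500}
```

The point is that the retrieval budget and strategy are chosen per query type instead of one fixed path for every question.
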
@promisingcoder (Author)

I have greatly improved the RAG and will upload a video recording within the hour.

@promisingcoder (Author)

video.mp4

@promisingcoder (Author)

RAG System Enhancements

I've implemented two key improvements to fix the RAG system's handling of complex queries:

1. Response Quality Improvements

  • Increased processed chunks (3→5) and text context (200→500 chars)
  • Added regex-based section filtering for precise negation handling
  • Doubled chunk retrieval for complex queries
  • Created query-type specific decomposition strategies

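As an illustration of the negation filtering, here is a rough sketch; names such as `Chunk` and `filter_chunks_for_negation` are made up for the example and are not the functions changed in this PR:

```python
# Sketch only: for a negation query ("What is NOT covered?"), keep chunks
# whose text contains exclusion language; fall back to all chunks otherwise.
import re
from dataclasses import dataclass

NEGATION_QUERY = re.compile(r"\b(not|without|except|exclud\w*|never)\b", re.IGNORECASE)
EXCLUSION_SECTION = re.compile(
    r"\b(not (covered|included|eligible)|exclud\w*|does not apply|except(ion)?s?)\b",
    re.IGNORECASE,
)

@dataclass
class Chunk:
    section: str
    text: str

def filter_chunks_for_negation(query: str, chunks: list[Chunk]) -> list[Chunk]:
    """For negation queries, prefer sections that explicitly state exclusions."""
    if not NEGATION_QUERY.search(query):
        return chunks
    filtered = [c for c in chunks if EXCLUSION_SECTION.search(c.text)]
    return filtered or chunks  # fall back rather than return nothing

if __name__ == "__main__":
    chunks = [
        Chunk("3.1 Coverage", "Accidental damage is covered for 12 months."),
        Chunk("3.2 Exclusions", "Water damage is not covered under this plan."),
    ]
    kept = filter_chunks_for_negation("What is not covered by the plan?", chunks)
    print([c.section for c in kept])  # ['3.2 Exclusions']
```

Falling back to the unfiltered chunks keeps the system from returning nothing when no exclusion-style section matches.
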
2. Context-Aware Query Generation

  • Added preliminary document search before query decomposition
  • Eliminated placeholder text (e.g., "[concept]") by feeding document context to the LLM
  • Enhanced prompting with examples and explicit instructions

These changes produce more comprehensive and accurate responses for multi-hop and negation queries that previously returned generic or incomplete answers.
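
For the context-aware generation step, here is a minimal sketch of the prompt construction; the helpers `preliminary_search` and `llm.complete` are assumed interfaces for illustration, not this repository's API:

```python
# Sketch only: run a cheap preliminary search, then feed the top excerpts to
# the LLM so sub-queries use the document's own terms instead of placeholders
# like "[topic]" or "[concept]".
def build_decomposition_prompt(query: str, excerpts: list[str]) -> str:
    """Compose a prompt that grounds sub-query generation in retrieved text."""
    context = "\n\n".join(f"Excerpt {i + 1}:\n{e[:500]}" for i, e in enumerate(excerpts))
    return (
        "You are decomposing a complex question into concrete sub-queries.\n\n"
        f"Document excerpts:\n{context}\n\n"
        f"Question: {query}\n\n"
        "Write 2-4 sub-queries. Use the exact terminology found in the excerpts.\n"
        "Do NOT use placeholders such as [topic] or [concept].\n"
        'Example: instead of "What are the limits of [concept]?", write\n'
        '"What are the limits of the 90-day return window?".\n'
    )

# Usage (assumed interfaces):
#   excerpts = preliminary_search(query, top_k=5)   # first-pass retrieval
#   sub_queries = llm.complete(build_decomposition_prompt(query, excerpts))
```

Grounding the prompt in real excerpts is what removes the "[topic]"/"[concept]" placeholders from the generated sub-queries.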

@promisingcoder (Author)

@adnan-cto @mubashir-oss
There are other improvements I intend to make to the RAG, but I will need your feedback first.

@promisingcoder (Author)

@adnan-cto @mubashir-oss
Just a reminder to give me feedback so that I can continue the implementation

@amonkhepri

@promisingcoder did you receive any feedback in the end?

@promisingcoder (Author)

@promisingcoder did you receive any feedback in the end?

No!

