feat: implement request cancellation and timeout handling for frontend API calls #505
mahek2016 wants to merge 4 commits into AOSSIE-Org:main
Conversation
… API response handling
📝 Walkthrough

Implements frontend request cancellation and a 20s timeout, adds regeneration/navigation/loading UX, pins the backend generators to t5-small and disables Sense2Vec, and adds defensive parsing/validation and logging to the /get_mcq endpoint.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User/UI
    participant TextInput as Text_Input.jsx
    participant APIClient as apiClient.js
    participant Backend as Backend Server
    participant AbortCtl as AbortController
    User->>TextInput: Click "Next" / "Regenerate"
    TextInput->>TextInput: disable UI, save payload
    TextInput->>APIClient: makeRequest(endpoint, payload)
    APIClient->>AbortCtl: cancel previous controller (if any)
    APIClient->>AbortCtl: create new controller + start 20s timeout
    APIClient->>Backend: POST with signal
    alt Backend responds
        Backend->>APIClient: 200 OK (quiz data)
        APIClient->>TextInput: return data
        TextInput->>TextInput: store QA pairs, navigate to /output, enable UI
    else Timeout / Abort
        AbortCtl->>APIClient: abort -> throws AbortError
        APIClient->>TextInput: reject with AbortError
        TextInput->>TextInput: show error/alert, enable UI
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
backend/Generator/main.py (2)
55-56: ⚠️ Potential issue | 🔴 Critical

`identify_keywords` will return empty results when `s2v` is `None`.

As noted in the `mcq.py` review, `identify_keywords` filters answers using `is_word_available`, which requires a non-null `s2v`. With `self.s2v = None`, this will filter out all keywords, causing MCQ generation to return empty results. This is a downstream effect of the root cause in `mcq.py`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/Generator/main.py` around lines 55 - 56, identify_keywords is returning empty when self.s2v is None because it calls is_word_available which requires a non-null s2v; update identify_keywords (or the availability check it calls) to short-circuit when self.s2v is None by skipping the word-availability filtering and using the fallback frequency/levenshtein logic so keywords still surface; reference identify_keywords, is_word_available and self.s2v to locate the change, and ensure downstream call find_sentences_with_keywords receives the non-empty keywords list.
67-70: ⚠️ Potential issue | 🟡 Minor

Bare `except` clause swallows all exceptions.

This catches all exceptions silently, including `KeyboardInterrupt` and `SystemExit`, making debugging difficult. Catch specific exceptions and consider logging the error.

🛡️ Proposed fix

```diff
     try:
         generated_questions = generate_multiple_choice_questions(keyword_sentence_mapping, self.device, self.tokenizer, self.model, self.s2v, self.normalized_levenshtein)
-    except:
+    except Exception as e:
+        print(f"Error generating MCQ questions: {e}")
         return final_output
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/Generator/main.py` around lines 67 - 70, Replace the bare except with a specific exception handler: catch Exception as e (not a bare except) around the call to generate_multiple_choice_questions and log the error before returning final_output; ensure KeyboardInterrupt/SystemExit are not swallowed (i.e., don't catch BaseException) and re-raise them if caught. Target the call to generate_multiple_choice_questions (which uses self.device, self.tokenizer, self.model, self.s2v, self.normalized_levenshtein) and make sure the handler logs the exception details (e.g., include e) and then returns final_output.

backend/Generator/mcq.py (3)
14-20: ⚠️ Potential issue | 🔴 Critical

`is_word_available` will crash when `s2v` is `None`.

In `main.py`, `self.s2v` is now set to `None`, but this function directly calls `s2v_model.get_best_sense(word)` without a null check. This will raise an `AttributeError` at runtime.

🐛 Proposed fix to add null check

```diff
 def is_word_available(word, s2v_model):
+    if s2v_model is None:
+        return False
     word = word.replace(" ", "_")
     sense = s2v_model.get_best_sense(word)
     if sense is not None:
         return True
     else:
         return False
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/Generator/mcq.py` around lines 14 - 20, The function is_word_available currently calls s2v_model.get_best_sense(word) without guarding against s2v_model being None; update is_word_available to first check if s2v_model is truthy (or not None) and immediately return False if it is None, then proceed to call s2v_model.get_best_sense(word) and return True/False based on the result; refer to the is_word_available function and the s2v_model parameter when making this change.
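To make the guarded behavior concrete, here is a runnable sketch of the fixed helper in condensed form, using a minimal stub in place of the real Sense2Vec model (the stub class and its vocabulary are illustrative, not from the codebase):

```python
def is_word_available(word, s2v_model):
    # Guard: without a Sense2Vec model, report the word as unavailable.
    if s2v_model is None:
        return False
    word = word.replace(" ", "_")
    return s2v_model.get_best_sense(word) is not None


class StubS2V:
    """Illustrative stand-in exposing only get_best_sense."""

    def __init__(self, known):
        self.known = known

    def get_best_sense(self, word):
        # Return a sense tag for known words, None otherwise.
        return f"{word}|NOUN" if word in self.known else None


stub = StubS2V({"machine_learning"})
print(is_word_available("machine learning", stub))  # True
print(is_word_available("machine learning", None))  # False
```

With the guard in place, a missing model degrades to "no sense found" instead of raising `AttributeError`.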
31-54: ⚠️ Potential issue | 🔴 Critical

`find_similar_words` will crash when `s2v` is `None`.

This function calls `s2v_model.get_best_sense(word)` and `s2v_model.most_similar(sense, n=15)` without null checks. With `s2v=None` in `main.py`, this will raise an `AttributeError`.

🐛 Proposed fix to add null check

```diff
 def find_similar_words(word, s2v_model):
     output = []
+    if s2v_model is None:
+        return output
     word_preprocessed = word.translate(word.maketrans("", "", string.punctuation))
     word_preprocessed = word_preprocessed.lower()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/Generator/mcq.py` around lines 31 - 54, find_similar_words currently calls s2v_model.get_best_sense(...) and s2v_model.most_similar(...) without validating s2v_model, so it will raise AttributeError when s2v_model is None; add a guard at the top of find_similar_words to check if s2v_model is falsy and return an empty list (or a sensible fallback), and ensure subsequent calls to get_best_sense and most_similar only run when s2v_model is valid; reference: function find_similar_words and methods get_best_sense / most_similar on s2v_model.
163-165: ⚠️ Potential issue | 🟠 Major

`identify_keywords` filters out all answers when `s2v` is `None`.

Line 164 calls `is_word_available(answer, s2v_model)`. If you apply the null-check fix that returns `False` when `s2v_model` is `None`, this loop will exclude all answers, resulting in an empty list. Consider changing the logic to include all answers when Sense2Vec is unavailable.

🔧 Proposed fix to handle missing Sense2Vec gracefully

```diff
     answers = []
     for answer in total_phrases_filtered:
-        if answer not in answers and is_word_available(answer, s2v_model):
+        if answer not in answers and (s2v_model is None or is_word_available(answer, s2v_model)):
             answers.append(answer)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/Generator/mcq.py` around lines 163 - 165, The current loop in Generator/mcq.py filters out all candidate answers when Sense2Vec is unavailable because it calls is_word_available(answer, s2v_model) which you set to return False for s2v_model is None; update the logic so missing s2v does not exclude answers — either change the call site in the loop to append when s2v_model is None (e.g. use condition (s2v_model is None or is_word_available(...))) or change is_word_available to return True when s2v_model is None; modify the loop around answers and the function is_word_available (referenced symbols: is_word_available, total_phrases_filtered, answers) accordingly to include candidates when Sense2Vec is not present.
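A self-contained sketch of the adjusted loop, pulled out as a function so it can be run in isolation (the candidate phrases and the injected `is_word_available` callables are illustrative, not from the codebase):

```python
def filter_answers(total_phrases_filtered, s2v_model, is_word_available):
    # Keep every candidate when Sense2Vec is unavailable; otherwise keep
    # only candidates the model recognises. Deduplicates as it goes.
    answers = []
    for answer in total_phrases_filtered:
        if answer not in answers and (
            s2v_model is None or is_word_available(answer, s2v_model)
        ):
            answers.append(answer)
    return answers


# With no model, nothing is dropped (even though the availability check
# would reject everything):
print(filter_answers(["mitosis", "osmosis", "mitosis"], None, lambda w, m: False))
# ['mitosis', 'osmosis']
```

This keeps MCQ generation functional without Sense2Vec, at the cost of skipping the sense-based quality filter.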
🧹 Nitpick comments (7)
eduaid_web/src/utils/apiClient.js (3)
74-77: Cannot distinguish timeout from manual cancellation.

Both timeout and explicit `abort()` trigger `AbortError`. Consider using `AbortSignal.timeout()` (if supported) or tracking the timeout state to provide more specific error messages to users.

Example approach

```diff
+  let timedOut = false;
   const timeoutId = setTimeout(() => {
+    timedOut = true;
     if (this.currentController) {
       this.currentController.abort();
     }
   }, timeout);

   // ... in catch block:
   if (error.name === 'AbortError') {
-    throw new Error('Request was cancelled or timed out');
+    throw new Error(timedOut ? 'Request timed out' : 'Request was cancelled');
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@eduaid_web/src/utils/apiClient.js` around lines 74 - 77, The AbortError handler currently can't tell if the request timed out or was manually aborted; update the request/abort setup where AbortController is created (the fetch/timeout logic in apiClient.js) to distinguish timeout vs manual cancellation by either using AbortSignal.timeout() if available or by setting a local flag (e.g., isTimeout) when the timeout fires and clearing it on manual abort; then in the catch block that checks error.name === 'AbortError' inspect that flag (or signal type) and throw distinct errors/messages for timeout vs user-initiated abort so callers can react appropriately.
109-123: `postFormData` lacks cancellation and timeout support.

Unlike `post()`, this method doesn't support request cancellation or timeout. File uploads could be long-running operations that users might want to cancel. Consider routing through `makeRequest` or implementing similar logic.

Proposed fix

```diff
   async postFormData(endpoint, formData, options = {}) {
-    const url = `${this.baseUrl}${endpoint}`;
-
-    const response = await fetch(url, {
+    return this.makeRequest(endpoint, {
+      ...options,
       method: 'POST',
       body: formData,
+      // Don't set Content-Type - browser sets it with boundary for FormData
     });
-
-    if (!response.ok) {
-      throw new Error(`HTTP error! status: ${response.status}`);
-    }
-
-    return await response.json();
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@eduaid_web/src/utils/apiClient.js` around lines 109 - 123, The postFormData method currently performs a raw fetch without cancellation or timeout; update it to reuse the existing makeRequest flow (or replicate its logic) so uploads support AbortController and timeout handling: accept/merge options.signal and timeout, create/chain an AbortController when no signal is provided (or when implementing timeout), start a timer to abort after options.timeout, pass the final signal to fetch while preserving method:'POST' and body:formData, ensure the timer is cleared on completion, and throw errors consistent with makeRequest on non-ok responses; reference the postFormData and makeRequest functions when making the change.
42-56: Single `AbortController` causes cross-request cancellation.

Using a single `currentController` instance means any new request will abort all in-flight requests, not just requests to the same endpoint. If the application makes concurrent requests to different endpoints (e.g., fetching user data while generating questions), they'll interfere with each other.

If this is intentional for the current use case (preventing duplicate generation requests), consider documenting this behavior or scoping controllers per endpoint type.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@eduaid_web/src/utils/apiClient.js` around lines 42 - 56, The code uses a shared this.currentController which causes new requests to abort all in-flight requests; change to create a per-request AbortController (e.g., const controller = new AbortController()) instead of overwriting this.currentController, pass controller.signal to fetch, and use a per-request timeoutId that only calls controller.abort(); alternatively, if you need deduplication per endpoint, implement a Map keyed by endpoint/request type (e.g., controllersByKey.set(key, controller)) and only abort the controller for that key; ensure you clear the timeout and remove the controller from any storage when the request finishes.

backend/server.py (1)
78-80: Consider catching more specific exceptions.

The static analysis tool flags catching a bare `Exception`. While this provides resilience, it may hide unexpected bugs. Consider catching specific expected exceptions (e.g., `KeyError`, `ValueError`) and logging unexpected ones with more detail for debugging.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/server.py` around lines 78 - 80, In get_mcq replace the broad "except Exception as e" with targeted handlers for the expected errors (e.g., "except (KeyError, ValueError) as e" returning jsonify({"output": []}), 200) and add a separate generic handler that logs full exception details (stack trace) or re-raises for unexpected errors; reference get_mcq, the current print("❌ ERROR inside get_mcq:", ...) line, and the return jsonify({"output": []}), 200 to locate and update the handlers accordingly.

eduaid_web/src/pages/Output.jsx (1)
138-140: Fallback to `{}` makes truthiness check ineffective.

`JSON.parse(...) || {}` returns an empty object if localStorage is empty, but `if (qaPairsFromStorage)` will still be truthy for `{}`. Consider checking for actual content:

Proposed fix

```diff
   const qaPairsFromStorage =
-    JSON.parse(localStorage.getItem("qaPairs")) || {};
-  if (qaPairsFromStorage) {
+    JSON.parse(localStorage.getItem("qaPairs"));
+  if (qaPairsFromStorage && Object.keys(qaPairsFromStorage).length > 0) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@eduaid_web/src/pages/Output.jsx` around lines 138 - 140, The current pattern JSON.parse(localStorage.getItem("qaPairs")) || {} makes qaPairsFromStorage truthy even when empty, so the if (qaPairsFromStorage) check is ineffective; change to first read the raw string (localStorage.getItem("qaPairs")) and only JSON.parse when it exists, then check that the parsed object actually has content (e.g., raw && parsed && Object.keys(parsed).length > 0) before proceeding; update references to qaPairsFromStorage in Output.jsx to use this presence check so empty objects don't trigger the branch.

eduaid_web/src/pages/Text_Input.jsx (1)
108-118: Re-throwing the error after alerting creates redundant error logging.

After showing an alert to the user, the error is re-thrown and caught again in `handleSaveToLocalStorage`, causing duplicate `console.error` logs. Consider returning a success/failure indicator instead.

Cleaner approach

```diff
-  } catch (error) {
-    console.error("Error:", error);
-
-    if (error.message?.includes("cancelled")) {
-      alert("Request cancelled or timed out. Please try again.");
-    } else {
-      alert("Failed to generate questions. Please try again.");
-    }
-
-    throw error; // let outer function handle loading reset
-  }
+  } catch (error) {
+    console.error("Error:", error);
+
+    if (error.message?.includes("cancelled")) {
+      alert("Request cancelled or timed out. Please try again.");
+    } else {
+      alert("Failed to generate questions. Please try again.");
+    }
+    return false; // indicate failure
+  }
+  return true; // indicate success
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@eduaid_web/src/pages/Text_Input.jsx` around lines 108 - 118, The catch block currently re-throws the error after alerting which causes duplicate logging when the caller (handleSaveToLocalStorage) also catches it; instead, stop re-throwing and return a clear success/failure indicator (e.g., return false or { success: false, error }) from the function containing this catch (the catch shown in Text_Input.jsx), update handleSaveToLocalStorage to check that return value and handle loading/reset logic accordingly, and ensure any cleanup currently done in callers is preserved (or move cleanup into a finally block) so you no longer rely on throwing to signal failure.backend/Generator/main.py (1)
8-8: Unused import: `Sense2Vec`.

`Sense2Vec` is imported but never instantiated since `self.s2v = None` is used across all generators. Consider removing this unused import to avoid confusion.

♻️ Proposed fix

```diff
-from sense2vec import Sense2Vec
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/Generator/main.py` at line 8, Remove the unused Sense2Vec import: the module-level import "Sense2Vec" is never used because generators set self.s2v = None, so delete the "from sense2vec import Sense2Vec" line and any related dead references; keep self.s2v initialization as-is (e.g., in constructors) or only add the import back where Sense2Vec is actually instantiated.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/Generator/main.py`:
- Around line 86-91: ShortQGenerator sets self.s2v = None which breaks
identify_keywords (used by generate_shortq/generate_normal_questions);
initialize self.s2v the same way as MCQGenerator does (or load the appropriate
sentence-vector/embedding model used elsewhere) during ShortQGenerator.__init__,
or alternatively add a guard in identify_keywords to raise a clear error or
lazily initialize s2v when first needed; reference ShortQGenerator,
generate_shortq, generate_normal_questions, identify_keywords, and self.s2v to
locate where to make the change.
- Around line 28-33: The Sense2Vec model is never loaded (self.s2v = None),
causing identify_keywords() and is_word_available(answer, self.s2v) to raise
AttributeError; either load the Sense2Vec model during initialization or add a
safe fallback. In Generator.__init__ replace self.s2v = None with code that
loads Sense2Vec (e.g., Sense2Vec().from_disk(...) or the project’s expected load
path) and ensure self.s2v is a valid object before calling is_word_available, or
modify identify_keywords() to check for a None s2v and skip sense-based checks /
use a simpler POS-based fallback. Also consider switching the tokenizer/model
initialization from a hardcoded 't5-small' to the configured higher-capacity
model name (the same symbol self.tokenizer / self.model) or make it
configurable.
- Line 12: requirements.txt currently pins the wrong package for the imports
used in the code; update it to install strsimpy (>=0.2.1) instead of
strsim==0.0.3 so that the import from strsimpy.normalized_levenshtein works in
main.py and mcq.py; specifically, replace the strsim entry with strsimpy>=0.2.1
in requirements.txt and run your environment install to verify that
NormalizedLevenshtein imports resolve.
In `@backend/server.py`:
- Around line 72-80: The endpoint get_mcq currently always returns 200 with
{"output": []} even on exceptions; change the except block to return a non-200
status (e.g., 500) and include error information in the JSON (for example
{"output": [], "error": "<message>"}), and preferably log the full exception via
app.logger.exception or similar instead of print; keep the successful-empty case
(when not output or "questions" not in output) returning 200 with an empty
output so the frontend can distinguish between no-results and server errors.
- Line 41: The /get_content endpoint references
docs_service.get_document_content(document_url) but docs_service is commented
out, causing a runtime NameError; either restore the service initialization by
uncommenting the GoogleDocsService instantiation (docs_service =
main.GoogleDocsService(SERVICE_ACCOUNT_FILE, SCOPES)) so docs_service is
available to the endpoint, or remove the /get_content route and any calls to
docs_service.get_document_content to eliminate the broken dependency; update
only the docs_service initialization or the endpoint (not both) and ensure any
required imports/config variables (SERVICE_ACCOUNT_FILE, SCOPES) remain present.
In `@eduaid_web/src/pages/Output.jsx`:
- Line 169: Replace the loose equality check in the if condition that compares
questionType to the string "get_boolq" with a strict equality check: update the
condition using questionType === "get_boolq" (locate the if (questionType ==
"get_boolq") block in Output.jsx) to avoid type-coercion issues.
- Around line 34-43: The code modifies storedPayload.regenerate before verifying
storedPayload exists, which can throw if localStorage.getItem("lastQuizPayload")
is null or "null"; update the logic in Output.jsx to first retrieve the raw
string from localStorage (localStorage.getItem("lastQuizPayload")), check that
it's non-null/non-empty, then safely JSON.parse it (or wrap parse in try/catch)
and only then assign storedPayload.regenerate = Date.now(); if retrieval or
parse fails, call setRegenError("No previous quiz parameters found.") and
return; reference the storedPayload variable and setRegenError function to
locate where to reorder and add the null/parse checks.
- Around line 47-53: The code incorrectly reads response.data from the
apiClient.post result (apiClient.post returns the parsed JSON like { output:
[...] }), so replace usages to read response.output when persisting QA pairs;
specifically update the block around apiClient.post(...) and
localStorage.setItem("qaPairs", ...) to store JSON.stringify(response.output).
Also remove the immediate window.location.reload() (or move it after verifying
storage/read) since reloading makes the localStorage write redundant—either
persist response.output and let existing page logic read it, or if you must
reload, avoid the extra storage step.
In `@eduaid_web/src/pages/Text_Input.jsx`:
- Around line 83-86: The if-check using responseData.length currently has
alert("No questions generated. Try longer text."); and return placed outside the
conditional due to wrong indentation; move the alert and return so they are
inside the if block that checks if (!responseData.length) to ensure the function
exits only when no questions were generated (reference the responseData variable
and the if (!responseData.length) block in Text_Input.jsx).
- Around line 127-140: When docUrl is used you fetch the document but never
forward it to the backend and you hardcode the question type; after
apiClient.post("/get_content", { document_url: docUrl }) and setText(data...),
call sendToBackend with that fetched text and the currently selected question
type instead of the hardcoded "get_mcq" (retrieve the type via
localStorage.getItem("selectedQuestionType") or the existing
difficulty/numQuestions variables), and also persist
text/difficulty/numQuestions into localStorage similarly to the text branch;
update the branch that handles docUrl to invoke sendToBackend(fetchedText,
difficulty, selectedQuestionType) (and await it) so quizzes are generated for
Google Doc URLs.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
backend/Generator/main.py
backend/Generator/mcq.py
backend/server.py
eduaid_web/src/pages/Output.jsx
eduaid_web/src/pages/Text_Input.jsx
eduaid_web/src/utils/apiClient.js
```diff
 from nltk import FreqDist
 from nltk.corpus import brown
-from similarity.normalized_levenshtein import NormalizedLevenshtein
+from strsimpy.normalized_levenshtein import NormalizedLevenshtein
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if strsimpy is listed in requirements files and related dependency management files
echo "=== Searching for strsimpy in dependency files ==="
find . -type f \( -name "requirements*.txt" -o -name "setup.py" -o -name "pyproject.toml" -o -name "poetry.lock" -o -name "Pipfile" \) -not -path "./.git/*" | head -20
echo ""
echo "=== Checking for strsimpy mentions ==="
rg "strsimpy" --type py --type txt -l
```

Repository: AOSSIE-Org/EduAid
Length of output: 221
🏁 Script executed:

```shell
#!/bin/bash
# Check the actual file and context around the import
echo "=== Checking backend/Generator/main.py ==="
if [ -f "backend/Generator/main.py" ]; then
  head -20 "backend/Generator/main.py"
else
  echo "File not found at backend/Generator/main.py"
  find . -name "main.py" -path "*/Generator/*" 2>/dev/null | head -5
fi
```

Repository: AOSSIE-Org/EduAid
Length of output: 956
🏁 Script executed:
#!/bin/bash
# Check mcq.py for comparison
echo "=== Looking for mcq.py ==="
find . -name "mcq.py" -type f 2>/dev/null | head -10
Repository: AOSSIE-Org/EduAid
Length of output: 114
🏁 Script executed:
#!/bin/bash
# Check requirements.txt content
echo "=== Contents of requirements.txt ==="
cat requirements.txt
echo ""
echo "=== Search for strsimpy in mcq.py ==="
rg "strsimpy" backend/Generator/mcq.py -A 2 -B 2
Repository: AOSSIE-Org/EduAid
Length of output: 1037
🌐 Web query:
strsim vs strsimpy python package difference
💡 Result:
strsim and strsimpy are essentially two different PyPI distributions of the same project (“python-string-similarity”, a Python implementation of tdebatty/java-string-similarity)—but they differ in packaging, API surface details, and recency. [1][2]
Key differences

- Release recency / maintenance: `strsim` latest is 0.0.3 (Jul 13, 2018) [1]; `strsimpy` latest is 0.2.1 (Sep 10, 2021), newer with more releases. [2]
- Import namespace (most important practical difference):
  - With `strsim`, you typically import from a module named `similarity` (e.g., `from similarity.levenshtein import Levenshtein`). [1]
  - With `strsimpy`, you import from `strsimpy` (e.g., `from strsimpy.levenshtein import Levenshtein`). [2]
- Some API differences (example: weighted Levenshtein): `strsim` shows a pattern using a class/interface for substitution cost [1]; `strsimpy` shows using callables/functions (`substitution_cost_fn`, etc.). [2]

Which should you use?
If you're starting new code, prefer `strsimpy`: it's the newer distribution and avoids the slightly confusing `similarity` import namespace used by `strsim`. [1][2]

Sources: PyPI pages for strsim and strsimpy. [1][2]
Replace strsim with strsimpy in requirements.txt.
The code imports from strsimpy.normalized_levenshtein, but requirements.txt lists strsim==0.0.3. These are different packages with incompatible import namespaces—strsim uses from similarity.levenshtein while strsimpy uses from strsimpy.normalized_levenshtein. Update requirements.txt to strsimpy (version 0.2.1 or later) to match the actual imports in main.py and mcq.py.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/Generator/main.py` at line 12, requirements.txt currently pins the
wrong package for the imports used in the code; update it to install strsimpy
(>=0.2.1) instead of strsim==0.0.3 so that the import from
strsimpy.normalized_levenshtein works in main.py and mcq.py; specifically,
replace the strsim entry with strsimpy>=0.2.1 in requirements.txt and run your
environment install to verify that NormalizedLevenshtein imports resolve.
```diff
 self.tokenizer = T5Tokenizer.from_pretrained('t5-small')
 self.model = T5ForConditionalGeneration.from_pretrained('t5-small')
 self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
 self.model.to(self.device)
 self.nlp = spacy.load('en_core_web_sm')
-self.s2v = Sense2Vec().from_disk('s2v_old')
+self.s2v = None
```
🧩 Analysis chain
🏁 Script executed:
# First, check for documentation about the model change
rg -i "t5-small|sense2vec|s2v" --glob "*.md" --glob "CHANGELOG*"
Repository: AOSSIE-Org/EduAid
Length of output: 453
🏁 Script executed:
# Find the get_answer_choices function to understand MCQ distractor generation
rg -n "get_answer_choices|def.*choice" --type python
Repository: AOSSIE-Org/EduAid
Length of output: 91
🏁 Script executed:
# Check how s2v is used throughout the codebase
rg -n "self\.s2v" backend/Generator/main.py
Repository: AOSSIE-Org/EduAid
Length of output: 605
🏁 Script executed:
# Look for any initialization or usage of s2v in the entire codebase
rg -n "s2v" --type python -B 2 -A 2
Repository: AOSSIE-Org/EduAid
Length of output: 91
🏁 Script executed:
# Check the overall structure of the main.py file
head -50 backend/Generator/main.py
Repository: AOSSIE-Org/EduAid
Length of output: 1877
🏁 Script executed:
# Find the mcq.py file
find . -name "mcq.py" -type f
Repository: AOSSIE-Org/EduAid
Length of output: 87
🏁 Script executed:
# Check the identify_keywords and generate_multiple_choice_questions function implementations
rg -A 15 "def identify_keywords" --type-list
Repository: AOSSIE-Org/EduAid
Length of output: 450
🏁 Script executed:
# Look for mcq.py more broadly
find backend -name "*.py" | head -20
Repository: AOSSIE-Org/EduAid
Length of output: 250
🏁 Script executed:
# Check the identify_keywords function in mcq.py
rg -A 20 "def identify_keywords" backend/Generator/mcq.py
Repository: AOSSIE-Org/EduAid
Length of output: 949
🏁 Script executed:
# Check the generate_multiple_choice_questions function in mcq.py
rg -A 20 "def generate_multiple_choice_questions" backend/Generator/mcq.py
Repository: AOSSIE-Org/EduAid
Length of output: 980
🏁 Script executed:
# Get the total line count of mcq.py to determine how much to read
wc -l backend/Generator/mcq.py
Repository: AOSSIE-Org/EduAid
Length of output: 89
🏁 Script executed:
# Read the entire mcq.py file to understand s2v usage
cat -n backend/Generator/mcq.py
Repository: AOSSIE-Org/EduAid
Length of output: 11209
Sense2Vec model not loaded breaks MCQ generation.
The code imports Sense2Vec but sets self.s2v = None without loading the model, contradicting README.md which explicitly requires downloading and extracting the Sense2Vec model to the backend folder. This causes identify_keywords() to crash when calling is_word_available(answer, self.s2v) at line 164 of mcq.py—attempting to call .get_best_sense() on None raises AttributeError. This blocks keyword extraction entirely, not just distractor generation.
Additionally, t5-small is hardcoded instead of the higher-capacity model. Either load Sense2Vec properly or add fallback logic to handle None gracefully.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/Generator/main.py` around lines 28 - 33, The Sense2Vec model is never
loaded (self.s2v = None), causing identify_keywords() and
is_word_available(answer, self.s2v) to raise AttributeError; either load the
Sense2Vec model during initialization or add a safe fallback. In
Generator.__init__ replace self.s2v = None with code that loads Sense2Vec (e.g.,
Sense2Vec().from_disk(...) or the project’s expected load path) and ensure
self.s2v is a valid object before calling is_word_available, or modify
identify_keywords() to check for a None s2v and skip sense-based checks / use a
simpler POS-based fallback. Also consider switching the tokenizer/model
initialization from a hardcoded 't5-small' to the configured higher-capacity
model name (the same symbol self.tokenizer / self.model) or make it
configurable.
```diff
 self.tokenizer = T5Tokenizer.from_pretrained('t5-small')
 self.model = T5ForConditionalGeneration.from_pretrained('t5-small')
 self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
 self.model.to(self.device)
 self.nlp = spacy.load('en_core_web_sm')
-self.s2v = Sense2Vec().from_disk('s2v_old')
+self.s2v = None
```
ShortQGenerator also sets s2v=None.
While generate_shortq uses generate_normal_questions which doesn't require s2v directly, identify_keywords at line 112 still uses s2v and will be affected by the same issue as MCQGenerator.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/Generator/main.py` around lines 86 - 91, ShortQGenerator sets
self.s2v = None which breaks identify_keywords (used by
generate_shortq/generate_normal_questions); initialize self.s2v the same way as
MCQGenerator does (or load the appropriate sentence-vector/embedding model used
elsewhere) during ShortQGenerator.__init__, or alternatively add a guard in
identify_keywords to raise a clear error or lazily initialize s2v when first
needed; reference ShortQGenerator, generate_shortq, generate_normal_questions,
identify_keywords, and self.s2v to locate where to make the change.
```diff
 ShortQGen = main.ShortQGenerator()
 qg = main.QuestionGenerator()
-docs_service = main.GoogleDocsService(SERVICE_ACCOUNT_FILE, SCOPES)
+# docs_service = main.GoogleDocsService(SERVICE_ACCOUNT_FILE, SCOPES)
```
🧩 Analysis chain
🏁 Script executed:
rg -n 'docs_service' backend/
Repository: AOSSIE-Org/EduAid
Length of output: 461
🏁 Script executed:
sed -n '35,50p' backend/server.py
sed -n '200,215p' backend/server.py
Repository: AOSSIE-Org/EduAid
Length of output: 1063
🏁 Script executed:
sed -n '195,220p' backend/server.py
Repository: AOSSIE-Org/EduAid
Length of output: 926
🏁 Script executed:
rg -n 'docs_service\s*=' backend/ --type py
grep -r 'from.*docs_service\|import.*docs_service' backend/
Repository: AOSSIE-Org/EduAid
Length of output: 259
Commenting out docs_service breaks the /get_content endpoint.
The /get_content endpoint at line 206 references docs_service.get_document_content(document_url), but the initialization at line 41 is commented out. This will cause a NameError when the endpoint is called.
Remove the /get_content endpoint entirely or uncomment the service initialization at line 41.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/server.py` at line 41, The /get_content endpoint references
docs_service.get_document_content(document_url) but docs_service is commented
out, causing a runtime NameError; either restore the service initialization by
uncommenting the GoogleDocsService instantiation (docs_service =
main.GoogleDocsService(SERVICE_ACCOUNT_FILE, SCOPES)) so docs_service is
available to the endpoint, or remove the /get_content route and any calls to
docs_service.get_document_content to eliminate the broken dependency; update
only the docs_service initialization or the endpoint (not both) and ensure any
required imports/config variables (SERVICE_ACCOUNT_FILE, SCOPES) remain present.
```python
if not output or "questions" not in output:
    return jsonify({"output": []}), 200

questions = output.get("questions", [])
return jsonify({"output": questions}), 200

except Exception as e:
    print("❌ ERROR inside get_mcq:", str(e))
    return jsonify({"output": []}), 200
```
Returning HTTP 200 for all errors masks failures from the frontend.
Returning {"output": []} with status 200 for both legitimate empty results and actual errors prevents the frontend from distinguishing between:
- No questions could be generated (valid case)
- Server encountered an error (should be retried or reported)
Consider returning distinct status codes or including an error field:
Proposed improvement
```diff
 # SAFE HANDLING
 if not output or "questions" not in output:
-    return jsonify({"output": []}), 200
+    return jsonify({"output": [], "status": "no_questions"}), 200

 questions = output.get("questions", [])
-return jsonify({"output": questions}), 200
+return jsonify({"output": questions, "status": "success"}), 200

 except Exception as e:
     print("❌ ERROR inside get_mcq:", str(e))
-    return jsonify({"output": []}), 200
+    return jsonify({"output": [], "error": str(e)}), 500
```

🧰 Tools
🪛 Ruff (0.15.2)
[warning] 78-78: Do not catch blind exception: Exception (BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/server.py` around lines 72 - 80, The endpoint get_mcq currently
always returns 200 with {"output": []} even on exceptions; change the except
block to return a non-200 status (e.g., 500) and include error information in
the JSON (for example {"output": [], "error": "<message>"}), and preferably log
the full exception via app.logger.exception or similar instead of print; keep
the successful-empty case (when not output or "questions" not in output)
returning 200 with an empty output so the frontend can distinguish between
no-results and server errors.
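Once the backend returns distinct shapes, the frontend can separate the three cases with a small helper. This is a hedged sketch assuming the `status`/`error` fields proposed above; `classifyMcqResponse` is a hypothetical name, not part of the current codebase:

```javascript
// Classify a /get_mcq response into success, valid-but-empty, or server error.
// Assumes the proposed response shape: { output: [...], status?, error? }.
function classifyMcqResponse(httpStatus, body) {
  // A 5xx status or an explicit error field means the request itself failed
  // and should be retried or surfaced to the user.
  if (httpStatus >= 500 || (body && body.error)) {
    return { kind: "server_error", questions: [] };
  }

  // Defensive read: only trust output when it is actually an array.
  const questions = Array.isArray(body && body.output) ? body.output : [];
  if (questions.length === 0) {
    return { kind: "no_questions", questions: [] }; // legitimate empty result
  }
  return { kind: "success", questions };
}
```

The point of the helper is that "empty array with 200" stops being ambiguous: the UI can show "try longer text" for `no_questions` and a retry prompt for `server_error`.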
eduaid_web/src/pages/Output.jsx (outdated)
```js
const response = await apiClient.post(
  `/${endpoint}`,
  storedPayload
);

localStorage.setItem("qaPairs", JSON.stringify(response.data));
window.location.reload(); // intentional minimal-risk approach
```
Incorrect response property access; response.data is undefined.
The apiClient.post() returns the parsed JSON directly (i.e., { output: [...] }), not a wrapper with a data property. Line 52 should access response.output, not response.data.
Also, window.location.reload() immediately after storing makes the localStorage write redundant since the page reloads and re-reads from storage anyway.
Proposed fix
```diff
 const response = await apiClient.post(
   `/${endpoint}`,
   storedPayload
 );
-localStorage.setItem("qaPairs", JSON.stringify(response.data));
-window.location.reload(); // intentional minimal-risk approach
+const questions = response.output || [];
+if (!questions.length) {
+  setRegenError("No questions generated. Try different text.");
+  return;
+}
+localStorage.setItem("qaPairs", JSON.stringify(questions));
+window.location.reload();
```
Verify each finding against the current code and only fix it if needed.
In `@eduaid_web/src/pages/Output.jsx` around lines 47 - 53, The code incorrectly
reads response.data from the apiClient.post result (apiClient.post returns the
parsed JSON like { output: [...] }), so replace usages to read response.output
when persisting QA pairs; specifically update the block around
apiClient.post(...) and localStorage.setItem("qaPairs", ...) to store
JSON.stringify(response.output). Also remove the immediate
window.location.reload() (or move it after verifying storage/read) since
reloading makes the localStorage write redundant—either persist response.output
and let existing page logic read it, or if you must reload, avoid the extra
storage step.
```js
if (!responseData.length) {
alert("No questions generated. Try longer text.");
return;
}
```
Incorrect indentation causes validation to always execute.
The alert and return statements are not properly indented inside the if block, causing them to execute unconditionally after the check.
Proposed fix
```diff
 const responseData = response.output || [];
 if (!responseData.length) {
-alert("No questions generated. Try longer text.");
-return;
+  alert("No questions generated. Try longer text.");
+  return;
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Current code:

```js
if (!responseData.length) {
alert("No questions generated. Try longer text.");
return;
}
```

Suggested change:

```js
if (!responseData.length) {
  alert("No questions generated. Try longer text.");
  return;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@eduaid_web/src/pages/Text_Input.jsx` around lines 83 - 86, The if-check using
responseData.length currently has alert("No questions generated. Try longer
text."); and return placed outside the conditional due to wrong indentation;
move the alert and return so they are inside the if block that checks if
(!responseData.length) to ensure the function exits only when no questions were
generated (reference the responseData variable and the if (!responseData.length)
block in Text_Input.jsx).
…Doc quiz generation flow
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
eduaid_web/src/pages/Output.jsx (1)
179-186: ⚠️ Potential issue | 🟠 Major

Guard boolean `output` before iterating.

In the boolean branch, `.forEach` is called without checking that `qaPairsFromStorage["output"]` is an array. If storage is missing/corrupt, this can throw and break the page render.

Proposed fix
```diff
-      if (questionType === "get_boolq") {
+      if (
+        questionType === "get_boolq" &&
+        Array.isArray(qaPairsFromStorage["output"])
+      ) {
         qaPairsFromStorage["output"].forEach((qaPair) => {
           combinedQaPairs.push({
             question: qaPair,
             question_type: "Boolean",
           });
         });
-      } else if (qaPairsFromStorage["output"] && questionType !== "get_mcq") {
+      } else if (
+        Array.isArray(qaPairsFromStorage["output"]) &&
+        questionType !== "get_mcq"
+      ) {
         qaPairsFromStorage["output"].forEach((qaPair) => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@eduaid_web/src/pages/Output.jsx` around lines 179 - 186, The boolean branch in Output.jsx calls qaPairsFromStorage["output"].forEach(...) without verifying the value, which can throw if missing or not an array; update the block that checks questionType === "get_boolq" to first confirm qaPairsFromStorage["output"] is an array (e.g., Array.isArray(qaPairsFromStorage["output"]) or use optional chaining and a safe default) before iterating, and only push into combinedQaPairs when the array check passes (leave the rest of the object shape: question and question_type unchanged).
♻️ Duplicate comments (1)
eduaid_web/src/pages/Output.jsx (1)
42-50: ⚠️ Potential issue | 🟠 Major

Validate parsed payload before mutating it.

`JSON.parse(rawPayload)` can return `null` (e.g., stored value is `"null"`). In that case `storedPayload.regenerate = Date.now()` throws and regeneration fails unexpectedly.

Proposed fix
```diff
 let storedPayload;
 try {
   storedPayload = JSON.parse(rawPayload);
 } catch (e) {
   setRegenError("Invalid quiz parameters.");
   return;
 }
+if (!storedPayload || typeof storedPayload !== "object") {
+  setRegenError("No previous quiz parameters found.");
+  return;
+}
+
 storedPayload.regenerate = Date.now();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@eduaid_web/src/pages/Output.jsx` around lines 42 - 50, After parsing rawPayload into storedPayload, validate that storedPayload is a non-null object before mutating it; if JSON.parse(rawPayload) returns null or a non-object, call setRegenError("Invalid quiz parameters.") and return instead of attempting storedPayload.regenerate = Date.now(). Update the logic around JSON.parse(rawPayload) and the mutation of storedPayload.regenerate to guard against null/non-object results (referencing the storedPayload variable, the JSON.parse(rawPayload) call, and the setRegenError usage).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@eduaid_web/src/pages/Output.jsx`:
- Around line 179-186: The boolean branch in Output.jsx calls
qaPairsFromStorage["output"].forEach(...) without verifying the value, which can
throw if missing or not an array; update the block that checks questionType ===
"get_boolq" to first confirm qaPairsFromStorage["output"] is an array (e.g.,
Array.isArray(qaPairsFromStorage["output"]) or use optional chaining and a safe
default) before iterating, and only push into combinedQaPairs when the array
check passes (leave the rest of the object shape: question and question_type
unchanged).
---
Duplicate comments:
In `@eduaid_web/src/pages/Output.jsx`:
- Around line 42-50: After parsing rawPayload into storedPayload, validate that
storedPayload is a non-null object before mutating it; if JSON.parse(rawPayload)
returns null or a non-object, call setRegenError("Invalid quiz parameters.") and
return instead of attempting storedPayload.regenerate = Date.now(). Update the
logic around JSON.parse(rawPayload) and the mutation of storedPayload.regenerate
to guard against null/non-object results (referencing the storedPayload
variable, the JSON.parse(rawPayload) call, and the setRegenError usage).
Actionable comments posted: 1
🧹 Nitpick comments (1)
eduaid_web/src/pages/Output.jsx (1)
182-206: Consider adding `questionType` to the dependency array.

The `useEffect` references `questionType` at lines 182 and 191, but the dependency array at line 206 is empty. While this currently works because `questionType` is initialized once from localStorage, it violates the exhaustive-deps rule and could cause subtle bugs if the component behavior changes in the future.

Proposed fix
```diff
-  }, []);
+  }, [questionType]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@eduaid_web/src/pages/Output.jsx` around lines 182 - 206, The useEffect block that builds combinedQaPairs and calls setQaPairs reads questionType (in the branches that handle "get_boolq" and the else-if) but has an empty dependency array; update the dependency array for that useEffect to include questionType so the effect re-runs when questionType changes (keep existing dependencies as-is), ensuring the branches that reference questionType (and the variables qaPairsFromStorage, combinedQaPairs, setQaPairs inside the effect) behave correctly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@eduaid_web/src/pages/Output.jsx`:
- Around line 60-68: The saved regenerated questions are stored as a raw array
but the loader (the useEffect that builds combinedQaPairs) expects an object
with keys like output_boolq, output_mcq, or output; update the storage calls so
they wrap the questions array in the expected shape (e.g. an object with the
same keys the loader checks, such as { output: questions } or the appropriate
key for that generation path) instead of storing a plain array. Modify the
localStorage.setItem call in Output.jsx where questions are saved and the
analogous save in Text_Input.jsx to serialize the wrapped object so the
useEffect that reads "qaPairs" finds the expected properties and combinedQaPairs
is populated. Ensure you update the save logic that currently calls
localStorage.setItem("qaPairs", JSON.stringify(questions)) to use the wrapped
object form.
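The wrapped-object storage shape described in this prompt can be sketched with a small pair of helpers. This is a minimal illustration assuming the loader reads the `qaPairs` key and looks for an `output` property; `saveQaPairs` and `loadQaPairs` are hypothetical names, not existing project code:

```javascript
// Save regenerated questions in the shape the loader expects:
// an object with an "output" key, not a bare array.
function saveQaPairs(storage, questions) {
  storage.setItem("qaPairs", JSON.stringify({ output: questions }));
}

// Read qaPairs back defensively: tolerate missing keys, "null",
// corrupt JSON, and non-array shapes by falling back to [].
function loadQaPairs(storage) {
  let parsed;
  try {
    parsed = JSON.parse(storage.getItem("qaPairs"));
  } catch (e) {
    return []; // corrupt JSON in storage
  }
  return Array.isArray(parsed?.output) ? parsed.output : [];
}
```

Pairing the write and read this way keeps the regeneration path and the `useEffect` loader agreeing on one shape, which is the mismatch the review comment describes.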
---
Nitpick comments:
In `@eduaid_web/src/pages/Output.jsx`:
- Around line 182-206: The useEffect block that builds combinedQaPairs and calls
setQaPairs reads questionType (in the branches that handle "get_boolq" and the
else-if) but has an empty dependency array; update the dependency array for that
useEffect to include questionType so the effect re-runs when questionType
changes (keep existing dependencies as-is), ensuring the branches that reference
questionType (and the variables qaPairsFromStorage, combinedQaPairs, setQaPairs
inside the effect) behave correctly.
…questionType dependency
Fixes #504
Screenshots/Recordings:
Additional Notes:
Checklist
My PR addresses a single issue, fixes a single bug or makes a single improvement.
My code follows the project's code style and conventions
If applicable, I have made corresponding changes or additions to the documentation
If applicable, I have made corresponding changes or additions to tests
My changes generate no new warnings or errors
I have joined the Discord server and I will share a link to this PR with the project maintainers there
I have read the Contribution Guidelines
Once I submit my PR, CodeRabbit AI will automatically review it and I will address CodeRabbit's comments.
This PR contains AI-generated code. I have tested the code locally and I am responsible for it.
I have used the following AI models and tools:
Summary by CodeRabbit
New Features
Bug Fixes
Refactor
Chores