Merged
69 changes: 35 additions & 34 deletions src/writer/blocks/writeraskkg.py
@@ -58,7 +58,7 @@ def register(cls, type: str):
"name": "Add inline graph citations",
"type": "Boolean",
"desc": "Shows what specific graph sources were used to answer the question.",
"default": "yes",
"default": "no",
"validator": {
"type": "boolean",
},
@@ -103,7 +103,7 @@ def run(self):
subqueries = self._get_field(
"subqueries", default_field_value="yes") == "yes"
graph_citations = self._get_field(
"graphCitations", default_field_value="yes") == "yes"
"graphCitations", default_field_value="no") == "yes"

response = client.graphs.question(
graph_ids=graph_ids,
@@ -115,41 +115,42 @@
}
)

answer_so_far = ""
result_dict = {}
citations_so_far = []

if use_streaming:
for chunk in response:
try:
delta_answer = chunk.model_extra.get("answer", "")
answer_so_far += delta_answer
result_dict["answer"] = answer_so_far

if graph_citations:
delta_sources = chunk.model_extra.get("sources", "")
citations_so_far.extend(delta_sources)
result_dict["citations"] = citations_so_far

self._set_state(state_element, result_dict)

Comment on lines -134 to -135
Collaborator

Removing this breaks the "Use streaming" functionality. Its whole purpose is to update the state variable as we iterate over the response, which arrives over the network in multiple parts. With it, the user gets a more "AI-chatbot-like" experience: instead of waiting for the whole answer to appear at once, they see it being generated.

Contributor

@UladzislauK-Writer I see, so when streaming is enabled, I should include the set-state call inside the for loop?

Collaborator
Yes, exactly
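
The pattern agreed on above can be sketched in isolation. This is a minimal illustration, not the actual block API: `set_state` stands in for the block's state setter, and the chunk shape (a `model_extra` dict with an `"answer"` key) is assumed from the diff.

```python
import logging

def consume_stream(response, set_state):
    """Accumulate streamed chunks, publishing partial state after each one.

    `response` is assumed to be an iterable of chunks, each exposing a
    dict-like `model_extra`; `set_state` is a stand-in for the block's
    state setter. Both names are illustrative, not the real API.
    """
    answer = ""
    for chunk in response:
        try:
            answer += chunk.model_extra.get("answer", "")
            # Calling set_state inside the loop is what makes the answer
            # appear incrementally instead of all at once.
            set_state({"answer": answer})
        except Exception:
            logging.exception("Could not process stream chunk")
    return answer
```

Moving the state update outside the loop would still return the full answer, but the UI would only ever see the final state, which is exactly the regression the collaborator describes.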

except json.JSONDecodeError:
logging.error(
"Could not parse stream chunk from graph.question")

else:
answer_so_far = response.answer
result_dict["answer"] = answer_so_far

if graph_citations:
citations_so_far = response.sources or []
result_dict["citations"] = citations_so_far

self._set_state(state_element, result_dict)
self.result = answer_so_far
self.result = self._parse_response(response, state_element, use_streaming, graph_citations)
if state_element:
self._set_state(state_element, self.result)
self.outcome = "success"

except BaseException as e:
self.outcome = "error"
raise e



def _parse_response(self, response, state_element, use_streaming: bool, graph_citations: bool):
if not use_streaming:
if graph_citations:
return {"answer": response.answer, "citations": response.sources or []}
return response.answer

answer = ""
citations = []

for chunk in response:
try:
delta_answer = chunk.model_extra.get("answer", "")
answer += delta_answer

if graph_citations:
delta_sources = chunk.model_extra.get("sources", "")
citations.extend(delta_sources)
self._set_state(state_element, {"answer": answer, "citations": citations})
else:
self._set_state(state_element, answer)

except json.JSONDecodeError:
logging.error("Could not parse stream chunk from graph.question")
Comment on lines +150 to +151
Contributor

⚠️ Potential issue | 🟡 Minor

Exception handler appears mismatched and should use logging.exception.

Two issues here:

  1. Wrong exception type: The try block doesn't perform any JSON parsing—model_extra.get() returns dict values without parsing. json.JSONDecodeError will never be raised. Consider catching a broader exception type or removing this handler if it's dead code.

  2. Use logging.exception: When logging from an exception handler, logging.exception automatically includes the traceback.

Suggested fix
-            except json.JSONDecodeError:
-                logging.error("Could not parse stream chunk from graph.question")
+            except Exception:
+                logging.exception("Could not process stream chunk from graph.question")
🧰 Tools
🪛 Ruff (0.14.14)

151-151: Use logging.exception instead of logging.error

Replace with exception

(TRY400)
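
The difference the bot is pointing at can be demonstrated standalone: `logging.exception` attaches the active traceback to the record, while `logging.error` logs only the message (unless `exc_info=True` is passed). A small self-contained sketch, using a throwaway logger name:

```python
import io
import logging

def log_both(failing):
    """Call `failing`, log its failure once with error() and once with
    exception(), and return the captured log text for comparison."""
    buf = io.StringIO()
    handler = logging.StreamHandler(buf)
    logger = logging.getLogger("demo")  # illustrative logger name
    logger.addHandler(handler)
    logger.setLevel(logging.ERROR)
    try:
        failing()
    except Exception:
        logger.error("error(): no traceback")            # message only
        logger.exception("exception(): with traceback")  # message + stack
    logger.removeHandler(handler)
    return buf.getvalue()

out = log_both(lambda: 1 / 0)
# Only the exception() line is followed by a traceback naming ZeroDivisionError.
```

In the handler under review, switching to `logging.exception` would preserve the stack trace of whatever actually failed while iterating the stream, which matters once the caught type is broadened to `Exception`.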



if graph_citations:
return {"answer": answer, "citations": citations}
return answer