Merged
dingo/model/llm/llm_text_3h.py (2 changes: 1 addition & 1 deletion)
```diff
@@ -40,7 +40,7 @@ def process_response(cls, response: str) -> ModelRes:
         result = ModelRes()

         # error_status
-        if response_model.score == "1":
+        if response_model.score == 1:
             result.reason = [response_model.reason]
             result.name = cls.prompt.__name__[8:].upper()
         else:
```
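This one-character change matters because in Python an int never compares equal to its string form: if the parsed response model stores `score` as an integer, the old check against `"1"` can never be true. A minimal illustration (plain Python, not dingo code):

```python
score = 1            # e.g. the response model coerces the LLM's "1" to int
print(score == "1")  # False: an int never equals a str in Python
print(score == 1)    # True:  the corrected comparison
```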
dingo/run/vsl.py (7 changes: 6 additions & 1 deletion)
```diff
@@ -169,6 +169,11 @@ def parse_args():
             "app"],
         default="visualization",
         help="Choose the mode: visualization or app")
+    parser.add_argument(
+        "--port",
+        type=int,
+        default=8000,
+        help="Port for local HTTP server in visualization mode (default: 8000)")
     return parser.parse_args()


@@ -195,7 +200,7 @@ def main():
     success, new_html_filename = process_and_inject(args.input)
     if success:
         web_static_dir = os.path.join(os.path.dirname(__file__), "..", "..", "web-static")
-        port = 8000
+        port = args.port
     try:
         server = start_http_server(web_static_dir, port)
         url = f"http://localhost:{port}/{new_html_filename}"
```
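The new flag simply threads the user's choice through to the existing server startup. For illustration, a minimal self-contained sketch of the same pattern using only the standard library; dingo's `start_http_server` and `process_and_inject` are not reproduced here, so the names and defaults below are illustrative:

```python
import argparse
import http.server
import socketserver

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--port", type=int, default=8000,
                        help="Port for the local HTTP server (default: 8000)")
    return parser.parse_args()

def main():
    args = parse_args()
    # Mirror vsl.py's flow: hand the user-chosen port to an HTTP server
    # and print the URL where the served content is reachable.
    handler = http.server.SimpleHTTPRequestHandler
    with socketserver.TCPServer(("", args.port), handler) as server:
        print(f"Serving at http://localhost:{args.port}/")
        server.serve_forever()

if __name__ == "__main__":
    main()
```

With the flag in place, an invocation along the lines of `python -m dingo.run.vsl --input result.json --port 8080` (arguments hypothetical) serves the visualization on 8080 instead of failing when 8000 is already taken.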
docs/assets/wechat.jpg (binary file added)
docs/metrics.md (1 change: 1 addition & 0 deletions)
```diff
@@ -16,6 +16,7 @@ This document provides comprehensive information about all quality metrics used

 | Type | Metric | Description | Paper Source | Evaluation Results |
 |------|--------|-------------|--------------|-------------------|
+| `MathCompare` | PromptMathCompare | Compares the effectiveness of two tools in extracting mathematical formulas from HTML to Markdown format by evaluating... | Internal Implementation | N/A |
 | `QUALITY_BAD_HALLUCINATION` | PromptHallucination | Evaluates whether the response contains factual contradictions or hallucinations against provided context information | [TruthfulQA: Measuring How Models Mimic Human Falsehoods](https://arxiv.org/abs/2109.07958) (Lin et al., 2021) | N/A |
 | `QUALITY_BAD_HALLUCINATION` | RuleHallucinationHHEM | Uses Vectara's HHEM-2.1-Open model for local hallucination detection by evaluating consistency between response and context... | [HHEM-2.1-Open](https://huggingface.co/vectara/hallucination_evaluation_model) (Forrest Bao, Miaoran Li, Rogger Luo, Ofer Mendelevitch) | N/A |
 | `QUALITY_HARMLESS` | PromptTextHarmless | Checks if responses avoid harmful content, discriminatory language, and dangerous assistance | [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/pdf/2204.05862) (Bai et al., 2022) | [📊 See Results](eval/prompt/qa_data_evaluated_by_3h.md) |
```
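As background for the `RuleHallucinationHHEM` row above: HHEM-2.1-Open runs locally via Hugging Face Transformers. The sketch below follows the usage shown on the model card; the exact call signature (`predict()` on premise/hypothesis pairs, supplied by the model's remote code) is an assumption about that wrapper, not dingo's own API:

```python
from transformers import AutoModelForSequenceClassification

# HHEM-2.1-Open ships its scoring head as remote code on the Hub.
model = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True)

# Each pair is (premise/context, hypothesis/response). predict() returns
# a consistency score in [0, 1]; higher means better grounded.
pairs = [("The capital of France is Paris.",
          "Paris is the capital of France.")]
scores = model.predict(pairs)
print(scores)
```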