22 changes: 17 additions & 5 deletions README.md
@@ -263,8 +263,19 @@ Clean plugin architecture for custom rules, prompts, and models:
 class MyCustomRule(BaseRule):
     @classmethod
     def eval(cls, input_data: Data) -> EvalDetail:
-        # Your logic here
-        return EvalDetail(status=False, label=['QUALITY_GOOD'])
+        # Example: check if content is empty
+        if not input_data.content:
+            return EvalDetail(
+                metric=cls.__name__,
+                status=True,  # Found an issue
+                label=[f'{cls.metric_type}.{cls.__name__}'],
+                reason=["Content is empty"]
+            )
+        return EvalDetail(
+            metric=cls.__name__,
+            status=False,  # No issue found
+            label=['QUALITY_GOOD']
+        )
 ```
 **Why It Matters**: Adapt to domain-specific requirements without forking the codebase.

Reviewer comment (Contributor, severity: medium): This example is a great improvement. For better clarity, since `cls.metric_type` is defined by the `@Model.rule_register` decorator, which isn't visible in this snippet, consider adding a small inline comment to explain where it comes from. This will help users understand the example more easily.

Suggested change:
-                label=[f'{cls.metric_type}.{cls.__name__}'],
+                label=[f'{cls.metric_type}.{cls.__name__}'],  # metric_type is set by the @Model.rule_register decorator
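The status/label convention in the hunk above can be exercised without installing anything: the sketch below uses minimal stand-in `Data` and `EvalDetail` classes (illustrative only, not dingo's actual implementations), and hard-codes `metric_type`, which in dingo is normally set by the `@Model.rule_register` decorator.

```python
from dataclasses import dataclass, field
from typing import List


# Minimal stand-ins for dingo's Data / EvalDetail (illustrative only)
@dataclass
class Data:
    content: str = ""


@dataclass
class EvalDetail:
    metric: str = ""
    status: bool = False  # True = an issue was found
    label: List[str] = field(default_factory=list)
    reason: List[str] = field(default_factory=list)


class MyCustomRule:
    # In dingo this attribute is set by @Model.rule_register; hard-coded here
    metric_type = "QUALITY_BAD_RELEVANCE"

    @classmethod
    def eval(cls, input_data: Data) -> EvalDetail:
        if not input_data.content:
            return EvalDetail(
                metric=cls.__name__,
                status=True,  # Found an issue
                label=[f"{cls.metric_type}.{cls.__name__}"],
                reason=["Content is empty"],
            )
        return EvalDetail(metric=cls.__name__, status=False, label=["QUALITY_GOOD"])


bad = MyCustomRule.eval(Data(content=""))      # flagged: empty content
good = MyCustomRule.eval(Data(content="hi"))   # passes: labelled QUALITY_GOOD
```

Note the convention the example relies on: `status=True` means a problem was detected, and the label combines the registered metric type with the rule's class name.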

@@ -287,9 +298,9 @@ Dingo provides **70+ evaluation metrics** across multiple dimensions, combining
 | **Security** | PII detection, Perspective API toxicity | Privacy and safety |

 📊 **[View Complete Metrics Documentation →](docs/metrics.md)**
-📖 **[RAG Evaluation Guide →](docs/rag_evaluation_metrics_zh.md)**
-🔍 **[Hallucination Detection Guide →](docs/hallucination_guide.md)**
-✅ **[Factuality Assessment Guide →](docs/factcheck_guide.md)**
+📖 **[RAG Evaluation Guide (中文) →](docs/rag_evaluation_metrics_zh.md)**
+🔍 **[Hallucination Detection Guide (中文) →](docs/hallucination_guide.md)**
+✅ **[Factuality Assessment Guide (中文) →](docs/factcheck_guide.md)**

Most metrics are backed by academic research to ensure scientific rigor.

@@ -451,6 +462,7 @@ class DomainSpecificRule(BaseRule):
         is_valid = your_validation_logic(text)

         return EvalDetail(
+            metric=cls.__name__,
             status=not is_valid,  # False = good, True = bad
             label=['QUALITY_GOOD' if is_valid else 'QUALITY_BAD_CUSTOM'],
             reason=["Validation details..."]
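The hunk above leaves `your_validation_logic` abstract. One possible shape for it is a plain text check like the sketch below; the function name mirrors the placeholder in the diff, and the specific rules (non-blank text, no long digit runs) are illustrative assumptions, not part of the PR.

```python
import re


def your_validation_logic(text: str) -> bool:
    """Illustrative validator standing in for the abstract helper above.

    Returns True when the text passes, False when it should be flagged.
    Real rules would encode domain-specific checks here.
    """
    # Blank or whitespace-only content fails
    if not text or not text.strip():
        return False
    # Example domain check: reject bare runs of 7+ digits (phone-number-like)
    if re.search(r"\b\d{7,}\b", text):
        return False
    return True


ok = your_validation_logic("A clean domain sentence.")   # passes
flagged = your_validation_logic("Call 5551234567 now")   # fails the digit check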
39 changes: 27 additions & 12 deletions README_ja.md
@@ -260,8 +260,19 @@ outputs/
 class MyCustomRule(BaseRule):
     @classmethod
     def eval(cls, input_data: Data) -> EvalDetail:
-        # あなたのロジック
-        return EvalDetail(status=False, label=['QUALITY_GOOD'])
+        # 例:コンテンツが空かチェック
+        if not input_data.content:
+            return EvalDetail(
+                metric=cls.__name__,
+                status=True,  # 問題を発見
+                label=[f'{cls.metric_type}.{cls.__name__}'],
+                reason=["コンテンツが空です"]
+            )
+        return EvalDetail(
+            metric=cls.__name__,
+            status=False,  # 問題なし
+            label=['QUALITY_GOOD']
+        )
 ```
 **重要性**:コードベースをフォークせずにドメイン固有のニーズに適応。

@@ -435,22 +446,26 @@ Dingo はドメイン固有のニーズに対応する柔軟な拡張メカニ
 ```python
 from dingo.model import Model
 from dingo.model.rule.base import BaseRule
-from dingo.config.input_args import EvaluatorRuleArgs
 from dingo.io import Data
 from dingo.io.output.eval_detail import EvalDetail


-@Model.rule_register('QUALITY_BAD_RELEVANCE', ['default'])
-class MyCustomRule(BaseRule):
-    """テキスト内のカスタムパターンをチェック"""
-
-    dynamic_config = EvaluatorRuleArgs(pattern=r'your_pattern_here')
+@Model.rule_register('QUALITY_BAD_CUSTOM', ['default'])
+class DomainSpecificRule(BaseRule):
+    """ドメイン固有のパターンをチェック"""

     @classmethod
     def eval(cls, input_data: Data) -> EvalDetail:
-        res = EvalDetail()
-        # ここにルール実装
-        return res
+        text = input_data.content
+
+        # あなたのカスタムロジック
+        is_valid = your_validation_logic(text)
+
+        return EvalDetail(
+            metric=cls.__name__,
+            status=not is_valid,  # False = 良好, True = 問題あり
+            label=['QUALITY_GOOD' if is_valid else 'QUALITY_BAD_CUSTOM'],
+            reason=["検証の詳細..."]
+        )
 ```

### カスタムLLM統合
39 changes: 27 additions & 12 deletions README_zh-CN.md
@@ -262,8 +262,19 @@ outputs/
 class MyCustomRule(BaseRule):
     @classmethod
     def eval(cls, input_data: Data) -> EvalDetail:
-        # 你的逻辑
-        return EvalDetail(status=False, label=['QUALITY_GOOD'])
+        # 示例:检查内容是否为空
+        if not input_data.content:
+            return EvalDetail(
+                metric=cls.__name__,
+                status=True,  # 发现问题
+                label=[f'{cls.metric_type}.{cls.__name__}'],
+                reason=["内容为空"]
+            )
+        return EvalDetail(
+            metric=cls.__name__,
+            status=False,  # 未发现问题
+            label=['QUALITY_GOOD']
+        )
 ```
 **为什么重要**:适应特定领域需求而无需分叉代码库。

@@ -437,22 +448,26 @@ Dingo 提供灵活的扩展机制来满足特定领域需求。
 ```python
 from dingo.model import Model
 from dingo.model.rule.base import BaseRule
-from dingo.config.input_args import EvaluatorRuleArgs
 from dingo.io import Data
 from dingo.io.output.eval_detail import EvalDetail


-@Model.rule_register('QUALITY_BAD_RELEVANCE', ['default'])
-class MyCustomRule(BaseRule):
-    """检查文本中的自定义模式"""
-
-    dynamic_config = EvaluatorRuleArgs(pattern=r'your_pattern_here')
+@Model.rule_register('QUALITY_BAD_CUSTOM', ['default'])
+class DomainSpecificRule(BaseRule):
+    """检查特定领域的模式"""

     @classmethod
     def eval(cls, input_data: Data) -> EvalDetail:
-        res = EvalDetail()
-        # 您的规则实现
-        return res
+        text = input_data.content
+
+        # 你的自定义逻辑
+        is_valid = your_validation_logic(text)
+
+        return EvalDetail(
+            metric=cls.__name__,
+            status=not is_valid,  # False = 良好, True = 有问题
+            label=['QUALITY_GOOD' if is_valid else 'QUALITY_BAD_CUSTOM'],
+            reason=["验证详情..."]
+        )
 ```

### 自定义LLM集成