
Conversation

@m-misiura m-misiura commented Jul 31, 2025

This PR improves the HuggingFace detector for multi-label sequence classification tasks:

  1. When using AutoModelForSequenceClassification, the detector now returns a ContentAnalysisResponse for each label whose score exceeds a configurable threshold, supporting multi-label outputs.
  2. Users can exclude specific labels from triggering detections by listing them in the safe_labels detector parameter.
  3. The HuggingFace detector now imports scheme.py from the common subdirectory to ensure schema consistency with other detectors.
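
The per-label thresholding in point 1 can be sketched in isolation. This is a hypothetical sketch: model inference is stubbed out, and `analyze_labels`, the response dicts, and the example scores are illustrative, not the PR's actual code.

```python
def analyze_labels(probs, id2label, threshold=0.5, safe_labels=()):
    """Emit one response per label whose score passes the threshold and
    which is not excluded as a safe label (multi-label semantics)."""
    safe = {str(s) for s in safe_labels}
    responses = []
    for idx, prob in enumerate(probs):
        label = id2label[idx]
        # A label is skipped if either its index or its name is marked safe.
        if prob >= threshold and str(idx) not in safe and str(label) not in safe:
            responses.append({"detection": label, "score": prob})
    return responses

# Three-label model; "insult" is excluded even though it may score high.
probs = [0.9, 0.2, 0.7]
id2label = {0: "toxicity", 1: "insult", 2: "threat"}
print(analyze_labels(probs, id2label, threshold=0.5, safe_labels=["insult"]))
```

With a single-label model the same loop simply yields at most one response, so one code path covers both cases.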

Summary by Sourcery

Improve the HuggingFace detector to support multi-label sequence classification by generating an individual response for each label above a configurable threshold, and by excluding safe labels from detection through environment or request parameters.

New Features:

  • Emit separate ContentAnalysisResponse for each label exceeding threshold to support multi-label classification
  • Enable exclusion of specified labels via SAFE_LABELS environment variable and per-request detector_params

Enhancements:

  • Import common.scheme for schema consistency with other detectors
  • Allow configurable detection threshold through detector_params
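
For illustration, a per-request override combining both knobs might look like this. The detector_params, threshold, and safe_labels names come from the summary above; the outer request shape is an assumption, not the service's exact schema.

```python
# Hypothetical request payload; only the detector_params keys are taken
# from the PR summary, the surrounding structure is assumed.
request_body = {
    "contents": ["sample text to analyse"],
    "detector_params": {
        "threshold": 0.7,            # override the 0.5 default for this request
        "safe_labels": ["neutral"],  # labels that must not trigger a detection
    },
}
print(request_body["detector_params"])
```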

sourcery-ai bot commented Jul 31, 2025

Reviewer's Guide

This PR refactors the HuggingFace detector to support threshold-based multi-label classification with environment- and parameter-based label exclusion, aligns schema imports for consistency, and exposes configurable detector parameters throughout.

Sequence diagram for multi-label sequence classification with threshold and label exclusion

sequenceDiagram
    actor User
    participant App as HuggingFace Detector
    User->>App: Submit ContentAnalysisHttpRequest (with text, detector_params)
    App->>App: process_sequence_classification(text, detector_params)
    App->>App: For each label, check if prob >= threshold and not in safe_labels
    App->>App: For each valid label, create ContentAnalysisResponse
    App-->>User: Return ContentsAnalysisResponse (multiple responses if multi-label)

Class diagram for updated Detector class with multi-label support

classDiagram
    class Detector {
        +risk_names: list
        +model: Any
        +cuda_device: Any
        +model_name: str
        +safe_labels: set
        +__init__()
        +process_causal_lm(text)
        +process_sequence_classification(text, detector_params=None, threshold=None)
        +run(input: ContentAnalysisHttpRequest) ContentsAnalysisResponse
        +close()
    }
    class ContentAnalysisHttpRequest
    class ContentAnalysisResponse
    class ContentsAnalysisResponse
    Detector --> ContentAnalysisHttpRequest
    Detector --> ContentAnalysisResponse
    Detector --> ContentsAnalysisResponse

File-Level Changes

Enable multi-label classification outputs (detectors/huggingface/detector.py)
  • Changed the process_sequence_classification signature to accept detector_params and threshold
  • Replaced the single-label prediction with a loop that emits a separate response for each label above the threshold
  • Derived detection_type dynamically from model.config.problem_type or the label

Implement label exclusion via environment and parameters (detectors/huggingface/detector.py)
  • Added _parse_safe_labels_env to read SAFE_LABELS from the environment with a default
  • Initialized self.safe_labels in __init__
  • Merged env and detector_params safe_labels and filtered matching labels out of the outputs

Refactor shared schema import (detectors/huggingface/detector.py, detectors/huggingface/scheme.py)
  • Switched the import from the local scheme to common.scheme for unified response types
  • Removed the obsolete detectors/huggingface/scheme.py file

Expose detector parameters in the run flow (detectors/huggingface/detector.py)
  • Updated the run method to forward input.detector_params to process_sequence_classification
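
A possible shape for the env-side half of the label-exclusion change. The helper name and the SAFE_LABELS variable appear in the summary above; the comma-separated format and the default value are assumptions, not the PR's exact code.

```python
import os

def parse_safe_labels_env(default=("LABEL_0",)):
    """Read SAFE_LABELS from the environment as a comma-separated list,
    falling back to a default when the variable is unset or empty."""
    raw = os.environ.get("SAFE_LABELS", "")
    labels = {part.strip() for part in raw.split(",") if part.strip()}
    return labels or set(default)

os.environ["SAFE_LABELS"] = "LABEL_0, neutral"
print(sorted(parse_safe_labels_env()))  # ['LABEL_0', 'neutral']
```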


@m-misiura m-misiura marked this pull request as ready for review August 6, 2025 15:05

@sourcery-ai sourcery-ai bot left a comment


Hey @m-misiura - I've reviewed your changes and they look great!

Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments

### Comment 1
<location> `detectors/huggingface/detector.py:223` </location>
<code_context>
-        Returns:
-            List[ContentAnalysisResponse]: List of content analysis results.
-        """
+    def process_sequence_classification(self, text, detector_params=None, threshold=None):
+        detector_params = detector_params or {}
+        if threshold is None:
+            threshold = detector_params.get("threshold", 0.5)
+        # Merge safe_labels from env and request
+        request_safe_labels = set(detector_params.get("safe_labels", []))
+        all_safe_labels = set(self.safe_labels) | request_safe_labels 
         content_analyses = []
         tokenized = self.tokenizer(
</code_context>

<issue_to_address>
Combining safe_labels from environment and request may lead to type mismatches.

Because safe_labels may contain both ints and strings, and id2label keys are usually ints, mismatched types could cause incorrect filtering. Normalize types in all_safe_labels and comparison values to prevent subtle bugs.
</issue_to_address>


Comment on lines +223 to +229
def process_sequence_classification(self, text, detector_params=None, threshold=None):
    detector_params = detector_params or {}
    if threshold is None:
        threshold = detector_params.get("threshold", 0.5)
    # Merge safe_labels from env and request
    request_safe_labels = set(detector_params.get("safe_labels", []))
    all_safe_labels = set(self.safe_labels) | request_safe_labels

issue (bug_risk): Combining safe_labels from environment and request may lead to type mismatches.

Because safe_labels may contain both ints and strings, and id2label keys are usually ints, mismatched types could cause incorrect filtering. Normalize types in all_safe_labels and comparison values to prevent subtle bugs.
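
One way to implement the normalization the comment asks for, sketched with hypothetical helper names (normalize_safe_labels and is_safe are illustrative, not the PR's code):

```python
def normalize_safe_labels(env_labels, request_labels):
    """Merge env- and request-supplied safe labels, coercing everything to
    lowercase strings so int ids and string names compare consistently."""
    merged = set(env_labels) | set(request_labels)
    return {str(label).strip().lower() for label in merged}

def is_safe(label_id, label_name, safe_labels):
    # Check both the numeric id and the label name against the normalized set.
    return str(label_id) in safe_labels or str(label_name).lower() in safe_labels

safe = normalize_safe_labels({"LABEL_0"}, [1, "Neutral"])
print(is_safe(1, "insult", safe))   # True: id 1 was listed in the request
print(is_safe(2, "neutral", safe))  # True: name matches "Neutral" case-insensitively
print(is_safe(3, "threat", safe))   # False
```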

@m-misiura m-misiura merged commit 72fd63a into trustyai-explainability:main Aug 6, 2025
1 check failed