
Conversation


@ricken07 ricken07 commented Feb 9, 2025

Mistral AI recently introduced a new moderation service, powered by the Mistral Moderation model, a classifier based on Ministral 8B. It lets users detect harmful text content along several policy dimensions.

This pull request adds support for Mistral AI's Moderation API to Spring AI (a usage sketch follows the list below):

  • Moderation Model
  • Moderation Options
  • Auto Configuration
  • Properties
  • Test
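
For context, here is a minimal usage sketch. It assumes the new MistralAiModerationModel plugs into Spring AI's existing ModerationModel contract (ModerationPrompt in, ModerationResponse out), the same way the OpenAI moderation support does; the package name, the injected bean, and the ContentGuard/isFlagged wrapper are illustrative assumptions, not code from this PR.

```java
// Minimal usage sketch, not taken from this PR. Assumes the Mistral model follows
// Spring AI's generic ModerationModel contract; the package of MistralAiModerationModel
// and the surrounding service class are assumptions.
import org.springframework.ai.mistralai.moderation.MistralAiModerationModel; // package assumed
import org.springframework.ai.moderation.Moderation;
import org.springframework.ai.moderation.ModerationPrompt;
import org.springframework.ai.moderation.ModerationResponse;
import org.springframework.ai.moderation.ModerationResult;
import org.springframework.stereotype.Service;

@Service
public class ContentGuard {

    private final MistralAiModerationModel moderationModel; // injected by the auto-configuration

    public ContentGuard(MistralAiModerationModel moderationModel) {
        this.moderationModel = moderationModel;
    }

    public boolean isFlagged(String text) {
        // Send the text to the moderation endpoint and unwrap the result.
        ModerationResponse response = this.moderationModel.call(new ModerationPrompt(text));
        Moderation moderation = response.getResult().getOutput();
        // Each ModerationResult carries per-category flags and scores for the input text.
        return moderation.getResults().stream().anyMatch(ModerationResult::isFlagged);
    }
}
```

If the wiring mirrors the other Mistral AI starters, the auto-configuration would presumably reuse the shared Mistral API key property and expose moderation-specific options under their own prefix; the exact property names are defined in this PR rather than assumed here.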

@ricken07 ricken07 force-pushed the mistral-moderation-api branch from 1e67d32 to aeeb029 on February 9, 2025 01:12
@ricken07 ricken07 force-pushed the mistral-moderation-api branch from aeeb029 to 3856aef on February 9, 2025 01:16
@tzolov tzolov self-assigned this Feb 10, 2025
@tzolov tzolov added this to the 1.0.0-M6 milestone Feb 10, 2025
@ilayaperumalg ilayaperumalg modified the milestones: 1.0.0-M6, 1.0.0-M7 Feb 12, 2025
tzolov pushed a commit to tzolov/spring-ai that referenced this pull request Apr 9, 2025
Implement MistralAI moderation capabilities to detect potentially harmful content.
This allows Spring AI applications to use Mistral's content moderation services
to identify and filter inappropriate content before processing.

- Add MistralAiModerationApi for interacting with Mistral's moderation endpoints
- Create MistralAiModerationModel implementing the ModerationModel interface
- Add configuration properties and auto-configuration for the moderation model
- Extend Categories and CategoryScores with additional moderation categories
- Add integration tests to verify moderation functionality

Signed-off-by: Ricken Bazolo <[email protected]>
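
For readers who want to wire the pieces from this commit by hand instead of relying on the auto-configuration, the composition presumably looks something like the sketch below. MistralAiModerationApi and MistralAiModerationModel are the class names from the commit, but their packages and constructor signatures here are assumptions modeled on how other Spring AI modules are assembled.

```java
// Hand-wiring sketch only: the class names come from the commit, but the packages
// and constructor shapes below are assumptions, not the PR's actual signatures.
import org.springframework.ai.mistralai.moderation.MistralAiModerationModel;   // package assumed
import org.springframework.ai.mistralai.moderation.api.MistralAiModerationApi; // package assumed
import org.springframework.ai.moderation.ModerationPrompt;
import org.springframework.ai.moderation.ModerationResponse;

public class ManualModerationWiring {

    public static void main(String[] args) {
        // Low-level client for Mistral's moderation endpoint (constructor shape assumed).
        MistralAiModerationApi moderationApi =
                new MistralAiModerationApi(System.getenv("MISTRAL_AI_API_KEY"));

        // Adapts that client to Spring AI's ModerationModel abstraction (constructor shape assumed).
        MistralAiModerationModel moderationModel = new MistralAiModerationModel(moderationApi);

        ModerationResponse response =
                moderationModel.call(new ModerationPrompt("Some user-generated text to screen"));
        System.out.println(response.getResult().getOutput());
    }
}
```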

tzolov commented Apr 9, 2025

Thanks for your contribution, @ricken07

rebased, squashed and merged at 3fcb10a

@tzolov tzolov closed this Apr 9, 2025
TheovanKraay pushed a commit to TheovanKraay/spring-ai that referenced this pull request Apr 11, 2025
chedim pushed a commit to couchbaselabs/spring-ai that referenced this pull request Sep 19, 2025