Explore support for content endpoint of detectors API #15

@evaline-ju

Description

Is your feature request related to a problem? Please describe.

The contents endpoint of the detectors API has generally been a quick way for users to try out detectors. This endpoint is currently integrated with text generation, allowing detection on unary or streaming text generation through the orchestrator, and is being integrated with chat completions through the corresponding orchestrator endpoint.
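As a rough illustration of the "quick way to try out detectors" described above, a contents-style request can be sketched as a small payload builder. The field names (`contents`, `detector_params`) and the shape of the body are assumptions for illustration only; the actual detectors API schema is defined by the orchestrator project, not confirmed here.

```python
def build_contents_request(texts, detector_params=None):
    """Sketch of a detectors-API contents request body.

    Assumed fields for illustration:
    - "contents": a list of raw text strings to run detection on
    - "detector_params": optional per-detector configuration
    """
    return {
        "contents": list(texts),
        "detector_params": detector_params or {},
    }

# Example: checking a single piece of generated text.
payload = build_contents_request(["Some generated text to check"])
```

In a real deployment this payload would be POSTed to the detector's contents endpoint; here it only demonstrates the general request shape.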

It would then be beneficial to consider how some LLMs could be used as detectors in a workflow with LLM generation, whether on individual chat completion messages (input to chat completions), on choice messages (output of chat completions), or on text generation input/output.

Describe the solution you'd like

Most of the currently adapter-supported model classes, such as Granite Guardian or Llama Guard, take in chat history. We first want to explore how these model classes can be used to analyze general text generation input/output.
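One way to bridge the gap described above is to wrap plain text generation input/output into the chat-message format that chat-history detectors expect. The sketch below is a minimal assumption-laden example: the `user`/`assistant` role names follow common chat-completion conventions, but the exact format each detector model expects (e.g. Granite Guardian or Llama Guard chat templates) is not specified in this issue.

```python
def to_chat_messages(generation_input, generation_output=None):
    """Adapt plain text generation input/output to a chat-history shape.

    The generation prompt is treated as a user turn; the generated text,
    if provided, becomes an assistant turn. Role names are illustrative.
    """
    messages = [{"role": "user", "content": generation_input}]
    if generation_output is not None:
        messages.append({"role": "assistant", "content": generation_output})
    return messages

# Input-only detection: analyze just the generation prompt.
input_only = to_chat_messages("Tell me how to pick a lock.")

# Input+output detection: analyze the prompt together with the model's reply.
full_turn = to_chat_messages("Tell me a joke.", "Why did the chicken cross the road?")
```

A chat-history detector could then be invoked on either list, letting the same model class cover both text generation input and output.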

Labels

enhancement (New feature or request)
