[Inference Provider]: Add TextCLF as an inference provider#2022

Open
textclf-api wants to merge 6 commits into huggingface:main from textclf-api:textclf
Conversation

@textclf-api textclf-api commented Mar 7, 2026

Adding TextCLF as a new Inference Provider for the @huggingface/inference library.


Note

Medium Risk
Introduces a new external-provider integration with custom request/response shaping, so failures will primarily show up as runtime API incompatibilities or malformed response handling rather than compile-time issues.

Overview
Adds TextCLF as a new third-party provider in @huggingface/inference, wiring it into provider selection/typing and documenting it in the README.

Implements TextCLFConversationalTask and a custom TextCLFTextGenerationTask that routes text-generation through TextCLF’s chat-completions endpoint (including parameter mapping like max_new_tokens → max_tokens) and validates/parses responses, plus adds provider-level integration tests for chat completion and streaming.
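To illustrate the kind of parameter mapping described above, here is a minimal sketch of how a text-generation request could be reshaped into a chat-completions payload. The names (`TextGenerationArgs`, `prepareChatPayload`) and shapes are illustrative assumptions, not the actual internals of this PR or of @huggingface/inference:

```typescript
// Hypothetical sketch: route a text-generation request through a
// chat-completions endpoint, renaming max_new_tokens -> max_tokens.
// All type and function names here are illustrative assumptions.

interface TextGenerationArgs {
  inputs: string;
  parameters?: { max_new_tokens?: number; temperature?: number };
}

interface ChatCompletionPayload {
  messages: { role: string; content: string }[];
  max_tokens?: number;
  temperature?: number;
}

function prepareChatPayload(args: TextGenerationArgs): ChatCompletionPayload {
  const { max_new_tokens, ...rest } = args.parameters ?? {};
  return {
    // Wrap the raw prompt as a single user message.
    messages: [{ role: "user", content: args.inputs }],
    // Map the text-generation parameter name to the chat-completions one.
    ...(max_new_tokens !== undefined ? { max_tokens: max_new_tokens } : {}),
    ...rest,
  };
}
```

The response side would do the inverse: validate the chat-completion response shape and surface the first choice's message content as generated text.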

Written by Cursor Bugbot for commit 53ec4c7. This will update automatically on new commits.

@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 1 potential issue.

Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.

@hanouticelina
Contributor

Hey there!

Thank you for your interest in becoming an Inference Provider and for the excellent work you've put into this integration!
We really appreciate the effort.

However, we're currently in a consolidation phase focusing on growing usage of Inference Providers via new features and integrations rather than expanding to new partners. This means we've temporarily paused onboarding new providers while we work on these improvements.

We're not able to provide a specific timeline for when we'll resume new provider onboarding, but we'd love to revisit this integration in the future.

In the meantime:

  • Grow your presence on Hugging Face — publish models, datasets, and Spaces.
  • Grow your Hugging Face organization — build a community around your work.
  • Consider upgrading to a Team plan — this is a requirement for further integration, as per the Inference Providers documentation.

Thanks again for your contribution and understanding!
