
feat(inference): Add Latitude as inference provider#1927

Open
gsalberto wants to merge 1 commit into huggingface:main from gsalberto:add-latitude-provider

Conversation


@gsalberto gsalberto commented Jan 22, 2026

Summary

This PR adds support for Latitude.sh as an inference provider.

About Latitude:

  • Recently acquired by Megaport (ASX: MP1), a global Network-as-a-Service leader
  • Operating 10,000+ physical servers and 1,000+ GPUs globally
  • Combined with Megaport's platform spanning 1,000+ data centers in 26 countries
  • Tier-3 data centers with 99.99% SLA

AI Inference Platform Features:

  • OpenAI-compatible API at https://api.lsh.ai
  • Tool calling support
  • Structured output (JSON mode)
  • Vision/multimodal model support
  • Streaming responses
  • Dedicated GPUs with consistent performance

Changes

  • Added packages/inference/src/providers/latitude.ts with LatitudeConversationalTask and LatitudeTextGenerationTask
  • Added latitude-sh to INFERENCE_PROVIDERS in types.ts
  • Added latitude-sh to PROVIDERS_HUB_ORGS mapping
  • Added latitude-sh to HARDCODED_MODEL_INFERENCE_MAPPING in consts.ts
  • Added Latitude provider to PROVIDERS in getProviderHelper.ts
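
The provider classes listed above follow the OpenAI-compatible pattern used by the package's other providers. The sketch below is illustrative only: the real base classes live inside `@huggingface/inference`, so a simplified stand-in base class is defined here, and the route paths are assumptions based on the API being OpenAI-compatible.

```typescript
const LATITUDE_API_BASE_URL = "https://api.lsh.ai";

// Simplified stand-in for the package's internal base task helpers,
// just to show the shape of the provider classes the PR adds.
class BaseOpenAICompatibleTask {
  constructor(readonly provider: string, readonly baseUrl: string) {}

  // OpenAI-compatible providers typically route chat traffic here (assumed path).
  makeRoute(): string {
    return "/v1/chat/completions";
  }

  makeUrl(): string {
    return `${this.baseUrl}${this.makeRoute()}`;
  }
}

class LatitudeConversationalTask extends BaseOpenAICompatibleTask {
  constructor() {
    super("latitude-sh", LATITUDE_API_BASE_URL);
  }
}

class LatitudeTextGenerationTask extends BaseOpenAICompatibleTask {
  constructor() {
    super("latitude-sh", LATITUDE_API_BASE_URL);
  }

  // Text generation uses the plain completions route (assumed path).
  override makeRoute(): string {
    return "/v1/completions";
  }
}

const chat = new LatitudeConversationalTask();
console.log(chat.makeUrl()); // https://api.lsh.ai/v1/chat/completions
```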


Latitude provides fast, affordable LLM inference on dedicated GPU infrastructure.

Supported tasks:
- conversational (chat completions)
- text-generation (completions)

Features:
- OpenAI-compatible API
- Tool calling / function calling
- Structured output (JSON mode)
- Vision / multimodal inputs
- Streaming with TTFT (time-to-first-token) tracking

Provider details:
- Organization: https://huggingface.co/latitude-sh
- API Base: https://api.lsh.ai
- Pricing: https://api.lsh.ai/pricing
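
Since the API is OpenAI-compatible, a direct call against the base URL above can be sketched as follows. The `/v1/chat/completions` path, the model id, and the API-key placeholder are illustrative assumptions, not details confirmed by this PR.

```typescript
// Sketch: building a request for Latitude's OpenAI-compatible chat endpoint.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(
  apiKey: string,
  model: string,
  messages: ChatMessage[],
  stream = false
) {
  return {
    url: "https://api.lsh.ai/v1/chat/completions", // assumed OpenAI-style route
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model, messages, stream }),
    },
  };
}

const req = buildChatRequest("<your-api-key>", "meta-llama/Llama-3.1-8B-Instruct", [
  { role: "user", content: "Hello!" },
]);
// To actually send it: await fetch(req.url, req.init);
console.log(req.url); // https://api.lsh.ai/v1/chat/completions
```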
@gsalberto
Author

Hi @hanouticelina @Wauplin - just wanted to check if you have bandwidth to review this PR. We'd love to add Latitude.sh as an inference provider. Let me know if you need any changes.

@gsalberto
Author

Hi @Wauplin @hanouticelina @SBrandeis - friendly bump on this PR. It's been 2 weeks and we'd love to get Latitude added as an inference provider.

We have production infrastructure ready with multiple models (Llama 3.1, Qwen 2.5/3, DeepSeek R1, Gemma 2) running on dedicated GPUs. Our API supports streaming, tool calling, vision, and structured output.

Happy to make any changes needed. Let us know if there's anything blocking the review. Thanks!

@SBrandeis
Contributor

Hey there!
Thank you for your interest in becoming an Inference Provider and for the excellent work you've put into this integration!
We really appreciate the effort.

However, we're currently in a consolidation phase focusing on growing usage of Inference Providers via new features and integrations rather than expanding to new partners. This means we've temporarily paused onboarding new providers while we work on these improvements.

We're not able to provide a specific timeline for when we'll resume new provider onboarding, but we'd love to revisit this integration in the future.

Thanks again for your contribution and understanding!

@SBrandeis SBrandeis added the inference-providers integration of a new or existing Inference Provider label Feb 10, 2026