# Latitude: Bare-Metal Cloud & AI Inference

> [!TIP]
> All supported Latitude models can be found [here](https://huggingface.co/models?inference_provider=latitude-sh&sort=trending).

{{{logoSection}}}

{{{followUsSection}}}

## Fast, reliable AI inference powered by dedicated bare-metal GPU infrastructure

[Latitude.sh](https://latitude.sh), recently **acquired by [Megaport](https://megaport.com)** (ASX: MP1), is a leading bare-metal cloud provider operating **10,000+ physical servers** and **1,000+ GPUs** across a global network of data centers. Together with Megaport's Network-as-a-Service platform spanning 1,000+ data centers in 26 countries, we power mission-critical workloads for enterprises, AI startups, and developers worldwide.

---

## Global Presence

- 🇺🇸 **United States**: Dallas, Chicago, New York, Miami, Los Angeles, Seattle
- 🇧🇷 **Brazil**: São Paulo
- 🇬🇧 **United Kingdom**: London
- 🇩🇪 **Germany**: Frankfurt
- 🇳🇱 **Netherlands**: Amsterdam
- 🇯🇵 **Japan**: Tokyo
- 🇸🇬 **Singapore**
- 🇦🇺 **Australia**: Sydney

---

## Infrastructure

- **1,000+ GPUs** including NVIDIA H100, A100, L40S, and RTX PRO 6000 Blackwell
- **10,000+ physical servers** (CPU and GPU)
- Combined with **Megaport's global NaaS platform** spanning 1,000+ data centers in 26 countries
- Tier-3 data centers with **99.99% SLA**

---

## AI Inference Platform

Our inference platform offers:

- **OpenAI-compatible API**: Drop-in replacement for existing integrations
- **Dedicated GPUs**: Consistent performance without noisy neighbor issues
- **Competitive pricing**: Direct GPU ownership means better margins passed to you
- **Full feature support**: Tool calling, structured output (JSON mode), vision/multimodal inputs, and streaming
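Because the API is OpenAI-compatible, a standard chat-completions request body works unchanged. The sketch below builds such a payload; the base URL and model identifier are assumptions for illustration — see the [API docs](https://api.lsh.ai/docs) for actual values.

```python
import json

# Assumed values for illustration only -- check https://api.lsh.ai/docs
# for the real base URL and available model identifiers.
BASE_URL = "https://api.lsh.ai/v1"
MODEL = "llama-3.1-8b"  # hypothetical model id

# A standard OpenAI-style chat-completions payload; tool calling,
# JSON mode, and vision inputs use the same request shape.
payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Say hello in one word."},
    ],
    "stream": True,  # streaming is supported, per the feature list above
}

request_body = json.dumps(payload)
```

With the official `openai` Python client, the same payload maps directly to `client.chat.completions.create(...)` after constructing the client with `base_url=BASE_URL` and your API key.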

---

## Pricing

For the latest rates, visit our [pricing page](https://api.lsh.ai/pricing).

| Model | Input ($/1M tokens) | Output ($/1M tokens) |
|-------|---------------------|----------------------|
| Llama 3.1 8B | $0.16 | $0.16 |
| Qwen 2.5 7B | $0.27 | $0.27 |
| Qwen 2.5 VL 7B (Vision) | $0.36 | $0.36 |
| Gemma 2 27B | $0.45 | $0.45 |
| Qwen3 32B | $0.45 | $0.90 |
| Qwen2.5 Coder 32B | $0.72 | $0.72 |
| DeepSeek R1 Distill 14B | $1.44 | $1.44 |
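Billing is linear in token counts, with separate input and output rates quoted per million tokens. A small sketch of the cost arithmetic using the table above:

```python
def cost_usd(input_tokens: int, output_tokens: int,
             input_rate: float, output_rate: float) -> float:
    """Compute request cost in USD; rates are $ per 1M tokens,
    matching the pricing table above."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: 50k input + 10k output tokens on Llama 3.1 8B ($0.16 / $0.16):
print(cost_usd(50_000, 10_000, 0.16, 0.16))  # 0.0096
```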

---

## Resources

- **Website**: [latitude.sh](https://latitude.sh)
- **API Documentation**: [api.lsh.ai/docs](https://api.lsh.ai/docs)
- **X (Twitter)**: [@lataboratories](https://x.com/lataboratories)
- **LinkedIn**: [Latitude.sh](https://www.linkedin.com/company/latitudeai/)
- **Discord**: [Join our community](https://discord.gg/latitude)
- **GitHub**: [@latitude-sh](https://github.com/latitude-sh)

{{{tasksSection}}}