@@ -23,8 +23,10 @@ Detailed setup instructions for each provider supported by OneLLM.
 - [Perplexity](#perplexity)
 - [DeepSeek](#deepseek)
 - [Moonshot](#moonshot)
+- [GLM (Zhipu AI)](#glm)
 - [Cohere](#cohere)
 - [OpenRouter](#openrouter)
+- [Vercel AI Gateway](#vercel)
 - [Azure OpenAI](#azure)
 - [AWS Bedrock](#bedrock)
 - [Google Vertex AI](#vertex)
@@ -33,7 +35,7 @@ Detailed setup instructions for each provider supported by OneLLM.
 
 ---
 
-## OpenAI {#openai}
+## OpenAI
 
 ### 1. Get API Key
 1. Go to [platform.openai.com](https://platform.openai.com)
@@ -66,7 +68,7 @@ print(response.choices[0].message['content'])
 
 ---
 
-## Anthropic {#anthropic}
+## Anthropic
 
 ### 1. Get API Key
 1. Go to [console.anthropic.com](https://console.anthropic.com)
@@ -95,7 +97,7 @@ response = client.chat.completions.create(
 
 ---
 
-## Google AI Studio {#google}
+## Google AI Studio
 
 ### 1. Get API Key
 1. Go to [makersuite.google.com](https://makersuite.google.com)
@@ -122,7 +124,7 @@ response = client.chat.completions.create(
 
 ---
 
-## Mistral {#mistral}
+## Mistral
 
 ### 1. Get API Key
 1. Go to [console.mistral.ai](https://console.mistral.ai)
@@ -151,7 +153,7 @@ response = client.chat.completions.create(
 
 ---
 
-## Groq {#groq}
+## Groq
 
 ### 1. Get API Key
 1. Go to [console.groq.com](https://console.groq.com)
@@ -179,7 +181,7 @@ response = client.chat.completions.create(
 
 ---
 
-## Together AI {#together}
+## Together AI
 
 ### 1. Get API Key
 1. Go to [api.together.xyz](https://api.together.xyz)
@@ -207,7 +209,7 @@ response = client.chat.completions.create(
 
 ---
 
-## Fireworks {#fireworks}
+## Fireworks
 
 ### 1. Get API Key
 1. Go to [app.fireworks.ai](https://app.fireworks.ai)
@@ -229,7 +231,7 @@ response = client.chat.completions.create(
 
 ---
 
-## Anyscale {#anyscale}
+## Anyscale
 
 ### 1. Get API Key
 1. Go to [anyscale.com](https://www.anyscale.com)
@@ -251,7 +253,7 @@ response = client.chat.completions.create(
 
 ---
 
-## X.AI {#xai}
+## X.AI
 
 ### 1. Get API Key
 1. Go to [x.ai](https://x.ai)
@@ -273,7 +275,7 @@ response = client.chat.completions.create(
 
 ---
 
-## Perplexity {#perplexity}
+## Perplexity
 
 ### 1. Get API Key
 1. Go to [perplexity.ai/settings/api](https://www.perplexity.ai/settings/api)
@@ -295,7 +297,7 @@ response = client.chat.completions.create(
 
 ---
 
-## DeepSeek {#deepseek}
+## DeepSeek
 
 ### 1. Get API Key
 1. Go to [platform.deepseek.com](https://platform.deepseek.com)
@@ -317,7 +319,7 @@ response = client.chat.completions.create(
 
 ---
 
-## Moonshot {#moonshot}
+## Moonshot
 
 ### 1. Get API Key
 1. Go to [platform.moonshot.ai](https://platform.moonshot.ai)
@@ -352,6 +354,45 @@ response = client.chat.completions.create(
 
 ---
 
+## GLM (Zhipu AI) {#glm}
+
+### 1. Get API Key
+1. Go to [open.bigmodel.cn](https://open.bigmodel.cn)
+2. Register account
+3. Navigate to API Keys section
+4. Create new API key
+
+### 2. Set Environment Variable
+```bash
+export GLM_API_KEY="..."
+# Alternatively:
+export ZAI_API_KEY="..."
+```
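Since either variable works, the lookup order the two lines above imply can be sketched as follows. This is an illustrative sketch, not OneLLM's actual code, and the helper name `resolve_glm_key` is hypothetical:

```python
import os

def resolve_glm_key(env=None):
    """Return the GLM key, preferring GLM_API_KEY over ZAI_API_KEY."""
    env = os.environ if env is None else env
    for name in ("GLM_API_KEY", "ZAI_API_KEY"):
        value = env.get(name)
        if value:  # skip unset or empty values
            return value
    return None
```

Passing a mapping explicitly makes the precedence easy to verify without touching the real environment.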
+
+### 3. Test Connection
+```python
+response = client.chat.completions.create(
+    model="glm/glm-4",
+    messages=[{"role": "user", "content": "你好!"}]  # "Hello!" in Chinese
+)
+```
+
+### 4. Available Models
+- `glm-4` - Latest GLM-4 model
+- `glm-4-plus` - Enhanced version
+- `glm-4-air` - Lightweight version
+- `glm-4-flash` - Fastest version
+- `glm-4v` - Vision support
+
+### 5. Features
+- **Bilingual**: Strong Chinese and English support
+- **Vision**: GLM-4V supports image understanding
+- **Function Calling**: Tool use capabilities
+- **Streaming**: Real-time response streaming
+- **Cost-effective**: Competitive pricing for the Chinese market
+
+---
+
 ## Cohere {#cohere}
 
 ### 1. Get API Key
@@ -374,7 +415,7 @@ response = client.chat.completions.create(
 
 ---
 
-## OpenRouter {#openrouter}
+## OpenRouter
 
 ### 1. Get API Key
 1. Go to [openrouter.ai](https://openrouter.ai)
@@ -397,6 +438,48 @@ response = client.chat.completions.create(
 
 ---
 
+## Vercel AI Gateway {#vercel}
+
+### 1. Get API Key
+1. Go to [vercel.com/ai-gateway](https://vercel.com/ai-gateway)
+2. Sign up or log in to Vercel
+3. Navigate to AI Gateway dashboard
+4. Create new API key
+
+### 2. Set Environment Variable
+```bash
+export VERCEL_AI_API_KEY="..."
+```
+
+### 3. Test Connection
+```python
+response = client.chat.completions.create(
+    model="vercel/openai/gpt-4o-mini",
+    messages=[{"role": "user", "content": "Hello!"}]
+)
+```
+
+### 4. Available Models
+Use the format `vercel/{vendor}/{model}`:
+- OpenAI: `vercel/openai/gpt-4o-mini`, `vercel/openai/gpt-4o`
+- Anthropic: `vercel/anthropic/claude-sonnet-4`, `vercel/anthropic/claude-opus-4`
+- Google: `vercel/google/gemini-2.5-pro`, `vercel/google/gemini-2.5-flash`
+- Meta: `vercel/meta/llama-3.1-70b-instruct`, `vercel/meta/llama-3.1-8b-instruct`
+- xAI: `vercel/xai/grok-2-latest`
+- Mistral: `vercel/mistral/mistral-large-latest`
+- DeepSeek: `vercel/deepseek/deepseek-chat`
+- Many more providers and models
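The `vercel/{vendor}/{model}` scheme above can be composed and decomposed mechanically. A minimal sketch, with hypothetical helper names that are not part of OneLLM's API:

```python
def make_vercel_model(vendor: str, model: str) -> str:
    """Compose a gateway model id, e.g. 'vercel/openai/gpt-4o-mini'."""
    return f"vercel/{vendor}/{model}"

def split_vercel_model(model_id: str) -> tuple:
    """Split 'vercel/{vendor}/{model}' back into (vendor, model)."""
    prefix, vendor, model = model_id.split("/", 2)
    if prefix != "vercel":
        raise ValueError(f"not a Vercel AI Gateway model id: {model_id!r}")
    return vendor, model
```

Splitting with `maxsplit=2` keeps any further slashes inside the model name intact.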
+
+### 5. Features
+- **Multi-Provider Gateway**: Access 100+ models from multiple providers
+- **Unified Billing**: Single bill for all model usage
+- **Streaming**: Real-time response streaming
+- **Function Calling**: Tool use support for compatible models
+- **Vision**: Multimodal capabilities for supported models
+- **Production Ready**: Built for scale with Vercel's infrastructure
+
+---
+
 ## Azure OpenAI {#azure}
 
 ### 1. Setup Azure Resources
@@ -430,7 +513,7 @@ response = client.chat.completions.create(
 
 ---
 
-## AWS Bedrock {#bedrock}
+## AWS Bedrock
 
 ### 1. Setup AWS
 1. Create AWS account
@@ -457,7 +540,7 @@ response = client.chat.completions.create(
 
 ---
 
-## Google Vertex AI {#vertex}
+## Google Vertex AI
 
 ### 1. Setup GCP
 1. Create GCP project
@@ -480,7 +563,7 @@ response = client.chat.completions.create(
 
 ---
 
-## Ollama {#ollama}
+## Ollama
 
 ### 1. Install Ollama
 ```bash
@@ -514,7 +597,7 @@ response = client.chat.completions.create(
 
 ---
 
-## llama.cpp {#llama-cpp}
+## llama.cpp
 
 ### 1. Install Dependencies
 ```bash
@@ -580,4 +663,4 @@ client = OpenAI(
 
 - [Provider Capabilities](capabilities.md) - Compare features
 - [Examples](../examples/providers.md) - Provider examples
-- [Configuration](../configuration.md) - Advanced config
+- [Configuration](../configuration.md) - Advanced config