* feat: integrate Azure OpenAI support with client implementation and update documentation
- Added AzureAIClient for interacting with Azure OpenAI models.
- Updated README.md to include Azure OpenAI API key, endpoint, and version setup instructions.
- Modified configuration to support Azure as a model provider.
- Enhanced chat completion handling to include Azure API calls in websocket and streaming contexts.
- Updated generator configuration to define Azure models and their parameters.
* refactor: remove temporary ollama embedding for azure in testing phase
* fix: compatibility with old-version wiki cache #215 (#218)
---------
Co-authored-by: 赛卓林 <[email protected]>
@@ -53,6 +56,7 @@ For detailed instructions on using DeepWiki with Ollama and Docker, see [Ollama
 > 💡 **Where to get these keys:**
 > - Get a Google API key from [Google AI Studio](https://makersuite.google.com/app/apikey)
 > - Get an OpenAI API key from [OpenAI Platform](https://platform.openai.com/api-keys)
+> - Get Azure OpenAI credentials from [Azure Portal](https://portal.azure.com/) - create an Azure OpenAI resource and get the API key, endpoint, and API version
@@ -66,5 +70,9 @@
 # Optional: Add this if you want to use OpenRouter models
 OPENROUTER_API_KEY=your_openrouter_api_key
+# Optional: Add this if you want to use Azure OpenAI models
+AZURE_OPENAI_API_KEY=your_azure_openai_api_key
+AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint
+AZURE_OPENAI_VERSION=your_azure_openai_version
 # Optional: Add Ollama host if not local. default: http://localhost:11434
 OLLAMA_HOST=your_ollama_host
 ```
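The three Azure variables travel together: setting only one or two of them is almost always a `.env` typo rather than an intentional configuration. A minimal stdlib-only validation sketch (function and constant names are illustrative, not part of DeepWiki's codebase):

```python
import os

# The three variables Azure OpenAI needs; unlike the single-key providers,
# a partial Azure configuration usually indicates a mistake in .env.
AZURE_VARS = ("AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_VERSION")


def azure_config(env=os.environ):
    """Return the Azure settings dict if fully configured, None if absent.

    Raises ValueError when only some of the three variables are set.
    """
    values = {name: env.get(name) for name in AZURE_VARS}
    if not any(values.values()):
        return None  # Azure simply not configured; other providers still work
    missing = [name for name, value in values.items() if not value]
    if missing:
        raise ValueError(f"Azure OpenAI partially configured; missing: {missing}")
    return values
```

Failing fast here, before any API call, gives a clearer error than a later HTTP 401 or a malformed-endpoint exception.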
@@ -106,7 +114,7 @@ DeepWiki uses AI to:

 1. Clone and analyze the GitHub, GitLab, or Bitbucket repository (including private repos with token authentication)
 2. Create embeddings of the code for smart retrieval
-3. Generate documentation with context-aware AI (using Google Gemini, OpenAI, OpenRouter, or local Ollama models)
+3. Generate documentation with context-aware AI (using Google Gemini, OpenAI, OpenRouter, Azure OpenAI, or local Ollama models)
 4. Create visual diagrams to explain code relationships
 5. Organize everything into a structured wiki
 6. Enable intelligent Q&A with the repository through the Ask feature
@@ -126,11 +134,13 @@ graph TD
 M -->|OpenAI| E2[Generate with OpenAI]
 M -->|OpenRouter| E3[Generate with OpenRouter]
 M -->|Local Ollama| E4[Generate with Ollama]
+M -->|Azure| E5[Generate with Azure]

 E1 --> E[Generate Documentation]
 E2 --> E
 E3 --> E
 E4 --> E
+E5 --> E

 D --> F[Create Visual Diagrams]
 E --> G[Organize as Wiki]
@@ -144,7 +154,7 @@ graph TD

 class A,D data;
 class AA,M decision;
-class B,C,E,F,G,AB,E1,E2,E3,E4 process;
+class B,C,E,F,G,AB,E1,E2,E3,E4,E5 process;
 class H result;
 ```

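The decision node `M` in the flowchart amounts to a provider dispatch: one generator per edge `E1`..`E5`, all converging on the documentation step `E`. A minimal sketch of that shape (stub callables; names are illustrative, not DeepWiki's actual client classes):

```python
def _stub(provider: str):
    """Build a placeholder generator; a real one would call the provider's chat API."""
    def generate(prompt: str) -> str:
        return f"[{provider}] {prompt}"
    return generate


# Mirrors the flowchart edges from M: one generator per provider,
# including the newly added Azure edge (M -->|Azure| E5).
GENERATORS = {
    "google": _stub("google"),
    "openai": _stub("openai"),
    "openrouter": _stub("openrouter"),
    "ollama": _stub("ollama"),
    "azure": _stub("azure"),
}


def generate_docs(prompt: str, provider: str) -> str:
    """Dispatch to the selected provider; all results feed the same wiki step."""
    try:
        generate = GENERATORS[provider]
    except KeyError:
        raise ValueError(f"Unknown model provider: {provider!r}") from None
    return generate(prompt)
```

Keeping the table flat like this means adding a provider (as this commit does for Azure) is one new entry plus its client, with no changes to the downstream documentation pipeline.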
@@ -179,6 +189,7 @@ DeepWiki now implements a flexible provider-based model selection system support
 - **Google**: Default `gemini-2.0-flash`, also supports `gemini-1.5-flash`, `gemini-1.0-pro`, etc.
 - **OpenAI**: Default `gpt-4o`, also supports `o4-mini`, etc.
 - **OpenRouter**: Access to multiple models via a unified API, including Claude, Llama, Mistral, etc.
+- **Azure OpenAI**: Default `gpt-4o`, also supports `o4-mini`, etc.
 - **Ollama**: Support for locally running open-source models like `llama3`

 ### Environment Variables
@@ -190,6 +201,9 @@ Each provider requires its corresponding API key environment variables:
 GOOGLE_API_KEY=your_google_api_key # Required for Google Gemini models
 OPENAI_API_KEY=your_openai_api_key # Required for OpenAI models
 OPENROUTER_API_KEY=your_openrouter_api_key # Required for OpenRouter models
+AZURE_OPENAI_API_KEY=your_azure_openai_api_key # Required for Azure OpenAI models
+AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint # Required for Azure OpenAI models
+AZURE_OPENAI_VERSION=your_azure_openai_version # Required for Azure OpenAI models

 # OpenAI API Base URL Configuration
 OPENAI_BASE_URL=https://custom-api-endpoint.com/v1 # Optional, for custom OpenAI API endpoints
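For orientation, these three variables map directly onto the constructor of the `AzureOpenAI` client in the official `openai` Python package (v1+). The sketch below is an assumption about usage, not DeepWiki's actual `AzureAIClient`; the deployment name is hypothetical:

```python
import os


def azure_client_kwargs(env=os.environ) -> dict:
    """Translate the three environment variables into the keyword arguments
    expected by the official `openai` package's AzureOpenAI client (openai >= 1.0)."""
    return {
        "api_key": env["AZURE_OPENAI_API_KEY"],
        "azure_endpoint": env["AZURE_OPENAI_ENDPOINT"],
        "api_version": env["AZURE_OPENAI_VERSION"],
    }


def demo() -> str:
    # Not executed here; needs `pip install openai`, network access,
    # and a real Azure OpenAI resource.
    from openai import AzureOpenAI

    client = AzureOpenAI(**azure_client_kwargs())
    reply = client.chat.completions.create(
        # "my-gpt-4o-deployment" is a hypothetical Azure deployment name,
        # chosen when you deploy a model in the Azure Portal.
        model="my-gpt-4o-deployment",
        messages=[{"role": "user", "content": "ping"}],
    )
    return reply.choices[0].message.content
```

Note that with Azure the `model` argument is your deployment name, not the underlying model id, which is why the endpoint and API version must be configured separately.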
@@ -206,7 +220,7 @@ DEEPWIKI_CONFIG_DIR=/path/to/custom/config/dir # Optional, for custom config fi
 DeepWiki uses JSON configuration files to manage various aspects of the system:

 1. **`generator.json`**: Configuration for text generation models
-   - Defines available model providers (Google, OpenAI, OpenRouter, Ollama)
+   - Defines available model providers (Google, OpenAI, OpenRouter, Azure, Ollama)
    - Specifies default and available models for each provider
    - Contains model-specific parameters like temperature and top_p

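The diff doesn't show the `generator.json` schema itself, so the fragment below is only an illustration of the structure described above (key names are guesses): a provider block with a default model, its available models, and per-model parameters such as temperature and top_p.

```json
{
  "providers": {
    "azure": {
      "default_model": "gpt-4o",
      "models": {
        "gpt-4o": { "temperature": 0.7, "top_p": 0.8 },
        "o4-mini": { "temperature": 0.7, "top_p": 0.8 }
      }
    }
  }
}
```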
@@ -300,6 +314,9 @@ docker-compose up
 | `GOOGLE_API_KEY` | Google Gemini API key for AI generation | No | Required only if you want to use Google Gemini models |
 | `OPENAI_API_KEY` | OpenAI API key for embeddings | Yes | Note: This is required even if you're not using OpenAI models, as it's used for embeddings. |
 | `OPENROUTER_API_KEY` | OpenRouter API key for alternative models | No | Required only if you want to use OpenRouter models |
+| `AZURE_OPENAI_API_KEY` | Azure OpenAI API key | No | Required only if you want to use Azure OpenAI models |
+| `AZURE_OPENAI_ENDPOINT` | Azure OpenAI endpoint | No | Required only if you want to use Azure OpenAI models |
+| `AZURE_OPENAI_VERSION` | Azure OpenAI API version | No | Required only if you want to use Azure OpenAI models |
 | `OLLAMA_HOST` | Ollama host (default: http://localhost:11434) | No | Required only if you want to use an external Ollama server |
 | `PORT` | Port for the API server (default: 8001) | No | If you host the API and frontend on the same machine, make sure to change the port in `SERVER_BASE_URL` accordingly |
 | `SERVER_BASE_URL` | Base URL for the API server (default: http://localhost:8001) | No | |