Add option for custom API path #336
-
Came here as I just got blocked from using OpenAI in my enterprise environment. Please, please, please do this! @rostrovsky, from the "Contributing to ShellGPT" section I see this:
So you have one approval from me!
-
Upon looking into this further, does this PR solve the above problem?
-
This issue should be fixed by this PR: #206
-
@rostrovsky In the latest releases new
-
Since you moved to the openai Python module, the workaround in the PR no longer works.
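For context, a minimal sketch (not shell_gpt code; the endpoint, key, deployment name and API version below are placeholders) of how the legacy openai Python library (pre-1.0) is typically pointed at an Azure deployment, which suggests a plain host override is not sufficient on its own:

import openai

# Azure OpenAI needs more than a custom base URL with the legacy client:
openai.api_type = "azure"
openai.api_base = "https://<something>.openai.azure.com"  # resource endpoint
openai.api_version = "2023-05-15"                         # example API version
openai.api_key = "<your_azure_key>"

response = openai.ChatCompletion.create(
    engine="GPT35",  # Azure deployment name rather than the OpenAI model name
    messages=[{"role": "user", "content": "Hi Azure"}],
)
print(response["choices"][0]["message"]["content"])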
-
Also,
-
I'm considering using LiteLLM for multi-backend integration. This should provide an option to switch between various LLM backends, including Azure. I would greatly appreciate any assistance with testing the Azure integration PR #463. To test it, follow these steps:
git clone https://github.com/TheR1D/shell_gpt.git
cd shell_gpt
git checkout ollama
python -m venv venv
source venv/bin/activate
pip install -e .
export AZURE_API_KEY=YOUR_KEY
export AZURE_API_BASE=YOUR_API_BASE
export AZURE_API_VERSION=YOUR_API_VERSION
export AZURE_AD_TOKEN=YOUR_TOKEN # Optional
export AZURE_API_TYPE=YOUR_API_TYPE # Optional
sgpt --model azure/<your_deployment_name> --no-functions "Hi Azure"
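For reference, here is a minimal Python sketch of roughly what that sgpt invocation does through LiteLLM (assuming the environment variables above are exported; the deployment name is a placeholder):

from litellm import completion

# LiteLLM reads AZURE_API_KEY, AZURE_API_BASE and AZURE_API_VERSION from the
# environment and routes the request to Azure when the model name is
# prefixed with "azure/".
response = completion(
    model="azure/<your_deployment_name>",  # your Azure deployment name
    messages=[{"role": "user", "content": "Hi Azure"}],
)
print(response.choices[0].message.content)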
-
Problem statement
Currently OpenAIClient relies on the hardcoded f"{self.api_host}/v1/chat/completions" API path.
Microsoft Azure allows private deployments of OpenAI GPT models: https://azure.microsoft.com/en-us/pricing/details/cognitive-services/openai-service/
These models have a completely different API path, which looks like:
https://<something>.openai.azure.com/openai/deployments/GPT35/chat/completions
(this is a real-world example). As of now it is impossible to use shell_gpt with models deployed that way.
Proposed solution
Introduce an OPENAI_API_PREFIX configuration variable that will allow replacing v1 with a user-defined API path. I will happily introduce this change as long as it gets your approval. Thanks!
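A rough sketch of the idea (illustrative only, assuming a requests-based client; OPENAI_API_HOST and the helper function are placeholders, OPENAI_API_PREFIX is the proposed variable):

import os
import requests

api_host = os.getenv("OPENAI_API_HOST", "https://api.openai.com")
# Proposed: make the "v1" path segment configurable, e.g.
# OPENAI_API_PREFIX="openai/deployments/GPT35" would yield
# https://<something>.openai.azure.com/openai/deployments/GPT35/chat/completions
api_prefix = os.getenv("OPENAI_API_PREFIX", "v1")

def chat_completion(messages: list, model: str = "gpt-3.5-turbo") -> dict:
    url = f"{api_host}/{api_prefix}/chat/completions"
    # Note: Azure additionally expects an "api-key" header and an
    # "api-version" query parameter, which this sketch does not cover.
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    response = requests.post(
        url,
        headers=headers,
        json={"model": model, "messages": messages},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()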
Benefit
Ability to use shell_gpt in restricted/enterprise environments.