- Keep the file structure as:

  ```
  AnyFolder/
  └── GPT_light/
      ├── __init__.py
      ├── GPTAPI.py
      └── request.py
  ```
- Open `request.py`:
  - `gpt = GPT(model_name='<model_name>', p_key=<api_key>)` ---> Instantiation of the `GPT` class.
    - Change `<model_name>` to an actual model name such as `gpt-3.5-turbo`. Any model that OpenAI supports will work; check https://platform.openai.com/docs/models/overview for available models.
    - Change `<api_key>` to your actual API key, which typically looks like `sk-*`, where `*` is a 48-character code.
  - `flag, response = gpt.call(<prompt>)` ---> Receive the API feedback.
    - Change `<prompt>` to your actual prompt. `response` will be the text response from the model. A minimal end-to-end sketch follows this list.
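
Putting the steps together, a minimal `request.py` might look like the sketch below. The import path, the meaning of `flag`, and the placeholder model/key/prompt values are assumptions; only the `GPT(...)` and `gpt.call(...)` lines come from the steps above.

```python
# request.py -- minimal usage sketch; assumes the GPT class is defined in GPTAPI.py.
from GPTAPI import GPT

# Instantiate the GPT class (replace the placeholder key with your real one).
gpt = GPT(model_name='gpt-3.5-turbo', p_key='<api_key>')

# Receive the API feedback; flag is assumed to indicate success,
# response holds the model's text reply.
flag, response = gpt.call('Say hello in one sentence.')

if flag:
    print(response)
else:
    print('Request failed:', response)
```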
- No restrictions on the network connection; no campus network or VPN is needed.
- The default transmitting URL is api.ai-gaochao.cn.
- As far as we know, this provider can handle a very high number of parallel requests.
- A multi-processing package will be added later (see the sketch below for an interim approach).
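
Until the multi-processing package lands, one interim way to issue parallel requests is Python's standard `concurrent.futures`. The sketch below reuses the assumed `GPT`/`call` interface from above and additionally assumes a single `gpt` instance can be shared across threads; if it cannot, create one instance per worker.

```python
# Parallel requests sketch (assumes the GPT class and call() interface shown above,
# and that one gpt instance can be shared across threads).
from concurrent.futures import ThreadPoolExecutor

from GPTAPI import GPT

gpt = GPT(model_name='gpt-3.5-turbo', p_key='<api_key>')  # placeholder key

prompts = [
    'Summarize document A in one sentence.',
    'Summarize document B in one sentence.',
    'Summarize document C in one sentence.',
]

def ask(prompt):
    # Each worker sends one request; flag is assumed to report success.
    flag, response = gpt.call(prompt)
    return response if flag else None

# Threads are sufficient here because each call is network-bound, not CPU-bound.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(ask, prompts))

print(results)
```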