-
🤖 Hi! Yes, it's possible to create a custom wrapper for chat models in LangChain, similar to the way it's done for non-chat LLMs. Here's a general guide on how you can achieve this.

First, import what you'll need and subclass `BaseChatModel` (import paths vary between LangChain releases; newer versions expose these under `langchain_core`):

```python
from typing import (
    Any,
    AsyncIterator,
    Callable,
    Dict,
    Iterator,
    List,
    Mapping,
    Optional,
    Tuple,
    Union,
)

from pydantic import Field

from langchain.callbacks.manager import (
    AsyncCallbackManagerForLLMRun,
    CallbackManagerForLLMRun,
)
from langchain.chat_models.base import BaseChatModel
from langchain.schema import BaseMessage, ChatResult
from langchain.schema.output import ChatGenerationChunk


class CustomChat(BaseChatModel):
    pass
```
Next, declare the model's configuration fields and the serialization hooks (`lc_secrets` tells LangChain which fields are secrets and which environment variables they map to):

```python
class CustomChat(BaseChatModel):
    client: Any  #: :meta private:
    temperature: float = 0.7
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)
    customchat_api_key: Optional[str] = None
    request_timeout: Optional[Union[float, Tuple[float, float]]] = None
    max_retries: int = 6
    streaming: bool = False
    max_tokens: Optional[int] = None

    @property
    def lc_secrets(self) -> Dict[str, str]:
        return {"customchat_api_key": "CUSTOMCHAT_API_KEY"}

    @property
    def lc_serializable(self) -> bool:
        return True
```
Then fill in the methods that do the actual work. At minimum, `_generate` and the `_llm_type` property must be implemented (they are the abstract members of `BaseChatModel`; on some versions `_agenerate` is abstract too); the rest add retries, streaming, and async support:

```python
class CustomChat(BaseChatModel):
    # ... previous code ...

    def _create_retry_decorator(self) -> Callable[[Any], Any]:
        # Your implementation here
        raise NotImplementedError

    def completion_with_retry(self, **kwargs: Any) -> Any:
        # Your implementation here
        raise NotImplementedError

    def _combine_llm_outputs(self, llm_outputs: List[Optional[dict]]) -> dict:
        # Your implementation here
        raise NotImplementedError

    def _stream(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        # Your implementation here
        raise NotImplementedError

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # Your implementation here
        raise NotImplementedError

    def _create_message_dicts(
        self, messages: List[BaseMessage], stop: Optional[List[str]]
    ) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
        # Your implementation here
        raise NotImplementedError

    def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:
        # Your implementation here
        raise NotImplementedError

    async def _astream(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> AsyncIterator[ChatGenerationChunk]:
        # Your implementation here
        raise NotImplementedError

    async def _agenerate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # Your implementation here
        raise NotImplementedError

    @property
    def _invocation_params(self) -> Mapping[str, Any]:
        # Your implementation here
        raise NotImplementedError

    @property
    def _llm_type(self) -> str:
        return "customchat"
```
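As a quick sanity check of the serialization hooks, you can dump the stubbed model. This is a sketch assuming the class above and a LangChain version where serialization is driven by the `lc_serializable` property (newer releases use the `is_lc_serializable` classmethod instead):

```python
from langchain.load.dump import dumps

chat = CustomChat(customchat_api_key="sk-not-a-real-key")

# Because customchat_api_key is listed in lc_secrets, the serialized output
# masks the raw value and records the CUSTOMCHAT_API_KEY environment variable
# it should be restored from.
print(dumps(chat, pretty=True))
```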
The imports shown in the first snippet cover everything used here. For more details, you can refer to the source code of existing chat model implementations such as `ChatOpenAI`, whose structure this skeleton mirrors.

I hope this helps! Let me know if you have any other questions.

Best,

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
@dosu-bot give an example of how to build it with Python requests
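For reference, here is a minimal sketch of `_generate` backed by `requests`. Everything about the HTTP API below is assumed: the `https://api.customchat.example/v1/chat` URL, the payload shape, and the `{"reply": ...}` response format are hypothetical placeholders for whatever service you are actually wrapping:

```python
import requests
from langchain.schema import AIMessage, ChatGeneration, ChatResult, HumanMessage


class RequestsChat(CustomChat):
    api_url: str = "https://api.customchat.example/v1/chat"  # hypothetical endpoint

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # LangChain message types ("human", "ai", "system") may need to be
        # mapped to whatever role names your API expects.
        payload = {
            "messages": [{"role": m.type, "content": m.content} for m in messages],
            "temperature": self.temperature,
            "max_tokens": self.max_tokens,
            "stop": stop,
            **self.model_kwargs,
        }
        headers = {"Authorization": f"Bearer {self.customchat_api_key}"}
        response = requests.post(
            self.api_url,
            json=payload,
            headers=headers,
            timeout=self.request_timeout,
        )
        response.raise_for_status()
        data = response.json()
        # Assumed response shape: {"reply": "<assistant text>"}.
        message = AIMessage(content=data["reply"])
        return ChatResult(generations=[ChatGeneration(message=message)])


chat = RequestsChat(customchat_api_key="sk-not-a-real-key")
print(chat([HumanMessage(content="Hello!")]).content)
```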
-
Hi,
Is it possible to create a custom wrapper for chat models as it is for (non-chat) LLMs, as shown here:
https://python.langchain.com/docs/modules/model_io/models/llms/custom_llm
Any guidance on how to achieve the same with chat models would be most appreciated!
Thanks!