# LaunchDarkly AI SDK - LangChain Provider

[![PyPI](https://img.shields.io/pypi/v/launchdarkly-server-sdk-ai-langchain.svg)](https://pypi.org/project/launchdarkly-server-sdk-ai-langchain/)

This package provides LangChain integration for the LaunchDarkly Server-Side AI SDK, allowing you to use LangChain models and chains with LaunchDarkly's tracking and configuration capabilities.

## Installation

```bash
pip install launchdarkly-server-sdk-ai-langchain
```

You'll also need to install the LangChain provider packages for the models you want to use:

```bash
# For OpenAI
pip install langchain-openai

# For Anthropic
pip install langchain-anthropic

# For Google
pip install langchain-google-genai
```

## Quick Start

```python
import asyncio
from ldclient import LDClient, Config, Context
from ldai import init
from ldai.models import LDMessage
from ldai_langchain import LangChainProvider

# Initialize LaunchDarkly client
ld_client = LDClient(Config("your-sdk-key"))
ai_client = init(ld_client)

# Get AI configuration
context = Context.builder("user-123").build()
config = ai_client.config("ai-config-key", context, {})

async def main():
    # Create a LangChain provider from the AI configuration
    provider = await LangChainProvider.create(config)

    # Use the provider to invoke the model
    messages = [
        LDMessage(role="system", content="You are a helpful assistant."),
        LDMessage(role="user", content="Hello, how are you?"),
    ]

    response = await provider.invoke_model(messages)
    print(response.message.content)

asyncio.run(main())
```

## Usage

### Using LangChainProvider with the Create Factory

The simplest way to use the LangChain provider is with the static `create` factory method, which automatically creates the appropriate LangChain model based on your LaunchDarkly AI configuration:

```python
from ldai_langchain import LangChainProvider

# Create provider from AI configuration
provider = await LangChainProvider.create(ai_config)

# Invoke the model
response = await provider.invoke_model(messages)
```

### Using an Existing LangChain Model

If you already have a LangChain model configured, you can use it directly:

```python
from langchain_openai import ChatOpenAI
from ldai_langchain import LangChainProvider

# Create your own LangChain model
llm = ChatOpenAI(model="gpt-4", temperature=0.7)

# Wrap it with LangChainProvider
provider = LangChainProvider(llm)

# Use with LaunchDarkly tracking
response = await provider.invoke_model(messages)
```

### Structured Output

The provider supports structured output using LangChain's `with_structured_output`:

```python
response_structure = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number"},
    },
    "required": ["sentiment", "confidence"],
}

result = await provider.invoke_structured_model(messages, response_structure)
print(result.data)  # {"sentiment": "positive", "confidence": 0.95}
```

### Tracking Metrics

Use the provider with LaunchDarkly's tracking capabilities:

```python
# Get the AI config with tracker
config = ai_client.config("ai-config-key", context, {})

# Create provider
provider = await LangChainProvider.create(config)

# Track metrics automatically
async def invoke():
    return await provider.invoke_model(messages)

response = await config.tracker.track_metrics_of(
    invoke,
    lambda r: r.metrics
)
```

### Static Utility Methods

The `LangChainProvider` class provides several utility methods:

#### Converting Messages

```python
from ldai.models import LDMessage
from ldai_langchain import LangChainProvider

messages = [
    LDMessage(role="system", content="You are helpful."),
    LDMessage(role="user", content="Hello!"),
]

# Convert to LangChain messages
langchain_messages = LangChainProvider.convert_messages_to_langchain(messages)
```
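
The converted messages are standard LangChain message objects, so you can pass them to any LangChain chat model directly. A minimal sketch, assuming `langchain-openai` is installed and an OpenAI API key is configured (the model name is illustrative):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice

# LangChain chat models accept a list of message objects directly
response = await llm.ainvoke(langchain_messages)
print(response.content)
```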

#### Extracting Metrics

```python
from ldai_langchain import LangChainProvider

# After getting a response from LangChain
metrics = LangChainProvider.get_ai_metrics_from_response(ai_message)
print(f"Success: {metrics.success}")
print(f"Tokens used: {metrics.usage.total if metrics.usage else 'N/A'}")
```

#### Provider Name Mapping

```python
# Map LaunchDarkly provider names to LangChain provider names
langchain_provider = LangChainProvider.map_provider("gemini")  # Returns "google-genai"
```

## API Reference

### LangChainProvider

#### Constructor

```python
LangChainProvider(llm: BaseChatModel, logger: Optional[Any] = None)
```
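
For example, a minimal sketch that wraps a model and passes a logger from Python's standard `logging` module (both the model and the logger name are illustrative):

```python
import logging

from langchain_openai import ChatOpenAI
from ldai_langchain import LangChainProvider

logger = logging.getLogger("ldai_langchain")  # illustrative logger name

# Wrap an existing LangChain model; the logger argument is optional
provider = LangChainProvider(ChatOpenAI(model="gpt-4"), logger=logger)
```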

#### Static Methods

- `create(ai_config: AIConfigKind, logger: Optional[Any] = None) -> LangChainProvider` - Factory method that creates a provider from an AI configuration
- `convert_messages_to_langchain(messages: List[LDMessage]) -> List[BaseMessage]` - Convert LaunchDarkly messages to LangChain messages
- `get_ai_metrics_from_response(response: AIMessage) -> LDAIMetrics` - Extract metrics from a LangChain response
- `map_provider(ld_provider_name: str) -> str` - Map LaunchDarkly provider names to LangChain names
- `create_langchain_model(ai_config: AIConfigKind) -> BaseChatModel` - Create a LangChain model from an AI configuration

#### Instance Methods

- `invoke_model(messages: List[LDMessage]) -> ChatResponse` - Invoke the model with messages
- `invoke_structured_model(messages: List[LDMessage], response_structure: Dict[str, Any]) -> StructuredResponse` - Invoke with structured output
- `get_chat_model() -> BaseChatModel` - Get the underlying LangChain model
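
Because `get_chat_model` returns the wrapped `BaseChatModel`, you can drop down to LangChain-native features the provider does not expose directly. A sketch using LangChain's standard `astream` interface (note: calls made this way bypass `invoke_model`, so any LaunchDarkly metrics tracking is up to you):

```python
# Get the underlying LangChain model from an existing provider
llm = provider.get_chat_model()

# Stream tokens with LangChain's standard astream API
async for chunk in llm.astream("Tell me a short joke"):
    print(chunk.content, end="", flush=True)
```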

## Documentation

For full documentation, please refer to the [LaunchDarkly AI SDK documentation](https://docs.launchdarkly.com/sdk/ai/python).