
Commit 02c34a1

Merge pull request #108 from ks6088ts-labs/feature/issue-107_promptflow-langchain

add flex flow with LangChain sample

2 parents df00d51 + 5286387

File tree

5 files changed: +123 -25 lines changed


apps/11_promptflow/README.md

Lines changed: 61 additions & 25 deletions
@@ -36,8 +36,42 @@ $ pip install -r requirements.txt
 
 [Prompt flow > Quick start](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html) provides a quick start guide to Prompt flow.
 Some of the examples are extracted from [github.com/microsoft/promptflow/examples](https://github.com/microsoft/promptflow/tree/main/examples) to guide you through the basic usage of Prompt flow.
 
+**Set up connection**
+
+```shell
+$ cd apps/11_promptflow
+
+# List connections
+$ pf connection list
+
+# Set parameters
+$ AZURE_OPENAI_KEY=<your_api_key>
+$ AZURE_OPENAI_ENDPOINT=<your_api_endpoint>
+$ CONNECTION_NAME=open_ai_connection
+
+# Delete connection (if needed)
+$ pf connection delete \
+    --name $CONNECTION_NAME
+
+# Create connection
+$ pf connection create \
+    --file connection_azure_openai.yaml \
+    --set api_key=$AZURE_OPENAI_KEY \
+    --set api_base=$AZURE_OPENAI_ENDPOINT \
+    --name $CONNECTION_NAME
+
+# Show connection
+$ pf connection show \
+    --name $CONNECTION_NAME
+```
+
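For reference, the same connection can be created from Python with the Prompt flow SDK. This is only a sketch: it assumes `connection_azure_openai.yaml` defines a custom connection whose fields are `api_key`, `api_base`, and `api_version`, which is how `main.py` in this commit reads the connection.

```python
# Sketch: create/update the connection via the SDK instead of the pf CLI.
# Field names mirror what main.py reads (secrets["api_key"], configs["api_base"],
# configs["api_version"]); the repository's actual YAML may differ.
from promptflow.client import PFClient
from promptflow.entities import CustomConnection

pf = PFClient()
connection = CustomConnection(
    name="open_ai_connection",
    secrets={"api_key": "<your_api_key>"},
    configs={
        "api_base": "<your_api_endpoint>",
        "api_version": "2024-06-01",  # assumed; use a version your Azure OpenAI resource supports
    },
)
pf.connections.create_or_update(connection)
print(pf.connections.get(name="open_ai_connection"))
```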
 
 ### [chat_minimal](https://github.com/microsoft/promptflow/tree/main/examples/flex-flows/chat-minimal)
 
+A chat flow defined using a function, with minimal code. It demonstrates the least code needed to build a chat flow.
+
+Prompt flow's tracing feature lets you trace the conversation through the flow; this example shows how it is used.
+Details are available in [Tracing](https://microsoft.github.io/promptflow/how-to-guides/tracing/index.html).
+
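For orientation, a function-based ("flex") chat flow really can be this small. The sketch below is illustrative rather than the upstream sample's exact code; it assumes the `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION` environment variables are set and that a `gpt-4o` deployment exists.

```python
# Illustrative minimal chat flow (not the upstream chat-minimal source).
# Assumes AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY and OPENAI_API_VERSION are set
# and that a deployment named "gpt-4o" exists.
from langchain_openai import AzureChatOpenAI
from promptflow.tracing import start_trace, trace


@trace
def chat(question: str) -> str:
    """Answer one question; @trace records the call in the local trace UI."""
    llm = AzureChatOpenAI(model="gpt-4o", temperature=0)
    return llm.invoke(question).content


if __name__ == "__main__":
    start_trace()  # start collecting traces before the flow is invoked
    print(chat("What's 2+2?"))
```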
 **Run as normal Python script**
 
 ```shell
@@ -67,6 +101,8 @@ $ pf run create \
     --stream
 ```
 
+`--column-mapping` is used to map the data in the JSONL file to the flow. For more details, refer to [Use column mapping](https://microsoft.github.io/promptflow/how-to-guides/run-and-evaluate-a-flow/use-column-mapping.html).
+
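The same batch run can also be created from Python; a sketch with the promptflow SDK, assuming it is executed from the flow directory:

```python
# Sketch: SDK equivalent of `pf run create` with a column mapping.
# "${data.question}" pulls the "question" field from each line of data.jsonl.
from promptflow.client import PFClient

pf = PFClient()
run = pf.run(
    flow=".",                                         # flow directory, as in the CLI example
    data="./data.jsonl",                              # one JSON object per line
    column_mapping={"question": "${data.question}"},  # feed each row's question to the flow
)
print(pf.get_details(run))                            # row-level view, like `pf run show-details`
```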
 ### playground_chat
 
 ```shell
@@ -79,30 +115,6 @@ $ pf flow init \
 
 $ cd playground_chat
 
-# Set parameters
-$ CONNECTION_NAME=open_ai_connection
-$ AZURE_OPENAI_KEY=<your_api_key>
-$ AZURE_OPENAI_ENDPOINT=<your_api_endpoint>
-
-# List connections
-$ pf connection list
-
-# Delete connection (if needed)
-$ pf connection delete \
-    --name $CONNECTION_NAME
-
-# Create connection
-$ pf connection create \
-    --file azure_openai.yaml \
-    --set api_key=$AZURE_OPENAI_KEY \
-    --set api_base=$AZURE_OPENAI_ENDPOINT \
-    --name $CONNECTION_NAME
-
-# Show connection
-$ pf connection show \
-    --name $CONNECTION_NAME
-
 # Interact with chat flow
 $ pf flow test \
     --flow . \
@@ -230,14 +242,38 @@ $ pf run create \
 $ pf run show-details --name $RUN_NAME
 ```
 
+[Tutorial: How prompt flow helps on quality improvement](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/flow-fine-tuning-evaluation/promptflow-quality-improvement.md) provides a detailed guide on how to use Prompt flow to improve the quality of your LLM applications.
+
 ### [eval-chat-math](https://github.com/microsoft/promptflow/tree/main/examples/flows/evaluation/eval-chat-math)
 
 This example shows how to evaluate the answers to math questions by comparing the output numerically with the standard answers.
 Details are available in [eval-chat-math/README.md](./eval-chat-math/README.md).
 To understand how to operate the flow in VS Code, refer to [Build your high quality LLM apps with Prompt flow](https://www.youtube.com/watch?v=gcIe6nk2gA4).
 This video shows how to evaluate the answers to math questions and guides you through tuning the prompts using variants.
 
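The core idea of such an evaluation can be sketched as a tiny scoring function (illustrative only, not the sample's actual code): parse both the flow's answer and the ground truth as numbers and score them by numeric equality.

```python
# Illustrative numeric evaluator in the spirit of eval-chat-math (not its actual code).
def numeric_match(answer: str, groundtruth: str) -> float:
    """Return 1.0 when both strings parse to the same number, 0.0 otherwise."""
    try:
        return 1.0 if abs(float(answer) - float(groundtruth)) < 1e-6 else 0.0
    except ValueError:
        return 0.0


if __name__ == "__main__":
    print(numeric_match("4", "4.0"))   # 1.0
    print(numeric_match("five", "5"))  # 0.0
```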

-<!-- TODO: rag, tracing, deployments -->
+### flex_flow_langchain
+
+To guide you through working with LangChain, we provide an example flex flow that wraps a LangChain `AzureChatOpenAI` chat model in a `LangChainRunner` class.
+
+```shell
+$ cd apps/11_promptflow/flex_flow_langchain
+$ pf flow test \
+    --flow main:LangChainRunner \
+    --inputs question="What's 2+2?" \
+    --init custom_connection=open_ai_connection
+
+$ RUN_NAME=flex_flow_langchain-$(date +%s)
+$ pf run create \
+    --name $RUN_NAME \
+    --flow . \
+    --data ./data.jsonl \
+    --column-mapping question='${data.question}' \
+    --stream
+
+$ pf run show-details --name $RUN_NAME
+```
+
+<!-- TODO: rag, deployments -->
 
 ## References
 
File renamed without changes.
apps/11_promptflow/flex_flow_langchain/data.jsonl

Lines changed: 2 additions & 0 deletions

@@ -0,0 +1,2 @@
+{"question": "What's 4+4?"}
+{"question": "What's 4x4?"}
apps/11_promptflow/flex_flow_langchain/flow.flex.yaml

Lines changed: 8 additions & 0 deletions

@@ -0,0 +1,8 @@
+$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
+entry: main:LangChainRunner
+sample:
+  inputs:
+    input: What's 2+2?
+    prediction: What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.
+init:
+  custom_connection: open_ai_connection
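Here `entry` points at the `LangChainRunner` class in `main.py`, `sample` provides default test inputs, and `init` supplies the constructor argument. A sketch of the SDK counterpart of `pf flow test --flow main:LangChainRunner ... --init custom_connection=open_ai_connection`, assuming your promptflow version supports the `init` argument of `PFClient.test` and the connection from the README exists:

```python
# Sketch: test the flex flow from Python instead of the pf CLI.
# Passing the resolved connection object mirrors what main.py does in __main__;
# `init` support in PFClient.test is assumed here.
from promptflow.client import PFClient

pf = PFClient()
result = pf.test(
    flow="main:LangChainRunner",           # same entry point as flow.flex.yaml
    inputs={"question": "What's 2+2?"},    # becomes LangChainRunner.__call__(question=...)
    init={"custom_connection": pf.connections.get(name="open_ai_connection")},
)
print(result)
```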
apps/11_promptflow/flex_flow_langchain/main.py

Lines changed: 52 additions & 0 deletions

@@ -0,0 +1,52 @@
+from dataclasses import dataclass
+
+from langchain_openai import AzureChatOpenAI
+from promptflow.client import PFClient
+from promptflow.connections import CustomConnection
+from promptflow.tracing import trace
+
+
+@dataclass
+class Result:
+    answer: str
+
+
+class LangChainRunner:
+    def __init__(self, custom_connection: CustomConnection):
+        # https://python.langchain.com/v0.2/docs/integrations/chat/azure_chat_openai/
+        self.llm = AzureChatOpenAI(
+            temperature=0,
+            api_key=custom_connection.secrets["api_key"],
+            api_version=custom_connection.configs["api_version"],
+            azure_endpoint=custom_connection.configs["api_base"],
+            model="gpt-4o",
+        )
+
+    @trace
+    def __call__(
+        self,
+        question: str,
+    ) -> Result:
+        response = self.llm.invoke(
+            [
+                (
+                    "system",
+                    "You are asking me to do some math, I can help with that.",
+                ),
+                ("human", question),
+            ],
+        )
+        return Result(answer=response.content)
+
+
+if __name__ == "__main__":
+    from promptflow.tracing import start_trace
+
+    start_trace()
+    pf = PFClient()
+    connection = pf.connections.get(name="open_ai_connection")
+    runner = LangChainRunner(custom_connection=connection)
+    result = runner(
+        question="What's 2+2?",
+    )
+    print(result)
