
Commit 74bb47a

suekou and jjmachan authored
docs: fix _arize.md (#1643)
Co-authored-by: Jithin James <[email protected]>
1 parent 74998f2 commit 74bb47a

File tree

2 files changed (+9, −14 lines)


docs/howtos/integrations/_arize.md

Lines changed: 8 additions & 14 deletions
@@ -78,26 +78,20 @@ An ideal test dataset should contain data points of high quality and diverse nat
 
 
 ```python
-from ragas.testset.generator import TestsetGenerator
-from ragas.testset.evolutions import simple, reasoning, multi_context
+from ragas.testset import TestsetGenerator
 from langchain_openai import ChatOpenAI, OpenAIEmbeddings
 
 TEST_SIZE = 25
 
 # generator with openai models
-generator_llm = ChatOpenAI(model="gpt-3.5-turbo-16k")
-critic_llm = ChatOpenAI(model="gpt-4")
+generator_llm = ChatOpenAI(model="gpt-4o-mini")
+critic_llm = ChatOpenAI(model="gpt-4o")
 embeddings = OpenAIEmbeddings()
 
 generator = TestsetGenerator.from_langchain(generator_llm, critic_llm, embeddings)
 
-# set question type distribution
-distribution = {simple: 0.5, reasoning: 0.25, multi_context: 0.25}
-
 # generate testset
-testset = generator.generate_with_llamaindex_docs(
-    documents, test_size=TEST_SIZE, distributions=distribution
-)
+testset = generator.generate_with_llamaindex_docs(documents, test_size=TEST_SIZE)
 test_df = testset.to_pandas()
 test_df.head()
 ```
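The hunk above drops the `distributions` argument: in the old API it mapped question evolutions (`simple`, `reasoning`, `multi_context`) to sampling probabilities that had to sum to 1.0. As a hedged pure-Python sketch of that invariant (the string keys and the helper are illustrative stand-ins for the old evolution objects, not part of the commit):

```python
# Illustrative stand-ins for the old evolution objects (hypothetical keys).
distribution = {"simple": 0.5, "reasoning": 0.25, "multi_context": 0.25}

def is_valid_distribution(dist, tol=1e-9):
    """Return True if all weights are non-negative and sum to 1.0."""
    total = sum(dist.values())
    return all(w >= 0 for w in dist.values()) and abs(total - 1.0) <= tol

print(is_valid_distribution(distribution))  # expected: True
```

In the new call, `generate_with_llamaindex_docs(documents, test_size=TEST_SIZE)` no longer takes this mapping at all.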
@@ -123,8 +117,8 @@ Build your query engine.
 
 
 ```python
-from llama_index import VectorStoreIndex, ServiceContext
-from llama_index.embeddings import OpenAIEmbedding
+from llama_index.core import VectorStoreIndex, ServiceContext
+from llama_index.embeddings.openai import OpenAIEmbedding
 
 
 def build_query_engine(documents):
@@ -144,7 +138,7 @@ If you check Phoenix, you should see embedding spans from when your corpus data
 
 
 ```python
-from phoenix.trace.dsl.helpers import SpanQuery
+from phoenix.trace.dsl import SpanQuery
 
 client = px.Client()
 corpus_df = px.Client().query_spans(
@@ -240,7 +234,7 @@ Ragas uses LangChain to evaluate your LLM application data. Let's instrument Lan
 
 
 ```python
-from phoenix.trace.langchain import LangChainInstrumentor
+from openinference.instrumentation.langchain import LangChainInstrumentor
 
 LangChainInstrumentor().instrument()
 ```
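Every hunk in this file moves an import path (`ragas.testset`, `llama_index.core`, `llama_index.embeddings.openai`, `openinference.instrumentation.langchain`). During a migration window, a guarded import keeps a notebook working against either package layout. A minimal sketch, not part of the commit; neither package is assumed to be installed:

```python
# Hedged sketch: resolve LangChainInstrumentor from the new path first,
# then fall back to the legacy path this commit removes.
try:
    # New path, per this commit's diff.
    from openinference.instrumentation.langchain import LangChainInstrumentor
    SOURCE = "openinference"
except ImportError:
    try:
        # Legacy path being replaced by this commit.
        from phoenix.trace.langchain import LangChainInstrumentor
        SOURCE = "phoenix"
    except ImportError:
        LangChainInstrumentor = None  # neither layout available
        SOURCE = None
```

Once resolved, `LangChainInstrumentor().instrument()` is called exactly as in the diff above.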

mkdocs.yml

Lines changed: 1 addition & 0 deletions
@@ -92,6 +92,7 @@ nav:
   - Integrations:
       - howtos/integrations/index.md
       - LlamaIndex: howtos/integrations/_llamaindex.md
+      - Arize: howtos/integrations/_arize.md
      - LangGraph: howtos/integrations/_langgraph_agent_evaluation.md
   - Migrations:
       - From v0.1 to v0.2: howtos/migrations/migrate_from_v01_to_v02.md
