
Commit d85f02b

docs: add langfuse notebook (#216)
How to use langfuse dashboard with ragas metrics
1 parent c1ed36e commit d85f02b

File tree

7 files changed: +737 −34 lines
(Two binary image files changed, 204 KB and 303 KB; not rendered in this view.)

docs/getstarted/evaluation.md

Lines changed: 1 addition & 1 deletion
@@ -15,9 +15,9 @@ pip install ragas
 ```
 
 Ragas also uses OpenAI for running some metrics so make sure you have your openai key ready and available in your environment
+
 ```python
 import os
-
 os.environ["OPENAI_API_KEY"] = "your-openai-key"
 ```
 ## The Data

docs/getstarted/monitoring.md

Lines changed: 24 additions & 32 deletions
@@ -1,35 +1,27 @@
 (get-started-monitoring)=
 # Monitoring
 
-Maintaining the quality and performance of an LLM application in a production environment can be challenging. Ragas provides a solution through production quality monitoring, offering valuable insights into your application's performance. This is achieved by constructing custom, smaller, more cost-effective, and faster models.
-
-[**Get Early Access**](https://calendly.com/shahules/30min)
-
-```{admonition} **Faithfulness**
-:class: note
-
-This feature assists in identifying and quantifying instances of hallucinations.
-```
-
-```{admonition} **Bad retrieval**
-:class: note
-
-This feature helps identify and quantify poor context retrievals.
-```
-
-```{admonition} **Bad response**
-:class: note
-
-This feature helps in recognizing and quantifying evasive, harmful, or toxic responses.
-```
-
-```{admonition} **Bad format**
-:class: note
-
-This feature helps in detecting and quantifying responses with incorrect formatting.
-```
-
-```{admonition} **Custom use-case**
-:class: hint
-
-For monitoring other critical aspects that are specific to your use case. [Talk to founders](https://calendly.com/shahules/30min)
+Maintaining the quality and performance of an LLM application in a production environment can be challenging. Ragas provides basic building blocks that you can use for production quality monitoring, offering valuable insights into your application's performance. This is achieved by constructing custom, smaller, more cost-effective, and faster models.
+
+:::{note}
+This feature is still in beta. You can request
+[**early access**](https://calendly.com/shahules/30min) to try it out.
+:::
+
+The Ragas metrics can also be used with other LLM observability tools like
+[Langsmith](https://www.langchain.com/langsmith) and
+[Langfuse](https://langfuse.com/) to get model-based feedback about various
+aspects of your application, like those mentioned below.
+
+:::{seealso}
+[Langfuse Integration](../howtos/integrations/langfuse.ipynb) to see Ragas
+monitoring in action within the Langfuse dashboard and how to set it up.
+:::
+
+## Aspects to Monitor
+
+1. Faithfulness: This feature assists in identifying and quantifying instances of hallucinations.
+2. Bad retrieval: This feature helps identify and quantify poor context retrievals.
+3. Bad response: This feature helps in recognizing and quantifying evasive, harmful, or toxic responses.
+4. Bad format: This feature helps in detecting and quantifying responses with incorrect formatting.
+5. Custom use-case: For monitoring other critical aspects that are specific to your use case. [Talk to founders](https://calendly.com/shahules/30min)
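
To make the model-based feedback concrete, here is a minimal sketch (not part of this commit) of how a traced production sample could be scored with Ragas metrics before the numbers are attached to a trace in a dashboard such as Langfuse. The sample data is illustrative, and it assumes a Ragas version that exposes `evaluate`, `faithfulness`, and `answer_relevancy` and expects `question`/`contexts`/`answer` columns.

```python
# Minimal sketch: score one production sample with Ragas metrics.
# Assumes `pip install ragas datasets` and that OPENAI_API_KEY is set,
# since these metrics call OpenAI under the hood.
from datasets import Dataset

from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness

# One traced request from your app: the user question, the retrieved
# contexts, and the generated answer (values here are made up).
sample = {
    "question": ["When was the first super bowl?"],
    "contexts": [
        ["The First AFL–NFL World Championship Game was played on January 15, 1967."]
    ],
    "answer": ["The first Super Bowl was held on January 15, 1967."],
}

result = evaluate(
    Dataset.from_dict(sample),
    metrics=[faithfulness, answer_relevancy],
)
print(result)  # per-metric scores for the sample
```

The resulting per-metric scores can then be reported back to the corresponding trace through the observability tool's own scoring API, as shown in the Langfuse notebook added by this commit.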

docs/howtos/integrations/index.md

Lines changed: 1 addition & 0 deletions
@@ -9,4 +9,5 @@ happy to look into it 🙂
 llamaindex.ipynb
 langchain.ipynb
 langsmith.ipynb
+langfuse.ipynb
 :::

0 commit comments
