feat: Added the docs for AI Features #63

Open · wants to merge 3 commits into main
2 changes: 1 addition & 1 deletion .gitignore
@@ -4,7 +4,7 @@
# generated content
.contentlayer
.content-collections
.source


# test & build
/coverage
82 changes: 82 additions & 0 deletions .source/index.ts

Large diffs are not rendered by default.

39 changes: 39 additions & 0 deletions .source/source.config.mjs
@@ -0,0 +1,39 @@
// source.config.ts
import {
  defineConfig,
  defineDocs,
  frontmatterSchema,
  metaSchema
} from "fumadocs-mdx/config";
import { remarkAdmonition } from "fumadocs-core/mdx-plugins";
var docs = defineDocs({
  // The root directory for all documentation
  dir: "content/docs",
  docs: {
    schema: frontmatterSchema
  },
  meta: {
    schema: metaSchema
  }
});
var releaseNotes = defineDocs({
  // The root directory for release notes
  dir: "content/release-notes",
  docs: {
    schema: frontmatterSchema
  },
  meta: {
    schema: metaSchema
  }
});
var source_config_default = defineConfig({
  mdxOptions: {
    remarkPlugins: [remarkAdmonition],
    rehypePlugins: []
  }
});
export {
  source_config_default as default,
  docs,
  releaseNotes
};
16 changes: 8 additions & 8 deletions app/mainPage.tsx
@@ -180,21 +180,21 @@ export default function MainPage() {
   {
     icon: <CodeIcon fontSize="inherit" />,
     color: "text-green-500",
-    title: "API Reference",
+    title: "AI Native Features",
     description:
-      "Comprehensive API documentation for integrating with Parseable programmatically.",
+      "AI Native features like Text to SQL, Summarization, Forecasting",
     links: [
       {
-        text: "Authentication",
-        href: `${baseUrl}api#authentication`,
+        text: "Text to SQL",
+        href: `${baseUrl}ai-features/text-to-sql`,
       },
       {
-        text: "Ingestion API",
-        href: `${baseUrl}api#log-ingestion`,
+        text: "Summarization",
+        href: `${baseUrl}ai-features/summarization`,
       },
       {
-        text: "Query API",
-        href: `${baseUrl}api#query-api`,
+        text: "Forecasting",
+        href: `${baseUrl}ai-features/forecasting`,
       },
     ],
   },
12 changes: 12 additions & 0 deletions content/docs/ai-features/_category_.json
@@ -0,0 +1,12 @@
{
"label": "AI Features",
"collapsible": true,
"collapsed": false,
"position": 5,
"link": {
"type": "generated-index",
"title": "AI Features",
"description": "AI Features available in Parseable."
},
"customProps": {}
}
52 changes: 52 additions & 0 deletions content/docs/ai-features/forecasting.mdx
@@ -0,0 +1,52 @@
---
title: "Ingestion Forecasting"
position: 3
---

<Callout type="info">
<EnterpriseBadge /> This feature requires an Enterprise license.
</Callout>

Parseable's AI-powered forecasting feature helps you predict future data volumes and optimize resource allocation by analyzing historical ingestion patterns.

## How It Works

The forecasting engine uses your recent log ingestion patterns to predict what's coming next. Enable it, and you'll see projected ingestion volumes right in your dashboards.

![Forecasting UI](../release-notes/static/forecast.png)

## Key Features

- **Extended Filter Support**: AI-based ingestion forecasting works with any filter you apply, allowing you to get automatic predictions for specific views (e.g., logs for a specific team or region)

- **Visualized Integration**: Forecasts appear directly in the Parseable Explore UI, so you can compare historical and predicted loads at a glance

- **Customizable Time Ranges**: Select different time ranges to see short-term or long-term forecasts based on your planning needs

## Benefits

- **Capacity Planning**: Anticipate storage and processing needs before they arise

- **Cost Optimization**: Allocate resources efficiently based on predicted usage patterns

- **Anomaly Detection**: Identify unusual spikes or drops in data volume that deviate from forecasted patterns

- **Proactive Management**: Move from reactive to proactive infrastructure management

## Use Cases

1. **Infrastructure Scaling**: Determine when to scale your infrastructure based on predicted log volume increases

2. **Budget Planning**: Forecast storage costs and resource requirements for financial planning

3. **Seasonal Pattern Analysis**: Identify cyclical patterns in your data to better understand your application behavior

4. **Anomaly Investigation**: Compare actual ingestion with forecasted values to quickly identify unexpected behavior
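For anomaly investigation, one way to pull the actual hourly ingestion volumes you would compare against the forecast is a time-bucketed count. This is a sketch: the `logs` dataset name and `timestamp` column are placeholders for your own stream, and the exact function support depends on Parseable's query engine.

```sql
-- Actual hourly event counts, to compare against forecasted values
SELECT date_trunc('hour', timestamp) AS hour, COUNT(*) AS events
FROM logs
GROUP BY date_trunc('hour', timestamp)
ORDER BY hour;
```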

## Configuration

Forecasting is enabled by default with your Enterprise license. You can configure forecast settings, including:

- Forecast horizon (how far into the future to predict)
- Confidence interval display
- Refresh frequency
42 changes: 42 additions & 0 deletions content/docs/ai-features/index.mdx
@@ -0,0 +1,42 @@
---
title: "AI Features"
---

import { IconChartCohort,IconDirections,IconCloudDataConnection } from '@tabler/icons-react';

Parseable ships a set of AI-native features that help you query, summarize, and forecast your data. Whether you're new to Parseable or looking to deepen your understanding, the pages below cover each feature, how it works, and how to configure it.

<Cards>

<Card href="/docs/ai-features/text-to-sql" icon={<IconChartCohort className="text-purple-600" />} title='Text to SQL'>
Convert natural language queries into SQL queries.
</Card>

<Card href="/docs/ai-features/summarization" icon={<IconDirections className="text-purple-600" />} title='Summarization of Datasets'>
Generate concise summaries of datasets.
</Card>

<Card href="/docs/ai-features/forecasting" icon={<IconCloudDataConnection className="text-purple-600" />} title='Forecasting for Ingestion'>
Predict future data volumes and optimize resource allocation.
</Card>

</Cards>

## Configure Your LLM

<Callout type="info">
All AI features require configuring an LLM provider before they can be used.
</Callout>

### Choosing Your LLM Provider

Parseable supports multiple AI models out of the box.

* Go to **Settings** > **AI Assistant**.
* Choose your preferred LLM provider (OpenAI GPT or Anthropic Claude).
* Add your API key.
* Save your preferences.

![LLM Configuration](../key-concepts/static/add-llm.png)

Once configured, all AI features will be available across your Parseable instance.
46 changes: 46 additions & 0 deletions content/docs/ai-features/summarization.mdx
@@ -0,0 +1,46 @@
---
title: "Dataset Summarization"
position: 1
---

<Callout type="info">
<EnterpriseBadge /> This feature requires an Enterprise license.
</Callout>

Parseable's AI-powered summarization feature simplifies data analysis and debugging by automatically generating concise overviews of your datasets.

## How It Works

1. Select any dataset within Parseable
2. Click on `Summarize my data` to generate a concise overview
3. The AI identifies key patterns, anomalies, and potential faults
4. Receive actionable recommendations and SQL queries to drill deeper

![Summarization UI](../release-notes/static/summarize-homepage.png)

## Benefits

- **Quick Insights**: Gain insights without manually combing through extensive datasets
- **Reduced Troubleshooting Time**: Pinpoint anomalies and root causes effortlessly
- **Simplified Collaboration**: Share clear, concise summaries across your team
- **Proactive Issue Resolution**: Leverage AI-driven recommendations to address issues before they escalate

## Example Use Case

When troubleshooting elevated error rates in your logs, the summarization feature can instantly highlight:

- Unusual spikes in errors between specific timestamps
- Affected services and hosts
- Suggested SQL queries to drill down further, such as:

```sql
SELECT host, COUNT(*) as error_count
FROM logs
WHERE status='error' AND timestamp BETWEEN '2025-07-20T00:00:00' AND '2025-07-20T06:00:00'
GROUP BY host
ORDER BY error_count DESC;
```

## Configuration

The summarization feature is available out of the box with your Enterprise license. No additional configuration is required beyond setting up your preferred LLM provider in the settings page.
100 changes: 100 additions & 0 deletions content/docs/ai-features/text-to-sql.mdx
@@ -0,0 +1,100 @@
---
title: "Text to SQL"
position: 2
---

<Callout type="info">
<EnterpriseBadge /> This feature requires an Enterprise license.
</Callout>

Parseable's LLM-based query builder generates SQL from natural language. The feature is available in Prism and can be accessed from the SQL editor. It can also help fix your queries by suggesting corrections based on what you have written.

## 1. Choosing Your LLM Provider

**Parseable supports multiple AI models out of the box.**

* Go to **Settings** > **AI Assistant**.
* Choose your preferred LLM provider (e.g., OpenAI GPT, Anthropic Claude).
* Add your API key.
* Save your preferences.

![](../key-concepts/static/add-llm.png)

You can change this any time to fit team policies, costs, or performance needs.

## 2. Generating SQL from Plain English

**How to:**

* In your SQL editor, look for the "Generate with AI" button at the bottom.
![](../key-concepts/static/generate-with-ai.png)

* Type your question or description in plain language.
**Examples:**

* `Show me all error logs for the last hour grouped by host`
* `Find top 5 most common user agents in the backend table`
* `Summarize response statuses per environment tag`

**What happens:**
The AI will generate a ready-to-run SQL query for your prompt. You can copy, run, or tweak the result.
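For example, a prompt like `Show me all error logs for the last hour grouped by host` might produce a query along these lines. The exact output depends on your dataset's schema; the `logs` table and the `level`, `host`, and `timestamp` columns here are illustrative:

```sql
-- Hypothetical result for: "Show me all error logs for the last hour grouped by host"
SELECT host, COUNT(*) AS error_count
FROM logs
WHERE level = 'error'
  AND timestamp >= NOW() - INTERVAL '1 hour'
GROUP BY host
ORDER BY error_count DESC;
```

Treat the generated SQL as a starting point: review the column names against your schema before running it.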

## 3. Using AI for Query Help

**Ask the AI anything in your SQL workflow:**

* **Write a new query:**
`Generate with AI` > "Show all 5xx errors grouped by host in the last hour."
* **Fix a query:**
Paste a broken query, then prompt, "Fix this query, it's giving a syntax error."
* **Explain a query:**
Ask, "Explain what this query does."
* **Tweak logic:**
"Can you add a filter for status = 500?"

The assistant uses your current datasets and history for context, so results are relevant.
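To make the fix flow concrete, here is a hypothetical before/after (the table and column names are illustrative): a query that fails because of a misspelled aggregate and a missing `GROUP BY`, followed by the kind of correction the assistant returns.

```sql
-- Broken: COUNT is misspelled, and host is not in a GROUP BY clause
SELECT host, CONT(*) AS requests
FROM logs
WHERE status = 500;

-- Corrected version the assistant might suggest
SELECT host, COUNT(*) AS requests
FROM logs
WHERE status = 500
GROUP BY host;
```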

## 4. Using Chat History

* All prompts and AI responses are saved automatically.
* Click the **History** tab in the assistant panel to view previous conversations.
* Rerun, reuse, or refine past queries from here.
* Useful for incident reviews, recurring analytics, and keeping a record of troubleshooting steps.

![](../key-concepts/static/chat-history.png)

## 5. The Library

* Save any query you want to reuse in the **Library**.
* From the Library pane, you can:
* Run saved queries directly
* Edit or improve them with the AI assistant
* Ask the assistant to explain saved queries

The Library is searchable and can be personal or shared with your team.

![](../key-concepts/static/query-library.png)

## 6. Let AI Fix Your Query

**When a query fails:**

* Click "Fix with AI" on the error message or use the AI assistant with your broken SQL.
* The AI analyzes your schema and query, returning a corrected version (e.g., fixing wrong field names, aggregation functions, or syntax).

This is especially useful in high-pressure situations or when exploring unfamiliar datasets.

![](../key-concepts/static/fix.png)

## Example Workflow

1. **Troubleshoot an issue:**
You notice a spike in latency. Type a plain English prompt describing what you need.
2. **Get a query:**
AI generates the SQL for you.
3. **Edit and run:**
Tweak or run the query.
4. **Query fails?**
Use "Fix with AI" to automatically correct it.
5. **Save to Library:**
Store your working query for future reuse, and ask the AI to explain it for documentation.
1 change: 1 addition & 0 deletions content/docs/meta.json
@@ -1,6 +1,7 @@
{
"title": "Documentation",
"pages": [
"ai-features",
"---Overview---",
"introduction",
"quickstart",