Commit a35cb12

Remove datashaper strip code (#1581)
Remove datashaper
1 parent 58f646a commit a35cb12

File tree

151 files changed: +2033 additions, -4066 deletions

Lines changed: 4 additions & 0 deletions

@@ -0,0 +1,4 @@
+{
+    "type": "minor",
+    "description": "Remove old pipeline runner."
+}
Lines changed: 4 additions & 0 deletions

@@ -0,0 +1,4 @@
+{
+    "type": "minor",
+    "description": "Remove DataShaper (first steps)."
+}

dictionary.txt

Lines changed: 0 additions & 4 deletions

@@ -148,10 +148,6 @@ codebases
 # Microsoft
 MSRC
 
-# Broken Upstream
-# TODO FIX IN DATASHAPER
-Arrary
-
 # Prompt Inputs
 ABILA
 Abila

docs/examples_notebooks/index_migration.ipynb

Lines changed: 1 addition & 2 deletions

@@ -206,9 +206,8 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "from datashaper import NoopVerbCallbacks\n",
-    "\n",
     "from graphrag.cache.factory import create_cache\n",
+    "from graphrag.callbacks.noop_verb_callbacks import NoopVerbCallbacks\n",
     "from graphrag.index.flows.generate_text_embeddings import generate_text_embeddings\n",
     "\n",
     "# We only need to re-run the embeddings workflow, to ensure that embeddings for all required search fields are in place\n",

docs/index/architecture.md

Lines changed: 2 additions & 26 deletions

@@ -8,33 +8,9 @@ In order to support the GraphRAG system, the outputs of the indexing engine (in
 This model is designed to be an abstraction over the underlying data storage technology, and to provide a common interface for the GraphRAG system to interact with.
 In normal use-cases the outputs of the GraphRAG Indexer would be loaded into a database system, and the GraphRAG's Query Engine would interact with the database using the knowledge model data-store types.
 
-### DataShaper Workflows
-
-GraphRAG's Indexing Pipeline is built on top of our open-source library, [DataShaper](https://github.com/microsoft/datashaper).
-DataShaper is a data processing library that allows users to declaratively express data pipelines, schemas, and related assets using well-defined schemas.
-DataShaper has implementations in JavaScript and Python, and is designed to be extensible to other languages.
-
-One of the core resource types within DataShaper is a [Workflow](https://github.com/microsoft/datashaper/blob/main/javascript/schema/src/workflow/WorkflowSchema.ts).
-Workflows are expressed as sequences of steps, which we call [verbs](https://github.com/microsoft/datashaper/blob/main/javascript/schema/src/workflow/verbs.ts).
-Each step has a verb name and a configuration object.
-In DataShaper, these verbs model relational concepts such as SELECT, DROP, JOIN, etc.. Each verb transforms an input data table, and that table is passed down the pipeline.
-
-```mermaid
----
-title: Sample Workflow
----
-flowchart LR
-    input[Input Table] --> select[SELECT] --> join[JOIN] --> binarize[BINARIZE] --> output[Output Table]
-```
-
-### LLM-based Workflow Steps
-
-GraphRAG's Indexing Pipeline implements a handful of custom verbs on top of the standard, relational verbs that our DataShaper library provides. These verbs give us the ability to augment text documents with rich, structured data using the power of LLMs such as GPT-4. We utilize these verbs in our standard workflow to extract entities, relationships, claims, community structures, and community reports and summaries. This behavior is customizable and can be extended to support many kinds of AI-based data enrichment and extraction tasks.
-
-### Workflow Graphs
+### Workflows
 
 Because of the complexity of our data indexing tasks, we needed to be able to express our data pipeline as series of multiple, interdependent workflows.
-In the GraphRAG Indexing Pipeline, each workflow may define dependencies on other workflows, effectively forming a directed acyclic graph (DAG) of workflows, which is then used to schedule processing.
 
 ```mermaid
 ---
@@ -55,7 +31,7 @@ stateDiagram-v2
 The primary unit of communication between workflows, and between workflow steps is an instance of `pandas.DataFrame`.
 Although side-effects are possible, our goal is to be _data-centric_ and _table-centric_ in our approach to data processing.
 This allows us to easily reason about our data, and to leverage the power of dataframe-based ecosystems.
-Our underlying dataframe technology may change over time, but our primary goal is to support the DataShaper workflow schema while retaining single-machine ease of use and developer ergonomics.
+Our underlying dataframe technology may change over time, but our primary goal is to support the workflow schema while retaining single-machine ease of use and developer ergonomics.
 
 ### LLM Caching
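
The surviving architecture text still frames workflows as DataFrame-in, DataFrame-out steps. Here is a minimal sketch of that table-centric style; the step names (`normalize_text`, `count_tokens`) are hypothetical illustrations, not part of the graphrag API:

```python
import pandas as pd

# Each step takes a table and returns a table, keeping the pipeline
# data-centric; these step names are illustrative only.
def normalize_text(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["text"] = out["text"].str.strip().str.lower()
    return out

def count_tokens(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["n_tokens"] = out["text"].str.split().str.len()
    return out

# Steps compose in sequence; in the real pipeline, many such workflows
# form a dependency graph that is scheduled for processing.
docs = pd.DataFrame({"text": ["  Hello World ", "GraphRAG indexing"]})
result = count_tokens(normalize_text(docs))
```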

examples/README.md

Lines changed: 0 additions & 19 deletions
This file was deleted.

examples/__init__.py

Lines changed: 0 additions & 2 deletions
This file was deleted.

examples/custom_input/__init__.py

Lines changed: 0 additions & 2 deletions
This file was deleted.

examples/custom_input/pipeline.yml

Lines changed: 0 additions & 24 deletions
This file was deleted.

examples/custom_input/run.py

Lines changed: 0 additions & 46 deletions
This file was deleted.
