This repository was archived by the owner on Oct 22, 2023. It is now read-only.
4 changes: 4 additions & 0 deletions .gitignore
@@ -48,4 +48,8 @@ var/
# pytest
*pytest_cache

# Credentials
key_openai.txt

# Models saved locally
models/
44 changes: 44 additions & 0 deletions README.md
@@ -36,6 +36,50 @@ To run REMO, you will need the following:
2. Interact with the API using a REST client or web browser: `http://localhost:8000`


## Models

### Embedding Model

REMO currently uses the Universal Sentence Encoder v5 for generating embeddings.
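USE v5 maps each sentence to a 512-dimensional vector, and such vectors are typically compared by cosine similarity. As a minimal illustration (the helper below is not part of REMO, and the toy 3-dimensional vectors stand in for real 512-dimensional embeddings):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of the
    # Euclidean norms; 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for 512-dimensional USE embeddings.
similar = cosine_similarity([0.9, 0.1, 0.0], [1.0, 0.0, 0.0])
orthogonal = cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

Semantically close sentences yield vectors with similarity near 1; unrelated ones score near 0.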

#### Loading from TensorFlow Hub

This is the default option.

When `utils.py` contains

```python
ARE_YOU_TESTING__LOAD_MODEL_LOCAL = False
```

the model is loaded from TensorFlow Hub.

#### Loading from a local file

Downloading the model from TensorFlow Hub every time you spin up the microservice is slow and bandwidth-intensive. To load it from a local copy instead:

1. Download the `.tar.gz` file from
TensorFlow Hub: https://tfhub.dev/google/universal-sentence-encoder-large/5

![TensorFlow Hub download page for universal-sentence-encoder-large v5](docs/images/embedding_local_1.png)

2. Extract the archive into the folder
```
models/universal-sentence-encoder-large_5/
```
for example with
```shell
mkdir -p models/universal-sentence-encoder-large_5
tar -xvzf universal-sentence-encoder-large_5.tar.gz -C models/universal-sentence-encoder-large_5
```

3. Set
```python
ARE_YOU_TESTING__LOAD_MODEL_LOCAL = True
```
in `utils.py`.
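Conceptually, the flag only has to pick which handle is handed to the loader, since TensorFlow Hub's `hub.load()` accepts both tfhub.dev URLs and local directories. A minimal sketch (the helper name and constants below are illustrative, not REMO's actual identifiers):

```python
# Hypothetical sketch of the switch in utils.py; names are illustrative.
TFHUB_URL = "https://tfhub.dev/google/universal-sentence-encoder-large/5"
LOCAL_DIR = "models/universal-sentence-encoder-large_5/"

ARE_YOU_TESTING__LOAD_MODEL_LOCAL = True

def model_handle(load_local: bool) -> str:
    # hub.load() accepts either a tfhub.dev URL or a local directory,
    # so the flag just selects which handle to pass.
    return LOCAL_DIR if load_local else TFHUB_URL

handle = model_handle(ARE_YOU_TESTING__LOAD_MODEL_LOCAL)
# The actual load would then be: embed = hub.load(handle)
```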

## API Endpoints

- **POST /add_message**: Add a new message to REMO. Speaker, timestamp, and content required.
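Because the parameters are simple types (`str`, `float`), FastAPI reads them from the query string rather than the request body. A client call can be sketched as follows (URL and values are illustrative):

```python
from urllib.parse import urlencode

# Build the request URL for POST /add_message; FastAPI reads
# simple-typed parameters from the query string.
params = {"message": "hello", "speaker": "user", "timestamp": 1684000000.0}
url = "http://localhost:8000/add_message?" + urlencode(params)
# POST this URL with any HTTP client, e.g.:
#   curl -X POST "http://localhost:8000/add_message?message=hello&speaker=user&timestamp=1684000000.0"
```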
Binary file added docs/images/embedding_local_1.png
25 changes: 19 additions & 6 deletions remo.py
@@ -1,42 +1,55 @@
 from fastapi import FastAPI
 import utils
 import os
 import uvicorn
 
 app = FastAPI()
 root_folder = os.getcwd()
-#root_folder = 'C:/raven_private/REMO/'
+# root_folder = 'C:/raven_private/REMO/'
 max_cluster_size = 5
 # REMO = Rolling Episodic Memory Organizer
 
 
 @app.post("/add_message")
-async def add_message(message: str, speaker: str, timestamp: float):
+async def add_message(
+    message: str,
+    speaker: str,
+    timestamp: float,
+):
     # Add message to REMO
     new_message = utils.create_message(message, speaker, timestamp)
-    print('\n\nADD MESSAGE -', new_message)
+    print("\n\nADD MESSAGE -", new_message)
     utils.save_message(root_folder, new_message)
 
     return {"detail": "Message added"}
 
 
 @app.get("/search")
 async def search(query: str):
     # Search the tree for relevant nodes
-    print('\n\nSEARCH -', query)
+    print("\n\nSEARCH -", query)
     taxonomy = utils.search_tree(root_folder, query)
 
     return {"results": taxonomy}
 
 
 @app.post("/rebuild_tree")
 async def rebuild_tree():
     # Trigger full tree rebuilding event
-    print('\n\nREBUILD TREE')
+    print("\n\nREBUILD TREE")
     utils.rebuild_tree(root_folder, max_cluster_size)
 
     return {"detail": "Tree rebuilding completed"}
 
 
 @app.post("/maintain_tree")
 async def maintain_tree():
     # Trigger tree maintenance event
-    print('\n\nMAINTAIN TREE')
+    print("\n\nMAINTAIN TREE")
     utils.maintain_tree(root_folder)
 
     return {"detail": "Tree maintenance completed"}
 
 
 if __name__ == '__main__':
     uvicorn.run(app, host='0.0.0.0', port=8000)