
Commit fdf00da

Merge branch 'main' into docs/0.6.0/router-updates
2 parents 8621437 + 70dfa23

File tree

15 files changed: +738 −554 lines

.python-version

Lines changed: 0 additions & 1 deletion
This file was deleted.

README.md

Lines changed: 3 additions & 3 deletions
@@ -71,11 +71,11 @@ To get started with RAG, either from scratch or using a popular framework like L
 | [/RAG/07_user_role_based_rag.ipynb](python-recipes/RAG/07_user_role_based_rag.ipynb) | Implement a simple RBAC policy with vector search using Redis |

 ### LLM Memory
-LLMs are stateless. To maintain context within a conversation, chat sessions must be stored and resent to the LLM. Redis manages the storage and retrieval of chat sessions to maintain context and conversational relevance.
+LLMs are stateless. To maintain context within a conversation, chat sessions must be stored and resent to the LLM. Redis manages the storage and retrieval of message histories to maintain context and conversational relevance.

 | Recipe | Description |
 | --- | --- |
-| [/llm-session-manager/00_session_manager.ipynb](python-recipes/llm-session-manager/00_llm_session_manager.ipynb) | LLM session manager with semantic similarity |
-| [/llm-session-manager/01_multiple_sessions.ipynb](python-recipes/llm-session-manager/01_multiple_sessions.ipynb) | Handle multiple simultaneous chats with one instance |
+| [/llm-message-history/00_message_history.ipynb](python-recipes/llm-message-history/00_llm_message_history.ipynb) | LLM message history with semantic similarity |
+| [/llm-message-history/01_multiple_sessions.ipynb](python-recipes/llm-message-history/01_multiple_sessions.ipynb) | Handle multiple simultaneous chats with one instance |

 ### Semantic Cache
 An estimated 31% of LLM queries are potentially redundant ([source](https://arxiv.org/pdf/2403.02694)). Redis enables semantic caching to help cut down on LLM costs quickly.

java-recipes/README.md

Lines changed: 57 additions & 0 deletions
@@ -20,6 +20,63 @@
 </div>
 <br>

## Setup

This project uses Docker Compose to set up a complete environment for running Java-based AI applications with Redis. The environment includes the following (a sketch of how these pieces typically fit together appears after the list):

- A Jupyter Notebook server with Java kernel support
- Redis Stack (includes Redis and RedisInsight)
- Pre-installed dependencies for AI/ML workloads
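
For orientation, a Compose file for an environment like this is typically shaped as follows. This is a minimal sketch, not the repository's actual file: the service names, images, and build details are assumptions, and `docker-compose.yml` in this directory is the source of truth. Only the ports and the `redis-java` hostname are taken from the instructions below.

```yaml
# Hypothetical sketch -- see the repository's docker-compose.yml for the real definition.
services:
  jupyter:
    build: .                               # image providing a Jupyter server with a Java kernel
    ports:
      - "8888:8888"                        # Jupyter Notebook UI
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}   # read from .env
    depends_on:
      - redis-java
  redis-java:
    image: redis/redis-stack:latest        # Redis Stack bundles Redis and RedisInsight
    ports:
      - "6379:6379"                        # Redis
      - "8001:8001"                        # RedisInsight UI
```
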
### Prerequisites

- [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/)
- OpenAI API key (for notebooks that use OpenAI services)

### Environment Configuration

1. Create a `.env` file in the project root with your OpenAI API key:

   ```bash
   OPENAI_API_KEY=your_openai_api_key_here
   ```
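
   Docker Compose reads `.env` from the project directory automatically. As a quick sanity check (this assumes the compose file interpolates `${OPENAI_API_KEY}`), you can render the resolved configuration and confirm the key was picked up:

   ```bash
   # Prints the resolved Compose config; the key should appear substituted.
   docker-compose config | grep OPENAI_API_KEY
   ```
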
## Running the Project

1. Clone the repository (if you haven't already):

   ```bash
   git clone https://github.com/redis-developer/redis-ai-resources.git
   cd redis-ai-resources/java-recipes
   ```
2. Start the Docker containers:

   ```bash
   docker-compose up -d
   ```
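
   To verify that both services came up, list the running containers (the exact service names come from the repository's `docker-compose.yml`):

   ```bash
   # Both services should report an "Up" state.
   docker-compose ps
   ```
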
3. Access the Jupyter environment:
   - Open your browser and navigate to [http://localhost:8888](http://localhost:8888)
   - The token is usually shown in the docker-compose logs. You can view them with:

     ```bash
     docker-compose logs jupyter
     ```
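
   Jupyter usually prints a login URL with the token embedded, so filtering the logs is a quick way to find it (the `token=` pattern is the common format, not guaranteed):

   ```bash
   # Look for a line like http://127.0.0.1:8888/?token=...
   docker-compose logs jupyter | grep "token="
   ```
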
4. Access RedisInsight:
   - Open your browser and navigate to [http://localhost:8001](http://localhost:8001)
   - Connect to Redis using the following details:
     - Host: redis-java
     - Port: 6379
     - No password (unless configured)
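
   To confirm Redis itself is reachable, you can ping it from inside the container. This assumes the Compose service is named `redis-java`, matching the host above:

   ```bash
   # Expect "PONG" if the server is up; redis-cli ships with the Redis Stack image.
   docker-compose exec redis-java redis-cli ping
   ```
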
5. When finished, stop the containers:

   ```bash
   docker-compose down
   ```
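
   If you also want to discard any data Redis persisted in named volumes (if the compose file defines any), add `-v`:

   ```bash
   # Removes containers and named volumes, deleting any persisted Redis data.
   docker-compose down -v
   ```
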
## Notebooks

| Notebook | Description |
