Commit a05062d

Merge pull request #5 from gmook9/dev
Fixed conversation memory logic + compiled executable for Windows
2 parents 1bdc78d + d643eda

File tree

6 files changed (+101, −21 lines)

.gitignore

Lines changed: 18 additions & 1 deletion

```diff
@@ -1,3 +1,20 @@
+# Virtual Environment
 venv/
 *.env
-__pycache__/
+
+# Python Bytecode
+__pycache__/
+*.pyc
+
+# PyInstaller Build Files
+/build/
+*.spec
+
+# Ignore everything in dist except the .exe file
+dist/*
+!dist/*.exe
+
+# Logs and OS-specific files
+*.log
+.DS_Store
+Thumbs.db
```

README.md

Lines changed: 42 additions & 9 deletions

````diff
@@ -5,26 +5,59 @@
 # Detective 9 Text Based Game
 
 ## Overview
-Detective Game is an interactive text-based game where you play the role of a detective. This sample is meant to show of Llama 3 from Meta running locally. The AI bot that is running on Llama 3 via Ollama assumes a random role related to a crime scenario. Your goal is to determine whether the AI bot is innocent or guilty through a series of questions.
+Detective Game is an interactive text-based game where you play the role of a detective. This sample is meant to show off Llama 3 from Meta running locally. The AI bot that is running on Llama 3 via Ollama assumes a random role related to a crime scenario. Your goal is to determine whether the AI bot is innocent or guilty through a series of questions.
 
 ## Info
 - **Llama 3**: Open source Large Language Model from Meta for generating AI responses.
 - **Ollama**: Platform for running Llama 3 locally.
 - **Rich**: Library for creating beautiful terminal outputs.
 
-## Requirements
-- Llama 3 model files (from Ollama)
-- `python-dotenv` for environment variable management
-- `rich` for terminal UI
-- `langchain_community` for Llama 3 integration
+### Requirements for Running the Source Code:
+- **Llama 3 model files**: Required to generate AI responses (download from Ollama).
+- **Python Libraries**:
+  - `python-dotenv`: For environment variable management.
+  - `rich`: For terminal UI.
+  - `langchain_community`: For Llama 3 integration.
+
+These can be installed by running:
+```bash
+pip install -r requirements.txt
 
 ## Ollama Download
-Download: https://ollama.com/download
+Download: https://ollama.com/download
 Tutorial: https://www.youtube.com/watch?v=Asleok-Snfs
 
 ## Startup
-The command `ollama run llama3` initializes and runs the Llama 3 model locally using the Ollama platform. Without this it will not generate AI responses.
 
-Run the command `python startGame.py` to start the game.
+### Method 1: Running the Source Code
 
+1. **Initialize Llama 3**:
+   - Run the command `ollama run llama3` to initialize and run the Llama 3 model locally using the Ollama platform. This is required for generating AI responses.
 
+2. **Install Dependencies**:
+   - Install the required Python packages by running:
+   ```bash
+   pip install -r requirements.txt
+   ```
+
+3. **Start the Game**:
+   - Run the following command to start the game:
+   ```bash
+   python startGame.py
+   ```
+
+### Method 2: Running the Executable
+
+1. **Initialize Llama 3**:
+   - Just like with the source code method, you need to run the Llama 3 model:
+   ```bash
+   ollama run llama3
+   ```
+
+2. **Run the Game**:
+   - Navigate to the `dist` folder where the executable is located.
+   - Double-click on `startGame.exe` to launch the game.
+
+Alternatively, you can run the executable via the command line:
+```bash
+./dist/startGame.exe
````
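The commit also checks in `dist/startGame.exe`, and the new `.gitignore` entries for `/build/` and `*.spec` match PyInstaller's output layout, so the Windows executable was presumably produced with PyInstaller. A plausible build invocation — an assumption, since the actual build command is not part of this diff:

```shell
# Hypothetical build steps (the diff does not record how the .exe was made):
pip install pyinstaller
pyinstaller --onefile startGame.py
# PyInstaller writes build/ and startGame.spec (both now gitignored)
# and places the bundled executable in dist/ (dist/startGame.exe on Windows).
```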

ai_bot.py

Lines changed: 24 additions & 9 deletions

```diff
@@ -1,22 +1,37 @@
 from random import choice
 from langchain_community.llms import Ollama
 
-# TO-DO FIX THIS
 class QuestionGenerator:
     def __init__(self, ai_bot):
         self.ai_bot = ai_bot
+        self.asked_questions = set()  # Track asked questions and responses
 
     def generate_random_question(self, case_synopsis):
-        # Use the provided case synopsis instead of generating a new one
+        # Prepare the context with past questions and responses
+        context = "\n".join([f"Question: {q}\nResponse: {r}" for q, r in self.ai_bot.conversation_history])
+
         question_prompt = (
             f"You are playing the role of a detective in a text-based detective game. Below is a fictional case synopsis:\n\n"
             f"'{case_synopsis}'\n\n"
-            f"Your task is to generate a question that is appropriate for this scenario. "
+            f"Here are the previous questions and responses:\n\n"
+            f"{context}\n\n"
+            f"Your task is to generate a new question that is appropriate for this scenario. "
+            f"Make sure that the question is not similar to any of the previous ones and avoid asking redundant information. "
             f"Remember, this is a fictional game setting, so avoid any references to real-life guidance or advice. "
             f"The question should relate only to the details provided in the synopsis and should aim to uncover more information about the case.\n"
             f"Please provide only the question."
         )
-        return self.ai_bot.llm.invoke(question_prompt)
+
+        # Generate a question
+        new_question = self.ai_bot.llm.invoke(question_prompt)
+
+        # Check if the question was already asked
+        if new_question in self.asked_questions:
+            return self.generate_random_question(case_synopsis)  # Retry if it's a repeat
+
+        # Store the new question
+        self.asked_questions.add(new_question)
+        return new_question
 
 class AIBot:
     def __init__(self):
@@ -30,15 +45,15 @@ def generate_synopsis(self):
         return self.llm.invoke(self.prompt)
 
     def respond(self, question):
-        self.conversation_history.append(f"Question: {question}")  # Add question to convo history
-        context = "\n".join(self.conversation_history)
+        self.conversation_history.append((question, self.llm.invoke(f"Respond to: {question}")))  # Add question and response to convo history
+        context = "\n".join([f"Question: {q}\nResponse: {r}" for q, r in self.conversation_history])
         response_prompt = (
             f"Role: {self.role}. You are being questioned. "
             f"Here is the context so far:\n{context}\n\n"
-            f"Now respond to the latest question: {question}. Only response in 1-2 setnences."
+            f"Now respond to the latest question: {question}. Only respond in 1-2 sentences."
        )
         response = self.llm.invoke(response_prompt)  # Get AI Response
-        self.conversation_history.append(f"Response: {response}")  # Add response to convo history
+        self.conversation_history[-1] = (question, response)  # Update with the actual response
 
         return response
 
@@ -47,4 +62,4 @@ def is_guilty(self):
 
     # Modify this method to accept synopsis as an argument
     def generate_random_question(self, case_synopsis):
-        return self.question_generator.generate_random_question(case_synopsis)
+        return self.question_generator.generate_random_question(case_synopsis)
```
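The revised memory logic above stores each turn as a `(question, response)` tuple, joins the history into a context string before every model call, and retries when a generated question repeats. A minimal, self-contained sketch of that pattern (a stub stands in for the Ollama LLM, and a bounded retry loop replaces the diff's unbounded recursion — both are illustrative assumptions, not the project's actual code):

```python
class StubLLM:
    """Stand-in for langchain's Ollama wrapper; returns canned answers."""
    def __init__(self, answers):
        self.answers = iter(answers)

    def invoke(self, prompt):
        return next(self.answers)


class Memory:
    def __init__(self):
        self.history = []   # list of (question, response) tuples, like conversation_history
        self.asked = set()  # dedupe generated questions, like asked_questions

    def context(self):
        # Same join pattern the diff uses before each model call
        return "\n".join(f"Question: {q}\nResponse: {r}" for q, r in self.history)

    def record(self, question, response):
        self.history.append((question, response))

    def next_question(self, llm, max_tries=3):
        # Bounded retry instead of recursing forever on a chatty model
        for _ in range(max_tries):
            q = llm.invoke("Generate a new question.\n" + self.context())
            if q not in self.asked:
                self.asked.add(q)
                return q
        return q  # give up after max_tries and accept a repeat


mem = Memory()
llm = StubLLM(["Where were you?", "Where were you?", "Did you know the victim?"])
q1 = mem.next_question(llm)
mem.record(q1, "At home.")
q2 = mem.next_question(llm)  # the duplicate is skipped; the next unique question is returned
```

The retry cap matters because an LLM can keep emitting near-identical questions; the diff's recursive retry has no such bound.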

dist/startGame.exe

37.1 MB
Binary file not shown.

game.py

Lines changed: 17 additions & 2 deletions

```diff
@@ -55,22 +55,37 @@ def ask_question(self):
         self.console.print("1. Random")
         self.console.print("2. Type my own\n")
         choice = self.console.input("[bold yellow]Enter your choice (1/2): [/bold yellow]").strip()
-
+
         if choice == "1":
             user_question = self.ai_bot.generate_random_question(self.synopsis)  # Pass the stored synopsis
         elif choice == "2":
             user_question = self.console.input("Type your question below: ").strip()
+
+            # Get the context from the conversation history
+            context = "\n".join([f"Question: {q}\nResponse: {r}" for q, r in self.ai_bot.conversation_history])
+
+            # Create a prompt for AI to respond considering the context
+            response_prompt = (
+                f"Role: {self.ai_bot.role}. You are being questioned. It is a text-based detective game. "
+                f"You know that you are {'guilty' if self.ai_bot.is_guilty() else 'innocent'}, but try not to give that away directly in your responses. "
+                f"Here is the context so far:\n{context}\n\n"
+                f"Now respond to the latest question: {user_question}. Only respond in 1-2 sentences."
+            )
+            response = self.ai_bot.llm.invoke(response_prompt)  # Get AI Response
+            self.ai_bot.conversation_history.append((user_question, response))  # Add the user question and AI response to convo history
+
         else:
             self.console.print("[bold red]Invalid choice. Please try again.[/bold red]\n")
             return self.ask_question()
 
         self.questions_asked += 1
         self.console.print("\n")
         self.console.print(Text(f"Question {self.questions_asked}: {user_question}", style="bold red"))
-        response = self.ai_bot.respond(user_question)
+        response = self.ai_bot.respond(user_question) if choice == "1" else response
         self.console.print(Panel(response, title=f"Response [{self.questions_asked}/{self.max_questions}]"))
         self.console.print("\n")
 
+
     def make_final_decision(self):
         self.console.print("\n[bold bright_magenta]Final Decision:[/bold bright_magenta]")
         self.console.print("1. Release")
```
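The typed-question branch above assembles its interrogation prompt inline, duplicating the context-joining pattern already present in `ai_bot.py`; one way to keep the two call sites from drifting apart is a shared helper. A sketch under that assumption (the helper name `build_response_prompt` and the sample data are hypothetical, not part of the diff):

```python
def build_response_prompt(role, is_guilty, history, question):
    """Assemble the interrogation prompt from the role, guilt flag, and (q, r) history."""
    context = "\n".join(f"Question: {q}\nResponse: {r}" for q, r in history)
    return (
        f"Role: {role}. You are being questioned. It is a text-based detective game. "
        f"You know that you are {'guilty' if is_guilty else 'innocent'}, "
        f"but try not to give that away directly in your responses. "
        f"Here is the context so far:\n{context}\n\n"
        f"Now respond to the latest question: {question}. Only respond in 1-2 sentences."
    )


# Hypothetical usage with sample data; in game.py the arguments would come
# from self.ai_bot.role, self.ai_bot.is_guilty(), and conversation_history.
prompt = build_response_prompt(
    role="butler",
    is_guilty=True,
    history=[("Where were you?", "In the kitchen.")],
    question="Did you touch the knife?",
)
```

Both `game.py` and `AIBot.respond` could then call this one function, so a wording change to the prompt happens in exactly one place.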

requirements.txt

1.49 KB
Binary file not shown.

0 commit comments