200 changes: 200 additions & 0 deletions chatbot-app/README.md
@@ -0,0 +1,200 @@
# AI Chat Application with Load Testing

![Chat Interface](./client/public/chat-interface.png)
*AI Chat Interface with real-time responses*

## Overview
This is a full-stack application featuring an LLM-powered chat interface, built with React and FastAPI. It includes built-in load testing to evaluate performance under various conditions.

### Key Features
- 🤖 AI-powered chat interface
- 🎨 Modern UI with Tailwind CSS
- 📊 Built-in load testing capabilities
- 🔄 Real-time response handling
- 🌐 FastAPI backend with async support

## Prerequisites
- Python 3.8+
- Node.js 18.x+
- npm or yarn
- A Databricks workspace (for AI model serving)

## Environment Setup

1. Clone the repository:

```bash
git clone <repository-url>
```

2. Create and activate a Python virtual environment:

```bash
python -m venv venv
source venv/bin/activate # On Windows: .\venv\Scripts\activate
```

3. Install Python dependencies:

```bash
pip install -r requirements.txt
```
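The backend loads its configuration from a `.env` file via python-dotenv (see the project structure below). A minimal sketch — the endpoint name is a placeholder for a serving endpoint in your own workspace:

```
SERVING_ENDPOINT_NAME=your-serving-endpoint
```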

## Building the Frontend

1. Navigate to the client directory:

```bash
cd client
```

2. Install dependencies:

```bash
npm install
```

3. Build the production version:

```bash
npm run build
```

## Running the Application

1. For development with hot-reload:

```bash
# Terminal 1 - Frontend (dev server with hot-reload)
cd client
npm start

# Terminal 2 - Backend (restarts on code changes)
hypercorn app:app --bind 127.0.0.1:8000 --reload
```

2. For production:

```bash
hypercorn app:app --bind 127.0.0.1:8000
```

3. For Databricks Apps deployment:

a. Install the Databricks CLI:
```bash
brew install databricks
```

b. Create the app in your workspace:
```bash
databricks apps create chat-app
```


c. Create an `app.yaml` file in the root directory:

```yaml
command:
  - "hypercorn"
  - "app:app"
  - "--bind"
  - "127.0.0.1:$DATABRICKS_APP_PORT"

env:
  - name: "SERVING_ENDPOINT_NAME"
    valueFrom: "serving_endpoint"
```

The `app.yaml` configuration uses Hypercorn as the ASGI server to run the FastAPI application. Databricks Apps supplies the port to bind to through the `$DATABRICKS_APP_PORT` environment variable, so the bind address should not hard-code a port. The `env` section maps `SERVING_ENDPOINT_NAME` to the `serving_endpoint` resource configured during app creation in Databricks, securely storing and accessing sensitive values.

For details on how to create an app in Databricks, refer to the [Databricks Apps Documentation](https://docs.databricks.com/en/dev-tools/databricks-apps/configuration.html).


d. Sync your local files to Databricks workspace:
```bash
# Add node_modules/ and venv/ to .gitignore first if not already present
databricks sync --watch . /Workspace/Users/<your-email>/chat-app
```

e. Deploy the app:
```bash
databricks apps deploy chat-app --source-code-path /Workspace/Users/<your-email>/chat-app
```

The application will be available at your Databricks Apps URL:
- Production URL: https://chat-app-[id].cloud.databricksapps.com


## Load Testing

The application includes built-in load testing capabilities. To run a load test:

### Local App Testing
```bash
curl "http://localhost:8000/api/load-test?users=200&spawn_rate=2&test_time=10"
```

### Databricks Deployed App Testing
![load-test](./client/public/load-testing.png)
*Run load tests in the Databricks Apps UI*

Parameters:
- `users`: Number of concurrent users (default: 10)
- `spawn_rate`: Users to spawn per second (default: 2)
- `test_time`: Duration of test in seconds (default: 30)
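The load-test URL can also be built programmatically. A minimal sketch using only the Python standard library — the helper name `build_load_test_url` is illustrative, not part of this repo:

```python
from urllib.parse import urlencode

def build_load_test_url(base="http://localhost:8000", users=10, spawn_rate=2, test_time=30):
    """Build the /api/load-test URL; defaults match the documented parameter defaults."""
    query = urlencode({"users": users, "spawn_rate": spawn_rate, "test_time": test_time})
    return f"{base}/api/load-test?{query}"

# Same request as the curl example above
print(build_load_test_url(users=200, spawn_rate=2, test_time=10))
```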

### Load Testing Best Practices

1. **Gradual Scaling**
- Start with smaller numbers and gradually increase
- Monitor system performance metrics
- Watch for error rates and response times

2. **Production Testing**
- Schedule load tests during off-peak hours
- Alert relevant team members before large-scale tests
- Monitor application logs and metrics during tests


3. **Testing Scenarios**

```
# Light load test
https://chat-app-[id].cloud.databricksapps.com/api/load-test?users=200&spawn_rate=10&test_time=30

# Medium load test
https://chat-app-[id].cloud.databricksapps.com/api/load-test?users=1000&spawn_rate=100&test_time=30

# Heavy load test
https://chat-app-[id].cloud.databricksapps.com/api/load-test?users=10000&spawn_rate=1000&test_time=30
```


## Project Structure

```
chatbot-app/
├── app.py # FastAPI backend application
├── load_tester.py # Load testing endpoint
├── requirements.txt # Python dependencies
├── client/ # React frontend
│ ├── src/ # Source code
│ ├── public/ # Static assets
│ ├── build/ # Static frontend files
│ └── package.json # Node.js dependencies
└── .env # Environment variables
```

## API Endpoints

- `GET /api/`: Health check endpoint
- `POST /api/chat`: Chat endpoint for AI interactions
- `GET /api/load-test`: Load testing endpoint
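The chat endpoint accepts a JSON body matching the backend's `ChatRequest` model and returns a `ChatResponse`. A minimal client sketch using only the standard library, assuming the app is running locally — the helper name `chat_request` is illustrative:

```python
import json
from urllib import request

def chat_request(message, base="http://localhost:8000"):
    """Build (but don't send) a POST /api/chat request with a JSON body."""
    body = json.dumps({"message": message}).encode()
    return request.Request(
        f"{base}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("Hello!")
# Send with: json.load(request.urlopen(req))  ->  {"content": "..."}
```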

## Contributing

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

60 changes: 60 additions & 0 deletions chatbot-app/app.py
@@ -0,0 +1,60 @@
import logging
from fastapi import FastAPI, Depends
from pydantic import BaseModel
from typing import Annotated
import os
from fastapi.staticfiles import StaticFiles
from dotenv import load_dotenv
from load_tester import router as load_test_router
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.serving import ChatMessage, ChatMessageRole
load_dotenv()

logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
logger.info("Logger initialized successfully!")

app = FastAPI()
ui_app = StaticFiles(directory="client/build", html=True)
api_app = FastAPI()

app.mount("/api", api_app)
app.mount("/", ui_app)


SERVING_ENDPOINT_NAME = os.getenv("SERVING_ENDPOINT_NAME")

if not SERVING_ENDPOINT_NAME:
logger.error("SERVING_ENDPOINT_NAME environment variable is not set")
raise ValueError("SERVING_ENDPOINT_NAME environment variable is not set")

# Dependency that returns an authenticated Databricks workspace client
def client():
return WorkspaceClient()

class ChatRequest(BaseModel):
message: str

class ChatResponse(BaseModel):
content: str


@api_app.post("/chat", response_model=ChatResponse)
def chat_with_llm(
request: ChatRequest, client: Annotated[WorkspaceClient, Depends(client)]
):
response = client.serving_endpoints.query(
SERVING_ENDPOINT_NAME,
messages=[ChatMessage(content=request.message, role=ChatMessageRole.USER)],
)
return ChatResponse(content=response.choices[0].message.content)

@api_app.get("/")
async def root():
return {"message": "Hello World"}

api_app.include_router(load_test_router)

9 changes: 9 additions & 0 deletions chatbot-app/app.yml
@@ -0,0 +1,9 @@
command:
- "hypercorn"
- "app:app"
- "--bind"
- "127.0.0.1:$DATABRICKS_APP_PORT"

env:
- name: "SERVING_ENDPOINT_NAME"
valueFrom: "serving_endpoint"
69 changes: 69 additions & 0 deletions chatbot-app/client/app/globals.css
@@ -0,0 +1,69 @@
@tailwind base;
@tailwind components;
@tailwind utilities;

@layer base {
:root {
--background: 0 0% 100%;
--foreground: 222.2 84% 4.9%;
--card: 0 0% 100%;
--card-foreground: 222.2 84% 4.9%;
--popover: 0 0% 100%;
--popover-foreground: 222.2 84% 4.9%;
--primary: 222.2 47.4% 11.2%;
--primary-foreground: 210 40% 98%;
--secondary: 210 40% 96.1%;
--secondary-foreground: 222.2 47.4% 11.2%;
--muted: 210 40% 96.1%;
--muted-foreground: 215.4 16.3% 46.9%;
--accent: 210 40% 96.1%;
--accent-foreground: 222.2 47.4% 11.2%;
--destructive: 0 84.2% 60.2%;
--destructive-foreground: 210 40% 98%;
--border: 214.3 31.8% 91.4%;
--input: 214.3 31.8% 91.4%;
--ring: 222.2 84% 4.9%;
--radius: 0.5rem;
--chart-1: 12 76% 61%;
--chart-2: 173 58% 39%;
--chart-3: 197 37% 24%;
--chart-4: 43 74% 66%;
--chart-5: 27 87% 67%;
}

.dark {
--background: 222.2 84% 4.9%;
--foreground: 210 40% 98%;
--card: 222.2 84% 4.9%;
--card-foreground: 210 40% 98%;
--popover: 222.2 84% 4.9%;
--popover-foreground: 210 40% 98%;
--primary: 210 40% 98%;
--primary-foreground: 222.2 47.4% 11.2%;
--secondary: 217.2 32.6% 17.5%;
--secondary-foreground: 210 40% 98%;
--muted: 217.2 32.6% 17.5%;
--muted-foreground: 215 20.2% 65.1%;
--accent: 217.2 32.6% 17.5%;
--accent-foreground: 210 40% 98%;
--destructive: 0 62.8% 30.6%;
--destructive-foreground: 210 40% 98%;
--border: 217.2 32.6% 17.5%;
--input: 217.2 32.6% 17.5%;
--ring: 212.7 26.8% 83.9%;
--chart-1: 220 70% 50%;
--chart-2: 160 60% 45%;
--chart-3: 30 80% 55%;
--chart-4: 280 65% 60%;
--chart-5: 340 75% 55%;
}
}

@layer base {
* {
@apply border-border;
}
body {
@apply bg-background text-foreground;
}
}
16 changes: 16 additions & 0 deletions chatbot-app/client/components.json
@@ -0,0 +1,16 @@
{
"$schema": "https://ui.shadcn.com/schema.json",
"style": "default",
"rsc": false,
"tsx": false,
"tailwind": {
"config": "tailwind.config.js",
"css": "src/index.css",
"baseColor": "slate",
"cssVariables": true
},
"aliases": {
"components": "@/components",
"utils": "@/lib/utils"
}
}
6 changes: 6 additions & 0 deletions chatbot-app/client/config-overrides.js
@@ -0,0 +1,6 @@
const path = require('path');

module.exports = function override(config) {
config.resolve.alias['@'] = path.resolve(__dirname, 'src');
return config;
};
10 changes: 10 additions & 0 deletions chatbot-app/client/craco.config.js
@@ -0,0 +1,10 @@
module.exports = {
style: {
postcss: {
plugins: [
require('tailwindcss'),
require('autoprefixer'),
],
},
},
}