Commit 2fa047a

feat: fastapi readme

1 parent 4ac2238, commit 2fa047a

File tree

3 files changed: +42 / -10 lines

fastapi/.env.example

Lines changed: 0 additions & 3 deletions

@@ -5,9 +5,6 @@ LAKEBASE_CATALOG_NAME=<LAKEBASE_CATALOG_NAME> # Name of the Lakebase catalog
 SYNCHED_TABLE_STORAGE_CATALOG=<SYNCED_TABLE_CATALOG> # Catalog where you have permissions to create tables. Used to store metadata from the lakebase synced table pipeline.
 SYNCHED_TABLE_STORAGE_SCHEMA=<SYNCED_TABLE_SCHEMA> # Schema where you have permissions to create tables. Used to store metadata from the lakebase synced table pipeline.
 DATABRICKS_DATABASE_PORT=5432
-DATABRICKS_HOST=<DATABRICKS_HOST> # e.g., https://adb-1234567890123456.7.azuredatabricks.net
-DATABRICKS_TOKEN=<PAT_TOKEN> # e.g., dapi
-DATABRICKS_USER_NAME=<YOUR_USER_NAME> # e.g., [email protected]
 
 # Database Connection Pool Settings
 DB_POOL_SIZE=5
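With this change, host, token, and user name no longer appear in the example env file; only the Lakebase and pool settings remain. A sketch of a filled-in `.env` covering just the variables visible in this diff (all values below are made-up examples, not defaults from the repo):

```shell
# Hypothetical filled-in .env after this commit (example values only).
# DATABRICKS_HOST / DATABRICKS_TOKEN / DATABRICKS_USER_NAME were removed
# from .env.example by this commit, so they are omitted here.
LAKEBASE_CATALOG_NAME=lakebase_catalog
SYNCHED_TABLE_STORAGE_CATALOG=main
SYNCHED_TABLE_STORAGE_SCHEMA=default
DATABRICKS_DATABASE_PORT=5432

# Database Connection Pool Settings
DB_POOL_SIZE=5
```

Note that the variable names keep the `SYNCHED_` spelling used by the repo, while their placeholder values use `SYNCED_`; renaming either would be a breaking change to the sample.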

fastapi/README.md

Lines changed: 41 additions & 6 deletions

@@ -12,6 +12,8 @@ The sample application provides the following API endpoints:
 #### API v1
 - `/api/v1/healthcheck` - Returns a response to validate the health of the application
 - `/api/v1/table` - Query data from Databricks tables
+- `/api/v1/resources/create-lakebase-resources` - Create Lakebase resources
+- `/api/v1/resources/delete-lakebase-resources` - Delete Lakebase resources
 - `/api/v1/orders/count` - Get total order count from Lakebase (PostgreSQL) database
 - `/api/v1/orders/sample` - Get sample order keys for testing
 - `/api/v1/orders/pages` - Get orders with traditional page-based pagination
@@ -35,10 +37,42 @@ pip install -r requirements.txt
 # Set environment variables (if not using .env file)
 export DATABRICKS_WAREHOUSE_ID=your-warehouse-id
 
+# If using a .env file:
+cp .env.example .env
+# Fill in .env fields
+
 # Run the application
 uvicorn app:app --reload
 ```
 
+## Running the Lakebase Example
+
+❗️**Important**: Following these steps will deploy a Lakebase instance and synced table pipeline in your Databricks workspace that will incur costs.
+
+### 1. Create Lakebase Resources
+With the app running and your `.env` file configured:
+1. Navigate to http://localhost:8000/docs (Swagger UI)
+2. Find the `/api/v1/resources/create-lakebase-resources` endpoint
+3. Click **Try it out**
+4. Set `create_resources` to `true` (confirming you understand the costs)
+5. Configure other fields as needed
+6. Click **Execute**
+7. Wait for resource creation (takes several minutes)
+
+### 2. Validate and Test
+Once resources are created:
+1. Check the Databricks UI for: database instance, database, catalog, and `orders_synced` table
+2. Restart your local app: `uvicorn app:app --reload`
+3. Return to http://localhost:8000/docs - you should now see the `/orders` endpoints available
+
+### 3. Clean Up Resources
+To avoid ongoing costs:
+1. Navigate to the `/api/v1/resources/delete-lakebase-resources` endpoint
+2. Set `confirm_deletion` to `true`
+3. Click **Execute**
+
+
+
 ## Running Tests
 
 ```bash
@@ -68,13 +102,14 @@ The application uses environment variables for configuration:
 - `DATABRICKS_TOKEN` - (Optional) The Databricks access token
 
 ### Lakebase PostgreSQL Database (for orders management)
-- `DATABRICKS_DATABASE_INSTANCE` - The name of the Databricks database instance
-- `DATABRICKS_DATABASE_NAME` - The Lakebase PostgreSQL database name
+- `LAKEBASE_INSTANCE_NAME` - The name of an existing Databricks database instance or the name of a new instance
+- `LAKEBASE_DATABASE_NAME` - The Lakebase PostgreSQL database name
+- `LAKEBASE_CATALOG_NAME` - The name of the Lakebase catalog
+- `SYNCHED_TABLE_STORAGE_CATALOG` - Catalog where you have permissions to create tables. Used to store metadata from the lakebase synced table pipeline.
+- `SYNCHED_TABLE_STORAGE_SCHEMA` - Schema where you have permissions to create tables. Used to store metadata from the lakebase synced table pipeline.
 - `DATABRICKS_DATABASE_PORT` - (Optional) Database port (default: 5432)
-- `DEFAULT_POSTGRES_SCHEMA` - (Optional) Database schema (default: public)
-- `DEFAULT_POSTGRES_TABLE` - (Optional) Orders table name (default: orders_synced)
 - `DB_POOL_SIZE` - (Optional) Connection pool size (default: 5)
 - `DB_MAX_OVERFLOW` - (Optional) Max pool overflow (default: 10)
-- `DB_POOL_TIMEOUT` - (Optional) Pool timeout in seconds (default: 30)
-- `DB_COMMAND_TIMEOUT` - (Optional) Command timeout in seconds (default: 10)
+- `DB_POOL_TIMEOUT` - (Optional) Pool timeout in seconds (default: 10)
+- `DB_COMMAND_TIMEOUT` - (Optional) Command timeout in seconds (default: 30)
 - `DB_POOL_RECYCLE_INTERVAL` - (Optional) Connection recycle interval in seconds (default: 3600)
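The Swagger walkthrough added to the README can also be scripted. A hedged sketch follows: the endpoint paths and the `create_resources` / `confirm_deletion` field names come from the README above, but the HTTP method and JSON body shape are assumptions about this sample app, so the commands are printed for review rather than executed (running them would create billable resources):

```shell
#!/bin/sh
# Paths and field names from the README; POST + JSON body are assumptions.
base_url="http://localhost:8000"
create_path="/api/v1/resources/create-lakebase-resources"
delete_path="/api/v1/resources/delete-lakebase-resources"

# Step 1 of the walkthrough: create the Lakebase resources.
echo "curl -X POST $base_url$create_path -H 'Content-Type: application/json' -d '{\"create_resources\": true}'"

# Step 3 of the walkthrough: delete them again to avoid ongoing costs.
echo "curl -X POST $base_url$delete_path -H 'Content-Type: application/json' -d '{\"confirm_deletion\": true}'"
```

Printing the commands first mirrors the README's cost warning: a reader confirms the payload before sending it.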

fastapi/requirements.txt

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ pytest>=7.4.0
 pytest-mock>=3.10.0
 pytest-asyncio>=0.21.0
 httpx>=0.24.1
-databricks-sdk>=0.8.0
+databricks-sdk>=0.61.0
 databricks-sql-connector==4.0.2
 pandas>=2.0.0
 annotated-types==0.7.0
