---
author: Ahmad Al Hakim
date: 2025-08-28
title: From Jupyter Notebook to Production Dashboard
description: "Learn how to take your Jupyter Notebook workflows from quick experiments to fully interactive production dashboards using Python."
image: /blog/jupyter_reflex.png
meta: [
  {
    "name": "keywords",
    "content": "Jupyter Notebook, Python dashboards, data science workflows, interactive dashboards, productionizing notebooks, Python web apps, data visualization, machine learning apps, data scientist guide, dashboard deployment"
  }
]
---

## The Data Scientist's Dilemma

Data scientists excel at analysis but struggle with productionization. You build sophisticated models in Jupyter notebooks, then face the dreaded request: "Can we make this a live dashboard?"

The usual options aren't great. Hand it off to engineers and wait months. Use limited dashboard tools that can't handle the complexity of your analysis. Or learn React, APIs, and deployment just to make your Python work interactive.

This guide shows a different path: transforming your Jupyter analysis directly into a production dashboard without leaving Python.
## Our Starting Point: The Jupyter Notebook

Let's work with a realistic scenario: analyzing customer churn using the IBM Telco dataset. Here's what a typical analysis notebook looks like:
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier

# Load the IBM Telco customer churn dataset
url = "https://raw.githubusercontent.com/IBM/telco-customer-churn-on-icp4d/master/data/Telco-Customer-Churn.csv"
df = pd.read_csv(url)

print(f"Dataset shape: {df.shape}")

# Seed so the synthetic features below are reproducible
np.random.seed(42)

# Prepare columns and add some realistic features
df['last_login'] = pd.to_datetime("2022-01-01") + pd.to_timedelta(np.random.randint(0, 400, len(df)), unit='D')
df['usage_last_30d'] = np.random.randint(10, 500, len(df))
df['usage_prev_30d'] = np.random.randint(10, 500, len(df))
df['support_tickets'] = np.random.poisson(2, len(df))
df['plan_value'] = df['MonthlyCharges']
df['plan_type'] = df['Contract']
df['churned'] = df['Churn'].map({'Yes': 1, 'No': 0})

print(f"Churn rate: {df['churned'].mean():.2%}")

# Feature engineering
df['days_since_last_login'] = (pd.Timestamp.now() - df['last_login']).dt.days
df['usage_decline'] = df['usage_last_30d'] / (df['usage_prev_30d'] + 1e-5)

# Key insights
churn_by_plan = df.groupby('plan_type')['churned'].agg(['count', 'mean'])
print("\nChurn by Plan Type:")
print(churn_by_plan)

# Visualizations
fig, axes = plt.subplots(2, 2, figsize=(12, 8))

churn_by_plan['mean'].plot(kind='bar', ax=axes[0, 0], title='Churn Rate by Plan')
df.boxplot('usage_decline', by='churned', ax=axes[0, 1])
axes[0, 1].set_title('Usage Decline: Churned vs Active')
# Histogram split by churn status (DataFrame.hist with `by` can't share one axis)
sns.histplot(data=df, x='days_since_last_login', hue='churned', bins=20, ax=axes[1, 0])
axes[1, 0].set_title('Days Since Last Login Distribution')
sns.histplot(data=df, x='plan_value', hue='churned', kde=True, ax=axes[1, 1])
axes[1, 1].set_title('Plan Value Distribution')

plt.tight_layout()
plt.show()

# Predictive model
features = ['days_since_last_login', 'usage_decline', 'support_tickets', 'plan_value']
X = df[features]
y = df['churned']

rf = RandomForestClassifier(random_state=42)
rf.fit(X, y)

print(f"\nModel accuracy: {rf.score(X, y):.3f}")
print("Feature importance:")
for feature, importance in zip(features, rf.feature_importances_):
    print(f"  {feature}: {importance:.3f}")
```
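As a quick sanity check on the two engineered features, here is what they produce on a toy frame with hypothetical values (not from the dataset): `usage_decline` is a ratio, so values below 1 mean usage dropped versus the previous period, and `days_since_last_login` is just a date difference in whole days.

```python
import pandas as pd

# One customer with hypothetical values: usage halved versus last period
toy = pd.DataFrame({
    "last_login": pd.to_datetime(["2022-01-01"]),
    "usage_last_30d": [50],
    "usage_prev_30d": [100],
})

# Same formulas as the notebook above
toy["days_since_last_login"] = (pd.Timestamp.now() - toy["last_login"]).dt.days
toy["usage_decline"] = toy["usage_last_30d"] / (toy["usage_prev_30d"] + 1e-5)

# 50 / (100 + 1e-5) ≈ 0.5, i.e. usage roughly halved
```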

This notebook does what data scientists do every day: loads data, engineers features, explores patterns, and builds predictive models. The analysis works, the insights are valuable, but it's stuck in a static format.

When stakeholders ask "Can we see this updating with fresh data?" you're back to the productionization problem.
## The Productionization Problem

Your notebook analysis is solid, but it has limitations. The plots are static images. The insights are buried in print statements. To see updated results, someone needs to rerun the entire notebook manually.

Traditional solutions force you to choose between complexity and capability:

**Flask + React**: Build a backend API, create React components, manage state, handle authentication. Weeks of work to recreate what you already built.

**Streamlit**: Quick to deploy, but limited interactivity. Complex analyses don't translate well to Streamlit's widget-based approach.

**Hand-off to engineering**: Wait months while engineers rebuild your analysis, often losing nuance in translation.

None of these options preserve your existing work or let you iterate quickly. What if you could keep your Python analysis logic and just make it interactive?
## Transforming to Reflex

Here's how to transform our notebook into an interactive dashboard. Your data processing logic stays the same—we just add Reflex components around it.

### Project Structure

First, let's set up a proper Reflex project structure:
```text
churn-dashboard/
├── app/
│   ├── __init__.py
│   ├── app.py
│   ├── state.py
│   └── components/
│       ├── __init__.py
│       ├── kpi_card.py
│       └── bar_chart.py
├── assets/
├── requirements.txt
└── rxconfig.py
```
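The tree ends with `rxconfig.py`, which this post doesn't show. A minimal sketch might look like the following; the `app_name` value is an assumption here, and Reflex expects it to match the package that contains your app module (the `app/` directory above):

```python
# rxconfig.py - minimal sketch (app_name is an assumption for this
# project layout; it must match the package containing app.py)
import reflex as rx

config = rx.Config(
    app_name="app",
)
```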

### Step 1: State Management (app/state.py)

Move your notebook's data processing logic into a Reflex state class:
```python
import logging

import pandas as pd
import reflex as rx


class DashboardState(rx.State):
    """The app state."""

    is_loading: bool = True
    total_customers: int = 0
    total_churn: int = 0
    churn_rate: float = 0.0
    avg_monthly_charges: float = 0.0
    churn_by_contract: list[dict[str, str | int]] = []

    @rx.event(background=True)
    async def load_data(self):
        """Load and process the data - same logic as our notebook."""
        async with self:
            self.is_loading = True

        try:
            # Same data loading as in the notebook
            url = "https://raw.githubusercontent.com/IBM/telco-customer-churn-on-icp4d/master/data/Telco-Customer-Churn.csv"
            df = pd.read_csv(url)
            df["TotalCharges"] = pd.to_numeric(df["TotalCharges"], errors="coerce")
            df.dropna(inplace=True)

            # Calculate the same metrics we had in the notebook
            total_customers = len(df)
            total_churn = len(df[df["Churn"] == "Yes"])
            churn_rate = total_churn / total_customers * 100 if total_customers > 0 else 0
            avg_monthly_charges = df["MonthlyCharges"].mean()

            # Process churn data by contract type
            churn_data = df.groupby("Contract")["Churn"].value_counts().unstack(fill_value=0)
            churn_data.reset_index(inplace=True)
            churn_data.rename(columns={"No": "retained", "Yes": "churned"}, inplace=True)
            chart_data = churn_data.to_dict(orient="records")

            async with self:
                self.total_customers = total_customers
                self.total_churn = total_churn
                self.churn_rate = round(churn_rate, 2)
                self.avg_monthly_charges = round(avg_monthly_charges, 2)
                self.churn_by_contract = chart_data
                self.is_loading = False

        except Exception as e:
            logging.exception(f"Failed to load data: {e}")
            async with self:
                self.is_loading = False
```
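The `groupby` / `unstack` / `rename` chain is the least obvious step, so here is what it produces on a tiny hand-made frame (illustrative data, not the real dataset). Each row of the result becomes one bar group in the chart, with `retained` and `churned` counts per contract type:

```python
import pandas as pd

# Three customers across two contract types (illustrative values)
df = pd.DataFrame({
    "Contract": ["Month-to-month", "Month-to-month", "Two year"],
    "Churn": ["Yes", "No", "No"],
})

# Count churned vs. retained customers per contract type
churn_data = df.groupby("Contract")["Churn"].value_counts().unstack(fill_value=0)
churn_data.reset_index(inplace=True)
churn_data.rename(columns={"No": "retained", "Yes": "churned"}, inplace=True)

# chart_data is a list of records shaped like
# {"Contract": ..., "retained": ..., "churned": ...},
# which is exactly what the recharts bar chart consumes
chart_data = churn_data.to_dict(orient="records")
```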

### Step 2: Chart Component (app/components/bar_chart.py)

Convert your matplotlib bar chart to an interactive Reflex chart:
```python
import reflex as rx

from app.state import DashboardState


def churn_bar_chart() -> rx.Component:
    """A bar chart showing churn by contract type."""
    return rx.el.div(
        rx.el.h3("Customer Retention by Contract Type"),
        rx.recharts.bar_chart(
            rx.recharts.cartesian_grid(vertical=False),
            rx.recharts.x_axis(data_key="Contract"),
            rx.recharts.y_axis(),
            rx.recharts.legend(),
            rx.recharts.bar(
                data_key="retained",
                name="Retained",
                fill="#3b82f6",
                stack_id="a",
            ),
            rx.recharts.bar(
                data_key="churned",
                name="Churned",
                fill="#ef4444",
                stack_id="a",
            ),
            data=DashboardState.churn_by_contract,
            height=300,
        ),
    )
```

### Step 3: KPI Cards (app/components/kpi_card.py)

Create reusable metric cards to replace your print statements:
```python
import reflex as rx


def kpi_card(title: str, value: str | int, icon: str, color: str) -> rx.Component:
    """A reusable KPI card component."""
    return rx.el.div(
        rx.el.div(
            rx.icon(icon, class_name="w-6 h-6"),
            class_name=f"p-3 rounded-full {color}",
        ),
        rx.el.div(
            rx.el.p(title, class_name="text-xs font-medium text-gray-500"),
            rx.el.p(value, class_name="text-xl font-semibold text-gray-800"),
        ),
        class_name="flex items-center gap-4 p-4 bg-white border border-gray-200 rounded-xl shadow-sm",
    )
```

### Step 4: Main Dashboard (app/app.py)

Bring everything together into a dashboard:
```python
import reflex as rx

from app.state import DashboardState
from app.components.kpi_card import kpi_card
from app.components.bar_chart import churn_bar_chart


def index() -> rx.Component:
    """The main dashboard page."""
    return rx.el.main(
        rx.el.div(
            rx.el.h1("Telco Churn Dashboard"),

            # KPI cards - replacing our print statements
            rx.el.div(
                kpi_card("Total Customers", DashboardState.total_customers, "users", "bg-blue-100 text-blue-600"),
                kpi_card("Total Churn", DashboardState.total_churn, "user-minus", "bg-red-100 text-red-600"),
                kpi_card("Churn Rate", f"{DashboardState.churn_rate}%", "trending-down", "bg-yellow-100 text-yellow-600"),
                kpi_card("Avg Monthly Bill", f"${DashboardState.avg_monthly_charges}", "dollar-sign", "bg-green-100 text-green-600"),
                class_name="grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-4 gap-4",
            ),
            # Chart - replacing our matplotlib plot
            churn_bar_chart(),
        ),
        on_mount=DashboardState.load_data,
    )


app = rx.App()
app.add_page(index)
```

Your notebook's pandas analysis logic stays intact—it just moves into the `load_data` method. The static matplotlib plots become interactive charts, and your print statements become clean KPI cards. The same insights, now accessible to anyone with a web browser.

If you want to try this dashboard live, you can do so on Reflex Build: [Churn Dashboard](https://build.reflex.dev/gen/c100a12f-4f22-452a-8e3c-74cbf8baba98/)

You can edit, rework, and improve it as you see fit!
## Deploying with Reflex

The final step is sharing your work. A dashboard is only valuable if others can access it, and deployment is where most data science projects stall.

With Reflex, deployment is built in. You don't need to worry about servers, Docker, or frontend builds. Your Python app can be published live with a single command:
```bash
reflex deploy
```

For details on how deployment works and how to get started, see the [Cloud Deploy Docs](https://reflex.dev/docs/hosting/deploy-quick-start/).

## Wrapping Up

We started with a Jupyter notebook full of exploratory analysis—static plots and printouts that lived on your laptop. Then we showed how to transform that work into a production-grade dashboard with Reflex, keeping your Python workflow intact. Finally, we saw how easy it is to deploy and share your dashboard.

With this workflow, data scientists can go from notebook → live dashboard → deployed app in hours instead of weeks.

Next steps:

- Try deploying your own analysis.
- Explore more Reflex components for interactive UIs.
- Experiment with refreshing your data sources.

The barrier between analysis and production is shrinking. With Reflex, your notebook insights can live on the web, fast.

0 commit comments

Comments
 (0)