
Commit 3a34182

Remove added questions from form
1 parent d18cb8e commit 3a34182

File tree

8 files changed: +104 additions, −249 deletions

CHANGELOG.md

Lines changed: 46 additions & 0 deletions

@@ -5,6 +5,51 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [v0.0.1-alpha.2] - 2025-06-02
+
+### Added
+- JWT-based authentication system with user login/logout functionality
+- Protected edit and delete operations behind authentication
+- Personal "My Statistics" view for logged-in users to see their own usage data
+- Authentication middleware for secure session management
+- Entry management features in Raw Data tab:
+  - Inline editing of existing entries with pre-filled forms
+  - Entry duplication functionality for quick similar submissions
+  - Entry deletion with confirmation dialogs
+- Row selection capability in Raw Data tab
+- Infrastructure destroy workflow for automated cleanup
+- OpenTofu support as alternative to Terraform
+- Environment-aware JWT configuration for deployment security
+- AWS Secrets Manager integration for JWT secrets
+
+### Enhanced
+- Code organization with new utility modules:
+  - `analytics_utils.py` for shared data processing functions
+  - `form_utils.py` for survey form-related functionality
+  - `visualization_utils.py` for data visualization components
+  - `auth_middleware.py` for authentication handling
+- Tab structure and navigation:
+  - Renamed tabs for better clarity
+  - "Past Submissions" tab now requires authentication
+- Form validation and error handling across all features
+- Seed scripts now automatically create data directory if missing
+- Eliminated code duplication through shared utilities (~200 lines reduced)
+- Import consolidation and code organization
+
+### Security
+- Secure cookie settings for production deployment
+- JWT audience configuration with environment awareness
+- Environment variable validation for deployed environments
+
+### Infrastructure
+- Moved from `terraform/` to `infra/` directory structure
+- Added OpenTofu configuration files alongside Terraform
+- Enhanced GitHub Actions workflow with OpenTofu support
+- Infrastructure destroy automation with safety validations
+
+### Dependencies
+- Added PyJWT for JSON Web Token authentication
+
 ## [v0.0.1-alpha.1] - 2025-05-23
 
 ### Added
@@ -55,4 +100,5 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 - SQLite database/CSV
 - Environment variable configuration system
 
+[v0.0.1-alpha.2]: https://github.com/suhailskhan/ai-usage-log/releases/tag/v0.0.1-alpha.2
 [v0.0.1-alpha.1]: https://github.com/suhailskhan/ai-usage-log/releases/tag/v0.0.1-alpha.1
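The Dependencies entry adds PyJWT for token handling. As background on what an HS256-signed JWT actually is, here is a stdlib-only sketch of signing and verifying one; the secret and claim values are invented for illustration (per the changelog, the real secret comes from AWS Secrets Manager, and PyJWT's `jwt.encode`/`jwt.decode` would do this work plus expiry and audience checks):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec (RFC 7519) requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: str) -> str:
    """Build a compact JWT: header.payload.signature, HMAC-SHA256 signed."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_hs256(token: str, secret: str) -> dict:
    """Recompute the signature, compare in constant time, and return the claims."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))

# Example claims; "aud" is the audience the changelog's environment-aware config would set
token = sign_hs256({"sub": "alice", "aud": "ai-usage-log"}, "dev-only-secret")
claims = verify_hs256(token, "dev-only-secret")
```

This is a teaching sketch, not a substitute for the library: PyJWT additionally validates `exp`, `aud`, and the algorithm allow-list, which is why the commit depends on it rather than hand-rolling.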

README.md

Lines changed: 4 additions & 7 deletions

@@ -16,13 +16,10 @@ The application provides the following analytics features:
 
 - **Purpose Distribution:** Visualize the distribution of AI tool usage purposes using pie charts.
 - **Duration Analysis:** Analyze the total and average duration of tasks by AI tool.
-- **Time Saved Analysis:** Compare the time taken with and without AI assistance, including total and average time saved.
-- **Tool Effectiveness Benchmarking:** Evaluate AI tools based on average time saved, satisfaction, and workflow impact.
-- **Complexity vs Impact:** Understand the relationship between task complexity and workflow impact.
-- **Satisfaction vs Efficiency:** Explore the correlation between user satisfaction and time saved.
-- **Manager/Team Insights:** Gain insights into team performance, including average time saved and satisfaction by manager.
-- **Purpose-based Use Cases:** Analyze average time saved, satisfaction, and workflow impact for different purposes.
-- **Trend & Seasonality Analysis:** Identify trends in AI tool usage over time, including daily and weekly patterns.
+- **Tool Effectiveness Benchmarking:** Evaluate AI tools based on average duration and task count.
+- **Manager/Team Insights:** Gain insights into team performance, including task count and average duration by manager.
+- **Purpose-based Use Cases:** Analyze task count and average duration for different purposes.
+- **Trend & Seasonality Analysis:** Identify trends in AI tool usage over time, including daily submissions and weekly duration patterns.
 
 ## Usage
 

analytics_utils.py

Lines changed: 22 additions & 75 deletions

@@ -12,8 +12,8 @@ def prepare_dataframe(entries, workflow_impact_map=None, task_complexity_map=None):
 
     Args:
         entries: List of entry dictionaries
-        workflow_impact_map: Optional mapping for workflow impact reverse lookup
-        task_complexity_map: Optional mapping for task complexity reverse lookup
+        workflow_impact_map: Optional mapping for workflow impact reverse lookup (kept for backwards compatibility)
+        task_complexity_map: Optional mapping for task complexity reverse lookup (kept for backwards compatibility)
 
     Returns:
         Cleaned pandas DataFrame
@@ -22,14 +22,13 @@ def prepare_dataframe(entries, workflow_impact_map=None, task_complexity_map=None):
     if df.empty:
         return df
 
-    # Apply reverse mappings if provided
+    # Legacy field handling for backwards compatibility with old data
     if workflow_impact_map and 'Workflow Impact' in df.columns:
         df['Workflow Impact'] = df['Workflow Impact'].map(workflow_impact_map).fillna(df['Workflow Impact'])
     if task_complexity_map and 'Task Complexity' in df.columns:
         df['Task Complexity'] = df['Task Complexity'].map(task_complexity_map).fillna(df['Task Complexity'])
 
-    # Calculate time saved and ensure timestamp is datetime
-    df["Time Saved"] = df["Time Without AI"] - df["Duration"]
+    # Ensure timestamp is datetime
     df["Timestamp"] = pd.to_datetime(df["Timestamp"], errors="coerce")
 
     return df
@@ -68,9 +67,7 @@ def calculate_basic_stats(df):
 
     stats = {
         'total_entries': len(df),
-        'avg_time_saved': df["Time Saved"].mean() if "Time Saved" in df.columns else 0,
         'avg_duration': df["Duration"].mean() if "Duration" in df.columns else 0,
-        'avg_satisfaction': df["Satisfaction"].mean() if "Satisfaction" in df.columns else 0,
     }
 
     # Tool-specific stats
@@ -125,62 +122,29 @@ def calculate_tool_effectiveness(df):
     if df.empty or "AI Tool" not in df.columns:
         return pd.DataFrame()
 
-    agg_dict = {}
-    if "Time Saved" in df.columns:
-        agg_dict["Time Saved"] = "mean"
-    if "Satisfaction" in df.columns:
-        agg_dict["Satisfaction"] = "mean"
-    if "Workflow Impact" in df.columns:
-        agg_dict["Workflow Impact"] = lambda x: x.value_counts().index[0] if not x.empty else None
-
-    if not agg_dict:
-        return pd.DataFrame()
+    agg_dict = {
+        "Duration": ["mean", "count"]  # Average duration and task count
+    }
 
     tool_stats = df.groupby("AI Tool").agg(agg_dict).reset_index()
 
-    # Rename columns for clarity
-    rename_dict = {
-        "Time Saved": "Avg Time Saved",
-        "Satisfaction": "Avg Satisfaction",
-        "Workflow Impact": "Most Common Workflow Impact"
-    }
-    tool_stats.rename(columns=rename_dict, inplace=True)
+    # Flatten column names
+    tool_stats.columns = ["AI Tool", "Avg Duration", "# Tasks"]
 
     return tool_stats
 
 
 def calculate_complexity_analysis(df):
     """
-    Calculate task complexity analysis.
+    Calculate task complexity analysis (legacy function - returns empty for backwards compatibility).
 
     Args:
         df: pandas DataFrame with usage data
 
     Returns:
-        DataFrame with complexity analysis
+        Empty DataFrame (complexity analysis no longer supported)
     """
-    if df.empty or "Task Complexity" not in df.columns:
-        return pd.DataFrame()
-
-    agg_dict = {}
-    if "Time Saved" in df.columns:
-        agg_dict["Time Saved"] = "mean"
-    if "Satisfaction" in df.columns:
-        agg_dict["Satisfaction"] = "mean"
-
-    if not agg_dict:
-        return pd.DataFrame()
-
-    complexity_stats = df.groupby("Task Complexity").agg(agg_dict).reset_index()
-
-    # Rename columns for clarity
-    rename_dict = {
-        "Time Saved": "Avg Time Saved",
-        "Satisfaction": "Avg Satisfaction"
-    }
-    complexity_stats.rename(columns=rename_dict, inplace=True)
-
-    return complexity_stats
+    return pd.DataFrame()
 
 
 def calculate_manager_insights(df):
@@ -196,21 +160,14 @@ def calculate_manager_insights(df):
     if df.empty or "Manager" not in df.columns:
         return pd.DataFrame()
 
-    agg_dict = {"Duration": "count"}  # Count of tasks
-    if "Time Saved" in df.columns:
-        agg_dict["Time Saved"] = "mean"
-    if "Satisfaction" in df.columns:
-        agg_dict["Satisfaction"] = "mean"
+    agg_dict = {
+        "Duration": ["count", "mean"]  # Count of tasks and average duration
+    }
 
     manager_stats = df.groupby("Manager").agg(agg_dict).reset_index()
 
-    # Rename columns for clarity
-    rename_dict = {
-        "Time Saved": "Avg Time Saved",
-        "Satisfaction": "Avg Satisfaction",
-        "Duration": "# Tasks"
-    }
-    manager_stats.rename(columns=rename_dict, inplace=True)
+    # Flatten column names
+    manager_stats.columns = ["Manager", "# Tasks", "Avg Duration"]
 
     return manager_stats
 
@@ -228,23 +185,13 @@ def calculate_purpose_insights(df):
     if df.empty or "Purpose" not in df.columns:
         return pd.DataFrame()
 
-    agg_dict = {"Duration": "count"}  # Count of tasks
-    if "Time Saved" in df.columns:
-        agg_dict["Time Saved"] = "mean"
-    if "Satisfaction" in df.columns:
-        agg_dict["Satisfaction"] = "mean"
-    if "Workflow Impact" in df.columns:
-        agg_dict["Workflow Impact"] = lambda x: x.value_counts().index[0] if not x.empty else None
+    agg_dict = {
+        "Duration": ["count", "mean"]  # Count of tasks and average duration
+    }
 
     purpose_stats = df.groupby("Purpose").agg(agg_dict).reset_index()
 
-    # Rename columns for clarity
-    rename_dict = {
-        "Time Saved": "Avg Time Saved",
-        "Satisfaction": "Avg Satisfaction",
-        "Workflow Impact": "Most Common Workflow Impact",
-        "Duration": "# Tasks"
-    }
-    purpose_stats.rename(columns=rename_dict, inplace=True)
+    # Flatten column names
+    purpose_stats.columns = ["Purpose", "# Tasks", "Avg Duration"]
 
     return purpose_stats
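The refactor in analytics_utils.py repeatedly swaps the conditional agg_dict/rename_dict machinery for a single multi-function aggregation followed by column flattening. A minimal standalone sketch of that pandas pattern, with toy data invented for illustration (not taken from the repo):

```python
import pandas as pd

# Toy entries standing in for the survey data
df = pd.DataFrame({
    "AI Tool": ["Copilot", "Copilot", "ChatGPT"],
    "Duration": [30, 60, 45],  # minutes
})

# Aggregating one column with a list of functions yields MultiIndex columns...
tool_stats = df.groupby("AI Tool").agg({"Duration": ["mean", "count"]}).reset_index()

# ...which the refactored helpers flatten by assigning plain names directly
tool_stats.columns = ["AI Tool", "Avg Duration", "# Tasks"]
```

Assigning the flat names in one statement replaces the old rename_dict dance and guarantees stable, display-ready headers regardless of which aggregations ran.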

app.py

Lines changed: 4 additions & 24 deletions

@@ -199,15 +199,10 @@ def prepare_dataframe(entries):
     manager_val = form_data['manager'][0] if form_data['manager'] else ""
     ai_tool_val = form_data['ai_tool'][0] if form_data['ai_tool'] else ""
     purpose_val = form_data['purpose'][0] if form_data['purpose'] else ""
-    complexity_val = form_data['complexity'] if form_data['complexity'] != "(Select complexity)" else ""
-    complexity_num = TASK_COMPLEXITY_MAP.get(complexity_val, None)
-    workflow_impact_val = form_data['workflow_impact'] if form_data['workflow_impact'] != "(Select impact)" else ""
-    workflow_impact_num = WORKFLOW_IMPACT_MAP.get(workflow_impact_val, None)
 
     is_valid, error_message = validate_form_submission(
         form_data['name'], manager_val, ai_tool_val, purpose_val, form_data['result'],
-        complexity_val, form_data['satisfaction'], form_data['time_without_ai'],
-        workflow_impact_val, form_data['duration'], workflow_impact_num, complexity_num
+        form_data['duration']
     )
 
     if not is_valid:
@@ -216,8 +211,7 @@ def prepare_dataframe(entries):
     else:
         entry = create_entry_dict(
             form_data['name'], manager_val, ai_tool_val, purpose_val, form_data['duration'],
-            complexity_num, form_data['satisfaction'], form_data['time_without_ai'],
-            workflow_impact_num, form_data['result'], form_data['notes']
+            form_data['result'], form_data['notes']
         )
         st.session_state.entries.append(entry)
         save_entries(st.session_state.entries)
@@ -298,10 +292,6 @@ def prepare_dataframe(entries):
         'ai_tool': ai_tool_default,
         'purpose': purpose_default,
         'duration': original_entry['Duration'],
-        'complexity': REVERSE_TASK_COMPLEXITY_MAP.get(original_entry['Task Complexity'], 'Easy'),
-        'satisfaction': original_entry['Satisfaction'],
-        'time_without_ai': original_entry['Time Without AI'],
-        'workflow_impact': REVERSE_WORKFLOW_IMPACT_MAP.get(original_entry['Workflow Impact'], 'Little to none'),
         'result': original_entry['Result/Outcome'],
         'notes': original_entry.get('Notes', '')
     }
@@ -355,10 +345,6 @@ def prepare_dataframe(entries):
         'ai_tool': ai_tool_default,
         'purpose': purpose_default,
         'duration': original_entry['Duration'],
-        'complexity': REVERSE_TASK_COMPLEXITY_MAP.get(original_entry['Task Complexity'], 'Easy'),
-        'satisfaction': original_entry['Satisfaction'],
-        'time_without_ai': original_entry['Time Without AI'],
-        'workflow_impact': REVERSE_WORKFLOW_IMPACT_MAP.get(original_entry['Workflow Impact'], 'Little to none'),
         'result': original_entry['Result/Outcome'],
         'notes': original_entry.get('Notes', '')
     }
@@ -457,15 +443,10 @@ def prepare_dataframe(entries):
     manager_val = form_data['manager'][0] if form_data['manager'] else ""
     ai_tool_val = form_data['ai_tool'][0] if form_data['ai_tool'] else ""
     purpose_val = form_data['purpose'][0] if form_data['purpose'] else ""
-    complexity_val = form_data['complexity'] if form_data['complexity'] != "(Select complexity)" else ""
-    complexity_num = TASK_COMPLEXITY_MAP.get(complexity_val, None)
-    workflow_impact_val = form_data['workflow_impact'] if form_data['workflow_impact'] != "(Select impact)" else ""
-    workflow_impact_num = WORKFLOW_IMPACT_MAP.get(workflow_impact_val, None)
 
     is_valid, error_message = validate_form_submission(
         form_data['name'], manager_val, ai_tool_val, purpose_val, form_data['result'],
-        complexity_val, form_data['satisfaction'], form_data['time_without_ai'],
-        workflow_impact_val, form_data['duration'], workflow_impact_num, complexity_num
+        form_data['duration']
     )
 
     if not is_valid:
@@ -478,8 +459,7 @@ def prepare_dataframe(entries):
         # Update the entry
         updated_entry = create_entry_dict(
             form_data['name'], manager_val, ai_tool_val, purpose_val, form_data['duration'],
-            complexity_num, form_data['satisfaction'], form_data['time_without_ai'],
-            workflow_impact_num, form_data['result'], form_data['notes']
+            form_data['result'], form_data['notes']
        )
         # Preserve the original timestamp
         updated_entry['Timestamp'] = original_entry['Timestamp']
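After this commit, validate_form_submission is called with just six arguments at both call sites. Its body lives in form_utils.py and is not part of this diff, so the following is only a hypothetical sketch of a validator matching the new call signature; the field names and error messages are assumptions:

```python
def validate_form_submission(name, manager, ai_tool, purpose, result, duration):
    """Hypothetical sketch: return (is_valid, error_message) for the trimmed form."""
    # All fields in the reduced form are required
    if not all([name, manager, ai_tool, purpose, result]):
        return False, "Please fill in all required fields."
    # Duration is the only remaining numeric field after the commit
    if duration is None or duration <= 0:
        return False, "Duration must be a positive number of minutes."
    return True, ""

is_valid, error_message = validate_form_submission(
    "Alice", "Bob", "Copilot", "Drafting", "Done", 30
)
```

Whatever the real implementation, the key point of the commit is visible in the call sites: complexity, satisfaction, time-without-AI, and workflow-impact no longer participate in validation or entry creation.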
