A native iOS todo app built with SwiftUI featuring AI task breakdown, voice input, and beautiful animations.
- AI Task Breakdown: Long press (800ms) on any main task to automatically break it down into 2-4 smaller subtasks using OpenAI GPT-4o-mini
- Voice Input: Long press (500ms) on the bottom circular button to record voice and add tasks (supports Traditional Chinese)
- Task Completion: Tap the checkbox on subtask cards to mark as complete with visual feedback
- Task Deletion: Swipe left on main task cards to delete, or use the context menu
- Vertical Subtask Layout: Subtasks are displayed vertically below their parent task for easy viewing
- Data Persistence: All tasks are saved locally using SwiftData
- iOS-native aesthetic with glassmorphism effects
- SF Pro and SF Pro Rounded fonts throughout
- Smooth animations with spring physics
- Haptic feedback on all interactions
- Light and dark mode support
- Responsive layout for all iPhone sizes
- Full VoiceOver accessibility support
- Cell division animation when AI breakdown completes
- Completion animation with subtask reordering
- Progress bar during long press actions
- Scale feedback on all interactive elements
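The long-press progress bar can be driven by a simple clamped fraction of elapsed time over the gesture's minimum duration. A minimal sketch (the function name is illustrative, not from the app's source):

```swift
import Foundation

/// Fraction of a long press completed, clamped to 0...1.
/// Feed this into a SwiftUI ProgressView or a bar's width multiplier.
func longPressProgress(elapsed: TimeInterval, minimumDuration: TimeInterval) -> Double {
    guard minimumDuration > 0 else { return 1 }
    return min(max(elapsed / minimumDuration, 0), 1)
}
```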
Added
- AI-powered icon selection using OpenAI GPT-4o-mini for intelligent SF Symbol selection
- Automatic icon and color assignment based on task semantics
- New `selectIcon()` method in `OpenAIService` for real-time icon suggestions
- New `IconResponse` struct to handle SF Symbol and color data from the API
Changed
- MainTaskCard now uses AI to select appropriate SF Symbols and colors for each task
- Simplified fallback icon selection (removed bilingual support, English-only)
- Reduced keyword matching code from ~170 lines to ~50 lines
- Icon selection is now context-aware and understands task semantics beyond simple keywords
Technical Details
- Uses GPT-4o-mini with 0.3 temperature for consistent icon selection
- 10-second timeout for quick icon loading
- Graceful fallback to keyword-based selection if API fails
- Supports all SF Symbol categories: sports, communication, work, health, travel, entertainment, etc.
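The keyword-based fallback described above might look roughly like the following simplified sketch. The keyword sets and symbol names here are illustrative, not the app's actual mapping:

```swift
import Foundation

/// Simplified keyword-based fallback: scans the task title for known
/// keywords and returns an SF Symbol name, defaulting to a checklist icon.
func fallbackIcon(for title: String) -> String {
    let lowered = title.lowercased()
    let mapping: [(keywords: [String], symbol: String)] = [
        (["run", "gym", "workout"], "figure.run"),
        (["call", "email", "message"], "envelope"),
        (["buy", "shop", "grocer"], "cart"),
        (["doctor", "health", "medicine"], "cross.case"),
        (["flight", "travel", "trip"], "airplane"),
    ]
    for entry in mapping where entry.keywords.contains(where: { lowered.contains($0) }) {
        return entry.symbol
    }
    return "checklist"   // generic default when nothing matches
}
```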
```bash
git clone https://github.com/chiwulin/momentumApp.git
cd momentumApp
```

**IMPORTANT: Never commit your API key to git!**
The app requires an OpenAI API key to use the AI task breakdown feature. Set it up in Xcode:
- Open `momentum.xcodeproj` in Xcode
- Select the momentum scheme (top bar, next to your device)
- Click Edit Scheme...
- Select Run > Arguments
- Under Environment Variables, find `OPENAI_API_KEY`
- Paste your OpenAI API key in the Value field
- Click Close
Your API key will be stored locally in your Xcode user data (not in git).
Get an OpenAI API Key:
- Visit https://platform.openai.com/api-keys
- Create a new secret key
- Copy and paste it into Xcode as described above
- Open the project in Xcode 16.4+
- Select an iOS 18.5+ simulator or device
- Press ⌘R to build and run
- iOS 18.5+
- Xcode 16.4+
- OpenAI API key (for task breakdown feature)
```bash
cd /path/to/your/projects
# The project is already in /Users/chiwulin/Documents/momentum
```

You have several options for setting up your OpenAI API key:

```bash
export OPENAI_API_KEY="your-api-key-here"
```

Then run the app from Xcode.
Edit `momentum/Services/OpenAIService.swift` and modify the `init` method:

```swift
init(apiKey: String = "your-api-key-here") {
    self.apiKey = apiKey.isEmpty ? ProcessInfo.processInfo.environment["OPENAI_API_KEY"] ?? "" : apiKey
}
```

For production apps, store the API key in Keychain or use a secure backend service.
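The lookup order in that init (explicit argument first, then the environment) can be factored into a small pure helper, which also makes the fallback easy to unit-test. This is a sketch, not code from the repository:

```swift
import Foundation

/// Resolves an API key: an explicit, non-empty argument wins;
/// otherwise fall back to the environment (or an empty string).
func resolveAPIKey(argument: String,
                   environment: [String: String] = ProcessInfo.processInfo.environment) -> String {
    argument.isEmpty ? (environment["OPENAI_API_KEY"] ?? "") : argument
}
```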
When you first run the app, you'll be prompted to grant:
- Microphone Access: Required for voice input
- Speech Recognition: Required for transcribing voice to text
These permissions can be managed in Settings > Privacy & Security.
```bash
# Open in Xcode
open momentum.xcodeproj

# Or build from command line
xcodebuild -project momentum.xcodeproj -scheme momentum -configuration Debug -sdk iphonesimulator build
```

Manual Entry:
- Tap the blue "+" button at the bottom
- Enter your task title
- Tap "Add"
Voice Input:
- Long press (500ms) the blue microphone button at the bottom
- Speak your task (supports Traditional Chinese and English)
- Release to add the task
- Long press (800ms) on any main task card
- Wait for the AI to analyze and break down the task
- Subtasks will appear below the main task with estimated durations
- Long press (1000ms) on a subtask card
- Watch the progress bar fill from left to right
- Release when complete - the subtask will turn green and move to the bottom
- Tap the trash icon on the right side of any main task card
- The task and all its subtasks will be deleted
```
momentum/
├── Models/
│   └── TaskItem.swift               # SwiftData model
├── Services/
│   ├── OpenAIService.swift          # AI task breakdown
│   └── SpeechRecognitionService.swift  # Voice recognition
├── Views/
│   ├── MainTaskCard.swift           # Main task UI component
│   ├── SubtaskCard.swift            # Subtask UI component
│   └── VoiceInputButton.swift       # Voice input button
├── Utilities/
│   └── HapticManager.swift          # Haptic feedback helper
├── ContentView.swift                # Main app view
└── momentumApp.swift                # App entry point
```
The app follows MVVM (Model-View-ViewModel) architecture:
- Models: SwiftData models for data persistence
- Views: SwiftUI views with reusable components
- Services: Business logic for AI and speech recognition
- Utilities: Helper classes for haptics and other shared functionality
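To make the Model layer concrete, here is a simplified, framework-free sketch of the task shape. The real `TaskItem.swift` is a SwiftData `@Model` class; the names and fields below beyond `TaskItem`'s general shape are illustrative:

```swift
import Foundation

// Simplified value-type stand-in for the SwiftData task model.
struct Subtask {
    var title: String
    var estimatedMinutes: Int
    var isCompleted: Bool = false
}

struct MainTask {
    var title: String
    var subtasks: [Subtask] = []

    /// A task counts as done once every subtask is complete.
    var isCompleted: Bool {
        !subtasks.isEmpty && subtasks.allSatisfy { $0.isCompleted }
    }
}
```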
- SwiftUI: Modern declarative UI framework
- SwiftData: Apple's persistence framework
- Speech Framework: Native speech recognition
- OpenAI API: GPT-4o-mini for task breakdown
- Combine: Reactive programming for state management
Edit the long press durations in:
- Voice Input: `VoiceInputButton.swift`, the line with `LongPressGesture(minimumDuration: 0.5)`
- Task Breakdown: `MainTaskCard.swift`, the line with `LongPressGesture(minimumDuration: 0.8)`
- Task Completion: `SubtaskCard.swift`, the line with `LongPressGesture(minimumDuration: 1.0)`
Animations use spring physics. Adjust in `ContentView.swift`:

```swift
withAnimation(.spring(response: 0.6, dampingFraction: 0.7)) {
    // Animation code
}
```

Edit the `cardWidth` computed property in `SubtaskCard.swift`:
```swift
private var cardWidth: CGFloat {
    switch subtask.estimatedMinutes {
    case ..<10:   return 180   // under 10 mins
    case 10..<20: return 240   // 10-19 mins
    case 20..<30: return 300   // 20-29 mins
    default:      return 360   // 30+ mins
    }
}
```

Error: "OpenAI API key is missing"
- Make sure you've set the `OPENAI_API_KEY` environment variable
- Or hardcode it in `OpenAIService.swift` for testing
Error: "OpenAI API error: 401"
- Your API key is invalid
- Check your OpenAI account and generate a new key
Error: "Speech recognition authorization denied"
- Go to Settings > Privacy & Security > Speech Recognition
- Enable speech recognition for the Momentum app
Error: "Microphone permission denied"
- Go to Settings > Privacy & Security > Microphone
- Enable microphone access for the Momentum app
Error: "Multiple commands produce Info.plist"
- This has been fixed by removing the manual Info.plist
- Privacy permissions are now in the project build settings
- All task data is stored locally on your device using SwiftData
- Voice recordings are processed by Apple's Speech Recognition API
- Task breakdowns are sent to OpenAI's API (requires internet connection)
- No task data is stored on remote servers (except for the brief API call to OpenAI)
This project is for demonstration purposes. Feel free to use and modify as needed.
Built with SwiftUI and powered by:
- OpenAI GPT-4o-mini
- Apple's Speech Recognition Framework
- SF Pro and SF Pro Rounded fonts