Add voice-to-text input functionality #57
base: main
Conversation
balmukund18
commented
Oct 5, 2025
- Create useSpeechRecognition hook with browser Speech Recognition API
- Add microphone button to NoteEditor for voice input
- Implement auto-punctuation (period, comma, question mark)
- Add visual recording indicators with pulse animation
- Fix duplicate useState imports in Index.tsx
- Clean up UI: remove voice button from sidebar, fix search positioning
- Support for voice commands (new line, new paragraph)
- Mobile-friendly with HTTPS compatibility for production deployment
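The auto-punctuation and voice-command bullets above can be sketched as a small pure function over the recognized text. Everything below (the function name, the exact command list, the regex details) is an illustrative assumption, not the PR's actual implementation:

```typescript
// Hypothetical mapping from spoken keywords to punctuation/formatting.
// Longer phrases ("new paragraph") are applied before their prefixes ("new line").
const COMMANDS: Array<[RegExp, string]> = [
  [/\bnew paragraph\b/gi, "\n\n"],
  [/\bnew line\b/gi, "\n"],
  [/\s*\bperiod\b/gi, "."],
  [/\s*\bcomma\b/gi, ","],
  [/\s*\bquestion mark\b/gi, "?"],
];

function applyVoiceCommands(raw: string): string {
  let text = raw;
  for (const [pattern, replacement] of COMMANDS) {
    text = text.replace(pattern, replacement);
  }
  // Collapse stray spaces left around inserted newlines.
  return text.replace(/[ \t]+\n/g, "\n").replace(/\n[ \t]+/g, "\n").trim();
}
```

With this sketch, dictating "hello comma world period new line next sentence" yields `"hello, world.\nnext sentence"`.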
@balmukund18 is attempting to deploy a commit to Dhanush Nehru's projects team on Vercel. A member of the team first needs to authorize it.
@DhanushNehru I have resolved the merge conflict.
Pull Request Overview
This PR adds voice-to-text input functionality to the note editor, enabling users to dictate notes using their browser's Speech Recognition API. The implementation includes auto-punctuation, visual recording indicators, and voice command support for formatting.
- Creates a custom `useSpeechRecognition` hook that wraps the browser's Speech Recognition API
- Integrates voice input into the NoteEditor component with a microphone button and visual feedback
- Fixes UI positioning issues in the search component
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| src/hooks/useSpeechRecognition.ts | New hook implementing browser Speech Recognition API with auto-punctuation and voice command processing |
| src/components/NotesSidebar.tsx | Adjusts search icon positioning and padding |
| src/components/NoteEditor.tsx | Integrates voice input functionality with the note editor, updates placeholder text |
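Since the hook declares both `SpeechRecognition` and `webkitSpeechRecognition` on `Window`, it presumably picks whichever constructor the browser exposes. A hedged sketch of that fallback (the helper name and shapes are assumed, not taken from the PR):

```typescript
// Instance and constructor shapes assumed for illustration only.
type RecognitionInstance = { lang: string; start(): void; stop(): void };
type RecognitionCtor = new () => RecognitionInstance;

// Prefer the standard name, then the webkit-prefixed one (Chrome/Safari).
function getRecognitionCtor(w: {
  SpeechRecognition?: RecognitionCtor;
  webkitSpeechRecognition?: RecognitionCtor;
}): RecognitionCtor | null {
  return w.SpeechRecognition ?? w.webkitSpeechRecognition ?? null;
}
```

In the real hook this would be called with `window`; a `null` result signals a browser without Speech Recognition support, which is why a feature check is needed before wiring up the microphone button.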
```ts
  SpeechRecognition: any;
  webkitSpeechRecognition: any;
```
Copilot AI
commented
Oct 17, 2025
The `any` type bypasses TypeScript's type safety. Consider defining proper interfaces for `SpeechRecognition` and `webkitSpeechRecognition` based on the Web Speech API specification, or use a type assertion with the actual `SpeechRecognition` type from the DOM library.
```ts
// Extend Window interface for Speech Recognition
declare global {
  interface Window {
    SpeechRecognition: any;
    webkitSpeechRecognition: any;
  }
}
```
```ts
export const useSpeechRecognition = (): SpeechRecognitionHook => {
  const [transcript, setTranscript] = useState('');
  const [isListening, setIsListening] = useState(false);
  const recognitionRef = useRef<any>(null);
```
Copilot AI
commented
Oct 17, 2025
Using `any` for `recognitionRef` defeats TypeScript's type checking. Define a proper type for the SpeechRecognition instance or use `SpeechRecognition | null` to maintain type safety.
Suggested change:

```ts
// Minimal SpeechRecognition interface for type safety
interface SpeechRecognition {
  continuous: boolean;
  interimResults: boolean;
  lang: string;
  start(): void;
  stop(): void;
  onstart: (() => void) | null;
  onresult: ((event: SpeechRecognitionEvent) => void) | null;
  onerror: ((event: SpeechRecognitionErrorEvent) => void) | null;
  onend: (() => void) | null;
}

interface SpeechRecognitionEvent {
  resultIndex: number;
  results: SpeechRecognitionResultList;
}

interface SpeechRecognitionResultList {
  length: number;
  [index: number]: SpeechRecognitionResult;
}

interface SpeechRecognitionResult {
  isFinal: boolean;
  [index: number]: { transcript: string };
}

interface SpeechRecognitionErrorEvent {
  error: string;
}

// Extend Window interface for Speech Recognition
// (the Window properties are constructors, so type them as `new () => SpeechRecognition`;
// `typeof` cannot be applied to an interface)
declare global {
  interface Window {
    SpeechRecognition: new () => SpeechRecognition;
    webkitSpeechRecognition: new () => SpeechRecognition;
  }
}

export const useSpeechRecognition = (): SpeechRecognitionHook => {
  const [transcript, setTranscript] = useState('');
  const [isListening, setIsListening] = useState(false);
  const recognitionRef = useRef<SpeechRecognition | null>(null);
```
```ts
  setIsListening(true);
};

recognitionRef.current.onresult = (event: any) => {
```
Copilot AI
commented
Oct 17, 2025
The event parameter is typed as `any`. Use the proper `SpeechRecognitionEvent` type from the Web Speech API to ensure type safety and enable IntelliSense for the event's properties.
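To make the reviewer's point concrete, here is a sketch of how a typed `onresult` handler might separate final from interim text, using a minimal event shape modeled on the Web Speech API. The helper name and the "Like"-suffixed types are illustrative assumptions, not code from this PR:

```typescript
// Event shapes assumed for illustration (suffixed "Like" to avoid
// clashing with the DOM lib's own Web Speech types).
interface SpeechRecognitionResultLike {
  isFinal: boolean;
  0: { transcript: string };
}
interface SpeechRecognitionEventLike {
  resultIndex: number;
  results: { length: number; [index: number]: SpeechRecognitionResultLike };
}

// Split an event's results into finalized text and in-progress (interim) text,
// starting from resultIndex so already-processed results are not re-read.
function collectTranscript(
  event: SpeechRecognitionEventLike
): { final: string; interim: string } {
  let final = "";
  let interim = "";
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const result = event.results[i];
    if (result.isFinal) final += result[0].transcript;
    else interim += result[0].transcript;
  }
  return { final, interim };
}
```

Typing the event this way gives IntelliSense on `resultIndex`, `results`, and `isFinal` instead of silently accepting any property access.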
```ts
  }
};

recognitionRef.current.onerror = (event: any) => {
```
Copilot AI
commented
Oct 17, 2025
The event parameter should be typed as `SpeechRecognitionErrorEvent` instead of `any` to provide proper type checking for the error property.
Suggested change:

```ts
recognitionRef.current.onerror = (event: SpeechRecognitionErrorEvent) => {
```
Please look into these issues; the CI has also failed. @balmukund18