
Conversation

@balmukund18

  • Create useSpeechRecognition hook with browser Speech Recognition API
  • Add microphone button to NoteEditor for voice input
  • Implement auto-punctuation (period, comma, question mark)
  • Add visual recording indicators with pulse animation
  • Fix duplicate useState imports in Index.tsx
  • Clean up UI: remove voice button from sidebar, fix search positioning
  • Support for voice commands (new line, new paragraph)
  • Mobile-friendly with HTTPS compatibility for production deployment

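The auto-punctuation and voice-command bullets above can be sketched as one small pure function. This is an illustrative sketch, not the PR's actual code — the helper name `applyVoiceCommands` and the exact replacement rules are assumptions based only on the description:

```typescript
// Hypothetical sketch: map spoken punctuation/formatting commands to text.
// The command list mirrors the PR description (period, comma, question mark,
// new line, new paragraph); the rules and the name are assumptions.
const VOICE_COMMANDS: Array<[RegExp, string]> = [
  [/\s*\bperiod\b/gi, "."],
  [/\s*\bcomma\b/gi, ","],
  [/\s*\bquestion mark\b/gi, "?"],
  [/\s*\bnew paragraph\b\s*/gi, "\n\n"],
  [/\s*\bnew line\b\s*/gi, "\n"],
];

function applyVoiceCommands(raw: string): string {
  let text = raw;
  for (const [pattern, replacement] of VOICE_COMMANDS) {
    text = text.replace(pattern, replacement);
  }
  return text.trim();
}
```

For example, `applyVoiceCommands("hello world period new line how are you question mark")` yields `"hello world.\nhow are you?"`.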
@vercel

vercel bot commented Oct 5, 2025

@balmukund18 is attempting to deploy a commit to Dhanush Nehru's projects team on Vercel.

A member of the Team first needs to authorize it.

@balmukund18
Author

@DhanushNehru I have resolved the merge conflict.

@vercel

vercel bot commented Oct 17, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| scratchpad-scribe | Ready | Preview | Comment | Oct 17, 2025 10:09am |

Contributor

Copilot AI left a comment


Pull Request Overview

This PR adds voice-to-text input functionality to the note editor, enabling users to dictate notes using their browser's Speech Recognition API. The implementation includes auto-punctuation, visual recording indicators, and voice command support for formatting.

  • Creates a custom useSpeechRecognition hook that wraps the browser's Speech Recognition API
  • Integrates voice input into the NoteEditor component with a microphone button and visual feedback
  • Fixes UI positioning issues in the search component
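Since the overview says the hook wraps the browser's Speech Recognition API (which Chrome still exposes as the prefixed `webkitSpeechRecognition`), the cross-browser wiring can be sketched as a feature-detection helper. The helper name and the `SpeechWindow` shape below are illustrative assumptions, not the PR's code:

```typescript
// Structural view of the parts of `window` the hook cares about; both
// vendor constructors are optional because browser support varies.
interface SpeechWindow {
  SpeechRecognition?: new () => unknown;
  webkitSpeechRecognition?: new () => unknown;
}

// Return whichever SpeechRecognition constructor is available, or null
// when the browser does not support the Web Speech API at all.
function getSpeechRecognitionCtor(
  w: SpeechWindow
): (new () => unknown) | null {
  return w.SpeechRecognition ?? w.webkitSpeechRecognition ?? null;
}
```

A null result lets the UI hide or disable the microphone button in unsupported browsers instead of failing at runtime.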

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.

| File | Description |
| --- | --- |
| src/hooks/useSpeechRecognition.ts | New hook implementing browser Speech Recognition API with auto-punctuation and voice command processing |
| src/components/NotesSidebar.tsx | Adjusts search icon positioning and padding |
| src/components/NoteEditor.tsx | Integrates voice input functionality with the note editor, updates placeholder text |


Comment on lines +15 to +16
```ts
    SpeechRecognition: any;
    webkitSpeechRecognition: any;
```

Copilot AI Oct 17, 2025


The any type bypasses TypeScript's type safety. Consider defining proper interfaces for SpeechRecognition and webkitSpeechRecognition based on the Web Speech API specification, or use a type assertion with the actual SpeechRecognition type from the DOM library.

Copilot uses AI. Check for mistakes.
Comment on lines +12 to +23
```ts
// Extend Window interface for Speech Recognition
declare global {
  interface Window {
    SpeechRecognition: any;
    webkitSpeechRecognition: any;
  }
}

export const useSpeechRecognition = (): SpeechRecognitionHook => {
  const [transcript, setTranscript] = useState('');
  const [isListening, setIsListening] = useState(false);
  const recognitionRef = useRef<any>(null);
```

Copilot AI Oct 17, 2025


Using any for recognitionRef defeats TypeScript's type checking. Define a proper type for the SpeechRecognition instance or use SpeechRecognition | null to maintain type safety.

Suggested change
```diff
-// Extend Window interface for Speech Recognition
-declare global {
-  interface Window {
-    SpeechRecognition: any;
-    webkitSpeechRecognition: any;
-  }
-}
-
-export const useSpeechRecognition = (): SpeechRecognitionHook => {
-  const [transcript, setTranscript] = useState('');
-  const [isListening, setIsListening] = useState(false);
-  const recognitionRef = useRef<any>(null);
+// Minimal SpeechRecognition interface for type safety
+interface SpeechRecognition {
+  continuous: boolean;
+  interimResults: boolean;
+  lang: string;
+  start(): void;
+  stop(): void;
+  onstart: (() => void) | null;
+  onresult: ((event: SpeechRecognitionEvent) => void) | null;
+  onerror: ((event: SpeechRecognitionErrorEvent) => void) | null;
+  onend: (() => void) | null;
+}
+
+interface SpeechRecognitionEvent {
+  resultIndex: number;
+  results: SpeechRecognitionResultList;
+}
+
+interface SpeechRecognitionResultList {
+  length: number;
+  [index: number]: SpeechRecognitionResult;
+}
+
+interface SpeechRecognitionResult {
+  isFinal: boolean;
+  [index: number]: { transcript: string };
+}
+
+interface SpeechRecognitionErrorEvent {
+  error: string;
+}
+
+// Extend Window interface for Speech Recognition
+declare global {
+  interface Window {
+    SpeechRecognition: new () => SpeechRecognition;
+    webkitSpeechRecognition: new () => SpeechRecognition;
+  }
+}
+
+export const useSpeechRecognition = (): SpeechRecognitionHook => {
+  const [transcript, setTranscript] = useState('');
+  const [isListening, setIsListening] = useState(false);
+  const recognitionRef = useRef<SpeechRecognition | null>(null);
```

```ts
      setIsListening(true);
    };

    recognitionRef.current.onresult = (event: any) => {
```

Copilot AI Oct 17, 2025


The event parameter is typed as any. Use the proper SpeechRecognitionEvent type from the Web Speech API to ensure type safety and enable IntelliSense for the event's properties.
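The typed event Copilot asks for can also be exercised without a browser by mocking the result-list shape. A hedged sketch follows — the structural types and the helper name `collectFinalTranscript` are illustrative, not the PR's code:

```typescript
// Structural stand-ins for the Web Speech API event shape.
interface RecAlternative { transcript: string }
interface RecResult { isFinal: boolean; 0: RecAlternative }
interface RecEvent { resultIndex: number; results: ArrayLike<RecResult> }

// Concatenate only the finalized results, starting from resultIndex
// (results before it were handled by earlier onresult callbacks).
function collectFinalTranscript(event: RecEvent): string {
  let finalText = "";
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const result = event.results[i];
    if (result.isFinal) finalText += result[0].transcript;
  }
  return finalText;
}
```

With a typed event, accessing `event.results[i][0].transcript` gets IntelliSense and compile-time checking instead of silently tolerating typos.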

```ts
  }
};

recognitionRef.current.onerror = (event: any) => {
```

Copilot AI Oct 17, 2025


The event parameter should be typed as SpeechRecognitionErrorEvent instead of any to provide proper type checking for the error property.

Suggested change
```diff
-recognitionRef.current.onerror = (event: any) => {
+recognitionRef.current.onerror = (event: SpeechRecognitionErrorEvent) => {
```

@DhanushNehru
Owner

Please look into the review comments; the CI has also failed. @balmukund18

