A Next.js application that allows users to create AI-generated music from short videos, text descriptions, and images. Built with modern web technologies and designed with a beautiful, Suno-inspired interface.
- Real-time camera access and video recording
- Up to 30-second video capture
- Live preview and playback
- High-quality video processing
- Text descriptions for mood and style
- Image upload for visual context
- Multiple image support
- Drag-and-drop interface
- Integration with Sonu AI for music generation
- Mood and style detection from video content
- High-quality audio output
- Multiple genre support
- Advanced audio player with waveform visualization
- Playback controls (play, pause, seek, volume)
- Download functionality
- Social sharing options
- Suno-inspired design with gradient backgrounds
- Glass morphism effects
- Responsive design for all devices
- Dark theme optimized
- Smooth animations and transitions
- Framework: Next.js 15 with App Router
- Language: TypeScript
- Styling: Tailwind CSS v4
- UI Components: shadcn/ui
- Icons: Lucide React
- State Management: React Hooks
- Audio Processing: Web Audio API
- Video Recording: MediaRecorder API
- Google Cloud Setup:
  - Create a Google Cloud project
  - Enable the Vertex AI API
  - Set up authentication and obtain an access token (e.g. with `gcloud auth print-access-token`)
  - Set the `GCLOUD_ACCESS_TOKEN` environment variable
- Environment Variables: Create a `.env.local` file in the root directory:

```
GCLOUD_ACCESS_TOKEN=your_google_cloud_access_token
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
```
```bash
# Install dependencies
pnpm install

# Run the development server
pnpm dev
```

Open http://localhost:3000 with your browser to see the application.
The application uses a single, streamlined API endpoint:
- `POST /api/generate` - Generates music and returns audio directly
- Input: User enters a prompt describing the desired music
- Generation: API calls Google's Lyria-002 model via Vertex AI
- Processing: Base64 response is converted to audio using the `audio-decode` library
- Output: Audio file is returned directly to the user for immediate playback/download
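The base64-to-binary step of the processing stage can be sketched as follows. This is a minimal illustration, not the repo's exact code: the `bytesBase64Encoded` field name is an assumption based on Vertex AI's typical prediction response shape, and the real route additionally runs the bytes through `audio-decode`:

```typescript
// Sketch: convert a Vertex AI prediction's base64 audio payload to raw bytes.
interface LyriaPrediction {
  bytesBase64Encoded: string; // assumed field name from the Vertex AI response
}

function predictionToAudioBuffer(prediction: LyriaPrediction): Buffer {
  return Buffer.from(prediction.bytesBase64Encoded, "base64");
}

// Example: a tiny payload round-trips losslessly
const raw = Buffer.from([0x52, 0x49, 0x46, 0x46]); // "RIFF", the start of a WAV header
const prediction = { bytesBase64Encoded: raw.toString("base64") };
const decoded = predictionToAudioBuffer(prediction);
```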
```typescript
import { musicApi, handleApiError } from '@/lib/axios';

try {
  const result = await musicApi.generateMusic({
    prompt: 'upbeat jazz piano with light drums',
    negativeTags: 'heavy metal, screaming', // optional
  });

  // Audio blob and metadata are returned together
  const audioUrl = URL.createObjectURL(result.audioBlob);
  console.log(`Generated ${result.duration}s of ${result.sampleRate}Hz audio`);
} catch (error) {
  const errorMessage = handleApiError(error);
  console.error('Generation failed:', errorMessage);
}
```

```typescript
import axios from 'axios';

try {
  const response = await axios.post('/api/generate', {
    prompt: 'upbeat jazz piano with light drums',
    negativeTags: 'heavy metal, screaming', // optional
  }, {
    responseType: 'blob',
    timeout: 300000, // 5 minutes
    headers: {
      'Content-Type': 'application/json',
    },
  });

  const audioBlob = response.data;
  const audioUrl = URL.createObjectURL(audioBlob);

  // Get metadata from headers
  const duration = parseFloat(response.headers['x-audio-duration']);
  const sampleRate = parseInt(response.headers['x-audio-sample-rate']);
  const channels = parseInt(response.headers['x-audio-channels']);
} catch (error) {
  if (axios.isAxiosError(error)) {
    if (error.code === 'ECONNABORTED') {
      console.error('Request timed out');
    } else if (error.response) {
      console.error('API Error:', error.response.data.error);
    } else {
      console.error('Network Error:', error.message);
    }
  }
}
```

| Field | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | Description of the music to generate |
| `negativeTags` | string | No | Elements to avoid in the generation |
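The request body can also be expressed as a TypeScript interface (a sketch for illustration; the repo may define its own types):

```typescript
// Illustrative typing of the POST /api/generate request body.
interface GenerateRequest {
  prompt: string;        // required: description of the music to generate
  negativeTags?: string; // optional: elements to avoid
}

const body: GenerateRequest = {
  prompt: "ambient electronic music with synthesizers",
};

const payload = JSON.stringify(body); // sent as the JSON request body
```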
- Success: Returns audio file (WAV format) with metadata in headers
- Error: Returns JSON with error message
| Header | Description |
|---|---|
| `Content-Type` | `audio/wav` |
| `Content-Disposition` | Filename for download |
| `X-Audio-Duration` | Duration in seconds |
| `X-Audio-Sample-Rate` | Sample rate in Hz |
| `X-Audio-Channels` | Number of audio channels |
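When calling the endpoint with `fetch` instead of axios, the metadata can be read via the standard `Headers` API (a sketch; header names taken from the table above):

```typescript
// Parse audio metadata from the response headers.
function parseAudioMetadata(headers: Headers) {
  return {
    duration: parseFloat(headers.get("X-Audio-Duration") ?? "0"),
    sampleRate: parseInt(headers.get("X-Audio-Sample-Rate") ?? "0", 10),
    channels: parseInt(headers.get("X-Audio-Channels") ?? "0", 10),
  };
}

// Example with a hand-built Headers object (header lookup is case-insensitive):
const meta = parseAudioMetadata(
  new Headers({
    "X-Audio-Duration": "30.0",
    "X-Audio-Sample-Rate": "48000",
    "X-Audio-Channels": "2",
  })
);
// meta = { duration: 30, sampleRate: 48000, channels: 2 }
```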
"upbeat jazz piano with light drums""ambient electronic music with synthesizers""classical violin solo, melancholic""folk guitar with harmonica, happy""lo-fi hip hop beats for studying"
- Frontend: Next.js 15, React 19, TypeScript, Tailwind CSS
- Backend: Next.js API Routes
- HTTP Client: Axios for API requests
- AI Model: Google Vertex AI Lyria-002
- Audio Processing: audio-decode library
- Authentication: Supabase (configured but optional)
```
src/
├── app/
│   ├── api/
│   │   └── generate/
│   │       └── route.ts   # Main generation endpoint
│   ├── globals.css
│   ├── layout.tsx
│   └── page.tsx           # Main UI component
├── lib/
│   └── supabase.ts        # Supabase configuration
└── middleware.ts          # Auth middleware
```
```bash
# Development server
pnpm dev

# Build for production
pnpm build

# Start production server
pnpm start

# Lint code
pnpm lint
```

- Connect your GitHub repository to Vercel
- Add environment variables in Vercel dashboard
- Deploy automatically on push
Make sure to set the required environment variables:
- `GCLOUD_ACCESS_TOKEN`
- `NEXT_PUBLIC_SUPABASE_URL`
- `NEXT_PUBLIC_SUPABASE_ANON_KEY`
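These can be guarded with a small startup check so a missing variable fails fast instead of surfacing as a confusing runtime error (an illustrative helper, not part of the repo):

```typescript
// Report which required environment variables are unset.
const REQUIRED_ENV_VARS = [
  "GCLOUD_ACCESS_TOKEN",
  "NEXT_PUBLIC_SUPABASE_URL",
  "NEXT_PUBLIC_SUPABASE_ANON_KEY",
] as const;

function missingEnvVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED_ENV_VARS.filter((name) => !env[name]);
}

// Example: with only the token set, the two Supabase variables are reported missing
const missing = missingEnvVars({ GCLOUD_ACCESS_TOKEN: "token" });
// missing = ["NEXT_PUBLIC_SUPABASE_URL", "NEXT_PUBLIC_SUPABASE_ANON_KEY"]
```

In a real app this would be called once with `process.env` during startup, throwing if the list is non-empty.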
- Requires valid Google Cloud access token
- Subject to Vertex AI API rate limits and quotas
- Generated audio is in WAV format
- No persistent storage (songs are not saved)
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
This project is private and proprietary.
- Node.js 18+
- npm or pnpm
- Clone the repository:

```bash
git clone https://github.com/yourusername/vibe-tune.git
cd vibe-tune
```

- Install dependencies:

```bash
npm install
# or
pnpm install
```

- Run the development server:

```bash
npm run dev
# or
pnpm dev
```

- Open http://localhost:3000 in your browser.
```
vibe-tune/
├── src/
│   ├── app/
│   │   ├── create/        # Song creation page with video recording
│   │   ├── demo/          # Demo showcase page
│   │   ├── login/         # User authentication
│   │   ├── signup/        # User registration
│   │   ├── song/[id]/     # Individual song player page
│   │   ├── globals.css    # Global styles
│   │   ├── layout.tsx     # Root layout
│   │   └── page.tsx       # Home page
│   ├── components/
│   │   └── ui/            # shadcn/ui components
│   └── lib/
│       └── utils.ts       # Utility functions
├── public/                # Static assets
└── package.json
```
- Landing page with hero section
- Feature showcase
- Call-to-action buttons
- Navigation to other pages
- User authentication form
- Social login options
- Password visibility toggle
- Form validation
- User registration form
- Password confirmation
- Terms agreement
- Social signup options
- Video recording interface
- Text description input
- Image upload functionality
- Song generation trigger
- Audio player with controls
- Waveform visualization
- Song metadata display
- Download and sharing options
- App functionality showcase
- Example results
- Statistics and testimonials
The video recording functionality uses the MediaRecorder API to capture video from the user's camera:

```typescript
const startRecording = async () => {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: { ideal: 1280 }, height: { ideal: 720 } },
    audio: true,
  });

  const chunks: Blob[] = [];
  const recorder = new MediaRecorder(stream, {
    mimeType: "video/webm;codecs=vp9",
  });

  // Collect recorded data so it can be assembled into a Blob on stop
  recorder.ondataavailable = (event) => chunks.push(event.data);
  recorder.onstop = () => {
    const videoBlob = new Blob(chunks, { type: "video/webm" });
    // ...preview or upload videoBlob
  };

  recorder.start();
};
```

Custom audio player with waveform visualization and full controls:
```typescript
const togglePlay = () => {
  if (audioRef.current) {
    if (isPlaying) {
      audioRef.current.pause();
    } else {
      audioRef.current.play();
    }
    setIsPlaying(!isPlaying);
  }
};
```

The app uses a custom design system inspired by Suno with:
- Color Palette: Purple and pink gradients with dark backgrounds
- Typography: Geist Sans and Geist Mono fonts
- Effects: Glass morphism, backdrop blur, and smooth animations
- Components: Consistent card-based layout with subtle borders
The app is designed to integrate with:
- Sonu AI: For music generation from video content
- Authentication: User management and session handling
- File Storage: Video and image upload handling
- Audio Processing: Server-side audio generation and storage
- Create the component in `src/components/ui/`
- Follow shadcn/ui patterns
- Use TypeScript for type safety
- Include proper accessibility attributes
- Use Tailwind CSS classes
- Follow the established color scheme
- Maintain responsive design
- Include hover and focus states
- Use React hooks for local state
- Keep components focused and single-purpose
- Implement proper error handling
- Use TypeScript interfaces for data structures
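For example, a song's client-side state might be typed like this (hypothetical names, purely for illustration):

```typescript
// Hypothetical song model with a pure, type-safe status-transition helper.
interface Song {
  id: string;
  title: string;
  duration: number; // seconds
  status: "recording" | "generating" | "ready" | "error";
}

function advanceStatus(song: Song): Song {
  // Map each status to its successor; "ready" and "error" are terminal.
  const next: Record<Song["status"], Song["status"]> = {
    recording: "generating",
    generating: "ready",
    ready: "ready",
    error: "error",
  };
  return { ...song, status: next[song.status] };
}

const song: Song = { id: "1", title: "Demo", duration: 30, status: "recording" };
const updated = advanceStatus(song);
// updated.status === "generating"
```

Keeping transitions in a pure function like this makes them easy to unit-test and to plug into a `useReducer` hook.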
- Connect your GitHub repository to Vercel
- Configure environment variables
- Deploy automatically on push to main branch
The app can be deployed to any platform that supports Next.js:
- Netlify
- Railway
- DigitalOcean App Platform
- AWS Amplify
Create a `.env.local` file with:

```
# API Keys
SONU_API_KEY=your_sonu_api_key
NEXT_PUBLIC_APP_URL=http://localhost:3000

# Authentication
NEXTAUTH_SECRET=your_nextauth_secret
NEXTAUTH_URL=http://localhost:3000

# Database (if using)
DATABASE_URL=your_database_url
```

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- Inspired by Suno design and functionality
- Built with shadcn/ui components
- Icons from Lucide
- Powered by Next.js
For support, email support@vibetune.com or create an issue in the GitHub repository.