This repository contains my submission for the Full Stack Engineer / Technical Product Manager (Fresher) role at BeyondChats.
The objective of this assignment was to demonstrate:
- End-to-end system thinking
- Backend–frontend integration
- Practical scraping & API design
- Ability to ship reliable software under time constraints
The system is divided into three logical phases, as requested:
Phase 1 – Laravel Backend (Content System)
- Scrapes the 5 oldest BeyondChats blog articles
- Normalizes and stores structured content in MySQL
- Exposes RESTful CRUD APIs
- Acts as the single source of truth for content

Phase 2 – Node.js AI Content Service
- Fetches the latest article from the Laravel API
- Collects competitor references (mocked / configurable)
- Generates an improved version using an LLM-style workflow
- Publishes generated content back to Laravel as a new article

Phase 2 is intentionally minimal but functional, prioritizing correctness and extensibility over excessive orchestration.

Phase 3 – React Frontend
- Fetches articles from the Laravel API
- Displays original vs generated articles distinctly
- Deployed as a static frontend for fast delivery
Tech Stack

Backend
- Laravel 9
- PHP 8
- MySQL
- Guzzle HTTP Client
- Symfony DomCrawler
- Docker (production)

Frontend
- React (Create React App)
- Fetch API
- Responsive CSS

AI Service
- Node.js
- LLM-style text generation workflow
- Extensible for OpenAI / Gemini / Claude APIs
Architecture

Deployed content pipeline:
[ React (Netlify) ] → [ Laravel API (Render) ] → [ MySQL Database (Railway) ]

AI generation flow:
[ Node.js AI Service ] → [ Laravel API ]
How It Works

Phase 1 – Scraping & Storage
- Laravel scrapes the BeyondChats blog listing
- Extracts valid article URLs
- Scrapes title & content (see the sketch after this list)
- Stores articles with `source_type = original`
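The scraper itself is not reproduced in this README. As a rough illustration of how the Guzzle + DomCrawler stack listed above can be wired together, here is a minimal sketch; the listing URL, CSS selectors, and the `Article::create` call are assumptions for illustration, not the repository's actual code.

```php
<?php
// Illustrative sketch only — not the repository's actual scraper.
// Assumed: listing URL, CSS selectors, and the Article model name.
// Note: Crawler::filter() requires the symfony/css-selector package.

require 'vendor/autoload.php';

use GuzzleHttp\Client;
use Symfony\Component\DomCrawler\Crawler;

$http = new Client(['timeout' => 15]);

// Hypothetical blog listing URL.
$listingHtml = (string) $http->get('https://beyondchats.com/blogs/')->getBody();

// Collect candidate article links (assumed selector).
$links = (new Crawler($listingHtml))
    ->filter('article a')
    ->each(fn (Crawler $node) => $node->attr('href'));

// Keep the 5 oldest, assuming the listing is ordered newest-first.
$oldest = array_slice(array_reverse(array_values(array_unique($links))), 0, 5);

foreach ($oldest as $url) {
    $page = new Crawler((string) $http->get($url)->getBody());

    $article = [
        'title'       => trim($page->filter('h1')->text()),
        'content'     => trim($page->filter('article')->text()),
        'source_url'  => $url,
        'source_type' => 'original',
    ];

    // Inside the Laravel app this would be something like Article::create($article).
    print_r($article);
}
```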
Phase 2 – AI Generation
- Node.js fetches the latest article
- Generates an improved version (simulated LLM workflow)
- Saves the new article with `source_type = generated` and `reference_urls` populated (the receiving endpoint is sketched after this list)
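On the Laravel side, that write-back can land on the same `POST /api/articles` endpoint listed further down. Below is a minimal sketch of what the store action might validate, reusing the column names from the schema below; the controller and model names are illustrative, not confirmed from the repository.

```php
<?php

namespace App\Http\Controllers;

use App\Models\Article;
use Illuminate\Http\Request;

class ArticleController extends Controller
{
    // Sketch: accepts both original and generated articles from API clients,
    // including the write-back from the Node.js AI service.
    public function store(Request $request)
    {
        $data = $request->validate([
            'title'          => 'required|string|max:255',
            'content'        => 'required|string',
            'source_url'     => 'nullable|string',
            'source_type'    => 'required|in:original,generated',
            // Stored as JSON; the Article model would cast this to an array.
            'reference_urls' => 'nullable|array',
        ]);

        $article = Article::create($data);

        return response()->json($article, 201);
    }
}
```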
Phase 3 – Presentation
- React frontend calls `/api/articles`
- Displays articles with clear labeling
- Separates original vs generated content visually
Database Schema

Table: `articles` (a matching migration sketch follows the table)
| Column | Type |
|---|---|
| id | bigint |
| title | string |
| content | longText |
| source_url | string |
| source_type | enum (original, generated) |
| reference_urls | json (nullable) |
| created_at | timestamp |
| updated_at | timestamp |
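For reference, a Laravel 9 migration matching these columns would look roughly like the following; this is reconstructed from the table above as a sketch, not copied from the repository.

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::create('articles', function (Blueprint $table) {
            $table->id();                                             // bigint primary key
            $table->string('title');
            $table->longText('content');
            $table->string('source_url');
            $table->enum('source_type', ['original', 'generated']);
            $table->json('reference_urls')->nullable();
            $table->timestamps();                                     // created_at / updated_at
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('articles');
    }
};
```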
API Endpoints

`GET /api/scrape-articles`
Scrapes and stores the 5 oldest BeyondChats articles.

Standard CRUD for articles (a route sketch follows this list):
- `GET /api/articles`
- `GET /api/articles/{id}`
- `POST /api/articles`
- `PUT /api/articles/{id}`
- `DELETE /api/articles/{id}`
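These endpoints map naturally onto a `routes/api.php` along the following lines; this is an illustrative sketch, and the actual controller names in the repository may differ.

```php
<?php

// routes/api.php — illustrative sketch of the endpoints listed above.

use App\Http\Controllers\ArticleController;
use App\Http\Controllers\ScrapeController;
use Illuminate\Support\Facades\Route;

// One-off scrape of the 5 oldest BeyondChats articles.
Route::get('/scrape-articles', [ScrapeController::class, 'scrape']);

// index, store, show, update, destroy — the CRUD endpoints listed above,
// served under the /api prefix.
Route::apiResource('articles', ArticleController::class);
```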
Local Setup

1. Clone the Repository

```bash
git clone https://github.com/your-username/beyondchats-fullstack-assignment.git
cd beyondchats-fullstack-assignment
```

2. Backend Setup

```bash
cd backend-laravel/beyondchats-backend
composer install
php artisan key:generate
php artisan migrate
php artisan serve
```

3. Frontend Setup

```bash
cd frontend-react/beyondchats-frontend
npm install
npm start
```

4. AI Service

```bash
cd ai-node
npm install
node index.js
```
🌐 Live Deployment
- Frontend (Netlify): https://fullstack-and-ai-content-sytem.netlify.app/
- Backend API (Render): https://beyondchats-fullstack-assignment.onrender.com/
- Database: Railway Cloud MySQL
⚖️ Trade-offs & Design Decisions
- Prioritized a stable backend pipeline over complex AI orchestration
- Used Create React App for predictable frontend behavior
- Implemented the AI pipeline minimally but correctly
- Focused on correctness, clarity, and extensibility
🧪 Evaluation Alignment
This submission emphasizes:
- Backend completeness
- Clear system design
- Production-style deployment
- Honest engineering trade-offs
🙋‍♂️ Final Notes
This project reflects how I approach real-world problems:
- Build incrementally
- Deploy early
- Make conscious trade-offs
- Optimize for reliability and clarity
Thank you for reviewing my submission.
— Rushikesh Dharme