A JavaScript example app that removes image backgrounds entirely in the browser using the RMBG-1.4 image segmentation model and Transformers.js — no server GPU or cloud inference API required. Original images and transparent PNG cutouts are stored in Backblaze B2 cloud storage.
Upload a photo (JPG, PNG, WEBP, GIF, BMP), remove its background client-side with one click, and save both the original and the transparent cutout to S3-compatible Backblaze B2 object storage. Inference runs via WebGPU with an automatic WebAssembly (WASM) fallback.
- No GPU server costs — the RMBG-1.4 model runs in your browser via WebGPU/WASM, so there's no inference server to pay for
- Privacy — images never leave the user's device for processing
- No API rate limits — process as many images as you want, completely offline after the model loads
- Simple to deploy — a static frontend + a lightweight Node.js backend for pre-signed URLs is all you need
- Transformers.js — Run Hugging Face AI models in the browser with WebGPU and WebAssembly
- RMBG-1.4 — State-of-the-art background removal model for image segmentation by BRIA AI
- Backblaze B2 — S3-compatible cloud object storage at $6/TB/month
- Client-side AI image segmentation: Run RMBG-1.4 background removal entirely in the browser — no server GPU required
- WebGPU-accelerated inference: Hardware-accelerated ML inference with automatic WASM fallback
- Cost-effective cloud storage: Store original images and transparent PNG cutouts in Backblaze B2
- Secure direct uploads: Browser-to-cloud uploads using S3 pre-signed URLs
- Simple architecture: End-to-end flow from upload → remove background → store
User → Upload Image → B2 Storage
↓
Browser RMBG-1.4 Inference (Transformers.js) → Remove Background
↓
Cutout Image → B2 Storage
- User selects/drops image file in browser
- Backend generates pre-signed PUT URL for B2
- Browser uploads original image directly to B2
- Browser loads RMBG-1.4 model via Transformers.js (briaai/RMBG-1.4)
- Browser performs client-side inference to remove background
- Browser generates transparent PNG cutout
- Backend generates pre-signed PUT URL for processed image
- Browser uploads background-removed cutout to B2
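The upload steps above can be sketched in browser JavaScript. The endpoint path `/api/presign-upload` and the response field names are assumptions modeled on the API examples later in this README — match them to your backend's actual routes:

```javascript
// Sketch of steps 2–3: ask the backend for a pre-signed PUT URL, then
// upload the file bytes straight to B2 so they never pass through the server.
// The '/api/presign-upload' path is an assumption, not the repo's actual route.
async function uploadOriginal(file) {
  const presignRes = await fetch('/api/presign-upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filename: file.name, contentType: file.type }),
  });
  const { uploadUrl, publicUrl, fileId } = await presignRes.json();

  // PUT directly to B2 using the signed URL
  const putRes = await fetch(uploadUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });
  if (!putRes.ok) throw new Error(`B2 upload failed: ${putRes.status}`);
  return { publicUrl, fileId };
}
```

The same pattern repeats for the cutout: request a second pre-signed URL, then PUT the generated PNG blob.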
- E-commerce product photos — Remove backgrounds from product images for clean listings
- Profile pictures — Automatic portrait cutouts for avatars and headshots
- Design and marketing — Create transparent PNG assets without Photoshop or paid APIs
- Real estate — Clean up property photos for listings
- Fashion — Isolate models and clothing on transparent backgrounds
- Node.js 18+
- Backblaze B2 Account (free tier available)
- Create a bucket
- Generate an Application Key with `readFiles`, `writeFiles`, and `writeBuckets` permissions
```
git clone https://github.com/backblaze-b2-samples/b2-transformerjs-background-removal.git
cd b2-transformerjs-background-removal/backend
npm install
cp .env.example .env
```

Edit `.env` with your B2 credentials:

```
B2_ENDPOINT=https://s3.us-west-002.backblazeb2.com
B2_REGION=us-west-002
B2_KEY_ID=your_key_id_here
B2_APP_KEY=your_app_key_here
B2_BUCKET=your-bucket-name
```

Get your B2 endpoint and region from your bucket details page.

```
npm start
```

That's it! The server automatically:
- ✅ Configures B2 CORS for browser uploads
- ✅ Serves both frontend and API
- ✅ Opens at http://localhost:3000
- Open http://localhost:3000 in your browser
- Upload an image file (JPG, PNG, WEBP)
- Click "Remove Background"
- View before/after comparison and access files in B2
⚠️ First run downloads the RMBG-1.4 model (~176MB) - this takes 2-3 minutes
This example uses RMBG-1.4 by BRIA AI, a state-of-the-art image segmentation model optimized for background removal. It runs in the browser via Transformers.js with WebGPU acceleration and an automatic WebAssembly fallback for broader browser support.
- Model: briaai/RMBG-1.4 — background removal / image segmentation
- Library: Transformers.js — Run Hugging Face transformer models in the browser
- Inference backend: WebGPU (automatic WASM fallback)
- Model size: ~176MB (cached in browser after first download)
- Speed: ~2-5 seconds per image (varies by resolution and GPU)
- Output: PNG with alpha transparency
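The automatic WASM fallback noted above can be implemented with a small feature check — a hypothetical helper, not lifted from the app's source:

```javascript
// Pick the best available Transformers.js device: WebGPU when the browser
// exposes a usable adapter, otherwise the WASM backend.
async function pickDevice(nav = globalThis.navigator) {
  if (nav?.gpu) {
    try {
      const adapter = await nav.gpu.requestAdapter();
      if (adapter) return 'webgpu';
    } catch {
      // adapter request failed — fall through to WASM
    }
  }
  return 'wasm';
}

// Usage: AutoModel.from_pretrained('briaai/RMBG-1.4', { device: await pickDevice() })
```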
This example demonstrates client-side transformer model inference using the Transformers.js library:

```javascript
import { AutoModel, AutoProcessor, RawImage } from '@huggingface/transformers';

// Load RMBG-1.4 model for background removal
const model = await AutoModel.from_pretrained('briaai/RMBG-1.4', {
  device: 'webgpu',
});
const processor = await AutoProcessor.from_pretrained('briaai/RMBG-1.4');

// Run inference on image
const image = await RawImage.fromURL(imageUrl);
const { pixel_values } = await processor(image);
const { output } = await model({ input: pixel_values });

// Scale the predicted mask to the original size and apply it as the alpha
// channel (mask application follows the Transformers.js background-removal example)
const mask = await RawImage.fromTensor(output[0].mul(255).to('uint8'))
  .resize(image.width, image.height);
const canvas = document.createElement('canvas');
canvas.width = image.width;
canvas.height = image.height;
const ctx = canvas.getContext('2d');
ctx.drawImage(image.toCanvas(), 0, 0);
const pixels = ctx.getImageData(0, 0, image.width, image.height);
for (let i = 0; i < mask.data.length; ++i) pixels.data[4 * i + 3] = mask.data[i];
ctx.putImageData(pixels, 0, 0); // canvas.toBlob(..., 'image/png') yields the cutout
```

- Provider: Backblaze B2
- API: S3-compatible API with pre-signed URLs
- Pricing: $6/TB/month storage, uploads are FREE
- Documentation: B2 S3-Compatible API Docs
Input: JPG, PNG, WEBP, GIF, BMP
Output: PNG with alpha transparency
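A hypothetical client-side guard for these input types (the MIME list mirrors the formats above):

```javascript
// Accept only the image types the app supports.
const SUPPORTED_TYPES = new Set([
  'image/jpeg', 'image/png', 'image/webp', 'image/gif', 'image/bmp',
]);

function isSupportedImage(file) {
  return SUPPORTED_TYPES.has(file.type);
}
```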
- Chrome 113+ (WebGPU support)
- Edge 113+
- Opera 99+
- Safari 18+ (WebGPU experimental)
- Firefox (WASM fallback, no WebGPU yet)
Requires WebAssembly and ES6 modules support.
If auto-setup fails (missing permissions), run manually:
```
npm run setup-cors
```

Required B2 Key Permissions:
- `listBuckets`
- `readFiles`
- `writeFiles`
- `writeBuckets` ← Required for CORS setup
Alternative - B2 CLI:

```
b2 update-bucket --cors-rules '[
  {
    "corsRuleName": "allowBrowserUploads",
    "allowedOrigins": ["*"],
    "allowedHeaders": ["*"],
    "allowedOperations": ["s3_put", "s3_get", "s3_head"],
    "maxAgeSeconds": 3600
  }
]' <bucket-name> allPublic
```

Alternative - B2 Web Console:
- Go to https://secure.backblaze.com/b2_buckets.htm
- Click your bucket → Bucket Settings → CORS Rules
- Add the rules shown above
Request:

```json
{
  "filename": "photo.jpg",
  "contentType": "image/jpeg"
}
```

Response:

```json
{
  "uploadUrl": "https://...",
  "publicUrl": "https://...",
  "key": "images/uuid.jpg",
  "fileId": "uuid"
}
```

Request:

```json
{
  "fileId": "uuid"
}
```

Response:

```json
{
  "uploadUrl": "https://...",
  "publicUrl": "https://...",
  "key": "cutouts/uuid_cutout.png"
}
```

Railway / Render / Fly.io:
- Set environment variables from `.env`
- Deploy `backend/` directory
- Update frontend `apiUrl` to deployed URL
Docker:

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY backend/package*.json ./
RUN npm install
COPY backend/ ./
CMD ["node", "server.js"]
```

Static Hosting (Netlify, Vercel, Cloudflare Pages):
- Deploy `frontend/` directory
- Set API URL in settings or hardcode in HTML

B2 Static Hosting:
- Upload `frontend/index.html` to B2 bucket
- Enable website hosting on bucket
- Access via B2 website URL
- First load downloads model (~176MB, one-time)
- Processing time depends on image resolution
- Browser must stay open during inference
- Very large images (>4K) may be slow
- WebGPU not yet supported in Firefox (uses slower WASM)
- Add batch processing for multiple images
- Support custom background colors/images
- Add edge refinement controls
- Progressive rendering for large images
- Download button for processed images
- Comparison slider for before/after
- Try alternative models (U2-Net, MODNet)
- Add WebWorker for non-blocking inference
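As a starting point for the batch-processing idea above, a sequential driver keeps only one inference on the GPU at a time (a hypothetical helper, not part of the repo):

```javascript
// Run a background-removal function over many files one at a time,
// reporting progress after each image completes.
async function processBatch(files, removeBackground, onProgress = () => {}) {
  const results = [];
  for (let i = 0; i < files.length; i++) {
    results.push(await removeBackground(files[i]));
    onProgress(i + 1, files.length);
  }
  return results;
}
```

Sequential processing is deliberate: WebGPU inference is memory-hungry, so running images concurrently can exhaust GPU memory on modest hardware.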
- Transformers.js Documentation — Run Hugging Face AI models in the browser with WebGPU and WebAssembly
- Transformers.js GitHub — Source code and examples
- RMBG-1.4 Model Card — Background removal image segmentation model by BRIA AI
- Backblaze B2 Documentation — Cloud storage API docs
- B2 S3-Compatible API — Use standard S3 SDKs with Backblaze B2
- WebGPU API — Browser GPU acceleration for ML inference
Problem: Browser shows CORS error when uploading.
Solution:
- Run `npm run setup-cors` in the backend directory
- Or manually configure CORS on your B2 bucket (see Setup section)
- Verify CORS is set: Go to B2 Console → Your Bucket → Settings → CORS Rules
Problem: First run takes a long time.
Solution:
- RMBG-1.4 is ~176MB and downloads on first use
- Model is cached by browser for subsequent uses
- Try a faster internet connection
- Check browser console for download progress
Problem: Browser doesn't support WebGPU.
Solution:
- Use Chrome 113+, Edge 113+, or Opera 99+
- Firefox will fall back to WASM (slower but works)
- Update browser to latest version
- Check chrome://gpu to verify WebGPU status
Problem: Frontend can't connect to backend API.
Solution:
- Verify backend is running: `curl http://localhost:3000/health`
- Check API URL in frontend matches backend (default: `http://localhost:3000`)
- Look for CORS errors in backend logs
This project is licensed under the MIT License. See the LICENSE file for details.
