
Commit d07a062

Read me updates
Parent: 90f3cd7


README.md

Lines changed: 9 additions & 109 deletions
@@ -1,30 +1,19 @@
 # Used EV Price Tracker
 
-Track used electric vehicle prices across multiple sources with automated scraping and visualization.
+A website tracking used EV prices, live [here](https://quicklywilliam.github.io/usedevpricetracker/). It's part market tracker, part price guide, and part shopping tool. This is currently a small hobby project, but bug reports and feature requests are welcome.
 
 ## Features
 
-- **Automated Daily Scraping**: GitHub Actions runs scrapers daily at midnight UTC
-- **Multiple Sources**: CarMax, Carvana, Platt Auto, and extensible to more
-- **11 EV Models Tracked**: Tesla Model 3/Y/S/X, Nissan Leaf/Ariya, Chevy Bolt EV/EUV, Ford Mustang Mach-E, Hyundai Ioniq 5, Volkswagen ID.4
+- **Multiple Sources**: CarMax, Carvana, and Autotrader. Easily extensible to more
+- **Tracks Individual EV Models**: Currently tracks 16 different models; see [tracked-models.json](https://github.com/quicklywilliam/usedevpricetracker/blob/main/config/tracked-models.json).
 - **Interactive Visualizations**:
-  - Overview chart showing average prices across all models
-  - Click the labels for detailed charts with price ranges and individual listings
-- **Static Deployment**: Hosted on GitHub Pages with no backend required
-
-## Live Demo
-
-Visit the live tracker at: https://quicklywilliam.github.io/usedevpricetracker/
 
 ## Architecture
 
-- **Data Storage**: JSON files in `/data/{source}/{date}.json` committed to repository
-- **Scraping**: Puppeteer-based scrapers running in GitHub Actions
+- **Static Deployment**: Hosted on GitHub Pages with no backend required
+- **Automated Daily Data Ingestion**: GitHub Actions runs scrapers daily at midnight UTC
+- **Simple Data Storage**: JSON files in `/data/{source}/{date}.json` committed to repository
 - **Frontend**: React app with Chart.js visualizations
-- **Deployment**: GitHub Pages for static hosting
-- **Workflows**:
-  - `deploy.yml`: Runs on every push to main, builds and deploys frontend only
-  - `scrape-and-deploy.yml`: Runs daily at midnight UTC, scrapes all sources, commits data, then deploys
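Those committed JSON snapshots are the entire data layer: the frontend fetches them straight from the repository, which is what makes the no-backend deployment work. As a rough illustration, a minimal sketch of reading one snapshot outside the app (the file path is hypothetical, field names follow the Data Format section removed further down in this diff, and an ES-module setup is assumed):

```javascript
// Minimal sketch: load one committed /data/{source}/{date}.json snapshot
// and average asking prices for a single model. The path is illustrative;
// field names follow the (removed) Data Format section below.
import { readFile } from 'node:fs/promises';

const snapshot = JSON.parse(
  await readFile('data/carvana/2025-10-20.json', 'utf8')
);

const prices = snapshot.listings
  .filter((l) => l.model === 'Model 3')
  .map((l) => l.price);

const avg = prices.reduce((sum, p) => sum + p, 0) / prices.length;
console.log(`${snapshot.source} ${snapshot.date}: avg Model 3 price $${Math.round(avg)}`);
```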
 
 ## Quick Start
 
@@ -46,9 +35,7 @@ node scrapers/run-all.js
 
 Run a specific scraper:
 ```bash
-node scrapers/carmax/scrape.js
-node scrapers/carvana/scrape.js
-node scrapers/plattauto/scrape.js
+node scrapers/run-all.js --source=carvana
 ```
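The `--source` value evidently matches the scraper's directory name under `scrapers/`. A hypothetical sketch of how such a flag can be parsed with plain `process.argv` (the actual run-all.js may differ):

```javascript
// Hypothetical sketch of a --source filter; not the actual run-all.js.
const flag = process.argv.find((a) => a.startsWith('--source='));
const only = flag ? flag.split('=')[1] : null; // null means run every source

const sources = ['carmax', 'carvana', 'autotrader']; // names from the README
for (const name of sources) {
  if (only && name !== only) continue;
  console.log(`would run scrapers/${name}/scrape.js`);
}
```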
 
 Run mock scraper for testing:
@@ -60,7 +47,7 @@ node scrapers/mock-source/scrape.js
 
 For testing visualizations with multiple days of data:
 ```bash
-node scrapers/generate-mock-history.js
+node scrapers/mock-source/generate-mock-history.js
 ```
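Generating mock history presumably amounts to writing several days of synthetic snapshots in the same `/data/{source}/{date}.json` layout the real scrapers use. A minimal stand-in under that assumption (not the actual generator):

```javascript
// Hypothetical stand-in for generate-mock-history.js: emit a week of
// synthetic snapshots in the /data/{source}/{date}.json layout.
import { mkdir, writeFile } from 'node:fs/promises';

await mkdir('data/mock-source', { recursive: true });
for (let i = 0; i < 7; i++) {
  const date = new Date(Date.now() - i * 86_400_000).toISOString().slice(0, 10);
  const snapshot = {
    source: 'mock-source',
    date,
    listings: [{
      make: 'Tesla', model: 'Model 3', year: 2023, trim: 'Long Range',
      price: 34_000 + Math.round(Math.random() * 4_000),
      mileage: 12_500, url: 'https://example.com/listing',
    }],
  };
  await writeFile(`data/mock-source/${date}.json`, JSON.stringify(snapshot, null, 2));
}
```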
 
 ## Project Structure
@@ -105,61 +92,7 @@ node scrapers/generate-mock-history.js
 
 ## Adding a New Scraper
 
-All scrapers use shared utilities from `scrapers/shared/scraper-utils.js` for consistency:
-
-1. **Create scraper directory**:
-   ```bash
-   mkdir scrapers/newsource
-   ```
-
-2. **Create `scrape.js`** using the template pattern:
-   ```javascript
-   import { setupScraper, scrapeModel } from '../shared/scraper-utils.js';
-
-   const SOURCE_NAME = 'newsource';
-
-   async function scrapeListings(page, make, model) {
-     const url = buildSearchUrl(make, model);
-     await page.goto(url, { waitUntil: 'networkidle0' });
-
-     return await page.evaluate(() => {
-       const listings = [];
-       document.querySelectorAll('.listing-card').forEach(card => {
-         listings.push({
-           price: parseInt(card.querySelector('.price').textContent.replace(/\D/g, '')),
-           year: parseInt(card.querySelector('.year').textContent),
-           trim: card.querySelector('.trim').textContent.trim(),
-           mileage: parseInt(card.querySelector('.mileage').textContent.replace(/\D/g, '')),
-           url: card.querySelector('a').href
-         });
-       });
-       return listings;
-     });
-   }
-
-   function buildSearchUrl(make, model) {
-     return `https://newsource.com/cars/${make}-${model}`;
-   }
-
-   (async () => {
-     const { browser, models } = await setupScraper(SOURCE_NAME);
-
-     try {
-       for (const { make, model } of models) {
-         await scrapeModel(browser, SOURCE_NAME, make, model, scrapeListings);
-       }
-     } finally {
-       await browser.close();
-     }
-   })();
-   ```
-
-3. **Add to run-all.js** in the scrapers array
-
-4. **Test locally**:
-   ```bash
-   node scrapers/newsource/scrape.js
-   ```
+All scrapers use shared utilities from `scrapers/lib` for consistency. See [here](https://github.com/quicklywilliam/usedevpricetracker/blob/main/scrapers/TEMPLATE.md) for more information.
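Assuming the runner's `--source` flag matches the new scraper's directory name, a freshly added scraper can presumably still be exercised on its own:

```bash
node scrapers/run-all.js --source=newsource
```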
 
 ## GitHub Actions Workflows
 
@@ -192,39 +125,6 @@ To manually run the scraping workflow:
 gh workflow run "Scrape Prices and Deploy"
 ```
 
-## Data Format
-
-Each scraper outputs JSON files with this structure:
-
-```json
-{
-  "source": "carmax",
-  "date": "2025-10-20",
-  "listings": [
-    {
-      "make": "Tesla",
-      "model": "Model 3",
-      "year": 2023,
-      "trim": "Long Range",
-      "price": 35990,
-      "mileage": 12500,
-      "url": "https://..."
-    }
-  ]
-}
-```
-
-## Tracked Models
-
-Configured in `config/models.json`:
-
-- Tesla: Model 3, Model Y, Model S, Model X
-- Nissan: Leaf, Ariya
-- Chevrolet: Bolt EV, Bolt EUV
-- Ford: Mustang Mach-E
-- Hyundai: Ioniq 5
-- Volkswagen: ID.4
-
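The model list now lives in `config/tracked-models.json` rather than `config/models.json`. Judging from the removed template's `for (const { make, model } of models)` loop, entries are presumably plain make/model pairs, something like (shape assumed, not confirmed):

```json
[
  { "make": "Tesla", "model": "Model 3" },
  { "make": "Nissan", "model": "Leaf" },
  { "make": "Ford", "model": "Mustang Mach-E" }
]
```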
 ## Development
 
 ### Build for Production
