SF Pulse is a TypeScript app for tracking San Francisco restaurant openings and local events. It serves Astro-rendered pages through the Node adapter, stores data in PostgreSQL, and can optionally publish realtime updates and browser push notifications.
Render deployment is defined in render.yaml:
- Web (`sf-pulse`): builds the app, runs migrations pre-deploy, starts `dist/server/entry.mjs`. Health check at `/api/healthz`.
- Cron (`sf-pulse-daily`): runs daily at 7 AM PDT. Executes `dist/bin/trigger-workflow.cjs` to trigger the daily-refresh Render Workflow via the SDK API.
- Database (`sf-pulse-db`): PostgreSQL.
- Key-value (`sf-pulse-realtime`): Redis for cross-instance SSE fanout.
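The repo's render.yaml is the source of truth; as a rough illustration only, the web and cron entries take approximately this shape under Render's Blueprint spec (commands and field values below are assumptions inferred from the descriptions above, not copied from the repo):

```yaml
# Illustrative fragment only; the repo's render.yaml is authoritative.
services:
  - type: web
    name: sf-pulse
    runtime: node
    buildCommand: npm ci --include=dev && npm run build
    preDeployCommand: npm run migrate        # assumed migration entry point
    startCommand: node dist/server/entry.mjs
    healthCheckPath: /api/healthz
  - type: cron
    name: sf-pulse-daily
    runtime: node
    schedule: "0 14 * * *"                   # 7 AM PDT expressed in UTC
    buildCommand: npm ci --include=dev && npm run build
    startCommand: node dist/bin/trigger-workflow.cjs
databases:
  - name: sf-pulse-db
```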
The workflow service (sf-pulse-workflow) must be created manually in the Render Dashboard as a Workflow — Render Workflows are not supported in Blueprint YAML. See below for instructions.
Dockerfile provides a production image based on node:22-slim.
For production, supply environment variables through the host platform. Do not rely on .env.local outside local development.
Before deploying the Blueprint, create a Render env group that all services will share.
Dashboard → Env Groups → New Env Group → name it sf-pulse-env.
Add the following variables:
| Variable | Value |
|---|---|
| `CRON_SECRET` | Generate with `openssl rand -hex 32` |
| `HOST` | `0.0.0.0` |
| `NODE_ENV` | `production` |
| `LLM_API_KEY` | Your OpenAI or Anthropic API key |
| `LLM_PROVIDER` | (optional) `openai` or `anthropic` — inferred from the key if omitted |
| `LLM_MODEL` | (optional) model override |
| `DATABASE_URL` | Leave empty for now — fill in after step 3 |
| `REDIS_URL` | Leave empty for now — fill in after step 3 |
| `VAPID_PUBLIC_KEY` | (optional) required only for push notifications |
| `VAPID_PRIVATE_KEY` | (optional) required only for push notifications |
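`CRON_SECRET` just needs to be a long random string. If `openssl` isn't handy, Node's crypto module produces an equivalent secret (a convenience sketch, not repo tooling):

```typescript
import { randomBytes } from "node:crypto";

// 32 random bytes, hex-encoded: 64 characters, the same shape
// as the output of `openssl rand -hex 32`.
const cronSecret: string = randomBytes(32).toString("hex");
console.log(cronSecret);
```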
Render Workflows are not supported in Blueprint YAML, so sf-pulse-workflow must be created by hand in the Dashboard.
- Dashboard → New → Workflow → connect the repo, branch `main`.
- Set Name to `sf-pulse-workflow`.
- Set Build Command to `npm ci --include=dev && npm run build`.
- Set Start Command to `node dist/bin/workflow.cjs`.
- Set Plan to Starter.
- Under Environment, add the `sf-pulse-env` env group.
- Save and deploy. Once it's live, go to Settings and note the Slug — you'll need it for `SF_PULSE_WORKFLOW_SLUG` in step 3.
This Blueprint creates four services from render.yaml: web, cron trigger, PostgreSQL, and Redis. All services pull shared config from the sf-pulse-env env group created in step 1.
Once the Blueprint deploys:
- Copy the `DATABASE_URL` from the `sf-pulse-db` database and the `REDIS_URL` from the `sf-pulse-realtime` key-value store, and set them in the `sf-pulse-env` env group.
- Set these two env vars on the `sf-pulse-daily` cron service:
| Variable | How to get it |
|---|---|
| `RENDER_API_KEY` | Dashboard → Account Settings → API Keys → Create API Key |
| `SF_PULSE_WORKFLOW_SLUG` | The slug from `sf-pulse-workflow` Settings (step 2) |
To confirm the pipeline works before the first scheduled cron fires at 7 AM PDT:
- Go to `sf-pulse-daily` in the Dashboard → Trigger Run.
- Check `sf-pulse-workflow` logs for task execution output.
- Ensure the `sf-pulse` web service frontend URL is displaying restaurant information as expected.
- Node.js >=22.12.0
- npm
- TypeScript
- Astro 6 + Node adapter
- PostgreSQL
- Optional Redis for multi-instance realtime fanout
- Render Workflows (`@renderinc/sdk`) for the daily scraping pipeline
- `src/pages/`: Astro pages and API routes
- `src/scripts/`: browser-side progressive enhancement for the home page
- `src/server/api/`: request handlers shared between Astro routes and the test HTTP server
- `server/`: storage, migrations, refresh logic, security, and realtime plumbing
- `shared/`: isomorphic timeline/date/identity/filter helpers used by server and browser
- `client/`: client-side date and timeline library
- `bin/cron-refresh/`: source scrapers (Eater SF, SFist, Michelin, FunCheap, FAMSF, Cal Academy)
- `bin/workflow/`: Render Workflow task definitions for the daily scraping pipeline
- `bin/`: build, migration, cron trigger, and workflow entry scripts
- `migrations/`: plain SQL migrations (0001–0010)
- `patches/`: local `patch-package` fixes for pg-mem and pgsql-ast-parser
- `render.yaml`: Render deployment definition
- `Dockerfile`: production container build
- Node.js 22.12.0 or newer
- npm
- A PostgreSQL database
- VAPID keys (only if you want browser push notifications)
- Redis only if you want cross-instance SSE/pubsub behavior locally
- Install dependencies:

```sh
npm ci
```

`npm ci` runs `patch-package` after install. That is expected in this repo.
- Start PostgreSQL and create a database for the app:

```sh
brew install postgresql@14
brew services start postgresql@14
createdb sf_pulse_db
```

- Create `.env.local` in the repo root. `.env.local` is gitignored and is the expected place for local secrets.
To generate VAPID keys locally:
```sh
npx web-push generate-vapid-keys
```

The full `.env.local` file should look like this:
```
DATABASE_URL=postgres://localhost/sf_pulse_db

# Optional local overrides
HOST=127.0.0.1
PORT=5000
APP_URL=http://127.0.0.1:5000

# Required
VAPID_PUBLIC_KEY=<public-key>
VAPID_PRIVATE_KEY=<private-key>

# Required for protected delete endpoints and for production parity
CRON_SECRET=<random-secret>

# Optional: only needed for multi-instance realtime fanout
REDIS_URL=redis://127.0.0.1:6379

# Required for menu and article parsing (see docs/openai-api-permissions.md)
OPENAI_API_KEY=
```

- Run migrations:

```sh
npm run migrate
```

- Start the dev server:

```sh
npm run dev
```

- Open http://127.0.0.1:5000.
A fresh database starts empty. If you want real app data locally, run the refresh job once after migrating:
```sh
node --env-file=.env.local --import tsx bin/cron-refresh.ts
```

That job fetches restaurant and event candidates, writes new items to Postgres, and then attempts menu discovery for restaurants that still need it.
If you only want to rerun menu discovery against existing restaurant rows:
```sh
node --env-file=.env.local --import tsx bin/seed-menus.ts
```

- In one terminal, run:

```sh
render workflows dev -- npx tsx --env-file=.env.local bin/workflow.ts
```

- In a second terminal, run:

```sh
render workflows tasks list --local
```

- Select `daily-refresh`
- Select `run`
- Enter `[]` as input

That's it! Watch your workflow run.
| Variable | Required | Purpose |
|---|---|---|
| `DATABASE_URL` | Yes | PostgreSQL connection string. The app will throw on first DB use if this is missing. |
| `HOST` | No | HTTP bind host. Defaults to `0.0.0.0`; `127.0.0.1` is fine for local dev. |
| `PORT` | No | HTTP port. Defaults to `5000`. |
| `APP_URL` | No in dev, yes in production unless `RENDER_EXTERNAL_URL` is present | Public base URL used by RSS and other externally visible links. |
| `RENDER_EXTERNAL_URL` | Render only | Production fallback for the public base URL when `APP_URL` is not set. |
| `VAPID_PUBLIC_KEY` | Only for push notifications | Public web-push key exposed to the browser. |
| `VAPID_PRIVATE_KEY` | Only for push notifications | Private web-push key used on the server. |
| `CRON_SECRET` | Recommended locally, required in production | Protects mutation endpoints that require the `x-cron-secret` header. |
| `REDIS_URL` | No | Enables Redis-backed pub/sub for realtime fanout across instances. Without it, realtime stays in-process. |
| `OPENAI_API_KEY` | Yes (for cron/workflow) | Enables AI-powered dietary flag extraction and article parsing. Required when running the cron pipeline. See docs/openai-api-permissions.md. |
| `NODE_ENV` | Set by scripts/runtime | `development`, `test`, or `production`. |
| `RENDER_API_KEY` | Cron service only | Render API token used by the cron trigger to start workflows. |
| `SF_PULSE_WORKFLOW_SLUG` | Cron service only | Render Workflow slug used to identify the daily-refresh workflow. |
| `LLM_API_KEY` | No | API key for OpenAI or Anthropic. Enables LLM-based structured extraction from articles and menus. Without it, only regex-based sources (SFist, Michelin) produce results. |
| `LLM_PROVIDER` | No | `openai` (default) or `anthropic`. |
| `LLM_MODEL` | No | Model override. Defaults to `gpt-4o-mini` (OpenAI) or `claude-sonnet-4-20250514` (Anthropic). |
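The provider inference mentioned for `LLM_PROVIDER` is presumably keyed off the API key's prefix, since Anthropic keys start with `sk-ant-` and OpenAI keys with `sk-`. A hypothetical sketch of that rule (the function name and logic are illustrative, not the repo's code):

```typescript
type Provider = "openai" | "anthropic";

// Infer the provider from the key prefix unless LLM_PROVIDER is set.
// This mirrors the documented behavior; the repo's implementation may differ.
function inferProvider(apiKey: string, override?: Provider): Provider {
  if (override) return override; // an explicit LLM_PROVIDER wins
  return apiKey.startsWith("sk-ant-") ? "anthropic" : "openai";
}
```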
To generate VAPID keys locally:
```sh
npx web-push generate-vapid-keys
```

- `npm run dev`: start the app in development mode with Astro and `.env.local`
- `npm run migrate`: apply SQL migrations from `migrations/`
- `npm run build`: build the Astro app + esbuild server bundles (migrate, cron, workflow, trigger-workflow) into `dist/`
- `npm start`: start the production server from `dist/server/entry.mjs`
- `npm run typecheck`: run TypeScript checks for app and test configs
- `npm test`: run the Node test suite (uses pg-mem, no real DB needed)
- There is a single Astro/Node process in local dev. You do not run a separate frontend dev server.
- Tests mostly use `pg-mem`, so `npm test` does not need a real `DATABASE_URL`.
- The repo carries local patches for `pg-mem` and `pgsql-ast-parser`. If SQL-related tests start failing unexpectedly, check `patches/` and `docs/pg-mem-upstreaming.md`.
- Protected delete routes expect `x-cron-secret` to match `CRON_SECRET`.
- Browser push is optional. If VAPID keys are missing, the main app still runs, but push endpoints and subscription flows will not.
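A constant-time comparison is the standard way to check a shared secret like `x-cron-secret` against `CRON_SECRET`. A minimal sketch of the idea, assuming Node's `crypto` module (this is not the repo's actual middleware):

```typescript
import { timingSafeEqual } from "node:crypto";

// Compare a presented header value against the expected secret without
// leaking timing information. The length check comes first because
// timingSafeEqual requires equal-length buffers.
function cronSecretMatches(presented: string | null, expected: string): boolean {
  if (!presented || presented.length !== expected.length) return false;
  return timingSafeEqual(Buffer.from(presented), Buffer.from(expected));
}
```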
- `GET /api/restaurants` — list restaurants
- `DELETE /api/restaurants/:id` — delete a restaurant (requires `x-cron-secret`)
- `GET /api/events` — list events
- `DELETE /api/events/:id` — delete an event (requires `x-cron-secret`)
- `GET /api/updates` — recent items
- `GET /api/updates/last-updated` — last-updated timestamp
- `GET /api/healthz` — health check
- `GET /api/events-stream` — SSE realtime stream
- `GET /api/rss.xml` — RSS feed
- `GET /api/push/vapid-key` — VAPID public key
- `POST /api/push/subscribe` — register push subscription
- `POST /api/push/unsubscribe` — remove push subscription
- `GET /api/push/subscription` — check subscription status
- `POST /api/push/preferences` — update notification preferences
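As a concrete illustration of the `x-cron-secret` requirement on the delete endpoints, here is a hedged sketch that builds such a request with the standard Fetch API (the base URL and id are placeholders, not repo code):

```typescript
// Build a DELETE request for one of the protected endpoints.
// BASE_URL and the id are illustrative; use your real service URL.
const BASE_URL = "http://127.0.0.1:5000";

function deleteRestaurantRequest(id: string, cronSecret: string): Request {
  return new Request(`${BASE_URL}/api/restaurants/${id}`, {
    method: "DELETE",
    headers: { "x-cron-secret": cronSecret },
  });
}

const req = deleteRestaurantRequest("42", process.env.CRON_SECRET ?? "dev-secret");
// Send with: await fetch(req);
```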