- About
- Getting Started
- Requirements
- Installation
- Usage
- API Reference
- Response Schema
- Running Locally
- Docker
- Deployment Notes
- Caveats & Operational Notes
- Contributing
- License
This is a small FastAPI service that scrapes exchange rates from the official website of Banco Central de Venezuela and returns them as JSON.
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See Deployment Notes for live usage considerations.
- Python 3.12 or higher
- FastAPI
- Requests-HTML (uses lxml under the hood)
- Uvicorn
- Docker (optional)
- Clone the repository.
- Install the required packages: `pip install -r requirements.txt`
- Run the application: `python run.py`
- Start the server.
- Send a GET request to `http://localhost:8000/get-data`.
- The response will be in JSON format.
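The response can be consumed with a few lines of Python. This sketch parses the JSON payload documented in the Response Schema section; the `parse_rates` helper is illustrative and not part of the service:

```python
import json
from datetime import datetime

def parse_rates(payload: str) -> dict:
    """Parse a /get-data JSON response and normalize the documented fields.

    Illustrative helper; field names come from this README's schema.
    """
    data = json.loads(payload)
    # Five numeric exchange rates, per the documented schema.
    for key in ("usd", "euro", "yuan", "lira", "rublo"):
        data[key] = float(data[key])
    # scrap_date has hour precision, e.g. "2026-02-02 15".
    data["scrap_date"] = datetime.strptime(data["scrap_date"], "%Y-%m-%d %H")
    return data

# Example payload copied from the Response Schema section of this README.
sample = '''{
  "usd": 36.12, "euro": 39.38, "yuan": 5.01, "lira": 1.10, "rublo": 0.41,
  "scrap_date": "2026-02-02 15",
  "valid_date": ["Lunes", " 02 Febrero 2026"]
}'''

rates = parse_rates(sample)
print(rates["usd"])  # 36.12
```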
- `GET /`: returns a short message pointing to the main endpoint.
- `GET /get-data`: scrapes BCV and returns exchange rates as JSON.
The response is a JSON object with numeric exchange rates and metadata:
- `usd`: number
- `euro`: number
- `yuan`: number
- `lira`: number
- `rublo`: number
- `scrap_date`: string (local timestamp, hour precision)
- `valid_date`: array of strings (as provided by the site)
Example response:
```json
{
  "usd": 36.12,
  "euro": 39.38,
  "yuan": 5.01,
  "lira": 1.10,
  "rublo": 0.41,
  "scrap_date": "2026-02-02 15",
  "valid_date": ["Lunes", " 02 Febrero 2026"]
}
```

The default entrypoint is:

```shell
python run.py
```

For local development with auto-reload, run Uvicorn directly:

```shell
uvicorn main:app --reload
```

You can also configure runtime settings with environment variables:
- `HOST` (default: `0.0.0.0`)
- `PORT` (default: `8000`)
- `RELOAD` (default: `false`)
- `WORKERS` (default: `2 * CPU + 1`)
- `LOG_LEVEL` (default: `info`)
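The `WORKERS` default above follows the common `2 * CPU + 1` Gunicorn heuristic. A sketch of how such defaults can be read from the environment; the actual code in `run.py` may differ, and the variable names below mirror the list above:

```python
import multiprocessing
import os

def default_workers() -> int:
    # Common heuristic for Gunicorn/Uvicorn worker pools: 2 * CPUs + 1.
    return 2 * multiprocessing.cpu_count() + 1

# Environment variables override the defaults listed above.
host = os.getenv("HOST", "0.0.0.0")
port = int(os.getenv("PORT", "8000"))
workers = int(os.getenv("WORKERS", str(default_workers())))
reload_enabled = os.getenv("RELOAD", "false").lower() == "true"
log_level = os.getenv("LOG_LEVEL", "info")
```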
The repository uses a lowercase `dockerfile`, so pass it explicitly with `-f`. Build the image:

```shell
docker build -f dockerfile -t bcv-scraper .
```

Run the container:

```shell
docker run -p 8000:8000 bcv-scraper
```

By default, the container starts Gunicorn with Uvicorn workers. You can override worker and port settings via environment variables:
- `WORKERS` (default: `4`)
- `PORT` (default: `8000`)
For production, a common pattern is to use Gunicorn with Uvicorn workers, for example:
```shell
gunicorn -k uvicorn.workers.UvicornWorker -w 4 -b 0.0.0.0:8000 main:app
```

Adjust the worker count based on CPU and load. If you prefer `run.py`, set `WORKERS`, `LOG_LEVEL`, and `RELOAD` appropriately.
Every call to /get-data triggers a fresh scrape. For production usage, consider caching, scheduled scraping, or a data store to reduce load on the source website.
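One lightweight option is an in-process TTL cache in front of the scrape, so repeated requests within a freshness window reuse the last result. This is a sketch, not the service's actual behavior; the `fetch_rates` callable and the 5-minute TTL are illustrative assumptions:

```python
import time

CACHE_TTL_SECONDS = 300  # assumed 5-minute freshness window
_cache = {"data": None, "fetched_at": 0.0}

def get_rates_cached(fetch_rates):
    """Return cached rates, re-scraping only when the TTL has expired.

    `fetch_rates` stands in for the scraper call; pass your own callable.
    """
    now = time.monotonic()
    if _cache["data"] is None or now - _cache["fetched_at"] > CACHE_TTL_SECONDS:
        _cache["data"] = fetch_rates()
        _cache["fetched_at"] = now
    return _cache["data"]

# Usage sketch: the first call scrapes, later calls within the TTL do not.
calls = []
def fake_fetch():
    calls.append(1)
    return {"usd": 36.12}

get_rates_cached(fake_fetch)
get_rates_cached(fake_fetch)
print(len(calls))  # 1: the second call was served from cache
```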
If your infrastructure allows, a crawler platform such as Crawlab can run the scrape on a schedule and store results for the API to read.
- TLS verification is disabled in the scraper; this is not recommended for production environments.
- The scraper relies on specific HTML/XPath selectors. If the BCV page structure changes, scraping may break.
- Rate limits and politeness policies should be respected when calling the endpoint frequently.
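Because the scraper depends on specific selectors, failing loudly when they stop matching makes breakage easier to detect than silently returning garbage. A defensive sketch using the stdlib `xml.etree.ElementTree` as a stand-in for the Requests-HTML/lxml parsing the project actually uses; the markup and path below are illustrative, not the real BCV page structure:

```python
import xml.etree.ElementTree as ET

def extract_rate(page_xml: str, path: str) -> float:
    """Extract one rate, raising a clear error when the selector misses."""
    node = ET.fromstring(page_xml).find(path)
    if node is None or node.text is None:
        raise ValueError(f"selector matched nothing; page layout may have changed: {path}")
    # BCV publishes rates with a comma decimal separator, e.g. "36,12".
    return float(node.text.strip().replace(",", "."))

# Illustrative markup; not the real BCV page.
sample = '<html><body><div id="dolar"><strong> 36,12 </strong></div></body></html>'
rate = extract_rate(sample, ".//div[@id='dolar']/strong")
print(rate)  # 36.12
```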
Contributions are welcome! Please feel free to submit a pull request.
No license specified yet.