Conversation

@jcheng5 (Collaborator) commented Apr 4, 2025

Do not merge this PR. Already accounted for in #19, #20 via c6a24a7

Close this PR when #20 is merged.


This PR is intended to make querychat extensible beyond in-memory Pandas dataframes, including but not limited to:

  1. SQLite
  2. PostgreSQL
  3. Snowflake
  4. Databricks
  5. Redshift?
  6. BigQuery?

It introduces a protocol called `DataSource`, which so far has two concrete implementations: `DataFrameSource` and `SQLiteSource`.
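To make the shape of this concrete, here is a minimal sketch of what such a protocol could look like. The class names mirror the ones in this PR, but the method names and signatures are my illustration, not the actual querychat API; it uses only the standard library:

```python
import sqlite3
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class DataSource(Protocol):
    """Anything querychat can describe to the LLM and run SQL against."""

    def get_schema(self) -> str:
        """Describe the table so the LLM can write queries against it."""
        ...

    def execute_query(self, sql: str) -> list[tuple[Any, ...]]:
        """Run a SQL query and return the result rows."""
        ...


class SQLiteSource:
    """A DataSource backed by a SQLite database file (or ':memory:')."""

    def __init__(self, path: str, table: str) -> None:
        self._conn = sqlite3.connect(path)
        self._table = table

    def get_schema(self) -> str:
        # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
        cols = self._conn.execute(f"PRAGMA table_info({self._table})").fetchall()
        return "\n".join(f"{name} {ctype}" for _, name, ctype, *_ in cols)

    def execute_query(self, sql: str) -> list[tuple[Any, ...]]:
        return self._conn.execute(sql).fetchall()
```

Because `DataSource` is a structural protocol, `SQLiteSource` (or a `DataFrameSource` that runs SQL over an in-memory frame) satisfies it without inheriting from it.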

TODO

  • Use narwhals instead of Pandas directly
  • Have options to limit the number of rows retrieved, or perhaps retrieve in chunks or a cursor, so big databases won't blow up your Shiny app
  • PostgreSQL
  • Snowflake
  • Databricks
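For the row-limit TODO item, one way chunked retrieval could work is a generator over a DB-API cursor, so a large result set never fully materializes in memory. This is an illustrative sketch using the stdlib `sqlite3` module, not code from this PR:

```python
import sqlite3
from collections.abc import Iterator
from typing import Any


def query_in_chunks(
    conn: sqlite3.Connection,
    sql: str,
    chunk_size: int = 1000,
) -> Iterator[list[tuple[Any, ...]]]:
    """Yield result rows in fixed-size chunks instead of fetching them all.

    Keeps memory bounded, so querying a big table can't blow up the
    Shiny app hosting querychat.
    """
    cursor = conn.execute(sql)
    while True:
        chunk = cursor.fetchmany(chunk_size)
        if not chunk:
            break
        yield chunk
```

A caller can then stop after the first chunk (an effective row limit) or stream all chunks into a display widget page by page.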

@jcheng5 (Collaborator, Author) commented Apr 18, 2025

I've decided to focus on SQLAlchemy, which gives us all the features we want for all the databases we care about, as far as I can tell. Now you can do this:

Install SQLAlchemy first (`pip install SQLAlchemy`), then:

from pathlib import Path

from sqlalchemy import create_engine

import querychat
from querychat import SQLAlchemySource  # exact import path may differ

querychat_config = querychat.init(
    SQLAlchemySource(
        create_engine(f"sqlite:///{Path(__file__).parent / 'titanic.sqlite3'}"),
        "titanic",
    ),
    greeting=greeting,  # greeting and data_desc are defined elsewhere in the app
    data_description=data_desc,
)

This may simplify further in the future so that you only need to pass a connection string rather than calling create_engine yourself, but Claude 3.5 Sonnet feels strongly that this would be surprising for SQLAlchemy users 😄

@blairj09 See https://docs.snowflake.com/en/developer-guide/python-connector/sqlalchemy for doing this with Snowflake.
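For reference, the Snowflake route in those docs amounts to building a `snowflake://` URL (the snowflake-sqlalchemy package registers that dialect) and handing it to create_engine as above. Every value below is a placeholder, not a real account:

```python
# Hypothetical Snowflake connection URL for SQLAlchemy; all values are
# placeholders. In real code, pull the password from an env var or use
# key-pair auth instead of embedding it.
user = "JANE"
password = "secret"
account = "myorg-myaccount"
database = "TITANIC_DB"
schema = "PUBLIC"

url = f"snowflake://{user}:{password}@{account}/{database}/{schema}"

# With snowflake-sqlalchemy installed, this URL can then be passed to
# sqlalchemy.create_engine(url) and on to SQLAlchemySource, exactly as
# in the SQLite example above.
```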

@jcheng5 (Collaborator, Author) commented Jun 3, 2025

We've moved development to generic-datasource-improvements since James is presenting off of generic-datasource for Snowflake and Databricks conferences these next couple of weeks.

@chendaniely (Contributor)

Closing since #20 is merged.

cpsievert added a commit that referenced this pull request Dec 18, 2025
Apply text improvements across R and Python documentation:

- Reframe data privacy language: emphasize we're NOT sending raw data
  to LLM for complex math operations
- Soften tone around auto-generated greetings: describe as "downsides"
  rather than "slow, wasteful, and non-deterministic"
- Remove duplicate "under the hood" phrasing
- Add local model example (gpt-oss:20b)
- Use friendlier duckdb function in R (duckdb_read_csv)

Addresses unresolved feedback items #2, #4, #5, #7, #10 from PR #162

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>