Commit cursor rules to enhance development workflows #125

Merged
18 changes: 18 additions & 0 deletions .cursor/rules/general.mdc
@@ -0,0 +1,18 @@
---
description: General guidelines
globs:
alwaysApply: true
---
This project uses `uv` for dependency management. To add or remove a dependency, use `uv add <packagename>` or `uv remove <packagename>`. To update a dependency to the latest version, use `uv lock --upgrade-package <packagename>`. For development dependencies, add the `--group dev` flag to these commands. Dependencies can be installed with `uv sync`.

When building out features, always keep changes atomic and make sure to write and run tests. To run tests, use:

```bash
uv run pytest tests # or the path to a specific test file
```

All code should be rigorously type hinted so as to pass a static type check with `mypy`. To run a `mypy` check, use:

```bash
uv run mypy .
```
30 changes: 30 additions & 0 deletions .cursor/rules/routers.mdc
@@ -0,0 +1,30 @@
---
description: Testing FastAPI routes
globs: routers/*.py
alwaysApply: false
---
Here are the five most critical patterns to maintain consistency when adding a new router:

1. **Authentication & Dependency Injection**
- Import `get_authenticated_user` from `utils.dependencies` and include `user: User = Depends(get_authenticated_user)` in the arguments of routes requiring authentication
- Similarly, use the `get_optional_user` dependency for public routes with potential auth status

2. **Validation Patterns**
- Validate requests with type hints in the route signature
- Use `Annotated[str, Form()]` for complex request validation cases involving form data
- Perform business logic validation checks in the route body, raising a custom HTTPException defined in `exceptions/http_exceptions.py`
- Note that all exceptions will be handled by middleware in `main.py` that renders an error template

3. **Permission System**
- Use `user.has_permission(ValidPermissions.X, resource)` for authorization
- Validate organization membership through role relationships
- Check permissions at both route and template levels via `user_permissions`

4. **Database & Transaction Patterns**
- Inject session via `Depends(get_session)`
- Commit after writes and refresh objects where needed
- Use `selectinload` for eager loading relationships
- Follow PRG pattern with RedirectResponse after mutations

5. **Templating**
   - Use Jinja templates from the `/templates` directory in GET routes, and always pass `request` and `user` objects as context
10 changes: 10 additions & 0 deletions .cursor/rules/routers_tests.mdc
@@ -0,0 +1,10 @@
---
description:
globs: tests/routers/test_*.py
alwaysApply: false
---
# Setting test expectations regarding HTTP status codes

Since this is a FastAPI web application, test logic for API endpoints often involves checking status codes. When making a request to an API endpoint, always specify the `follow_redirects` parameter. With `follow_redirects=False`, a redirecting route returns its own status code (typically `303`); with `follow_redirects=True`, the response carries the status code of the route we are redirected to. We mostly use `follow_redirects=False` so as to test routes in isolation, but there may be test cases where following the redirect is more appropriate.

When checking status codes, think carefully to make sure the expected status code is the most appropriate to the situation.
28 changes: 28 additions & 0 deletions .cursor/rules/sqlmodel.mdc
@@ -0,0 +1,28 @@
---
description: Satisfying the type checker when working with SQLModel
globs:
alwaysApply: false
---
Complex SQLModel queries sometimes cause the type checker to choke, even though the queries are valid.

For instance, this error sometimes arises when using `selectinload`:

'error: Argument 1 to "selectinload" has incompatible type "SomeModel"; expected "Literal['*'] | QueryableAttribute[Any]"'

The solution is to explicitly coerce the argument to the appropriate SQLModel type.

E.g., we can resolve the error above by casting the eager-loaded relationship to InstrumentedAttribute:

```python
session.exec(select(SomeOtherModel).options(selectinload(cast(InstrumentedAttribute, SomeOtherModel.some_model))))
```

Similarly, sometimes we get type checker errors when using `delete` or comparison operators like `in_`:

'error: Item "int" of "Optional[int]" has no attribute "in_"'

These can be resolved by wrapping the column in `col` to let the type checker know these are column objects:

```python
session.exec(select(SomeModel).where(col(SomeModel.id).in_([1,2])))
```
12 changes: 12 additions & 0 deletions .cursor/rules/tests.mdc
@@ -0,0 +1,12 @@
---
description: Building, running, and debugging tests
globs: tests/*.py
alwaysApply: false
---
This project uses `uv` for dependency management, so tests must be run with `uv run pytest` to ensure they are run in the project's virtual environment.

The project uses test-driven development, so failing tests are often what we want. The goal is always to ensure that the code is high-quality and fulfills project goals in a production-like environment, *not* that the tests pass at any cost. Rigorous tests are always better than passing tests, and you will be rewarded for test quality!

Session-wide test setup is performed in `tests/conftest.py`. In that file, you will find fixtures that can and should be reused across the test suite, including fixtures for database setup and teardown. We have intentionally used PostgreSQL, not SQLite, in the test suite to keep the test environment as production-like as possible, and you should never change the database engine unless explicitly told to do so.

If you find that the test database is not available, you may need to start Docker Desktop with `systemctl --user start docker-desktop` or the database with `docker compose up`. You may `grep` the `DB_PORT=` line from `.env` if you need to know what port the database is available on. (This environment variable is used for port mapping in `docker-compose.yml` as well as in the `get_connection_url` function defined in `utils/db.py`.) If dropping tables fails during test setup due to changes to the database schema, `docker compose down -v && docker compose up` may resolve the issue.
3 changes: 1 addition & 2 deletions .gitignore
@@ -15,7 +15,6 @@ node_modules
package-lock.json
package.json
.specstory
.cursorrules
.cursor
repomix-output.txt
artifacts/
.cursorindexingignore