Refactor architecture with services layer #58
Conversation
- Add new services module with AccountService, BudgetService, and TransactionService classes
- Refactor FireflyClient methods to return raw API models instead of converted ones
- Update MCP tools to use service classes for business logic and model conversions
- Update test fixtures to use new service layer for test data setup

- Changed indentation style from tabs to spaces across all Python files
- Added docstrings to modules in src/lampyrid/ and test files
- Updated pyproject.toml ruff configuration for space-based indentation
📝 Walkthrough (Summary by CodeRabbit)

Adds a services layer and refactors the Firefly HTTP client to an async, parametric client; replaces config with a Pydantic Settings model; updates MCP tools to delegate to services; revises models (bulk results, atomic bulk flag); and adds extensive tests and fixtures.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant MCP as MCP Tool
    participant Service as Service Layer
    participant Client as FireflyClient
    participant API as Firefly III API
    MCP->>Service: request domain operation (e.g., get_budget_summary(req))
    Service->>Client: call HTTP methods (list budgets / limits / available ...)
    Client->>API: HTTP requests (GET/POST /api/v1/...)
    API-->>Client: JSON responses
    Client-->>Service: parsed Firefly models
    Service->>Service: aggregate/compute (sums, percentages, mapping)
    Service-->>MCP: return Lampyrid domain models
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 5 passed
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main      #58      +/-   ##
==========================================
+ Coverage   94.49%   99.81%   +5.31%
==========================================
  Files          11       15       +4
  Lines        2560     2661     +101
==========================================
+ Hits         2419     2656     +237
+ Misses        141        5     -136
==========================================
```

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@src/lampyrid/clients/firefly.py`:
- Around line 32-43: The AsyncClient self._client created in __init__ is never
closed—add a cleanup API: implement an async aclose(self) that awaits
self._client.aclose(), and also make the class an async context manager by
adding async def __aenter__(self) -> "Firefly": return self and async def
__aexit__(self, exc_type, exc, tb): await self.aclose(); update any users to use
"async with Firefly(...) as client" or call await client.aclose() when finished.
In `@src/lampyrid/models/lampyrid_models.py`:
- Around line 96-98: Replace the naive datetime default in the model's date
field with the project's timezone-aware helper: change the Field default_factory
from datetime.now to utc_now for the date attribute in the class containing
"date: datetime = Field(...)" so it matches other request models like
CreateWithdrawalRequest; also add or update the import for utc_now where the
current code imports datetime so the symbol is available.
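For reference, a small sketch of the suggested change; `utc_now` here is a stand-in for the project's own timezone-aware helper:

```python
from datetime import datetime, timezone

from pydantic import BaseModel, Field


def utc_now() -> datetime:
    # Stand-in for the project's helper: timezone-aware "now" instead of a naive datetime.
    return datetime.now(timezone.utc)


class ExampleRequest(BaseModel):
    # default_factory keeps the default lazy, so each instance gets a fresh aware timestamp.
    date: datetime = Field(default_factory=utc_now)
```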
In `@src/lampyrid/services/transactions.py`:
- Around line 327-336: The bulk-update loop in the method currently aborts on
the first failure and re-raises, losing any successful results; instead, iterate
over req.updates calling update_transaction for each, on exception capture and
append an error record (including update_req.transaction_id and the error) to an
errors list while continuing to process remaining updates, and append successful
results to updated_transactions; after the loop, either return a structure
containing both updated_transactions and errors or raise a single aggregated
exception that includes the partial updated_transactions and the collected
errors so callers can access partial results (refer to req.updates,
update_transaction, and updated_transactions when making the change).
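A sketch of the continue-on-error loop described in the comment above; the function shape and result dict are illustrative, not the project's actual signatures:

```python
async def bulk_update(service, req):
    """Illustrative: keep going after per-item failures and report both outcomes."""
    updated_transactions = []
    errors = []
    for update_req in req.updates:
        try:
            updated_transactions.append(await service.update_transaction(update_req))
        except Exception as exc:
            # Record the failure but continue processing the remaining updates.
            errors.append({'transaction_id': update_req.transaction_id, 'error': str(exc)})
    # Alternatively, raise one aggregated exception carrying both lists.
    return {'updated': updated_transactions, 'errors': errors}
```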
- Around line 131-161: The create_bulk_transactions method is non-atomic and can
leave partial creations on failure; update the docstring of
create_bulk_transactions to explicitly state the current "fail-fast" behavior
and its consequences, then add an optional parameter (e.g., fail_fast: bool =
True) to the method signature and implement logic in create_bulk_transactions
that if fail_fast is False it continues on exceptions from
self._client.create_transaction, collects successful
Transaction.from_transaction_single(...) results and a list of errors (include
the original transaction input and str(exception)), and returns a new structured
result type (e.g., BulkCreateResult with fields created: List[Transaction] and
errors: List[Dict]) instead of just List[Transaction]; ensure TransactionStore
and _client.create_transaction usage remains the same and update callers/tests
accordingly.
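A generic sketch of the fail_fast pattern and a structured result; BulkCreateResult's exact fields and the create callback are assumptions for illustration, not the project's real types:

```python
from typing import Any, Awaitable, Callable, Dict, List

from pydantic import BaseModel, Field


class BulkCreateResult(BaseModel):
    """Illustrative result type carrying successes and failures side by side."""

    created: List[Any] = Field(default_factory=list)
    errors: List[Dict[str, str]] = Field(default_factory=list)


async def create_bulk(
    inputs: List[Any],
    create_one: Callable[[Any], Awaitable[Any]],
    fail_fast: bool = True,
) -> BulkCreateResult:
    """Create each item in turn; with fail_fast=False, collect errors instead of aborting."""
    result = BulkCreateResult()
    for item in inputs:
        try:
            result.created.append(await create_one(item))
        except Exception as exc:
            if fail_fast:
                raise
            # Keep the original input next to the error so callers can retry it.
            result.errors.append({'input': repr(item), 'error': str(exc)})
    return result
```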
- Around line 244-257: The query builder is interpolating user strings
(description_contains, category, budget, account_contains) and account_id
directly into the query, which risks broken syntax; modify where query_parts are
appended in the transactions query construction to sanitize all user-provided
string values using FireflyClient._sanitize_value (or extract a small helper
like _sanitize_query_value and call it for fields description_contains,
category, budget, account_contains) and only format account_id as a raw integer;
ensure every f'...:"{...}"' uses the sanitized value before appending.
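A sketch of the kind of sanitization meant above; the exact escaping rules Firefly III's search syntax requires are an assumption, and `_sanitize_query_value` is a hypothetical helper name taken from the comment:

```python
def _sanitize_query_value(value: str) -> str:
    """Escape characters that would break out of a quoted search term (illustrative)."""
    return value.replace('\\', '\\\\').replace('"', '\\"')


def build_query_parts(description_contains: str | None, account_id: int | None) -> list[str]:
    query_parts: list[str] = []
    if description_contains:
        # Only sanitized text is interpolated inside the quoted term.
        query_parts.append(f'description_contains:"{_sanitize_query_value(description_contains)}"')
    if account_id is not None:
        # IDs are formatted as plain integers, never as quoted user strings.
        query_parts.append(f'account_id:{int(account_id)}')
    return query_parts
```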
In `@tests/fixtures/transactions.py`:
- Around line 115-147: The fixture make_search_transactions_request declares
date parameters as datetime but the request model SearchTransactionsRequest
expects date objects; change the type annotations for date_on, date_after,
date_before from datetime | None to date | None, update the imports to bring in
datetime.date instead of datetime.datetime, and where the function constructs
the SearchTransactionsRequest ensure any datetime inputs are converted to
.date() (or enforce callers supply date) so the passed values match the
SearchTransactionsRequest date fields.
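A small sketch of the normalization suggested above; the helper name is hypothetical:

```python
from datetime import date, datetime


def as_date(value: date | datetime | None) -> date | None:
    """Return a plain date for the request model, accepting either date or datetime input."""
    if isinstance(value, datetime):
        # datetime is a subclass of date, so check it first and strip the time component.
        return value.date()
    return value
```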
In `@tests/integration/test_budgets.py`:
- Around line 69-72: The test is passing wrong request keys to the
get_budget_spending tool: replace the 'start' and 'end' keys with 'start_date'
and 'end_date' so the GetBudgetSpendingRequest model is populated; update the
call to mcp_client.call_tool('get_budget_spending', {'req': {'budget_id':
test_budget.id, 'start_date': start.isoformat(), 'end_date': end.isoformat()}})
accordingly so the request fields match the request model.
♻️ Duplicate comments (2)
tests/integration/test_budgets.py (2)
106-108: Same request-key mismatch as in test_get_budget_spending. Apply start_date/end_date here too.

158-160: Same request-key mismatch as in test_get_budget_spending. Apply start_date/end_date here too.
🧹 Nitpick comments (10)
src/lampyrid/server.py (2)
39-44: Normalize Fernet key type before instantiating Fernet.
cryptography.Fernet expects urlsafe base64 bytes; if settings provides a str, this will raise a TypeError at runtime. Consider normalizing to bytes here.

🔧 Suggested guard for Fernet key types

```diff
-client_storage = FernetEncryptionWrapper(
-    key_value=disk_store,
-    fernet=Fernet(settings.oauth_storage_encryption_key),  # ty:ignore[invalid-argument-type]
-)
+key = settings.oauth_storage_encryption_key
+if isinstance(key, str):
+    key = key.encode()
+client_storage = FernetEncryptionWrapper(
+    key_value=disk_store,
+    fernet=Fernet(key),  # ty:ignore[invalid-argument-type]
+)
```
85-94: Avoid asyncio.run during module import when a loop may already be running.

mcp = _initialize_server() triggers asyncio.run(...) at import time. Importing this module inside an active event loop (e.g., async apps) raises RuntimeError. Consider lazy initialization or an async factory invoked from main().
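One way to defer the event loop, sketched under the assumption that `_initialize_server()` is the module's existing async initializer and that the returned server exposes `run()`:

```python
import asyncio


async def create_server():
    """Async factory: callers that already run an event loop can await this directly."""
    return await _initialize_server()  # assumed existing coroutine in this module


def main() -> None:
    # Only the synchronous entry point starts its own loop; importing the module stays cheap.
    mcp = asyncio.run(create_server())
    mcp.run()
```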
src/lampyrid/services/budgets.py (2)

136-155: Parallelize per-budget spending calls to reduce latency.

Right now each budget is awaited sequentially. For many budgets this becomes N+1 sequential HTTP calls. Consider asyncio.gather (optionally with a semaphore) to run them concurrently.

⚡ Possible refactor with asyncio.gather

```diff
+import asyncio
 from datetime import date
 from typing import List
@@
-budget_spendings: list[BudgetSpending] = []
-total_spent = 0.0
-total_budgeted = 0.0
-
-for budget in budgets_array.data:
-    spending_req = GetBudgetSpendingRequest(
-        budget_id=budget.id,
-        start_date=req.start_date,
-        end_date=req.end_date,
-    )
-    budget_spending = await self.get_budget_spending(spending_req)
-    budget_spendings.append(budget_spending)
-
-    total_spent += budget_spending.spent
-    if budget_spending.budgeted:
-        total_budgeted += budget_spending.budgeted
+tasks = [
+    self.get_budget_spending(
+        GetBudgetSpendingRequest(
+            budget_id=budget.id,
+            start_date=req.start_date,
+            end_date=req.end_date,
+        )
+    )
+    for budget in budgets_array.data
+]
+budget_spendings = list(await asyncio.gather(*tasks))
+total_spent = 0.0
+total_budgeted = 0.0
+for budget_spending in budget_spendings:
+    total_spent += budget_spending.spent
+    if budget_spending.budgeted:
+        total_budgeted += budget_spending.budgeted
```
158-164: Consider populating available_budget in the summary (or document why it's omitted).

It's currently always None, which may be a functional gap if callers expect this field. If intended, consider wiring it via get_available_budget.

➕ Optional wiring

```diff
+available = await self.get_available_budget(
+    GetAvailableBudgetRequest(start_date=req.start_date, end_date=req.end_date)
+)
 return BudgetSummary(
     budgets=budget_spendings,
     total_budgeted=total_budgeted if total_budgeted > 0 else None,
     total_spent=total_spent,
     total_remaining=total_remaining,
-    available_budget=None,  # Would need additional API call to get available budget
+    available_budget=available.amount,
 )
```

src/lampyrid/config.py (2)
40-42: Consider documenting the security implications of the default host binding.

The default mcp_host='0.0.0.0' binds to all network interfaces, which is appropriate for containerized deployments but may expose the service unintentionally in development. Consider adding a note in the description or using 127.0.0.1 as a safer default for local development.
118-121: Use pydantic.ValidationError instead of pydantic_core.ValidationError.

The import from pydantic_core import ValidationError works but is an internal implementation detail. Prefer importing from the public pydantic API for forward compatibility.

Suggested fix

```diff
-from pydantic_core import ValidationError
+from pydantic import ValidationError
```

src/lampyrid/clients/firefly.py (1)
229-234: Simplify return value for delete_transaction.

After raise_for_status(), any non-2xx response would already raise an exception. The 204 check is technically correct for Firefly III's behavior, but the method could also return True unconditionally since reaching that line means success.
src/lampyrid/services/accounts.py (1)

31-43: Consider exposing pagination for list_accounts.

The underlying FireflyClient.list_accounts supports a page parameter, but the service method always uses the default page=1. For accounts with many entries, this limits the results to the first page only.

Suggested enhancement

Consider adding a page field to ListAccountRequest or returning pagination metadata to allow callers to iterate through all accounts.
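A sketch of page iteration as an alternative to a single page field; the `meta.pagination.total_pages` attribute path is an assumption about the generated Firefly array models:

```python
async def list_all_accounts(client) -> list:
    """Collect every page from the paginated accounts endpoint (illustrative)."""
    accounts: list = []
    page = 1
    while True:
        batch = await client.list_accounts(page=page)
        accounts.extend(batch.data)
        # Assumed pagination metadata on the array response; stop after the last page.
        if page >= batch.meta.pagination.total_pages:
            break
        page += 1
    return accounts
```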
tests/conftest.py (2)

196-200: Prefer skipping tests instead of raising when config is missing.

Failing fast with RuntimeError makes local runs noisy; skipping keeps unrelated tests green while still signaling the missing env.

♻️ Proposed change

```diff
-if not settings.firefly_base_url or not settings.firefly_token:
-    raise RuntimeError(
-        'FIREFLY_BASE_URL and FIREFLY_TOKEN must be set in environment or .env.test file'
-    )
+if not settings.firefly_base_url or not settings.firefly_token:
+    pytest.skip(
+        'FIREFLY_BASE_URL and FIREFLY_TOKEN must be set in environment or .env.test file'
+    )
```
290-314: Surface cleanup failures to avoid silent test pollution.
Right now failures are only printed, which can hide lingering data and flaky behavior across runs. Consider aggregating errors and raising at the end.

♻️ Possible improvement

```diff
-for transaction_id in created_transaction_ids:
-    try:
-        await firefly_client.delete_transaction(transaction_id)
-        print(f'Cleaned up transaction: {transaction_id}')
-    except Exception as e:
-        print(f'Failed to cleanup transaction {transaction_id}: {e}')
+cleanup_errors: List[str] = []
+for transaction_id in created_transaction_ids:
+    try:
+        await firefly_client.delete_transaction(transaction_id)
+        print(f'Cleaned up transaction: {transaction_id}')
+    except Exception as e:
+        cleanup_errors.append(f'{transaction_id}: {e}')
+if cleanup_errors:
+    raise RuntimeError('Failed to cleanup transactions: ' + '; '.join(cleanup_errors))
```
- Add test_second_expense_account fixture for multi-account scenarios
- Add integration tests for get_transactions with account_id filter
- Add search_transactions tests with type, amount, date, and metadata filters
- Add comprehensive update_transaction test covering all fields
- Add bulk_update_transactions test with failure handling

temp

- Budget service business logic and calculations
- Configuration management and validation
- HTTP client API communication and error handling
- Entry point transport configuration
- Pydantic model validation rules
- Server initialization and authentication setup
- Utility functions for assets and HTTP routes
25b0f87 to 58a4ada
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/integration/test_transactions.py (1)
1-10: Fix Ruff formatting violations to unblock CI.

The pipeline is failing due to the Ruff formatting check. Run ruff format tests/integration/test_transactions.py to fix the formatting issues.

tests/integration/test_accounts.py (1)

1-10: Fix Ruff formatting violations to unblock CI.

The pipeline is failing due to the Ruff formatting check. Run ruff format tests/integration/test_accounts.py to fix the formatting issues.
🤖 Fix all issues with AI agents
In `@tests/unit/test_firefly_client.py`:
- Around line 48-49: The test asserts client._client.base_url against a plain
string but httpx.URL compares differently; update the assertion in
tests/unit/test_firefly_client.py to convert the URL to a string (e.g., use
str(client._client.base_url)) before comparing to 'https://firefly.example.com'
so the equality check succeeds.
In `@tests/unit/test_main.py`:
- Around line 77-115: The tests currently patch lampyrid.__main__.__name__ and
call main() directly, which bypasses the module-level "__name__ == '__main__'"
guard; update the tests (test_main_called_when_name_is_main and
test_main_not_called_when_name_is_not_main) to use
runpy.run_module('lampyrid.__main__', run_name="__main__") for the "called" case
and runpy.run_module('lampyrid.__main__', run_name="not_main") for the "not
called" case while keeping patches for settings and mcp
(patch('lampyrid.__main__.settings') and patch('lampyrid.__main__.mcp')) so the
module-level import-time behavior is actually executed and mcp.run or main
invocation can be asserted correctly.
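A rough sketch of the runpy approach; note that runpy re-executes the module source in a fresh namespace, so patches generally need to land on the modules that lampyrid.__main__ imports from rather than on lampyrid.__main__ itself. The `lampyrid.server.mcp` patch target below is a guess for illustration:

```python
import runpy
from unittest.mock import patch


def test_dunder_main_invokes_run() -> None:
    # Hypothetical patch target: patch wherever `mcp` is actually defined, because
    # runpy.run_module re-runs the source of lampyrid.__main__ instead of reusing
    # the already-imported module object.
    with patch('lampyrid.server.mcp') as mock_mcp:
        runpy.run_module('lampyrid.__main__', run_name='__main__')
        mock_mcp.run.assert_called_once()
```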
🧹 Nitpick comments (2)
tests/unit/test_utils.py (1)
98-123: Test doesn't fully verify the decorator pattern.

The implementation uses mcp.custom_route(...)(serve_favicon), but:

- Line 113: assert mock_serve_favicon is not None is always true (mocks are never None).
- Lines 115-123: This conditional block never executes because serve_favicon is passed to the decorator, not called.
- The test doesn't verify that the decorator was actually invoked with serve_favicon.

♻️ Suggested fix to properly verify decorator invocation

```diff
 @pytest.mark.asyncio
 async def test_register_custom_routes(self):
     """Test registering custom routes with FastMCP server."""
-    with patch('lampyrid.utils.serve_favicon') as mock_serve_favicon:
-        # Create a mock FastMCP server
-        mock_mcp = MagicMock(spec=FastMCP)
+    # Create a mock FastMCP server
+    mock_mcp = MagicMock(spec=FastMCP)
+    mock_decorator = MagicMock()
+    mock_mcp.custom_route.return_value = mock_decorator

-        # Call the function
-        register_custom_routes(mock_mcp)
+    # Call the function
+    register_custom_routes(mock_mcp)

-        # Verify custom_route was called with favicon endpoint
-        mock_mcp.custom_route.assert_called_once_with('/favicon.ico', methods=['GET'])
+    # Verify custom_route was called with favicon endpoint
+    mock_mcp.custom_route.assert_called_once_with('/favicon.ico', methods=['GET'])

-        # Verify that serve_favicon is referenced (even if not called directly)
-        # The fact that the function was imported and available is enough
-        assert mock_serve_favicon is not None
-
-        # Check that serve_favicon was called
-        if mock_serve_favicon.called:
-            # Verify it was called with a Request object
-            call_args = mock_serve_favicon.call_args
-            assert len(call_args) > 0
-            # The first argument should be a Request
-            request_arg = call_args[0][0]
-            # Should be a Request instance (or mock that acts like one)
-            assert hasattr(request_arg, 'method') or hasattr(request_arg, 'url')
+    # Verify the decorator was called with serve_favicon
+    from lampyrid.utils import serve_favicon
+    mock_decorator.assert_called_once_with(serve_favicon)
```
tests/integration/test_accounts.py (1)

53-66: Inconsistent string quoting style.

Some account IDs use double quotes ("6", "7") while others use single quotes ('5', '1'). This inconsistency is likely causing the Ruff formatting failure and will be auto-corrected when you run ruff format.
- Add atomic/non-atomic modes to bulk creation with rollback support
- Introduce BulkCreateResult and BulkUpdateResult models for detailed operation outcomes
- Improve error handling with partial success reporting and failure details
- Update integration tests to cover new atomic behaviors and edge cases
- Modify docker-compose to build from local Dockerfile for development

- Add aclose(), __aenter__(), and __aexit__() methods for proper resource management
- Update test fixtures to use the new aclose() method
- Add unit tests for async context manager behavior and exception handling
Add input sanitization to search_transactions method in TransactionService to escape special characters in user-provided values for description_contains, category, budget, and account_contains. Include comprehensive unit tests for sanitization logic and query construction.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/lampyrid/models/lampyrid_models.py`:
- Around line 117-153: Both from_transaction_single and from_transaction_read
currently index transactions[0] without validating the list, causing silent data
loss for multi-split transactions; update Transaction.from_transaction_single
(inspect trx.data.attributes.transactions) and Transaction.from_transaction_read
(inspect transaction_read.attributes.transactions) to first check that the
transactions list exists, raise a clear ValueError if it's empty, and raise
another ValueError if len(transactions) > 1 (indicating a multi-split) so
callers must handle splits explicitly before constructing a Transaction.
🧹 Nitpick comments (1)
src/lampyrid/models/lampyrid_models.py (1)
155-168: Avoid float → string drift for monetary amounts.
str(self.amount) can emit binary artifacts and alter money values. Consider using Decimal (or a currency-aware formatter) before serialization.

💰 Example using Decimal to reduce precision loss

```diff
-from datetime import date, datetime, timezone
+from datetime import date, datetime, timezone
+from decimal import Decimal
 ...
     def to_transaction_split_store(self) -> TransactionSplitStore:
         """Convert this transaction to a Firefly III TransactionSplitStore for API requests."""
         return TransactionSplitStore(
             type=self.type,
             date=self.date,
-            amount=str(self.amount),
+            amount=str(Decimal(str(self.amount))),
             description=self.description,
             source_id=self.source_id,
             destination_id=self.destination_id,
             source_name=self.source_name,
             destination_name=self.destination_name,
             budget_id=self.budget_id,
             budget_name=self.budget_name,
         )
```
Code under review (Transaction conversion helpers in src/lampyrid/models/lampyrid_models.py):

```python
    @classmethod
    def from_transaction_single(cls, trx: TransactionSingle) -> 'Transaction':
        """Create a Transaction instance from a Firefly III TransactionSingle response."""
        inner_trx = trx.data.attributes.transactions[0]
        return cls(
            id=trx.data.id,
            amount=float(inner_trx.amount),
            description=inner_trx.description,
            type=inner_trx.type,
            date=inner_trx.date,
            source_id=inner_trx.source_id,
            destination_id=inner_trx.destination_id,
            source_name=inner_trx.source_name,
            destination_name=inner_trx.destination_name,
            currency_code=inner_trx.currency_code,
            budget_id=inner_trx.budget_id,
            budget_name=inner_trx.budget_name,
        )

    @classmethod
    def from_transaction_read(cls, transaction_read: TransactionRead) -> 'Transaction':
        """Create a Transaction from a Firefly TransactionRead object."""
        first_trx = transaction_read.attributes.transactions[0]
        return cls(
            id=transaction_read.id,
            description=first_trx.description,
            amount=float(first_trx.amount),
            date=first_trx.date,
            type=first_trx.type,
            source_id=first_trx.source_id,
            destination_id=first_trx.destination_id,
            source_name=first_trx.source_name,
            destination_name=first_trx.destination_name,
            currency_code=first_trx.currency_code,
            budget_id=first_trx.budget_id,
            budget_name=first_trx.budget_name,
        )
```
🧩 Analysis chain
🏁 Scripts executed (repository: RadCod3/LamPyrid):

```shell
# First, verify the file and lines exist
head -n 160 src/lampyrid/models/lampyrid_models.py | tail -n 50

# Find the type definitions for TransactionSingle and TransactionRead
fd -e py | xargs rg -l "class TransactionSingle|class TransactionRead" | head -20

# Search for how these methods are called in tests
fd -e py -path "*/test*" | xargs rg "from_transaction_single|from_transaction_read" 2>/dev/null | head -30

# Check if there's any documentation or API references
fd -e py -e md | xargs rg "transactions\[0\]|empty.*split|multi.*split" 2>/dev/null | head -20

# Find and inspect TransactionSingle and TransactionRead type definitions
rg "class TransactionSingle|class TransactionRead" src/lampyrid/models/firefly_models.py -A 30

# Search for any documentation or comments about transactions and splits
rg "split|Split" src/lampyrid/models/firefly_models.py | head -40

# Check test files for how these methods are used
fd test -type f -name "*.py" | xargs rg "from_transaction" 2>/dev/null | head -20

# Look for any validation or checks on the transactions list
rg "transactions" src/lampyrid/models/lampyrid_models.py -B 2 -A 2

# Find the complete Transaction model definition in firefly_models.py
rg "class Transaction\(" src/lampyrid/models/firefly_models.py -A 50

# Check if transactions field has any Pydantic validation constraints
rg "transactions.*Field|transactions.*list\[" src/lampyrid/models/firefly_models.py -B 1 -A 1

# Check API documentation or Firefly III comments about splits
rg -i "firefly|split|journal" src/lampyrid/models/firefly_models.py | grep -i "description\|comment\|doc" | head -20

# Look at test files to see if there are any tests for empty transactions
fd -name "*.py" | xargs rg "transactions.*\[\]|from_transaction.*empty|from_transaction.*test" 2>/dev/null | head -20

# Check if there are any constraints on the transactions field in Pydantic configuration
rg "class Transaction\(" src/lampyrid/models/firefly_models.py -A 100 | grep -E "transactions|Field|min_|max_|validator"

# Check Firefly III API documentation links or references
rg -i "firefly.*api|api.*firefly|https.*firefly" src/ -B 2 -A 2 | head -30

# Search for any README or documentation about Firefly III integration
fd -name "README*" -o -name "*.md" | xargs cat 2>/dev/null | grep -i "firefly\|split" | head -20

# Check if there are any test fixtures or examples showing transaction structure
fd -name "*.json" -o -name "test*" | xargs grep -l "transaction" 2>/dev/null | head -5

# Look for any comments or documentation about the Transaction model design
rg "single.split|one split|multi.split" src/ -i -B 2 -A 2 | head -40

# Check the complete Firefly III API response for transactions
rg "class TransactionArray|class TransactionRead" src/lampyrid/models/firefly_models.py -B 2 -A 15

# Search for any error handling or try-catch around these conversions
rg "from_transaction_single|from_transaction_read" src/lampyrid/services/transactions.py -B 3 -A 3

# Check if the actual API responses in practice have validation or if there's documentation
cat src/lampyrid/models/firefly_models.py | grep -A 20 "class TransactionArray"
```
Add validation to prevent data loss from multi-split transactions.
Both methods access transactions[0] without checking list length. Firefly III supports multi-split transactions (as indicated by the group_title field which is only populated for splits), so the code silently drops additional splits. Add guards to raise an error if the list is empty or contains multiple splits:
🛡️ Proposed validation
```diff
 @classmethod
 def from_transaction_single(cls, trx: TransactionSingle) -> 'Transaction':
     """Create a Transaction instance from a Firefly III TransactionSingle response."""
-    inner_trx = trx.data.attributes.transactions[0]
+    splits = trx.data.attributes.transactions
+    if not splits:
+        raise ValueError('Transaction response contained no splits')
+    if len(splits) > 1:
+        raise ValueError('Multi-split transactions are not supported by Transaction')
+    inner_trx = splits[0]
     return cls(
         id=trx.data.id,
         amount=float(inner_trx.amount),
         description=inner_trx.description,
         type=inner_trx.type,
         date=inner_trx.date,
         source_id=inner_trx.source_id,
         destination_id=inner_trx.destination_id,
         source_name=inner_trx.source_name,
         destination_name=inner_trx.destination_name,
         currency_code=inner_trx.currency_code,
         budget_id=inner_trx.budget_id,
         budget_name=inner_trx.budget_name,
     )

 @classmethod
 def from_transaction_read(cls, transaction_read: TransactionRead) -> 'Transaction':
     """Create a Transaction from a Firefly TransactionRead object."""
-    first_trx = transaction_read.attributes.transactions[0]
+    splits = transaction_read.attributes.transactions
+    if not splits:
+        raise ValueError('Transaction response contained no splits')
+    if len(splits) > 1:
+        raise ValueError('Multi-split transactions are not supported by Transaction')
+    first_trx = splits[0]
     return cls(
         id=transaction_read.id,
         description=first_trx.description,
         amount=float(first_trx.amount),
         date=first_trx.date,
         type=first_trx.type,
         source_id=first_trx.source_id,
         destination_id=first_trx.destination_id,
         source_name=first_trx.source_name,
         destination_name=first_trx.destination_name,
         currency_code=first_trx.currency_code,
         budget_id=first_trx.budget_id,
         budget_name=first_trx.budget_name,
     )
```
Refactors the codebase to separate business logic from the Firefly client by introducing a service layer to handle business logic. Also adds additional lint rules.
Resolves #34