feat: Add AUR (Arch User Repository) support for Arch Linux builds #946 #947
Conversation
…SSIE-Org#946

- Add PKGBUILD for binary package installation from releases
- Add PKGBUILD-git for building from source
- Add pictopy.install for post-install hooks
- Add .SRCINFO metadata file
- Add publish-aur.yml workflow for automatic AUR publishing
- Update build-and-release.yml to generate tar.gz for AUR
- Update tauri.conf.json to build all Linux targets (deb, rpm, appimage)
- Update README.md with AUR installation instructions

This enables Arch Linux users to install PictoPy via `yay -S pictopy` or `paru -S pictopy`.

Closes AOSSIE-Org#946
📝 Walkthrough

Adds AUR packaging and publishing: a new GitHub Actions workflow and job to create tarballs and publish to the AUR, AUR packaging files (PKGBUILD, PKGBUILD-git, .SRCINFO, install script, README), Tauri bundle config tweaks, and a wide database refactor to use a thread-local connection manager and transactional helpers.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Release as GitHub Release
    participant GHA as GitHub Actions
    participant Assets as Release Assets
    participant AUR as AUR Repo (via SSH)
    participant Git as Git (AUR SSH)
    Release->>GHA: release.published / workflow_dispatch
    activate GHA
    GHA->>GHA: derive version & tag
    GHA->>Assets: download linux .deb asset(s)
    alt asset found
        Assets-->>GHA: .deb
        GHA->>GHA: extract .deb, repackage .tar.gz
        GHA->>GHA: compute sha256
        GHA->>Release: upload tar.gz + sha256
    else missing
        GHA->>GHA: mark SKIP for arch
    end
    GHA->>GHA: update aur/PKGBUILD and aur/.SRCINFO (insert version, sources, checksums or SKIP)
    GHA->>Git: authenticate via SSH
    GHA->>AUR: push PKGBUILD + .SRCINFO (commit & force_push)
    deactivate GHA
    AUR->>AUR: package updated
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches: ❌ 1 failed check (warning), ✅ 4 passed checks.
Actionable comments posted: 8
In @.github/workflows/build-and-release.yml:
- Around line 407-415: The GitHub Action step named "Upload tar.gz to release"
uses softprops/action-gh-release@v1 which targets an outdated runner; update the
action reference to softprops/action-gh-release@v2 in that step (the step that
sets tag_name: ${{ steps.get_version.outputs.tag }}, files: pictopy_${{
steps.get_version.outputs.version }}_amd64.tar.gz and the .sha256, and env:
GITHUB_TOKEN) so the workflow runs on a modern runner; simply change the action
version from @v1 to @v2 and verify the step still accepts the same inputs.
- Around line 379-389: The step "Download Linux deb from release" currently uses
a brittle sleep; replace the sleep 30 with a polling loop that queries the
release assets (using GitHub CLI `gh api
repos/:owner/:repo/releases/tags/${TAG}` or curl against the GitHub Releases
API) to wait until the expected .deb asset appears, then download the exact
asset name returned by the API instead of hardcoding "picto-py_..." or
"PictoPy_..."; keep using the existing VERSION and TAG variables, and fall back
only by inspecting the assets list to pick the correct filename before calling
curl so the download succeeds reliably without arbitrary sleeps.
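The suggested polling can be factored into a small retry helper. This is a hedged sketch: the `gh api` invocation in the usage comment is an assumption about how this workflow would query release assets, and the asset pattern is illustrative.

```shell
# Generic poll-with-retry helper for a CI step.
poll_until() {
  # $1: command to retry, $2: max attempts, $3: delay between attempts (s)
  cmd=$1; max=$2; delay=$3; attempt=0
  while [ "$attempt" -lt "$max" ]; do
    if eval "$cmd"; then
      return 0
    fi
    attempt=$((attempt + 1))
    sleep "$delay"
  done
  return 1
}

# Hypothetical usage inside the workflow step (asset pattern is illustrative):
# poll_until "gh api repos/\${GITHUB_REPOSITORY}/releases/tags/\${TAG} \
#   --jq '.assets[].name' | grep -q '_amd64\\.deb\$'" 20 15
```

The helper returns non-zero once the retry budget is exhausted, so the workflow step fails instead of downloading a missing asset.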
In @.github/workflows/publish-aur.yml:
- Around line 33-36: The "Wait for release assets" step currently uses a fixed
"sleep 60" which is unreliable; replace it with a retrying polling loop that
queries the release assets until they appear (using the release tag/ID and
GITHUB_TOKEN via the GitHub API or `gh` CLI) with exponential/backoff and a max
retry count, returning non-zero if assets never become available. Update the
step named "Wait for release assets" to call the polling logic instead of `sleep
60`, ensure it checks asset count/filenames for the expected artifacts, and fail
the job after the configured timeout so the workflow doesn't hang indefinitely.
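A workflow step implementing the suggested polling might look like the following sketch; the attempt count, backoff values, and jq filter are assumptions, not verified against this repository's workflow:

```yaml
- name: Wait for release assets
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    for attempt in $(seq 1 20); do
      count=$(gh api "repos/${GITHUB_REPOSITORY}/releases/tags/${TAG}" \
        --jq '.assets | length' 2>/dev/null || echo 0)
      if [ "${count}" -gt 0 ]; then
        echo "Found ${count} asset(s) after ${attempt} attempt(s)"
        exit 0
      fi
      sleep $((attempt * 5))  # linear backoff: 5s, 10s, 15s, ...
    done
    echo "Release assets never appeared; failing the job" >&2
    exit 1
```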
In @aur/.SRCINFO:
- Around line 39-42: The .SRCINFO currently lists source_aarch64 and
sha256sums_aarch64 with SKIP and a URL that will 404 because the CI only builds
the amd64 artifact; either remove the source_aarch64/sha256sums_aarch64 entries
from .SRCINFO (keeping source_x86_64 and sha256sums_x86_64) or update your
build-and-release.yml to produce an arm64 tarball and ensure source_aarch64
points to that artifact. After generating the tarballs, replace both
sha256sums_x86_64 and sha256sums_aarch64 SKIP values with the actual SHA256
checksums of the corresponding tar.gz files. Finally, add or extend the AUR
publishing workflow to compute and inject checksums automatically (e.g., a step
that computes sha256sum and updates .SRCINFO) so future releases aren’t left
with SKIP.
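A minimal sketch of the checksum-injection step. The tarball here is a stand-in created on the spot, and the file names are placeholders; the real workflow would operate on the downloaded release artifact and the repository's `aur/.SRCINFO`.

```shell
# Create a stand-in for the release artifact (the real workflow would
# download the versioned tar.gz from the GitHub release instead).
printf 'example payload' > pictopy_example_amd64.tar.gz

# Compute the checksum and splice it into .SRCINFO over the SKIP placeholder.
SUM=$(sha256sum pictopy_example_amd64.tar.gz | awk '{print $1}')
echo 'sha256sums_x86_64 = SKIP' > .SRCINFO
sed -i "s/^sha256sums_x86_64 = SKIP$/sha256sums_x86_64 = ${SUM}/" .SRCINFO
```

Running the same substitution in CI keeps `.SRCINFO` in step with each release, so `SKIP` never ships to the AUR.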
In @aur/PKGBUILD:
- Around line 22-38: The makedepends array currently lists many compile-time
tools (e.g., 'rust', 'cargo', 'nodejs', 'npm', 'python', 'python-pip',
'pyinstaller', 'base-devel', and GUI build libs like 'webkit2gtk-4.1',
'librsvg', 'appmenu-gtk-module') which are unnecessary for this binary-only
PKGBUILD; edit the makedepends declaration to remove those build-tool and
source-build-specific entries and leave only runtime/extraction utilities
required to fetch and unpack the release artifacts (for example keep utilities
like 'curl', 'wget', 'file', 'openssl' and add typical extraction tools if
missing such as 'tar'/'gzip' or 'unzip'), ensuring the change targets the
makedepends=( ...) block so the binary package no longer requires compilers or
packaging build tools.
- Around line 83-84: The install line uses "${srcdir}/../LICENSE" which points
outside the extracted tarball and silently fails; fix this by ensuring the
LICENSE is available in build sources (preferred: add the LICENSE URL or file
name to the source array so it lands in ${srcdir}), or include the license in
the release tarball, or create the license file inside the PKGBUILD during
prepare()/build(); then update the install command to reference the correct path
(e.g., "${srcdir}/LICENSE") and remove the silent failure redirect so
missing-license errors surface (keep using
${pkgdir}/usr/share/licenses/${pkgname}/LICENSE and the existing variables
pkgdir and pkgname).
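A sketch of the suggested fix in isolation: stage the LICENSE where the source array would place it, then install it without silencing errors. The temp directories mimic makepkg's `${srcdir}`/`${pkgdir}` layout; the license text is a placeholder.

```shell
# Stand-ins for makepkg's build directories (illustrative only).
srcdir=$(mktemp -d)
pkgdir=$(mktemp -d)
pkgname="pictopy"

# In a real PKGBUILD, the LICENSE would land here via the source array.
echo "MIT License (placeholder text)" > "${srcdir}/LICENSE"

# The package() install line: note there is no `2>/dev/null || true`,
# so a missing LICENSE fails the build loudly instead of silently.
install -Dm644 "${srcdir}/LICENSE" \
  "${pkgdir}/usr/share/licenses/${pkgname}/LICENSE"
```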
In @aur/PKGBUILD-git:
- Around line 97-102: In package() update the install target name to match the
built binary: the build produces a binary named "picto-py" (from Cargo package
picto_py), so change the install destination in the install command from
"${pkgdir}/usr/bin/pictopy" to "${pkgdir}/usr/bin/picto-py" (or alternatively
verify the actual produced binary name and make both source and target
consistent); ensure the install invocation in package() references the
hyphenated "picto-py" binary name.
In @aur/README.md:
- Around line 87-99: The README's "Generating SSH Key for AUR" section contains
an incorrect URL (/account/YOUR_USERNAME/edit); update that line to point to the
correct AUR account page (https://aur.archlinux.org/account/) and add a short
note to navigate to "My Account" → "SSH Public Key" to add the public key so
users can manage SSH keys correctly.
🧹 Nitpick comments (3)
aur/PKGBUILD-git (1)
52-55: Reconsider the fallback version in `pkgver()`.

The fallback to "1.1.0" when `git describe` fails is unusual for a git-based AUR package. Typically, if `git describe` fails, it indicates a problem with the repository (no tags, shallow clone, etc.). A fallback might mask these issues and produce unexpected version numbers. Consider either:

- Removing the fallback and letting the function fail explicitly, or
- Using a more appropriate fallback like `0.0.0.r$(git rev-list --count HEAD).$(git rev-parse --short HEAD)`

🔎 Alternative implementation

```diff
 pkgver() {
   cd "${srcdir}/${pkgname}"
-  git describe --tags --long 2>/dev/null | sed 's/^v//;s/\([^-]*-g\)/r\1/;s/-/./g' || echo "1.1.0"
+  # Prefer git describe, fall back to commit count + short hash
+  git describe --tags --long 2>/dev/null | sed 's/^v//;s/\([^-]*-g\)/r\1/;s/-/./g' || \
+    printf "0.0.0.r%s.%s" "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
 }
```

.github/workflows/publish-aur.yml (1)
89-147: Consider using `makepkg --printsrcinfo` for .SRCINFO generation.

Manually generating `.SRCINFO` with a heredoc is error-prone and can drift out of sync with the `PKGBUILD`. The standard Arch practice is to generate it from the `PKGBUILD` using `makepkg --printsrcinfo > .SRCINFO`.

However, this requires `pacman` and `makepkg` to be available. If they're not available on the `ubuntu-latest` runner, you would need to install them or use a container with Arch Linux tools.

🔎 Alternative approach using makepkg

If you can install Arch packaging tools:

```diff
 - name: Generate .SRCINFO
   run: |
+    # Install pacman/makepkg tools (example, adjust for availability)
+    # sudo apt-get update && sudo apt-get install -y pacman
+
     cd aur
-
-    VERSION="${{ steps.get_version.outputs.version }}"
-    SHA256_X86_64="${{ steps.checksums.outputs.sha256_x86_64 }}"
-    SHA256_AARCH64="${{ steps.checksums.outputs.sha256_aarch64 }}"
-
-    cat > .SRCINFO << EOF
-    ...
-    EOF
-
-    # Remove leading whitespace
-    sed -i 's/^ //' .SRCINFO
+    makepkg --printsrcinfo > .SRCINFO
     cat .SRCINFO
```

Alternatively, use an Arch Linux Docker container for this step, or continue with the current approach but ensure it's kept in sync with the PKGBUILD structure.
aur/PKGBUILD (1)
58-60: Use the `install` command with proper permissions for library files.

The current approach copies `usr/lib` recursively with `cp -r`, which may not preserve the correct permissions and ownership. Consider using the `install` command or `cp -a` for better permission handling.

🔎 Proposed fix

```diff
 # Install libraries and resources
 if [ -d "usr/lib" ]; then
-  cp -r usr/lib "${pkgdir}/usr/"
+  cp -a usr/lib "${pkgdir}/usr/"
 fi
```

The `-a` flag preserves permissions, timestamps, and symbolic links.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- .github/workflows/build-and-release.yml
- .github/workflows/publish-aur.yml
- README.md
- aur/.SRCINFO
- aur/PKGBUILD
- aur/PKGBUILD-git
- aur/README.md
- aur/pictopy.install
- frontend/src-tauri/tauri.conf.json
🧰 Additional context used
🪛 actionlint (1.7.9)
.github/workflows/build-and-release.yml
408-408: the runner of "softprops/action-gh-release@v1" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🔇 Additional comments (6)
README.md (1)
5-34: LGTM! Well-structured installation documentation.

The installation section is clear, comprehensive, and user-friendly. It appropriately covers multiple installation methods and platforms, making PictoPy more accessible to users.
frontend/src-tauri/tauri.conf.json (2)
16-21: AppImage and RPM configurations look appropriate.

The addition of AppImage with `bundleMediaFramework: true` and RPM with `release: "1"` aligns with the PR objective to support multiple Linux packaging formats.

10-10: ✓ The `"all"` value for `targets` is valid and supported.

The Tauri bundle configuration officially supports `"all"` as a valid value for the `targets` field, which bundles for all available target types (deb, rpm, appimage, nsis, msi, app, dmg, updater). The change from the array to `"all"` is correct.

aur/PKGBUILD-git (1)
57-95: Build process is comprehensive and well-structured.

The build function properly handles all three components (backend server, sync microservice, and frontend) with appropriate virtual environments and dependency management.
aur/.SRCINFO (1)
16-16: No action required: `pyinstaller` is correctly specified.

The `pyinstaller` package is available in the AUR under that exact name and is the standard dependency name for AUR packages. Using `makedepends = pyinstaller` in an AUR .SRCINFO file is appropriate.

Likely an incorrect or invalid review comment.
aur/pictopy.install (1)
3-47: LGTM! Well-structured install hooks.

The install script follows Arch packaging best practices:
- Defensive checks for tool availability before execution
- Clear user messaging for installation, upgrade, and removal
- Proper GTK icon cache and desktop database updates
- Helpful guidance about data preservation and cleanup
- Add DatabaseConnectionManager class with per-thread connection storage
- Implement threading.local() for thread-safe connection isolation
- Add WAL mode for better concurrent read performance
- Add write serialization with global lock for safe write operations
- Provide context managers: get_db_connection, get_db_transaction, get_db_write_transaction
- Refactor all database modules to use the new connection manager:
  - images.py: thread-safe image CRUD operations
  - folders.py: thread-safe folder management
  - faces.py: thread-safe face embedding storage
  - face_clusters.py: thread-safe cluster operations
  - albums.py: thread-safe album management
  - metadata.py: thread-safe metadata storage
  - yolo_mapping.py: thread-safe YOLO class mappings
- Remove direct sqlite3.connect() calls and check_same_thread=False usage
- Add proper transaction management with automatic commit/rollback
- Configure SQLite pragmas for integrity and performance: foreign_keys, busy_timeout, journal_mode=WAL, synchronous=NORMAL

Benefits:
- Prevents race conditions under concurrent FastAPI requests
- Eliminates data corruption from shared connections across threads
- Improves stability during parallel image processing workloads
- Aligns with SQLite's recommended concurrency model

Fixes AOSSIE-Org#943
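The per-thread storage described in this commit message can be sketched in isolation. `ThreadLocalConnections` below is an illustrative stand-in, not the project's actual `DatabaseConnectionManager`:

```python
import sqlite3
import threading

class ThreadLocalConnections:
    """Each thread lazily opens and then reuses its own SQLite connection."""

    def __init__(self, path):
        self._path = path
        self._local = threading.local()

    def get(self):
        conn = getattr(self._local, "conn", None)
        if conn is None:
            conn = sqlite3.connect(self._path, timeout=30.0)
            conn.execute("PRAGMA foreign_keys = ON;")
            self._local.conn = conn  # cached for this thread only
        return conn

manager = ThreadLocalConnections(":memory:")
seen = {}

def record(name):
    # Store the connection object so we can compare identities across threads.
    seen[name] = manager.get()

worker = threading.Thread(target=record, args=("worker",))
worker.start()
worker.join()
record("main")
```

Because each thread gets its own connection, no connection is ever shared across threads, which is the property that removes the need for `check_same_thread=False`.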
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/database/faces.py (1)
117-143: Multiple write transactions when inserting multiple faces: non-atomic and inefficient.

When `embeddings` is a list, each face is inserted in a separate `get_db_write_transaction()` call via `db_insert_face_embeddings`. This creates N separate transactions, which is:

- Inefficient: each transaction acquires and releases the global write lock
- Non-atomic: if the 5th insert fails, the first 4 are already committed

Consider refactoring to batch all inserts within a single transaction.

🔎 Proposed fix

```diff
 def db_insert_face_embeddings_by_image_id(
     image_id: ImageId,
     embeddings: Union[FaceEmbedding, List[FaceEmbedding]],
     confidence: Optional[Union[float, List[float]]] = None,
     bbox: Optional[Union[BoundingBox, List[BoundingBox]]] = None,
     cluster_id: Optional[Union[ClusterId, List[ClusterId]]] = None,
 ) -> Union[FaceId, List[FaceId]]:
     """..."""
     # Handle multiple faces in one image
     if (
         isinstance(embeddings, list)
         and len(embeddings) > 0
         and isinstance(embeddings[0], np.ndarray)
     ):
-        face_ids = []
-        for i, emb in enumerate(embeddings):
-            conf = (
-                confidence[i]
-                if isinstance(confidence, list) and i < len(confidence)
-                else confidence
-            )
-            bb = bbox[i] if isinstance(bbox, list) and i < len(bbox) else bbox
-            cid = (
-                cluster_id[i]
-                if isinstance(cluster_id, list) and i < len(cluster_id)
-                else cluster_id
-            )
-            face_id = db_insert_face_embeddings(image_id, emb, conf, bb, cid)
-            face_ids.append(face_id)
-        return face_ids
+        with get_db_write_transaction() as conn:
+            cursor = conn.cursor()
+            face_ids = []
+            for i, emb in enumerate(embeddings):
+                conf = (
+                    confidence[i]
+                    if isinstance(confidence, list) and i < len(confidence)
+                    else confidence
+                )
+                bb = bbox[i] if isinstance(bbox, list) and i < len(bbox) else bbox
+                cid = (
+                    cluster_id[i]
+                    if isinstance(cluster_id, list) and i < len(cluster_id)
+                    else cluster_id
+                )
+                embeddings_json = json.dumps([emb.tolist()])
+                bbox_json = json.dumps(bb) if bb is not None else None
+                cursor.execute(
+                    """
+                    INSERT INTO faces (image_id, cluster_id, embeddings, confidence, bbox)
+                    VALUES (?, ?, ?, ?, ?)
+                    """,
+                    (image_id, cid, embeddings_json, conf, bbox_json),
+                )
+                face_ids.append(cursor.lastrowid)
+            return face_ids
     else:
         # Single face
         return db_insert_face_embeddings(
             image_id, embeddings, confidence, bbox, cluster_id
         )
```
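The single-transaction idea can be demonstrated with a self-contained sketch. The schema and helper below are simplified stand-ins for the project's actual `faces` table and API:

```python
import json
import sqlite3

def insert_faces_batch(conn, image_id, embeddings):
    """Insert all face rows for one image inside a single transaction.

    If any insert fails, the whole batch rolls back, so the table never
    holds a partial set of faces for the image.
    """
    face_ids = []
    with conn:  # sqlite3 connection as context manager = one transaction
        cur = conn.cursor()
        for emb in embeddings:
            cur.execute(
                "INSERT INTO faces (image_id, embeddings) VALUES (?, ?)",
                (image_id, json.dumps(emb)),
            )
            face_ids.append(cur.lastrowid)
    return face_ids

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE faces (id INTEGER PRIMARY KEY, image_id INTEGER, embeddings TEXT)"
)
ids = insert_faces_batch(conn, 1, [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
```

In the real code the `with conn:` block would be replaced by the project's `get_db_write_transaction()` context manager, which additionally serializes writers behind the global lock.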
🧹 Nitpick comments (14)
backend/app/utils/face_clusters.py (1)
623-623: Consider moving the import to the module level for consistency.

The inline import of `get_db_write_transaction` is inconsistent with the module-level import of `get_db_connection` from the same module at line 16. Unless this is avoiding a circular import issue, prefer importing at the top of the file.

🔎 Suggested refactor

Move the import to line 16:

```diff
 from app.database.connection import get_db_connection
+from app.database.connection import get_db_write_transaction
```

Then remove the inline import:

```diff
-        from app.database.connection import get_db_write_transaction
-
         try:
```

backend/app/database/connection.py (3)
23-27: Unused module-level `_thread_local` variable.

`_thread_local` is defined at module level but never used. The `DatabaseConnectionManager` class uses its own `self._local` instance instead (line 40). Consider removing this dead code.

🔎 Proposed fix

```diff
-# Thread-local storage for database connections
-_thread_local = threading.local()
-
 # Lock for write operations to serialize database writes
 _write_lock = threading.Lock()
```

131-134: ROLLBACK could raise and mask the original exception.

If the connection is in a bad state, `conn.execute("ROLLBACK")` may itself raise an exception, causing the original error `e` to be lost. Consider wrapping the rollback in a try-except.

🔎 Proposed fix

```diff
 except Exception as e:
-    conn.execute("ROLLBACK")
+    try:
+        conn.execute("ROLLBACK")
+    except Exception as rollback_error:
+        logger.warning(f"Rollback failed: {rollback_error}")
     logger.error(f"Transaction rolled back due to error: {e}")
     raise
```
228-247: Inconsistent connection configuration vs `_create_connection`.

`get_new_connection()` omits `isolation_level=None` and several pragmas that `_create_connection()` sets (e.g., `synchronous`, `recursive_triggers`, `case_sensitive_like`). This inconsistency could lead to different transaction behavior for standalone connections.

🔎 Proposed fix

```diff
 def get_new_connection() -> sqlite3.Connection:
     """
     Create a new standalone database connection.

     Use this only when you need a connection outside of the thread-local
     management system (e.g., for background tasks or process pools).
     The caller is responsible for closing this connection.

     Returns:
         sqlite3.Connection: New database connection
     """
     conn = sqlite3.connect(
         DATABASE_PATH,
         timeout=30.0,
+        isolation_level=None,  # Autocommit mode, consistent with thread-local connections
     )
     conn.execute("PRAGMA foreign_keys = ON;")
     conn.execute("PRAGMA journal_mode=WAL;")
     conn.execute("PRAGMA busy_timeout = 30000;")
+    conn.execute("PRAGMA synchronous = NORMAL;")
     return conn
```

backend/app/database/yolo_mapping.py (1)
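The rollback-guard pattern from the proposed fix above can be shown standalone; the logging setup here is simplified and the function name is illustrative:

```python
import logging
import sqlite3

logger = logging.getLogger(__name__)

def guarded_rollback(conn, original_error):
    """Roll back without letting a rollback failure mask the original error."""
    try:
        conn.execute("ROLLBACK")
    except sqlite3.OperationalError as rollback_error:
        # e.g. "cannot rollback - no transaction is active"
        logger.warning("Rollback failed: %s", rollback_error)
    logger.error("Transaction rolled back due to error: %s", original_error)

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (x)")
conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
guarded_rollback(conn, ValueError("simulated failure"))
```

After the call, the insert is undone, and even calling the guard again with no active transaction only logs a warning instead of raising over the original error.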
5-6: Logger initialized but unused.

The `logger` is initialized but never used in this module. Consider removing it or adding appropriate logging statements.
backend/app/database/metadata.py (2)

13-14: Logger initialized but unused.

The `logger` is initialized but not used anywhere in this module. Consider adding error logging for the `JSONDecodeError` case in `db_get_metadata` or removing the unused import.
70-83: `cursor.rowcount` for INSERT may be unreliable for success indication.

SQLite's `cursor.rowcount` for INSERT statements returns the number of rows inserted (typically 1), but it can be -1 in some edge cases. Since you're always inserting exactly one row, a successful INSERT without exception is sufficient. Consider simply returning `True` after the INSERT completes without exception. However, the current implementation will work correctly in practice.

🔎 Proposed fix

```diff
 if cursor is not None:
     # Use provided cursor (external transaction management)
     cursor.execute("DELETE FROM metadata")
     cursor.execute("INSERT INTO metadata (metadata) VALUES (?)", (metadata_json,))
-    return cursor.rowcount > 0
+    return True
 else:
     # Use our own transaction
     with get_db_write_transaction() as conn:
         cur = conn.cursor()
         cur.execute("DELETE FROM metadata")
         cur.execute("INSERT INTO metadata (metadata) VALUES (?)", (metadata_json,))
-        return cur.rowcount > 0
+        return True
```

backend/app/database/faces.py (1)
13-14: Logger initialized but unused.

The `logger` is initialized but not used in this module. Consider adding logging for error cases or removing the unused import.
backend/app/database/folders.py (2)

12-13: Logger initialized but unused.

The `logger` is initialized but never used in this module. Consider adding logging or removing the unused import.

76-78: TOCTOU race condition on directory validation.

The `os.path.isdir()` check on line 77 could race with the actual filesystem state: the directory could be deleted between the check and the database insert. This is a minor edge case but worth noting.
backend/app/database/images.py (1)

163-167: Move import to module level.

The import `from app.utils.images import image_util_parse_metadata` is performed inside the loop body. This is inefficient as the import is executed on every iteration. Move it to the module level.

🔎 Proposed fix at module level

Add to the imports section at the top of the file:

```python
from app.utils.images import image_util_parse_metadata
```

Then remove lines 165 and 228.
backend/app/database/face_clusters.py (2)
10-13: Logger initialized but not used.

The logger is initialized but never used in this module. Consider adding logging for important operations (e.g., batch inserts, errors) or remove the unused logger initialization.
173-208: Consider a deprecation strategy for the `conn` parameter.

The `conn` parameter is marked as deprecated but is silently ignored. Consider either:

- Adding a deprecation warning when the parameter is passed (e.g., using Python's `warnings` module)
- Planning removal in a future major version
- Documenting the deprecation timeline in the docstring

This helps users transition away from the old API more smoothly.
backend/app/database/albums.py (1)
9-12: Logger initialized but not used.

The logger is initialized but never used in this module. Consider adding logging for important operations (e.g., album creation, password verification failures) or remove the unused logger initialization.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- backend/app/database/albums.py
- backend/app/database/connection.py
- backend/app/database/face_clusters.py
- backend/app/database/faces.py
- backend/app/database/folders.py
- backend/app/database/images.py
- backend/app/database/metadata.py
- backend/app/database/yolo_mapping.py
- backend/app/utils/face_clusters.py
- backend/test.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-31T17:00:50.132Z
Learnt from: Hemil36
Repo: AOSSIE-Org/PictoPy PR: 570
File: backend/app/database/connection.py:16-24
Timestamp: 2025-10-31T17:00:50.132Z
Learning: In PictoPy backend, the user prefers not to use database connection retry logic or extended busy timeouts in the centralized get_db_connection() context manager, even though the app has concurrent access patterns via ProcessPoolExecutor and FastAPI.
Applied to files:
- backend/app/database/images.py
- backend/app/database/connection.py
🧬 Code graph analysis (8)
backend/app/database/yolo_mapping.py (1)
- backend/app/database/connection.py (1): get_db_transaction (178-195)

backend/app/database/folders.py (2)
- backend/app/database/connection.py (3): get_db_connection (157-174), get_db_transaction (178-195), get_db_write_transaction (199-216)
- backend/app/logging/setup_logging.py (2): setup_logging (134-196), get_logger (199-209)

backend/app/database/metadata.py (1)
- backend/app/database/connection.py (3): get_db_connection (157-174), get_db_transaction (178-195), get_db_write_transaction (199-216)

backend/app/database/faces.py (2)
- backend/app/database/connection.py (3): get_db_connection (157-174), get_db_transaction (178-195), get_db_write_transaction (199-216)
- backend/app/logging/setup_logging.py (2): setup_logging (134-196), get_logger (199-209)

backend/app/database/albums.py (1)
- backend/app/database/connection.py (3): get_db_connection (157-174), get_db_transaction (178-195), get_db_write_transaction (199-216)

backend/app/database/images.py (2)
- backend/app/database/connection.py (3): get_db_connection (157-174), get_db_transaction (178-195), get_db_write_transaction (199-216)
- backend/app/utils/images.py (1): image_util_parse_metadata (496-513)

backend/app/database/face_clusters.py (2)
- backend/app/database/connection.py (3): get_db_connection (157-174), get_db_transaction (178-195), get_db_write_transaction (199-216)
- backend/app/logging/setup_logging.py (2): setup_logging (134-196), get_logger (199-209)

backend/app/utils/face_clusters.py (1)
- backend/app/database/connection.py (1): get_db_write_transaction (199-216)
🔇 Additional comments (28)
backend/app/utils/face_clusters.py (1)
610-635: Dual-path transaction management approach looks correct.

The function now supports two execution modes with intentionally different error handling:

- With external cursor (lines 610-620): uses the provided cursor and raises exceptions, allowing the caller to manage the transaction and handle errors.
- Without cursor (lines 621-635): manages its own transaction via `get_db_write_transaction()` and returns False on error, encapsulating transaction management.

This pattern aligns with the broader database refactor and correctly handles transaction ownership. The caller at line 144 passes a cursor and will receive raised exceptions as expected.
backend/app/database/connection.py (1)
30-52: LGTM!

The `DatabaseConnectionManager` class provides a clean thread-safe connection management pattern. Per-thread connection storage, lazy initialization, and proper pragma configuration are well implemented.

backend/test.py (1)
7-30: LGTM!

Clean migration to the context-managed connection pattern. All database operations are properly scoped within the `with` block, and the return statement correctly returns data after processing.

backend/app/database/yolo_mapping.py (1)
9-28: LGTM!

Clean refactoring to use `get_db_transaction`. The table creation and class name population are now atomic within a single transaction.

backend/app/database/metadata.py (1)
17-32: LGTM!

The table creation with automatic initialization of an empty metadata row is well-structured within a transaction context.
backend/app/database/faces.py (2)
38-55: LGTM!

Table creation is properly wrapped in a transaction context.

282-332: LGTM!

The dual-path approach (external cursor vs own transaction) is consistent with the pattern used in other modules and provides flexibility for callers managing their own transactions.
backend/app/database/folders.py (2)
23-39: LGTM!

Clean migration to transaction context for table creation.

156-176: LGTM!

Batch delete properly uses write transaction and returns the affected row count.
backend/app/database/images.py (3)
46-78: LGTM!

Table creation for both `images` and `image_classes` is properly wrapped in a single transaction, ensuring atomicity.

81-107: LGTM!

Bulk insert with upsert logic is well-implemented. The ON CONFLICT clause correctly preserves `isTagged` if the new record doesn't have it set.
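The preserve-on-conflict behaviour noted above can be demonstrated in isolation; the schema below is a simplified stand-in for the real `images` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE images (
        path TEXT PRIMARY KEY,
        isTagged INTEGER DEFAULT 0
    )
""")

# Upsert: keep the existing isTagged when the incoming record omits it.
upsert = """
    INSERT INTO images (path, isTagged) VALUES (?, ?)
    ON CONFLICT(path) DO UPDATE SET
        isTagged = CASE
            WHEN excluded.isTagged IS NOT NULL THEN excluded.isTagged
            ELSE images.isTagged
        END
"""

conn.execute(upsert, ("/pics/a.jpg", 1))     # first insert: tagged
conn.execute(upsert, ("/pics/a.jpg", None))  # re-insert without a tag value
row = conn.execute(
    "SELECT isTagged FROM images WHERE path = ?", ("/pics/a.jpg",)
).fetchone()
```

The second upsert hits the conflict branch, and because `excluded.isTagged` is NULL the existing value survives.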
363-392: LGTM!

The toggle implementation correctly checks for image existence before attempting the update, preventing silent failures on non-existent images.
backend/app/database/face_clusters.py (7)
31-43: LGTM!

Correct use of `get_db_transaction()` for DDL operations. The context manager ensures proper transaction handling.

46-65: LGTM!

Excellent pattern for supporting both standalone usage and participation in larger transactions via the optional cursor parameter. The implementation correctly uses `get_db_write_transaction()` when managing its own transaction.

68-116: LGTM!

Well-structured batch insert with proper support for external transaction management. The refactoring to prepare data before entering the transaction is clean and efficient.

119-143: LGTM!

Correct use of `get_db_connection()` for read operations.

146-170: LGTM!

Correct use of `get_db_connection()` for read operations.

211-251: LGTM!

Correct use of `get_db_connection()` for read operations with proper JOIN and aggregation.

254-318: LGTM!

Correct use of `get_db_connection()` for read operations. JSON parsing includes proper null checks before deserialization.

backend/app/database/albums.py (9)
15-46: LGTM!

Both table creation functions correctly use `get_db_transaction()` for DDL operations. Foreign key constraints with CASCADE delete behavior are properly defined.

49-99: LGTM!

All read operations correctly use `get_db_connection()`. Excellent addition of comprehensive docstrings and proper type hints.

102-132: LGTM!

Correct use of `get_db_write_transaction()` for write operations. Proper password hashing with bcrypt and appropriate type conversions for SQLite.

135-176: LGTM!

Correct use of `get_db_write_transaction()`. The conditional logic for optional password updates is well-structured and properly handles both cases.

179-207: LGTM!

Both functions correctly use appropriate transaction types: `get_db_write_transaction()` for delete and `get_db_connection()` for read.

210-236: LGTM!

Excellent implementation with proper validation, error handling, and use of `INSERT OR IGNORE` to handle duplicates gracefully. The image existence check prevents invalid associations.

239-265: LGTM!

Good implementation with proper existence validation before deletion. Appropriate error handling when the image is not found in the album.

268-281: LGTM!

Efficient batch deletion using `executemany()`. The lack of existence validation is acceptable for bulk operations and consistent with typical batch operation patterns.
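As a standalone illustration of the `executemany()` batch pattern, with the table name simplified from the project's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE album_images (album_id INTEGER, image_id INTEGER)")
conn.executemany(
    "INSERT INTO album_images VALUES (?, ?)",
    [(1, 10), (1, 11), (1, 12), (2, 10)],
)

# Batch-remove several images from album 1 in one call; sqlite3 runs the
# statement once per parameter tuple, and rowcount sums the affected rows.
to_remove = [(1, 10), (1, 12)]
cur = conn.executemany(
    "DELETE FROM album_images WHERE album_id = ? AND image_id = ?",
    to_remove,
)
conn.commit()
remaining = conn.execute("SELECT COUNT(*) FROM album_images").fetchone()[0]
```

A single `executemany()` call avoids per-row Python round-trips while leaving duplicate or missing pairs as harmless no-ops.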
284-303: LGTM!

Correct implementation of password verification using bcrypt. Properly handles the case where no password is set for the album.
fix #946
This PR adds AUR (Arch User Repository) support for PictoPy, enabling easy installation on Arch-based Linux distributions (Arch, Manjaro, EndeavourOS, etc.).
Changes Made
- `PKGBUILD` for binary package installation from releases
- `PKGBUILD-git` for building from source
- `pictopy.install` for post-install hooks
- `.SRCINFO` metadata file
- `publish-aur.yml` workflow for automatic AUR publishing
- `build-and-release.yml` updated to generate tar.gz for AUR
- `tauri.conf.json` updated to build all Linux targets (deb, rpm, appimage)
- `README.md` updated with AUR installation instructions

Installation
This enables Arch Linux users to install PictoPy via `yay -S pictopy` or `paru -S pictopy`.