Conversation

@RobertoDF
Contributor

@RobertoDF RobertoDF commented Mar 24, 2025

Hi,

I am sorting multiple sessions in a for loop and I ran into a problem with the log files: the file handles accumulate across iterations, every log file gets overwritten on each loop, and the files can't be deleted.
This line solves the problem.

Cheers!

@jacobpennington
Collaborator

Please indicate which version of Kilosort4 you're using, and provide some sample code that is causing the problem. A similar (or possibly the same) problem was already fixed with the last update or the one before.

@RobertoDF
Contributor Author

I am on version 4.0.30.

Fundamentally, I use run_kilosort in a for loop where I iterate over different files. Everything works except for what I described above.

My understanding is that the close_logger function closes the handlers but doesn't remove them from the logger, so the handlers stay attached.
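For illustration, a version of close_logger that both closes and detaches the handlers could look like the sketch below. This is an assumption about the shape of the fix, not the actual code in kilosort/run_kilosort.py; note the `.copy()`, since `removeHandler` mutates the very list being iterated.

```python
import logging

def close_logger(logger: logging.Logger) -> None:
    # Iterate over a copy: removeHandler() mutates logger.handlers,
    # and removing items from a list while looping over it skips entries.
    for handler in logger.handlers.copy():
        handler.close()
        logger.removeHandler(handler)
```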

@RobertoDF
Contributor Author

run_kilosort is called inside the spikesort function here:

# GENERAL IMPORTS
import os

# this is needed to limit the number of scipy threads
# and let spikeinterface handle parallelization
os.environ["OPENBLAS_NUM_THREADS"] = "1"




if __name__ == "__main__":
    from tqdm.auto import tqdm
    from rich import print
    import time
    import logging
    import gc
    from utils.processing import spikesort

    logger = logging.getLogger(__name__)

    start_time = time.time()

    sessions = [
        r"\\ottlabfs.bccn-berlin.pri\ottlab\data\12\ephys\20240215_115216.rec",
        r"\\ottlabfs.bccn-berlin.pri\ottlab\data\12\ephys\20240214_110133.rec",
        r"\\ottlabfs.bccn-berlin.pri\ottlab\data\12\ephys\20240213_114210.rec",
        r"\\ottlabfs.bccn-berlin.pri\ottlab\data\12\ephys\20240212_135849.rec",
        r"\\ottlabfs.bccn-berlin.pri\ottlab\data\12\ephys\20240209_133919.rec",
    ]

    print(f"[white on green]Sessions to spikesort:{sessions}[/white on green]")

    n = 0
    collect_errors = []
    for session in tqdm(sessions):

        print(f"[white on green]Processing:{session}[/white on green]")

        try:
            spikesort(session, spikesort=True, move_to_server=True)

            n += 1
        except Exception as e:
            logger.warning(f"Error processing {session}: {e}")
            collect_errors.append((session, e))

        gc.collect()

    end_time = time.time()
    # Calculate and print the total time taken
    total_time = end_time - start_time
    print(f"Successfully processed {n} of {len(sessions)} sessions")
    print(f"Not successful: {collect_errors}")
    print(f"Total time taken: {total_time:.2f} seconds")

@codecov-commenter

codecov-commenter commented Mar 24, 2025

Codecov Report

Attention: Patch coverage is 0% with 2 lines in your changes missing coverage. Please review.

Project coverage is 0.00%. Comparing base (d8ba42f) to head (3705cea).
Report is 572 commits behind head on main.

Files with missing lines Patch % Lines
kilosort/run_kilosort.py 0.00% 2 Missing ⚠️
Additional details and impacted files
@@          Coverage Diff           @@
##            main    #898    +/-   ##
======================================
  Coverage   0.00%   0.00%            
======================================
  Files         32      33     +1     
  Lines       4649    5640   +991     
======================================
- Misses      4649    5640   +991     

☔ View full report in Codecov by Sentry.
Doesn't work correctly without adding `.copy()`, since items are removed from the list while it is being looped over.
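The pitfall the commit message refers to is easy to reproduce with a plain list: removing items while iterating over the same list makes the iterator skip every other element. The handler names below are stand-ins for real logging handlers.

```python
handlers = ["h1", "h2", "h3"]

# Without .copy(): the iterator advances over a shrinking list,
# so "h2" is never visited and survives the loop.
for h in handlers:
    handlers.remove(h)
print(handlers)  # ['h2']

# With .copy(): iterate a snapshot, mutate the original safely.
handlers = ["h1", "h2", "h3"]
for h in handlers.copy():
    handlers.remove(h)
print(handlers)  # []
```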
@jacobpennington jacobpennington merged commit 1e9531b into MouseLand:main Mar 26, 2025
6 checks passed
