Commit 2729757

Authored by dependabot[bot], d33bs, and pre-commit-ci-lite[bot]
Bump the python-packages group across 1 directory with 8 updates (#408)
* Bump the python-packages group across 1 directory with 8 updates

Bumps the python-packages group with 8 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| [pyarrow](https://github.com/apache/arrow) | `22.0.0` | `23.0.0` |
| [parsl](https://github.com/Parsl/parsl) | `2026.1.5` | `2026.1.12` |
| [botocore](https://github.com/boto/botocore) | `1.42.22` | `1.42.30` |
| [pycytominer](https://github.com/cytomining/pycytominer) | `1.3.0` | `1.3.1` |
| [jupyterlab](https://github.com/jupyterlab/jupyterlab) | `4.5.1` | `4.5.2` |
| [black](https://github.com/psf/black) | `25.12.0` | `26.1.0` |
| [jupytext](https://github.com/mwouts/jupytext) | `1.18.1` | `1.19.0` |
| [sphinxcontrib-mermaid](https://github.com/mgaitan/sphinxcontrib-mermaid) | `1.2.3` | `2.0.0` |

Updates `pyarrow` from 22.0.0 to 23.0.0
- [Release notes](https://github.com/apache/arrow/releases)
- [Commits](apache/arrow@apache-arrow-22.0.0...apache-arrow-23.0.0)

Updates `parsl` from 2026.1.5 to 2026.1.12
- [Commits](Parsl/parsl@2026.01.05...2026.01.12)

Updates `botocore` from 1.42.22 to 1.42.30
- [Commits](boto/botocore@1.42.22...1.42.30)

Updates `pycytominer` from 1.3.0 to 1.3.1
- [Release notes](https://github.com/cytomining/pycytominer/releases)
- [Changelog](https://github.com/cytomining/pycytominer/blob/main/CHANGELOG.md)
- [Commits](cytomining/pycytominer@v1.3.0...v1.3.1)

Updates `jupyterlab` from 4.5.1 to 4.5.2
- [Release notes](https://github.com/jupyterlab/jupyterlab/releases)
- [Changelog](https://github.com/jupyterlab/jupyterlab/blob/main/RELEASE.md)
- [Commits](https://github.com/jupyterlab/jupyterlab/compare/@jupyterlab/lsp@4.5.1...@jupyterlab/lsp@4.5.2)

Updates `black` from 25.12.0 to 26.1.0
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](psf/black@25.12.0...26.1.0)

Updates `jupytext` from 1.18.1 to 1.19.0
- [Release notes](https://github.com/mwouts/jupytext/releases)
- [Changelog](https://github.com/mwouts/jupytext/blob/main/CHANGELOG.md)
- [Commits](mwouts/jupytext@v1.18.1...v1.19.0)

Updates `sphinxcontrib-mermaid` from 1.2.3 to 2.0.0
- [Changelog](https://github.com/mgaitan/sphinxcontrib-mermaid/blob/master/CHANGELOG.md)
- [Commits](mgaitan/sphinxcontrib-mermaid@1.2.3...2.0.0)

---
updated-dependencies:
- dependency-name: pyarrow
  dependency-version: 23.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: python-packages
- dependency-name: parsl
  dependency-version: 2026.1.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: python-packages
- dependency-name: botocore
  dependency-version: 1.42.30
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: python-packages
- dependency-name: pycytominer
  dependency-version: 1.3.1
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: python-packages
- dependency-name: jupyterlab
  dependency-version: 4.5.2
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: python-packages
- dependency-name: black
  dependency-version: 26.1.0
  dependency-type: direct:development
  update-type: version-update:semver-major
  dependency-group: python-packages
- dependency-name: jupytext
  dependency-version: 1.19.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: python-packages
- dependency-name: sphinxcontrib-mermaid
  dependency-version: 2.0.0
  dependency-type: direct:development
  update-type: version-update:semver-major
  dependency-group: python-packages
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update .pre-commit-config.yaml

* [pre-commit.ci lite] apply automatic fixes

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: d33bs <ekgto445@gmail.com>
Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com>
1 parent faa5d3b commit 2729757

File tree

11 files changed

+161
-228
lines changed


.pre-commit-config.yaml

Lines changed: 5 additions & 5 deletions

```diff
@@ -11,7 +11,7 @@ repos:
       - id: check-yaml
       - id: check-toml
   - repo: https://github.com/python-poetry/poetry
-    rev: 2.2.1
+    rev: 2.3.1
     hooks:
       - id: poetry-check
   - repo: https://github.com/tox-dev/pyproject-fmt
@@ -36,20 +36,20 @@ repos:
           - mdformat-myst
           - mdformat-gfm
   - repo: https://github.com/adrienverge/yamllint
-    rev: v1.37.1
+    rev: v1.38.0
     hooks:
       - id: yamllint
         exclude: ".pre-commit-config.yaml"
   - repo: https://github.com/psf/black
-    rev: 25.12.0
+    rev: 26.1.0
     hooks:
       - id: black
   - repo: https://github.com/asottile/blacken-docs
     rev: 1.20.0
     hooks:
       - id: blacken-docs
   - repo: https://github.com/PyCQA/bandit
-    rev: 1.9.2
+    rev: 1.9.3
     hooks:
       - id: bandit
         args: ["-c", "pyproject.toml"]
@@ -77,7 +77,7 @@ repos:
     hooks:
       - id: validate-cff
   - repo: https://github.com/software-gardening/almanack
-    rev: v0.1.11
+    rev: v0.1.13
     hooks:
       - id: almanack-check
   - repo: https://github.com/PyCQA/pylint
```
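Each hook repository above is pinned to a release tag via its `rev:` field, and updating means rewriting that one line per repo (this is what `pre-commit autoupdate`, or here Dependabot, automates). A minimal stdlib sketch of that mechanical rewrite, using a hypothetical `bump_rev` helper and a fragment of the config from this diff:

```python
import re

# A fragment of .pre-commit-config.yaml as it looked before this commit.
config = """\
  - repo: https://github.com/psf/black
    rev: 25.12.0
    hooks:
      - id: black
"""


def bump_rev(config_text: str, repo_url: str, new_rev: str) -> str:
    """Rewrite the `rev:` pinned directly under a given `repo:` entry.

    Hypothetical helper: real tooling (pre-commit autoupdate, Dependabot)
    also resolves the latest tag remotely; this only performs the edit.
    """
    pattern = re.compile(rf"(- repo: {re.escape(repo_url)}\n\s+rev: )\S+")
    return pattern.sub(rf"\g<1>{new_rev}", config_text)


updated = bump_rev(config, "https://github.com/psf/black", "26.1.0")
assert "rev: 26.1.0" in updated
assert "25.12.0" not in updated
```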

cytotable/convert.py

Lines changed: 8 additions & 16 deletions

```diff
@@ -502,14 +502,12 @@ def _source_pageset_to_parquet(
         # and export to parquet
         with _duckdb_reader() as ddb_reader:
             _write_parquet_table_with_metadata(
-                table=ddb_reader.execute(
-                    f"""
+                table=ddb_reader.execute(f"""
                     {base_query}
                     WHERE {source['page_key']} BETWEEN {pageset[0]} AND {pageset[1]}
                     /* optional ordering per pageset */
                     {"ORDER BY " + source['page_key'] if sort_output else ""};
-                    """
-                ).fetch_arrow_table(),
+                    """).fetch_arrow_table(),
                 where=result_filepath,
             )
         # Include exception handling to read mixed-type data
@@ -914,8 +912,7 @@ def _join_source_pageset(
     from cytotable.utils import _duckdb_reader, _write_parquet_table_with_metadata

     with _duckdb_reader() as ddb_reader:
-        result = ddb_reader.execute(
-            f"""
+        result = ddb_reader.execute(f"""
             WITH joined AS (
                 {joins}
             )
@@ -924,8 +921,7 @@ def _join_source_pageset(
             {f"WHERE {page_key} BETWEEN {pageset[0]} AND {pageset[1]}" if pageset is not None else ""}
             /* optional sorting per pagset */
             {"ORDER BY " + page_key if sort_output else ""};
-            """
-        ).fetch_arrow_table()
+            """).fetch_arrow_table()

     # drop nulls if specified
     if drop_null:
@@ -1065,18 +1061,14 @@ def _concat_join_sources(
             )
             + "'"
         )
-        df_numeric = ddb_reader.execute(
-            f"""
+        df_numeric = ddb_reader.execute(f"""
             SELECT {",".join(numeric_colnames)}
             FROM read_parquet([{all_files}])
-            """
-        ).df()
-        df_nonnumeric = ddb_reader.execute(
-            f"""
+            """).df()
+        df_nonnumeric = ddb_reader.execute(f"""
             SELECT {",".join(nonnumeric_colnames)}
             FROM read_parquet([{all_files}])
-            """
-        ).df()
+            """).df()

         # create the anndata object with numeric features
         adata = ad.AnnData(X=df_numeric)
```
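The repeated reformatting in this and the following files appears to come from the Black 26 upgrade applied by pre-commit.ci: a sole multiline-string argument is now "hugged" against the call's parentheses instead of being wrapped onto its own lines. Both forms are semantically identical; only the formatting differs. A self-contained sketch (the `run_query` function is hypothetical, standing in for `ddb_reader.execute`):

```python
def run_query(sql: str) -> str:
    """Stand-in for a query executor; just normalizes the SQL text."""
    return " ".join(sql.split())


# Old style: the f-string argument sat on its own wrapped lines.
result_old = run_query(
    f"""
    SELECT {1 + 1} AS answer
    """
)

# New style: a sole multiline string hugs the call's parentheses.
result_new = run_query(f"""
    SELECT {1 + 1} AS answer
    """)

# Formatting change only; the executed SQL is byte-for-byte the same.
assert result_old == result_new == "SELECT 2 AS answer"
```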

cytotable/utils.py

Lines changed: 2 additions & 7 deletions

```diff
@@ -268,20 +268,15 @@ def _sqlite_affinity_data_type_lookup(col_type: str) -> str:
         )

         # create cases for mixed-type handling in each column discovered above
-        query_parts = tablenumber_sql + ", ".join(
-            [
-                f"""
+        query_parts = tablenumber_sql + ", ".join([f"""
             CASE
                 /* when the storage class type doesn't match the column, return nulltype */
                 WHEN typeof({col['column_name']}) !=
                 '{_sqlite_affinity_data_type_lookup(col['column_type'].lower())}' THEN NULL
                 /* else, return the normal value */
                 ELSE {col['column_name']}
             END AS {col['column_name']}
-            """
-                for col in column_info
-            ]
-        )
+            """ for col in column_info])

         # perform the select using the cases built above and using chunksize + offset
         sql_stmt = f"""
```

poetry.lock

Lines changed: 119 additions & 136 deletions
Some generated files are not rendered by default.

pyproject.toml

Lines changed: 2 additions & 2 deletions

```diff
@@ -52,12 +52,12 @@ pillow = ">=11.3,<13.0" # added to help visualize images within examples
 [tool.poetry.group.docs.dependencies]
 jupyterlab = "^4.4.3"
 jupyterlab-code-formatter = "^3.0.2"
-black = "^25.1.0"
+black = ">=25.1,<27.0"
 isort = ">=6.0.1,<8.0.0"
 jupytext = "^1.17.1"
 Sphinx = ">=6,<9"
 myst-parser = ">=2,<5"
-sphinxcontrib-mermaid = ">=0.9,<1.3"
+sphinxcontrib-mermaid = ">=0.9,<2.1"
 myst-nb = "^1.2.0"
 typing-extensions = "^4.14.0"
```
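The two constraint changes widen Poetry caret pins into explicit ranges so the new major versions are allowed: `^25.1.0` means `>=25.1.0,<26.0.0`, which excludes Black 26.1.0, while the replacement `>=25.1,<27.0` admits it (likewise `<2.1` for sphinxcontrib-mermaid 2.0.0). A rough stdlib-only check of that logic, ignoring pre-releases and other PEP 440 subtleties:

```python
def in_range(version: str, low: str, high: str) -> bool:
    """Crude version-range check: low <= version < high.

    Real resolvers (Poetry, pip) implement full PEP 440; comparing
    dotted integer components as tuples suffices for this sketch.
    """

    def as_tuple(v: str) -> tuple:
        return tuple(int(part) for part in v.split("."))

    return as_tuple(low) <= as_tuple(version) < as_tuple(high)


# black = "^25.1.0" (i.e. >=25.1.0,<26.0.0) rejects the new major...
assert not in_range("26.1.0", "25.1.0", "26.0.0")
# ...while the widened pin ">=25.1,<27.0" accepts it.
assert in_range("26.1.0", "25.1", "27.0")
# sphinxcontrib-mermaid 2.0.0 likewise needs the raised <2.1 upper bound.
assert not in_range("2.0.0", "0.9", "1.3")
assert in_range("2.0.0", "0.9", "2.1")
```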

tests/conftest.py

Lines changed: 4 additions & 12 deletions

```diff
@@ -482,10 +482,7 @@ def col_renames(name: str, table: pa.Table):
     cells_table = col_renames(name="Cells", table=cells_table)
     nuclei_table = col_renames(name="Nuclei", table=nuclei_table)

-    control_result = (
-        duckdb.connect()
-        .execute(
-            """
+    control_result = duckdb.connect().execute("""
         SELECT
             *
         FROM
@@ -498,10 +495,7 @@ def col_renames(name: str, table: pa.Table):
         LEFT JOIN nuclei_table AS nuclei ON
             nuclei.Metadata_ImageNumber = cytoplasm.Metadata_ImageNumber
             AND nuclei.Metadata_ObjectNumber = cytoplasm.Metadata_Cytoplasm_Parent_Nuclei
-        """
-        )
-        .fetch_arrow_table()
-    )
+        """).fetch_arrow_table()

     # reversed order column check as col removals will change index order
     cols = []
@@ -546,8 +540,7 @@ def fixture_cellprofiler_merged_nf1data(
                 f"{data_dir_cellprofiler}/NF1_SchwannCell_data/all_cellprofiler.sqlite"
             ],
         )
-        .execute(
-            """
+        .execute("""
             /* perform query on sqlite tables through duckdb */
             SELECT
                 image.ImageNumber,
@@ -564,8 +557,7 @@ def fixture_cellprofiler_merged_nf1data(
             WHERE
                 cells.Cells_Number_Object_Number = cytoplasm.Cytoplasm_Parent_Cells
                 AND nuclei.Nuclei_Number_Object_Number = cytoplasm.Cytoplasm_Parent_Nuclei
-            """
-        )
+            """)
         .fetch_arrow_table()
         .drop_null()
     )
```
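The fixtures reformatted above join compartment tables by matching a child's `*_Parent_*` column against the parent's object number within the same image. A miniature of that join pattern using only the stdlib `sqlite3` module (the table and column names here are illustrative stand-ins, not CytoTable's actual schema):

```python
import sqlite3

# In-memory database with a cytoplasm->nuclei parent relationship,
# mirroring the join keys used in the conftest fixtures.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cytoplasm (ImageNumber INT, ObjectNumber INT, Parent_Nuclei INT);
    CREATE TABLE nuclei (ImageNumber INT, ObjectNumber INT, Area REAL);
    INSERT INTO cytoplasm VALUES (1, 1, 1), (1, 2, 2);
    INSERT INTO nuclei VALUES (1, 1, 50.0), (1, 2, 75.0);
""")

# Join each cytoplasm object to its parent nucleus on
# (ImageNumber, ObjectNumber), as in the fixture's LEFT JOINs.
rows = conn.execute("""
    SELECT cyto.ObjectNumber, nuc.Area
    FROM cytoplasm AS cyto
    LEFT JOIN nuclei AS nuc
      ON nuc.ImageNumber = cyto.ImageNumber
     AND nuc.ObjectNumber = cyto.Parent_Nuclei
    ORDER BY cyto.ObjectNumber
""").fetchall()
assert rows == [(1, 50.0), (2, 75.0)]
```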

tests/data/cellprofiler/nf1_cellpainting_data/shrink_Plate_3_nf1_analysis.sqlite_for_testing.py

Lines changed: 4 additions & 8 deletions

```diff
@@ -20,20 +20,17 @@

 with sqlite3.connect(SQLITE_TARGET) as conn:
     # delete data except that related to two tablenumbers
-    conn.execute(
-        """
+    conn.execute("""
         DELETE FROM Per_Image
         /* use site and well which are known to
        contain imagenumbers that don't persist
        to compartment tables */
        WHERE Image_Metadata_Site != '1'
        AND Image_Metadata_Well != 'B1';
-        """
-    )
+        """)
     # do the same for compartment tables, also removing objectnumbers > 3
     for table in ["Cells", "Nuclei", "Cytoplasm"]:
-        conn.execute(
-            f"""
+        conn.execute(f"""
            DELETE FROM Per_{table}
            WHERE
            /* filter using only imagenumbers which exist in modified
@@ -43,8 +40,7 @@
            for each compartment table so as to keep the test dataset
            very small. */
            OR {table}_Number_Object_Number > 2
-            """
-        )
+            """)

     conn.commit()
     conn.execute("VACUUM;")
```
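This script and the next follow the same shrink pattern: delete the rows the test doesn't need, then `VACUUM` so the file actually gets smaller, since `DELETE` alone only marks pages as free inside the database file. A self-contained sketch with a throwaway database (the table name echoes the script; the data is fabricated for illustration):

```python
import os
import sqlite3
import tempfile

# Build a throwaway SQLite file with some bulk to delete.
path = os.path.join(tempfile.mkdtemp(), "example.sqlite")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE Per_Image (ImageNumber INT, Well TEXT)")
conn.executemany(
    "INSERT INTO Per_Image VALUES (?, ?)",
    [(i, "B1" if i % 2 else "A1") for i in range(10_000)],
)
conn.commit()
size_before = os.path.getsize(path)

# Shrink: delete unwanted rows, commit, then VACUUM to reclaim pages.
# (VACUUM must run outside a transaction, hence the commit first.)
conn.execute("DELETE FROM Per_Image WHERE Well != 'B1'")
conn.commit()
conn.execute("VACUUM")
conn.close()
size_after = os.path.getsize(path)

assert size_after < size_before
```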

tests/data/cytominer-database/Cell-Health/shrink_SQ00014613.sqlite_for_testing.py

Lines changed: 4 additions & 8 deletions

```diff
@@ -21,8 +21,7 @@

 with sqlite3.connect(SQLITE_TARGET) as conn:
     # delete data except that related to two tablenumbers
-    conn.execute(
-        """
+    conn.execute("""
         DELETE FROM Image
         WHERE TableNumber NOT IN
         /* TableNumber 88ac13033d9baf49fda78c3458bef89e includes
@@ -31,20 +30,17 @@
        Nuclei_Correlation_Costes_AGP_DNA */
        ('88ac13033d9baf49fda78c3458bef89e',
        '1e5d8facac7508cfd4086f3e3e950182')
-        """
-    )
+        """)
     # do the same for compartment tables, also removing objectnumbers > 3
     for table in ["Cells", "Nuclei", "Cytoplasm"]:
-        conn.execute(
-            f"""
+        conn.execute(f"""
            DELETE FROM {table}
            WHERE TableNumber NOT IN (SELECT TableNumber FROM Image)
            /* Here we limit the number of objects which are returned
            for each compartment table so as to keep the test dataset
            very small. */
            OR ObjectNumber > 6
-            """
-        )
+            """)

     conn.commit()
     conn.execute("VACUUM;")
```

tests/data/in-carta/colas-lab/shrink_colas_lab_data_for_tests.py

Lines changed: 4 additions & 10 deletions

```diff
@@ -23,14 +23,10 @@
     schema_collection.append(
         {
             "file": data_file,
-            "schema": ddb.execute(
-                f"""
+            "schema": ddb.execute(f"""
                SELECT *
                FROM read_csv_auto('{data_file}')
-                """
-            )
-            .fetch_arrow_table()
-            .schema,
+                """).fetch_arrow_table().schema,
         }
     )

@@ -56,16 +52,14 @@

     csv.write_csv(
         # we use duckdb to filter the original dataset in SQL
-        data=ddb.execute(
-            f"""
+        data=ddb.execute(f"""
            SELECT *
            FROM read_csv_auto('{data_file}') as data_file
            /* select only the first three objects to limit the dataset */
            WHERE data_file."OBJECT ID" in (1,2,3)
            /* select rows C and D to limit the dataset */
            AND data_file."ROW" in ('C', 'D')
-            """
-        ).fetch_arrow_table(),
+            """).fetch_arrow_table(),
         # output the filtered data as a CSV to a new location
         output_file=(
             f"{TARGET_DATA_DIR}/{output_filename}"
```
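The script above pushes a CSV through DuckDB SQL to keep only a few objects and plate rows. The same filter can be expressed with the stdlib `csv` module; the column names below come from the diff's `WHERE` clause, while the sample data is fabricated for illustration:

```python
import csv
import io

# Fabricated input resembling the colas-lab CSVs the script filters.
raw = """OBJECT ID,ROW,VALUE
1,C,0.5
4,C,0.9
2,E,0.7
3,D,0.1
"""

reader = csv.DictReader(io.StringIO(raw))
# Keep only the first three objects and plate rows C/D,
# mirroring the SQL WHERE clause in the shrink script.
kept = [
    line
    for line in reader
    if int(line["OBJECT ID"]) in (1, 2, 3) and line["ROW"] in ("C", "D")
]
assert [line["OBJECT ID"] for line in kept] == ["1", "3"]
```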

tests/test_convert.py

Lines changed: 8 additions & 24 deletions

```diff
@@ -682,17 +682,11 @@ def test_run_export_workflow(

     flattened_results = list(itertools.chain(*list(result.values())))
     for i, flattened_result in enumerate(flattened_results):
-        csv_source = (
-            _duckdb_reader()
-            .execute(
-                f"""
+        csv_source = _duckdb_reader().execute(f"""
            select * from
            read_csv_auto('{str(flattened_example_sources[i]["source_path"])}',
            ignore_errors=TRUE)
-                """
-            )
-            .fetch_arrow_table()
-        )
+            """).fetch_arrow_table()
         parquet_result = parquet.ParquetDataset(
             path_or_paths=flattened_result["table"],
             # set the order of the columns uniformly for schema comparison
@@ -747,17 +741,11 @@ def test_run_export_workflow_unsorted(

     flattened_results = list(itertools.chain(*list(result.values())))
     for i, flattened_result in enumerate(flattened_results):
-        csv_source = (
-            _duckdb_reader()
-            .execute(
-                f"""
+        csv_source = _duckdb_reader().execute(f"""
            select * from
            read_csv_auto('{str(flattened_example_sources[i]["source_path"])}',
            ignore_errors=TRUE)
-                """
-            )
-            .fetch_arrow_table()
-        )
+            """).fetch_arrow_table()
         parquet_result = parquet.ParquetDataset(
             path_or_paths=flattened_result["table"],
             # set the order of the columns uniformly for schema comparison
@@ -1185,14 +1173,12 @@ def test_sqlite_mixed_type_query_to_parquet(

     try:
         # attempt to read the data using DuckDB
-        result = _duckdb_reader().execute(
-            f"""COPY (
+        result = _duckdb_reader().execute(f"""COPY (
            select * from sqlite_scan('{example_sqlite_mixed_types_database}','{table_name}')
            LIMIT 2 OFFSET 0
            ) TO '{result_filepath}'
            (FORMAT PARQUET)
-            """
-        )
+            """)
     except duckdb.Error as duckdb_exc:
         # if we see a mismatched type error
         # run a more nuanced query through sqlite
@@ -1349,12 +1335,10 @@ def test_in_carta_to_parquet(
     for data_dir in data_dirs_in_carta:
         # read the directory of data with wildcard
         with duckdb.connect() as ddb:
-            ddb_result = ddb.execute(
-                f"""
+            ddb_result = ddb.execute(f"""
                SELECT *
                FROM read_csv_auto('{data_dir}/*.csv')
-                """
-            ).fetch_arrow_table()
+                """).fetch_arrow_table()

             # process the data with cytotable using in-carta preset
             cytotable_result = convert(
```

0 commit comments