
Conversation


@isuruf isuruf commented Aug 1, 2025

Checklist

  • Used a personal fork of the feedstock to propose changes
  • Bumped the build number (if the version is unchanged)
  • Reset the build number to 0 (if the version changed)
  • Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
  • Ensured the license file is being packaged.


conda-forge-admin commented Aug 1, 2025

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe/meta.yaml) and found it was in an excellent condition.

I do have some suggestions for making it better though...

For recipe/meta.yaml:

  • ℹ️ The recipe is not parsable by parser conda-souschef (grayskull). This parser is not currently used by conda-forge, but may be in the future. We are collecting information to see which recipes are compatible with grayskull.
  • ℹ️ The recipe is not parsable by parser conda-recipe-manager. The recipe can only be automatically migrated to the new v1 format if it is parseable by conda-recipe-manager.

This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/16888793150. Examine the logs at this URL for more detail.

@mgorny mgorny left a comment

You're the expert :-).

@h-vetinari h-vetinari left a comment

Mechanics LGTM, I only have a question about the rationale

@h-vetinari h-vetinari left a comment

Sorry, comment apparently didn't get sent previously. This was the question about the rationale I was referring to.

@h-vetinari

The win+CPU build here now picked up some test failures, unfortunately

=========================== short test summary info ===========================
FAILED [0.0000s] test/test_linalg.py::TestLinalgCPU::test_addmm_gelu_cpu_float32
FAILED [0.0000s] test/test_autograd.py::TestAutograd::test_gradcheck_multiple_mkldnn_inputs
FAILED [0.0000s] test/test_modules.py::TestModuleCPU::test_to_nn_Conv1d_swap_True_set_grad_False_cpu_float32
FAILED [0.0000s] test/test_xnnpack_integration.py::TestXNNPACKRewritePass::test_linear
FAILED [0.0000s] test/test_modules.py::TestModuleCPU::test_non_contiguous_tensors_nn_Conv2d_cpu_float32
FAILED [0.0000s] test/test_modules.py::TestModuleCPU::test_forward_nn_TransformerDecoderLayer_cpu_float32
FAILED [0.0000s] test/test_xnnpack_integration.py::TestXNNPACKConv1dTransformPass::test_conv1d_basic
FAILED [0.0000s] test/test_modules.py::TestModuleCPU::test_non_contiguous_tensors_nn_LSTM_eval_mode_cpu_float32
FAILED [0.0000s] test/test_modules.py::TestModuleCPU::test_if_train_and_eval_modes_differ_nn_Conv1d_cpu_float32
FAILED [0.0000s] test/test_modules.py::TestModuleCPU::test_forward_nn_Conv1d_cpu_float32
= 10 failed, 1547 passed, 68 skipped, 4 xfailed, 132 warnings in 79.44s (0:01:19) =

@h-vetinari

And unfortunately the failures are the unfriendly kind:

2025-08-01T17:19:12.4224458Z ================================== FAILURES ===================================
2025-08-01T17:19:12.4224886Z _____________________________ test/test_linalg.py _____________________________
2025-08-01T17:19:12.4225308Z [gw1] win32 -- Python 3.10.18 %PREFIX%\python.exe
2025-08-01T17:19:12.4225847Z worker 'gw1' crashed while running 'test/test_linalg.py::TestLinalgCPU::test_addmm_gelu_cpu_float32'
2025-08-01T17:19:12.4247654Z ____________________________ test/test_autograd.py ____________________________
2025-08-01T17:19:12.4248040Z [gw0] win32 -- Python 3.10.18 %PREFIX%\python.exe
2025-08-01T17:19:12.4248528Z worker 'gw0' crashed while running 'test/test_autograd.py::TestAutograd::test_gradcheck_multiple_mkldnn_inputs'
2025-08-01T17:19:12.4257268Z ____________________________ test/test_modules.py _____________________________
2025-08-01T17:19:12.4258313Z [gw2] win32 -- Python 3.10.18 %PREFIX%\python.exe
2025-08-01T17:19:12.4258977Z worker 'gw2' crashed while running 'test/test_modules.py::TestModuleCPU::test_to_nn_Conv1d_swap_True_set_grad_False_cpu_float32'
2025-08-01T17:19:12.4267194Z ______________________ test/test_xnnpack_integration.py _______________________
2025-08-01T17:19:12.4267653Z [gw3] win32 -- Python 3.10.18 %PREFIX%\python.exe
2025-08-01T17:19:12.4268148Z worker 'gw3' crashed while running 'test/test_xnnpack_integration.py::TestXNNPACKRewritePass::test_linear'
2025-08-01T17:19:12.4276194Z ____________________________ test/test_modules.py _____________________________
2025-08-01T17:19:12.4276535Z [gw4] win32 -- Python 3.10.18 %PREFIX%\python.exe
2025-08-01T17:19:12.4277076Z worker 'gw4' crashed while running 'test/test_modules.py::TestModuleCPU::test_non_contiguous_tensors_nn_Conv2d_cpu_float32'
2025-08-01T17:19:12.4291683Z ____________________________ test/test_modules.py _____________________________
2025-08-01T17:19:12.4292233Z [gw6] win32 -- Python 3.10.18 %PREFIX%\python.exe
2025-08-01T17:19:12.4293039Z worker 'gw6' crashed while running 'test/test_modules.py::TestModuleCPU::test_forward_nn_TransformerDecoderLayer_cpu_float32'
2025-08-01T17:19:12.4295807Z ______________________ test/test_xnnpack_integration.py _______________________
2025-08-01T17:19:12.4296182Z [gw5] win32 -- Python 3.10.18 %PREFIX%\python.exe
2025-08-01T17:19:12.4296720Z worker 'gw5' crashed while running 'test/test_xnnpack_integration.py::TestXNNPACKConv1dTransformPass::test_conv1d_basic'
2025-08-01T17:19:12.4305312Z ____________________________ test/test_modules.py _____________________________
2025-08-01T17:19:12.4305633Z [gw7] win32 -- Python 3.10.18 %PREFIX%\python.exe
2025-08-01T17:19:12.4306161Z worker 'gw7' crashed while running 'test/test_modules.py::TestModuleCPU::test_non_contiguous_tensors_nn_LSTM_eval_mode_cpu_float32'
2025-08-01T17:19:12.4314844Z ____________________________ test/test_modules.py _____________________________
2025-08-01T17:19:12.4315171Z [gw8] win32 -- Python 3.10.18 %PREFIX%\python.exe
2025-08-01T17:19:12.4315699Z worker 'gw8' crashed while running 'test/test_modules.py::TestModuleCPU::test_if_train_and_eval_modes_differ_nn_Conv1d_cpu_float32'
2025-08-01T17:19:12.4324273Z ____________________________ test/test_modules.py _____________________________
2025-08-01T17:19:12.4324589Z [gw9] win32 -- Python 3.10.18 %PREFIX%\python.exe
2025-08-01T17:19:12.4325033Z worker 'gw9' crashed while running 'test/test_modules.py::TestModuleCPU::test_forward_nn_Conv1d_cpu_float32'
2025-08-01T17:19:12.4334226Z ================== xdist: maximum crashed workers reached: 8 ==================
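When pytest-xdist workers crash like this, the report only says "worker crashed" and the underlying traceback is lost. A common diagnostic step (a sketch, not something done in this PR) is to rerun one of the affected test IDs serially, with the xdist plugin disabled, so a hard crash or failure surfaces directly:

```shell
# Disable the xdist plugin so the test runs in the main pytest process;
# an access violation or abort then shows up directly instead of as
# "worker crashed". The test ID below is one of the failures reported above.
python -m pytest "test/test_linalg.py::TestLinalgCPU::test_addmm_gelu_cpu_float32" -p no:xdist -rA
```

Running in-process also makes the test attachable under a debugger, which helps when the crash comes from native code rather than a Python-level assertion.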

@h-vetinari

2025-08-12T04:55:48.6505073Z (%PREFIX%) %SRC_DIR%>"%PREFIX%\python.exe" -s "%SRC_DIR%\run_test.py" 
2025-08-12T04:55:48.6699918Z import: 'torch'
2025-08-12T04:55:49.4265468Z OMP: Error #15: Initializing libomp.dll, but found libomp.dll already initialized.
2025-08-12T04:55:49.4268280Z OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://openmp.llvm.org/
2025-08-12T04:55:50.0973431Z 
2025-08-12T04:55:50.0974113Z (%PREFIX%) %SRC_DIR%>IF 3 NEQ 0 exit /B 1
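The `OMP: Error #15` above means two copies of the OpenMP runtime (`libomp.dll`) were initialized in the same process. The hint's own unsafe escape hatch can be sketched as below, purely as a debugging aid; the real fix is ensuring only one OpenMP runtime is linked into the process. The variable must be set before the conflicting import happens:

```python
import os

# Unsafe, unsupported workaround named in the OMP hint: tolerate duplicate
# OpenMP runtimes in one process. This can crash or silently produce
# incorrect results, so it is only useful for isolating the problem.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# The conflicting import (here it would be `import torch`) must come only
# after the variable is set, since OpenMP reads it at initialization time.
```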

@h-vetinari h-vetinari mentioned this pull request Aug 13, 2025
@conda-forge-admin

Hi! This is the friendly automated conda-forge-linting service.

I failed to even lint the recipe, probably because of a conda-smithy bug 😢. This likely indicates a problem in your meta.yaml, though. To get a traceback to help figure out what's going on, install conda-smithy and run conda smithy recipe-lint --conda-forge . from the recipe directory. You can also examine the workflow logs for more detail.

This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/17153744501. Examine the logs at this URL for more detail.


conda-forge-admin commented Aug 22, 2025

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe/meta.yaml) and found it was in an excellent condition.

I do have some suggestions for making it better though...

For recipe/meta.yaml:

  • ℹ️ The recipe is not parsable by parser conda-souschef (grayskull). This parser is not currently used by conda-forge, but may be in the future. We are collecting information to see which recipes are compatible with grayskull.
  • ℹ️ The recipe is not parsable by parser conda-recipe-manager. The recipe can only be automatically migrated to the new v1 format if it is parseable by conda-recipe-manager.

This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/17444579550. Examine the logs at this URL for more detail.


hmaarrfk commented Sep 4, 2025

(unsubscribed since it seems this is still a WIP, please ping when ready for review)
