
Conversation

@jeffng-or
Contributor

aes

| Stage | Error code | Description | Count |
|-------|------------|-------------|-------|
| GRT | GRT-0116 | Global routing finished with congestion | 6 |
| GRT | GRT-0232 | Routing congestion too high | 2 |
| DPL | DPL-0036 | Detailed placement failed - density 0.8066801432365491 | 1 |

ibex

| Stage | Error code | Description | Count |
|-------|------------|-------------|-------|
| GRT | GRT-0116 | Global routing finished with congestion | 1 |
| GRT | GRT-0232 | Routing congestion too high | 3 |
| DPL | DPL-0036 | Detailed placement failed - density 0.753235324222441 | 1 |

jpeg

No failed AutoTuner runs

@maliberty
Member

@jeffng-or do you understand what happened with `/tmp/workspace/low-scripts-Public_PR-2848-merge/tools/AutoTuner/autotuner_env/bin/python3: Error while finding module specification for 'autotuner.distributed' (ModuleNotFoundError: No module named 'autotuner')`?

@jeffng-or
Contributor Author

> @jeffng-or do you understand what happened with `/tmp/workspace/low-scripts-Public_PR-2848-merge/tools/AutoTuner/autotuner_env/bin/python3: Error while finding module specification for 'autotuner.distributed' (ModuleNotFoundError: No module named 'autotuner')`?

No, I don't, and this PR didn't touch any AutoTuner code or tests (the failure was in asap7 gcd).

@luarss or @vvbandeira, any ideas?

@vvbandeira
Member

> @jeffng-or do you understand what happened with `/tmp/workspace/low-scripts-Public_PR-2848-merge/tools/AutoTuner/autotuner_env/bin/python3: Error while finding module specification for 'autotuner.distributed' (ModuleNotFoundError: No module named 'autotuner')`?
>
> No, I don't, and this PR didn't touch any AutoTuner code or tests (the failure was in asap7 gcd).
>
> @luarss or @vvbandeira, any ideas?

There was an error before that message, which prevented the module from being generated. We have seen this error before, but I thought we had implemented a retry mechanism to avoid it:

```
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
    torch>=1.13.1 from https://files.pythonhosted.org/packages/37/81/aa9ab58ec10264c1abe62c8b73f5086c3c558885d6beecebf699f0dbeaeb/torch-2.6.0-cp310-cp310-manylinux1_x86_64.whl#sha256=6860df13d9911ac158f4c44031609700e1eba07916fff62e21e6ffa0a9e01961 (from botorch==0.10.0->ax-platform<=0.3.7,>=0.3.3->autotuner==0.0.1):
        Expected sha256 6860df13d9911ac158f4c44031609700e1eba07916fff62e21e6ffa0a9e01961
             Got        65705993c2fea435f2c2f08e1f6a7cdade940e99e67e7ef7626b9e092eaad304
```
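For context, pip only performs this check when the requirements file pins hashes; a hypothetical entry using the expected hash from the log above would look like this (the exact pin is illustrative, not the project's actual requirements file):

```
# Hypothetical requirements.txt entry with a pinned hash; pip verifies the
# downloaded wheel against it and aborts on a mismatch, as in the log above.
torch==2.6.0 --hash=sha256:6860df13d9911ac158f4c44031609700e1eba07916fff62e21e6ffa0a9e01961
```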

@luarss
Contributor

luarss commented Feb 14, 2025

Hi all, I have a draft PR addressing this: #2538

The root cause is a temporary network issue corrupting the download, so we should retry when file hashes don't match.
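For illustration, a retry along these lines could recover from a transient corrupted download; this is a minimal sketch under assumptions (the function name, retry budget, and requirements path are hypothetical), not the actual implementation in #2538:

```python
import subprocess
import sys
import time

MAX_RETRIES = 3  # hypothetical retry budget


def pip_install_with_retry(requirements_file: str) -> None:
    """Retry `pip install -r` when a corrupted download trips the hash check."""
    for attempt in range(1, MAX_RETRIES + 1):
        result = subprocess.run(
            [sys.executable, "-m", "pip", "install",
             "--no-cache-dir",  # don't reuse a possibly corrupted cached wheel
             "-r", requirements_file],
            capture_output=True,
            text=True,
        )
        if result.returncode == 0:
            return
        output = result.stdout + result.stderr
        # pip prints this banner on a hash mismatch (see the log above);
        # any other failure is treated as fatal rather than retried.
        if "THESE PACKAGES DO NOT MATCH THE HASHES" not in output:
            raise RuntimeError(output)
        time.sleep(2 ** attempt)  # back off, then retry the download
    raise RuntimeError(f"Hash mismatch persisted after {MAX_RETRIES} attempts")


if __name__ == "__main__":
    # Path is illustrative; the real AutoTuner setup may install differently.
    pip_install_with_retry("tools/AutoTuner/requirements.txt")
```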

@vvbandeira vvbandeira enabled auto-merge February 14, 2025 15:55
@vvbandeira
Member

> Hi all, I have a draft PR addressing this: #2538
>
> The root cause is a temporary network issue corrupting the download, so we should retry when file hashes don't match.

@luarss Please fix the merge conflict, or separate the concern into a different PR for review.

@vvbandeira vvbandeira merged commit 55d277c into The-OpenROAD-Project:master Feb 14, 2025
7 checks passed
@jeffng-or jeffng-or deleted the at-sky130hd-invariant-minmax branch February 15, 2025 00:23