fix(core): lazy loading transformers, numpy, simsimd#35466

Open
Won-Kyu Park (wkpark) wants to merge 2 commits into langchain-ai:master from wkpark:lazy_load_numpy_transformers

Conversation


@wkpark Won-Kyu Park (wkpark) commented Feb 27, 2026

fix(core): lazy loading transformers, numpy, simsimd

Summary

Optimized the initial import speed of langchain_core by converting top-level imports of transformers, numpy, and simsimd into lazy, local imports.
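The pattern described above can be sketched as follows. This is an illustrative stand-in using the stdlib json module rather than the actual langchain_core code, and the function name is hypothetical; the real change defers transformers, numpy, and simsimd the same way:

```python
# Lazy (local) import pattern: the dependency is loaded on first call,
# not when this module is imported. Hypothetical example function.

def parse_payload(text: str) -> dict:
    """Parse a JSON payload, importing json only when first needed."""
    import json  # local import: deferred until the function actually runs

    return json.loads(text)

# Importing the module containing parse_payload costs nothing extra;
# the dependency's import cost is paid here, on first use.
print(parse_payload('{"lazy": true}'))
```

The trade-off is that the first call pays the import cost instead of module import time, which is usually the right choice for heavy, optional dependencies.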

Details

Confirmed with python -X importtime that these dependencies are no longer loaded on initial import, and verified functionality with make format, make lint, and make test.
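For reference, `-X importtime` prints one timing line per imported module. A safe stand-alone demonstration using a stdlib import (the actual check for this PR would grep the langchain_core import for the three libraries named above, which requires them to be installed):

```shell
# Each line of -X importtime output begins with "import time:".
python3 -X importtime -c "import json" 2>&1 | tail -n 3

# The check used for this PR would look like:
# python3 -X importtime -c "from langchain_core.prompts import BasePromptTemplate" 2>&1 \
#   | grep -E "transformers|numpy|simsimd"
```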

Disclaimer

I used the Gemini CLI to help with specific implementation details, to ensure compliance with linting and type checking, and to draft this PR description.

before
without transformers / torch installed:

$ time uv run python -c "from langchain_core.prompts import BasePromptTemplate"
real    0m0.600s
user    0m0.509s
sys     0m0.074s

with transformers / torch installed:

$ uv pip install transformers
...
$ uv pip install numpy
...

$ time uv run python -c "from langchain_core.prompts import BasePromptTemplate"
real    0m9.664s
user    0m4.268s
sys     0m0.752s

after

$ time uv run python -c "from langchain_core.prompts import BasePromptTemplate"
real    0m1.916s
user    0m0.802s
sys     0m0.132s

@github-actions github-actions bot added core `langchain-core` package issues & PRs external labels Feb 27, 2026
@wkpark Won-Kyu Park (wkpark) changed the title Lazy load numpy transformers fix(core): lazy loading transformers, numpy, simsimd Feb 27, 2026
@github-actions github-actions bot added the fix For PRs that implement a fix label Feb 27, 2026

codspeed-hq bot commented Feb 27, 2026

Merging this PR will improve performance by 20.01%

⚠️ Unknown Walltime execution environment detected

Using the Walltime instrument on standard Hosted Runners will lead to inconsistent data.

For the most accurate results, we recommend using CodSpeed Macro Runners: bare-metal machines fine-tuned for performance measurement consistency.

⚡ 1 improved benchmark
✅ 12 untouched benchmarks
⏩ 23 skipped benchmarks [1]

Performance Changes

| Mode | Benchmark | BASE | HEAD | Efficiency |
|---|---|---|---|---|
| WallTime | test_import_time[InMemoryVectorStore] | 651 ms | 542.5 ms | +20.01% |

Comparing wkpark:lazy_load_numpy_transformers (b5f28ac) with master (e939c96)

Open in CodSpeed

Footnotes

  1. 23 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, archive them in CodSpeed to remove them from the performance reports.

@wkpark Won-Kyu Park (wkpark) force-pushed the lazy_load_numpy_transformers branch from 88e4983 to a0b778f Compare February 27, 2026 19:47
@wkpark Won-Kyu Park (wkpark) force-pushed the lazy_load_numpy_transformers branch from a0b778f to b5f28ac Compare February 27, 2026 20:08