Replies: 6 comments
-
Hi, sorry about this! Have you configured anything in the JSON config file? If not, could you try setting the following in the JSON and see if the issue persists?

```json
{
  "embedding_params": {
    "model_name": "sentence-transformers/all-MiniLM-L6-v2"
  }
}
```
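If you want to rule out a malformed config file, a quick standalone check is to parse it with the standard library. This is a minimal sketch; the JSON is embedded as a string here so the snippet is self-contained, whereas in practice it lives in `.vectorcode/config.json` at your project root:

```python
import json

# The suggested config, embedded as a string for a self-contained check.
# In a real setup, read it from .vectorcode/config.json instead.
config_text = """
{
  "embedding_params": {
    "model_name": "sentence-transformers/all-MiniLM-L6-v2"
  }
}
"""

# json.loads raises JSONDecodeError on any syntax error
# (stray comma, missing quote, unescaped character, etc.).
config = json.loads(config_text)
print(config["embedding_params"]["model_name"])
# → sentence-transformers/all-MiniLM-L6-v2
```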
-
If the issue persists, please re-run your command with logging enabled and attach the log file here.
-
I have placed the JSON above in config.json in the .vectorcode folder of a git repo, but it still fails. Here are the logs:

DEBUG: asyncio : Using selector: KqueueSelector
INFO: vectorcode.main : Collected CLI arguments: Config(no_stderr=False, recursive=False, include_hidden=False, to_be_deleted=[], pipe=False, action=<CliAction.vectorise: 'vectorise'>, files=['src/lib.rs'], project_root=None, query=None, host='127.0.0.1', port=8000, embedding_function='SentenceTransformerEmbeddingFunction', embedding_params={}, n_result=1, force=False, db_path='~/.local/share/vectorcode/chromadb/', db_log_path='~/.local/share/vectorcode/', db_settings=None, chunk_size=-1, overlap_ratio=None, query_multiplier=-1, query_exclude=[], reranker='CrossEncoderReranker', reranker_params={}, check_item=None, use_absolute_path=False, include=[<QueryInclude.path: 'path'>, <QueryInclude.document: 'document'>], hnsw={}, chunk_filters={}, encoding='utf8')
INFO: vectorcode.main : Project root is set to /Users/beli/LogParser
DEBUG: vectorcode.cli_utils : Loading config from /Users/beli/LogParser/.vectorcode/config.json
INFO: vectorcode.main : Final configuration has been built: Config(no_stderr=False, recursive=False, include_hidden=False, to_be_deleted=[], pipe=False, action=<CliAction.vectorise: 'vectorise'>, files=['src/lib.rs'], project_root='/Users/beli/LogParser', query=None, host='localhost', port=8000, embedding_function='SentenceTransformerEmbeddingFunction', embedding_params={'model_name': 'sentence-transformers/all-MiniLM-L6-v2'}, n_result=1, force=False, db_path='/Users/beli/.local/share/vectorcode/chromadb/', db_log_path='/Users/beli/.local/share/vectorcode/', db_settings=None, chunk_size=-1, overlap_ratio=0.2, query_multiplier=-1, query_exclude=[], reranker='CrossEncoderReranker', reranker_params={}, check_item=None, use_absolute_path=False, include=[<QueryInclude.path: 'path'>, <QueryInclude.document: 'document'>], hnsw={}, chunk_filters={}, encoding='utf8')
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=8000 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=8000 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
WARNING: vectorcode.common : Starting bundled ChromaDB server at localhost:56666.
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=5.0 socket_options=None
DEBUG: httpcore.connection : connect_tcp.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x10476a660>
DEBUG: httpcore.http11 : send_request_headers.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : send_request_headers.complete
DEBUG: httpcore.http11 : send_request_body.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : send_request_body.complete
DEBUG: httpcore.http11 : receive_response_headers.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Fri, 02 May 2025 00:01:23 GMT'), (b'server', b'uvicorn'), (b'content-length', b'44'), (b'content-type', b'application/json'), (b'chroma-trace-id', b'0')])
INFO: httpx : HTTP Request: GET http://localhost:56666/api/v1/heartbeat "HTTP/1.1 200 OK"
DEBUG: httpcore.http11 : receive_response_body.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : receive_response_body.complete
DEBUG: httpcore.http11 : response_closed.started
DEBUG: httpcore.http11 : response_closed.complete
DEBUG: vectorcode.common : Heartbeat http://localhost:56666/api/v1/heartbeat returned response=<Response [200 OK]>
DEBUG: httpcore.connection : close.started
DEBUG: httpcore.connection : close.complete
DEBUG: chromadb.config : Starting component System
DEBUG: chromadb.config : Starting component Posthog
DEBUG: chromadb.config : Starting component OpenTelemetryClient
DEBUG: chromadb.config : Starting component AsyncFastAPI
DEBUG: httpcore.connection : connect_tcp.started host='localhost' port=56666 local_address=None timeout=None socket_options=None
DEBUG: httpcore.connection : connect_tcp.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x104c28a50>
DEBUG: httpcore.http11 : send_request_headers.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : send_request_headers.complete
DEBUG: httpcore.http11 : send_request_body.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : send_request_body.complete
DEBUG: httpcore.http11 : receive_response_headers.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Fri, 02 May 2025 00:01:23 GMT'), (b'server', b'uvicorn'), (b'content-length', b'91'), (b'content-type', b'application/json'), (b'chroma-trace-id', b'0')])
INFO: httpx : HTTP Request: GET http://localhost:56666/api/v2/auth/identity "HTTP/1.1 200 OK"
DEBUG: httpcore.http11 : receive_response_body.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : receive_response_body.complete
DEBUG: httpcore.http11 : response_closed.started
DEBUG: httpcore.http11 : response_closed.complete
DEBUG: chromadb.config : Starting component System
DEBUG: chromadb.config : Starting component Posthog
DEBUG: chromadb.config : Starting component OpenTelemetryClient
DEBUG: chromadb.config : Starting component AsyncFastAPI
DEBUG: httpcore.http11 : send_request_headers.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : send_request_headers.complete
DEBUG: httpcore.http11 : send_request_body.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : send_request_body.complete
DEBUG: httpcore.http11 : receive_response_headers.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Fri, 02 May 2025 00:01:23 GMT'), (b'server', b'uvicorn'), (b'content-length', b'25'), (b'content-type', b'application/json'), (b'chroma-trace-id', b'0')])
INFO: httpx : HTTP Request: GET http://localhost:56666/api/v2/tenants/default_tenant "HTTP/1.1 200 OK"
DEBUG: httpcore.http11 : receive_response_body.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : receive_response_body.complete
DEBUG: httpcore.http11 : response_closed.started
DEBUG: httpcore.http11 : response_closed.complete
DEBUG: httpcore.http11 : send_request_headers.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : send_request_headers.complete
DEBUG: httpcore.http11 : send_request_body.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : send_request_body.complete
DEBUG: httpcore.http11 : receive_response_headers.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Fri, 02 May 2025 00:01:23 GMT'), (b'server', b'uvicorn'), (b'content-length', b'97'), (b'content-type', b'application/json'), (b'chroma-trace-id', b'0')])
INFO: httpx : HTTP Request: GET http://localhost:56666/api/v2/tenants/default_tenant/databases/default_database?tenant=default_tenant "HTTP/1.1 200 OK"
DEBUG: httpcore.http11 : receive_response_body.started request=<Request [b'GET']>
DEBUG: httpcore.http11 : receive_response_body.complete
DEBUG: httpcore.http11 : response_closed.started
DEBUG: httpcore.http11 : response_closed.complete
DEBUG: vectorcode.common : Hashing beli@beli--MacBookPro18:/Users/beli/LogParser as the collection name for /Users/beli/LogParser.
INFO: sentence_transformers.SentenceTransformer : Load pretrained SentenceTransformer: sentence-transformers/all-MiniLM-L6-v2
DEBUG: urllib3.connectionpool : Starting new HTTPS connection (1): huggingface.co:443
WARNING: sentence_transformers.SentenceTransformer : No sentence-transformers model found with name sentence-transformers/all-MiniLM-L6-v2. Creating a new one with mean pooling.
DEBUG: urllib3.connectionpool : Starting new HTTPS connection (2): huggingface.co:443
DEBUG: urllib3.connectionpool : Starting new HTTPS connection (3): huggingface.co:443
ERROR: vectorcode.common : Failed to use SentenceTransformerEmbeddingFunction with the following error:
ERROR: vectorcode.common : Traceback (most recent call last):
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/common.py", line 149, in get_embedding_function
return getattr(embedding_functions, configs.embedding_function)(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
**configs.embedding_params
^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/chromadb/utils/embedding_functions/sentence_transformer_embedding_function.py", line 37, in __init__
self.models[model_name] = SentenceTransformer(
~~~~~~~~~~~~~~~~~~~^
model_name, device=device, **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/SentenceTransformer.py", line 321, in __init__
modules = self._load_auto_model(
model_name_or_path,
...<7 lines>...
config_kwargs=config_kwargs,
)
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/SentenceTransformer.py", line 1606, in _load_auto_model
transformer_model = Transformer(
model_name_or_path,
...<4 lines>...
backend=self.backend,
)
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/models/Transformer.py", line 80, in __init__
config, is_peft_model = self._load_config(model_name_or_path, cache_dir, backend, config_args)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/models/Transformer.py", line 145, in _load_config
return AutoConfig.from_pretrained(model_name_or_path, **config_args, cache_dir=cache_dir), False
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1151, in from_pretrained
raise ValueError(
...<3 lines>...
)
ValueError: Unrecognized model in sentence-transformers/all-MiniLM-L6-v2. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, align, altclip, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, colpali, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v3, deformable_detr, deit, depth_anything, depth_pro, deta, detr, diffllama, dinat, dinov2, dinov2_with_registers, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, emu3, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, falcon_mamba, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, git, glm, glm4, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, granite, granitemoe, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hiera, hubert, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mimi, mistral, mistral3, mixtral, mllama, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, moonshine, moshi, 
mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_vl, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen3, qwen3_moe, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam_vision_model, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip_vision_model, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, textnet, time_series_transformer, timesformer, timm_backbone, timm_wrapper, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vitpose, vitpose_backbone, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zamba, zamba2, zoedepth
For errors caused by missing dependency, consult the documentation of pipx (or whatever package manager that you installed VectorCode with) for instructions to inject libraries into the virtual environment.
Traceback (most recent call last):
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/main.py", line 91, in async_main
return_val = await vectorise(final_configs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/subcommands/vectorise.py", line 165, in vectorise
collection = await get_collection(client, configs, True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/common.py", line 182, in get_collection
embedding_function = get_embedding_function(configs)
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/common.py", line 149, in get_embedding_function
return getattr(embedding_functions, configs.embedding_function)(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
**configs.embedding_params
^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/chromadb/utils/embedding_functions/sentence_transformer_embedding_function.py", line 37, in __init__
self.models[model_name] = SentenceTransformer(
~~~~~~~~~~~~~~~~~~~^
model_name, device=device, **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/SentenceTransformer.py", line 321, in __init__
modules = self._load_auto_model(
model_name_or_path,
...<7 lines>...
config_kwargs=config_kwargs,
)
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/SentenceTransformer.py", line 1606, in _load_auto_model
transformer_model = Transformer(
model_name_or_path,
...<4 lines>...
backend=self.backend,
)
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/models/Transformer.py", line 80, in __init__
config, is_peft_model = self._load_config(model_name_or_path, cache_dir, backend, config_args)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/models/Transformer.py", line 145, in _load_config
return AutoConfig.from_pretrained(model_name_or_path, **config_args, cache_dir=cache_dir), False
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1151, in from_pretrained
raise ValueError(
...<3 lines>...
)
ValueError: Unrecognized model in sentence-transformers/all-MiniLM-L6-v2. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, align, altclip, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, colpali, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v3, deformable_detr, deit, depth_anything, depth_pro, deta, detr, diffllama, dinat, dinov2, dinov2_with_registers, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, emu3, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, falcon_mamba, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, git, glm, glm4, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, granite, granitemoe, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hiera, hubert, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mimi, mistral, mistral3, mixtral, mllama, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, moonshine, moshi, 
mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_vl, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen3, qwen3_moe, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam_vision_model, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip_vision_model, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, textnet, time_series_transformer, timesformer, timm_backbone, timm_wrapper, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vitpose, vitpose_backbone, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zamba, zamba2, zoedepth
For errors caused by missing dependency, consult the documentation of pipx (or whatever package manager that you installed VectorCode with) for instructions to inject libraries into the virtual environment.
ERROR: vectorcode.main : Traceback (most recent call last):
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/main.py", line 91, in async_main
return_val = await vectorise(final_configs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/subcommands/vectorise.py", line 165, in vectorise
collection = await get_collection(client, configs, True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/common.py", line 182, in get_collection
embedding_function = get_embedding_function(configs)
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/common.py", line 149, in get_embedding_function
return getattr(embedding_functions, configs.embedding_function)(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
**configs.embedding_params
^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/chromadb/utils/embedding_functions/sentence_transformer_embedding_function.py", line 37, in __init__
self.models[model_name] = SentenceTransformer(
~~~~~~~~~~~~~~~~~~~^
model_name, device=device, **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/SentenceTransformer.py", line 321, in __init__
modules = self._load_auto_model(
model_name_or_path,
...<7 lines>...
config_kwargs=config_kwargs,
)
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/SentenceTransformer.py", line 1606, in _load_auto_model
transformer_model = Transformer(
model_name_or_path,
...<4 lines>...
backend=self.backend,
)
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/models/Transformer.py", line 80, in __init__
config, is_peft_model = self._load_config(model_name_or_path, cache_dir, backend, config_args)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/models/Transformer.py", line 145, in _load_config
return AutoConfig.from_pretrained(model_name_or_path, **config_args, cache_dir=cache_dir), False
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1151, in from_pretrained
raise ValueError(
...<3 lines>...
)
ValueError: Unrecognized model in sentence-transformers/all-MiniLM-L6-v2. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, align, altclip, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, colpali, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v3, deformable_detr, deit, depth_anything, depth_pro, deta, detr, diffllama, dinat, dinov2, dinov2_with_registers, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, emu3, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, falcon_mamba, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, git, glm, glm4, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, granite, granitemoe, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hiera, hubert, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mimi, mistral, mistral3, mixtral, mllama, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, moonshine, moshi, 
mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_vl, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen3, qwen3_moe, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam_vision_model, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip_vision_model, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, textnet, time_series_transformer, timesformer, timm_backbone, timm_wrapper, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vitpose, vitpose_backbone, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zamba, zamba2, zoedepth
For errors caused by missing dependency, consult the documentation of pipx (or whatever package manager that you installed VectorCode with) for instructions to inject libraries into the virtual environment.
INFO: vectorcode.main : Shutting down the bundled Chromadb instance.
Saving log to /Users/beli/.local/share/vectorcode/logs/vectorcode-2025-05-01_17-01-22.log.
-
OK, so the embedding configs were propagated correctly. Could you verify that you can access Hugging Face? What happens if you run `curl https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2/resolve/main/config_sentence_transformers.json` in a terminal?
-
I get the following result:

```json
{
  "__version__": {
    "sentence_transformers": "2.0.0",
    "transformers": "4.6.1",
    "pytorch": "1.8.1"
  }
}
```
-
Hi, sorry for the late reply. If you haven't been able to fix this yet, I'd suggest trying this, which basically forces the underlying libraries to use your system certificate. It solved a similar problem for another user. Otherwise, I suspect this is a problem with your particular setup (system, network, etc.) rather than a VectorCode problem.
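For readers of this thread where the link above may not render: the suggestion typically amounts to pointing the Python HTTP stack at the system trust store. A hedged sketch of two common approaches, assuming a pipx install; the `pip-system-certs` package name and the macOS bundle path `/etc/ssl/cert.pem` are examples, adjust for your system:

```shell
# Option 1 (assumption: pipx-managed venv): inject pip-system-certs, which
# patches the bundled HTTP stack to use the OS trust store. Guarded so this
# line is skipped when pipx is unavailable.
command -v pipx >/dev/null && pipx inject vectorcode pip-system-certs || true

# Option 2: point Python's SSL stack at an explicit CA bundle.
# /etc/ssl/cert.pem is the macOS default; use your distro's bundle path.
export SSL_CERT_FILE=/etc/ssl/cert.pem
export REQUESTS_CA_BUNDLE=/etc/ssl/cert.pem
```

Either way, re-run the `vectorise` command in the same shell afterwards so the environment variables are picked up.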
Beta Was this translation helpful? Give feedback.
Uh oh!
There was an error while loading. Please reload this page.
Uh oh!
There was an error while loading. Please reload this page.
-
Hi there I just installed Vectorcode on a fresh machine but when trying to index a project, I ran into the following error:
WARNING: sentence_transformers.SentenceTransformer : No sentence-transformers model found with name sentence-transformers/all-MiniLM-L6-v2. Creating a new one with mean pooling. ERROR: vectorcode.common : Failed to use SentenceTransformerEmbeddingFunction with the following error: ERROR: vectorcode.common : Traceback (most recent call last): File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/common.py", line 150, in get_embedding_function return getattr(embedding_functions, configs.embedding_function)( ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ **configs.embedding_params ^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/chromadb/utils/embedding_functions/sentence_transformer_embedding_function.py", line 37, in __init__ self.models[model_name] = SentenceTransformer( ~~~~~~~~~~~~~~~~~~~^ model_name, device=device, **kwargs ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/SentenceTransformer.py", line 321, in __init__ modules = self._load_auto_model( model_name_or_path, ...<7 lines>... config_kwargs=config_kwargs, ) File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/SentenceTransformer.py", line 1606, in _load_auto_model transformer_model = Transformer( model_name_or_path, ...<4 lines>... 
backend=self.backend, ) File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/models/Transformer.py", line 80, in __init__ config, is_peft_model = self._load_config(model_name_or_path, cache_dir, backend, config_args) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/models/Transformer.py", line 145, in _load_config return AutoConfig.from_pretrained(model_name_or_path, **config_args, cache_dir=cache_dir), False ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1151, in from_pretrained raise ValueError( ...<3 lines>... ) ValueError: Unrecognized model in sentence-transformers/all-MiniLM-L6-v2. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, align, altclip, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, colpali, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v3, deformable_detr, deit, depth_anything, depth_pro, deta, detr, diffllama, dinat, dinov2, dinov2_with_registers, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, emu3, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, falcon_mamba, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, 
gemma, gemma2, gemma3, gemma3_text, git, glm, glm4, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, granite, granitemoe, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hiera, hubert, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mimi, mistral, mistral3, mixtral, mllama, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, moonshine, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_vl, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen3, qwen3_moe, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam_vision_model, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip_vision_model, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, textnet, time_series_transformer, timesformer, timm_backbone, timm_wrapper, 
trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vitpose, vitpose_backbone, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zamba, zamba2, zoedepth For errors caused by missing dependency, consult the documentation of pipx (or whatever package manager that you installed VectorCode with) for instructions to inject libraries into the virtual environment. Traceback (most recent call last): File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/main.py", line 91, in async_main return_val = await vectorise(final_configs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/subcommands/vectorise.py", line 165, in vectorise collection = await get_collection(client, configs, True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/common.py", line 183, in get_collection embedding_function = get_embedding_function(configs) File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/common.py", line 150, in get_embedding_function return getattr(embedding_functions, configs.embedding_function)( ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ **configs.embedding_params ^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/chromadb/utils/embedding_functions/sentence_transformer_embedding_function.py", line 37, in __init__ self.models[model_name] = SentenceTransformer( ~~~~~~~~~~~~~~~~~~~^ model_name, device=device, **kwargs ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File 
"/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/SentenceTransformer.py", line 321, in __init__ modules = self._load_auto_model( model_name_or_path, ...<7 lines>... config_kwargs=config_kwargs, ) File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/SentenceTransformer.py", line 1606, in _load_auto_model transformer_model = Transformer( model_name_or_path, ...<4 lines>... backend=self.backend, ) File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/models/Transformer.py", line 80, in __init__ config, is_peft_model = self._load_config(model_name_or_path, cache_dir, backend, config_args) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/models/Transformer.py", line 145, in _load_config return AutoConfig.from_pretrained(model_name_or_path, **config_args, cache_dir=cache_dir), False ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1151, in from_pretrained raise ValueError( ...<3 lines>... ) ValueError: Unrecognized model in sentence-transformers/all-MiniLM-L6-v2. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, align, altclip, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, colpali, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v3, deformable_detr, deit, depth_anything, depth_pro, deta, detr, diffllama, dinat, dinov2, dinov2_with_registers, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, emu3, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, falcon_mamba, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, git, glm, glm4, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, granite, granitemoe, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hiera, hubert, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mimi, mistral, mistral3, mixtral, mllama, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, moonshine, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, 
nllb-moe, nougat, nystromformer, olmo, olmo2, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_vl, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen3, qwen3_moe, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam_vision_model, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip_vision_model, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, textnet, time_series_transformer, timesformer, timm_backbone, timm_wrapper, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vitpose, vitpose_backbone, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zamba, zamba2, zoedepth For errors caused by missing dependency, consult the documentation of pipx (or whatever package manager that you installed VectorCode with) for instructions to inject libraries into the virtual environment. 
ERROR: vectorcode.main : Traceback (most recent call last): File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/main.py", line 91, in async_main return_val = await vectorise(final_configs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/subcommands/vectorise.py", line 165, in vectorise collection = await get_collection(client, configs, True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/common.py", line 183, in get_collection embedding_function = get_embedding_function(configs) File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/vectorcode/common.py", line 150, in get_embedding_function return getattr(embedding_functions, configs.embedding_function)( ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ **configs.embedding_params ^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/chromadb/utils/embedding_functions/sentence_transformer_embedding_function.py", line 37, in __init__ self.models[model_name] = SentenceTransformer( ~~~~~~~~~~~~~~~~~~~^ model_name, device=device, **kwargs ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/SentenceTransformer.py", line 321, in __init__ modules = self._load_auto_model( model_name_or_path, ...<7 lines>... config_kwargs=config_kwargs, ) File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/SentenceTransformer.py", line 1606, in _load_auto_model transformer_model = Transformer( model_name_or_path, ...<4 lines>... 
backend=self.backend, ) File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/models/Transformer.py", line 80, in __init__ config, is_peft_model = self._load_config(model_name_or_path, cache_dir, backend, config_args) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/sentence_transformers/models/Transformer.py", line 145, in _load_config return AutoConfig.from_pretrained(model_name_or_path, **config_args, cache_dir=cache_dir), False ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/beli/.local/pipx/venvs/vectorcode/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1151, in from_pretrained raise ValueError( ...<3 lines>... ) ValueError: Unrecognized model in sentence-transformers/all-MiniLM-L6-v2. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, align, altclip, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, colpali, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v3, deformable_detr, deit, depth_anything, depth_pro, deta, detr, diffllama, dinat, dinov2, dinov2_with_registers, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, emu3, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, falcon_mamba, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, 
gemma, gemma2, gemma3, gemma3_text, git, glm, glm4, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, granite, granitemoe, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hiera, hubert, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mimi, mistral, mistral3, mixtral, mllama, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, moonshine, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_vl, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen3, qwen3_moe, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam_vision_model, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip_vision_model, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, textnet, time_series_transformer, timesformer, timm_backbone, timm_wrapper, 
trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vitpose, vitpose_backbone, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zamba, zamba2, zoedepth For errors caused by missing dependency, consult the documentation of pipx (or whatever package manager that you installed VectorCode with) for instructions to inject libraries into the virtual environment.I checked the version of the the transformer Python library that was installed when I ran
pipx install vectorcode, and it is 4.51.3, which is the latest. Vectorcode was able to index when I installed it a few days agoBeta Was this translation helpful? Give feedback.
All reactions