
Sweep: Add llama3.3 support #780

@mdabir1203

Description


Details

Let's add an open-source Llama model. Can you give me a hint so I can try it out?


Add Llama 2 LLM Support to Core LLM Module

Description:

Extend the LLM module to support Llama 2 models through LangChain's Llama integration, following the existing pattern used for the other LLM providers.

Tasks:

  1. In gpt_all_star/core/llm.py:

    • Add LLAMA to LLM_TYPE enum
    • Create new _create_chat_llama helper function
    • Update create_llm to handle the Llama case (a sketch covering both files follows this list)
  2. In .env.sample:

    • Add Llama-specific environment variables section
    • Include model path and parameters

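A minimal sketch of what tasks 1 and 2 could look like, assuming the existing llm.py exposes an LLM_TYPE enum and a create_llm factory as described above. The LlamaCpp binding from langchain_community is used here; ChatOllama would be an alternative. The helper name _create_chat_llama and the environment variables LLAMA_MODEL_PATH / LLAMA_N_CTX are assumptions for illustration, not the repository's actual code.

```python
# Sketch for gpt_all_star/core/llm.py. The enum, factory, and helper names
# mirror the task list above and are assumptions, not the repository's code.
import os
from enum import Enum

from langchain_community.llms import LlamaCpp  # llama.cpp binding; ChatOllama is an alternative


class LLM_TYPE(str, Enum):
    # ... existing provider members stay as they are ...
    LLAMA = "LLAMA"


def _create_chat_llama(temperature: float, streaming: bool) -> LlamaCpp:
    """Build a local Llama model from environment configuration (hypothetical helper)."""
    return LlamaCpp(
        model_path=os.environ["LLAMA_MODEL_PATH"],    # path to a local GGUF file
        n_ctx=int(os.getenv("LLAMA_N_CTX", "4096")),  # context window size
        temperature=temperature,
        streaming=streaming,
    )


def create_llm(llm_name: LLM_TYPE, temperature: float = 0.1, streaming: bool = True):
    # ... existing branches for the other providers ...
    if llm_name == LLM_TYPE.LLAMA:
        return _create_chat_llama(temperature=temperature, streaming=streaming)
    raise ValueError(f"Unsupported LLM type: {llm_name}")
```

The matching .env.sample section would then add LLAMA_MODEL_PATH and, optionally, LLAMA_N_CTX.
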
Test:

  1. In tests/core/test_llm.py:
    • Add test case for Llama LLM creation
    • Mock Llama model initialization
    • Test configuration parameters (see the example test after this list)

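One possible shape for that test, assuming the LLM_TYPE / create_llm / _create_chat_llama names from the sketch above and pytest's monkeypatch fixture; the patch target would need to point at wherever the Llama class is actually imported.

```python
# Sketch for tests/core/test_llm.py (assumed names; see the llm.py sketch above).
from unittest import mock

from gpt_all_star.core import llm


def test_create_llm_llama(monkeypatch):
    monkeypatch.setenv("LLAMA_MODEL_PATH", "/models/llama.gguf")

    # Patch the LangChain class so no real model file is loaded.
    with mock.patch("gpt_all_star.core.llm.LlamaCpp") as mock_llama:
        result = llm.create_llm(llm.LLM_TYPE.LLAMA, temperature=0.1, streaming=True)

    mock_llama.assert_called_once()
    _, kwargs = mock_llama.call_args
    assert kwargs["model_path"] == "/models/llama.gguf"
    assert kwargs["temperature"] == 0.1
    assert kwargs["streaming"] is True
    assert result is mock_llama.return_value
```
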
Implementation Notes:

  • Follow existing pattern of other LLM implementations
  • Use LangChain's Llama integration
  • Maintain consistent temperature and streaming settings
  • Support model path configuration via environment variables (a short usage example follows)

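Put together, a caller would configure the model through the environment and select the new enum member. The file path and values below are purely illustrative.

```python
# Hypothetical usage, assuming a .env that defines, for example:
#   LLAMA_MODEL_PATH=/models/llama-2-7b-chat.gguf
#   LLAMA_N_CTX=4096
from gpt_all_star.core.llm import LLM_TYPE, create_llm

chat = create_llm(LLM_TYPE.LLAMA, temperature=0.1, streaming=True)
print(chat.invoke("Say hello in one short sentence."))
```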