
refactor VLM configuration to use dependency pattern #78

Merged
AdemBoukhris457 merged 5 commits into main from refactor/vlm_engine_dependency
Nov 9, 2025

Conversation

@AdemBoukhris457
Owner

Summary

This PR refactors VLM (Vision Language Model) configuration across all parser classes to use a dependency pattern, mirroring the earlier OCR engine refactoring. Instead of exposing individual VLM parameters (use_vlm, vlm_provider, vlm_model, vlm_api_key) directly in parser constructors, users now initialize a VLMStructuredExtractor instance externally and pass it to the parser.

Changes

Code Changes

  • Parser Classes: Refactored StructuredPDFParser, EnhancedPDFParser, ChartTablePDFParser, and StructuredDOCXParser to accept an optional vlm: Optional[VLMStructuredExtractor] parameter
  • CLI: Updated all CLI commands (parse, enhance, parse-docx, extract) to create VLMStructuredExtractor instances before passing to parsers
  • UI Components: Updated all Gradio apps (gradio_app.py, hf_space/app.py, hf_space/app_fixed.py, doctra/ui/full_parse_ui.py, doctra/ui/enhanced_parser_ui.py) to create VLM engine instances based on user inputs
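The CLI wiring above can be sketched as follows. This is a minimal self-contained illustration of the pattern, not doctra's actual CLI code: the class names `FakeVLMEngine` and `FakeParser` and the exact flag names are stand-ins, but the shape matches the change described, with the engine built from flags first and then injected into the parser.

```python
import argparse


class FakeVLMEngine:
    """Stand-in for VLMStructuredExtractor: holds provider config once."""

    def __init__(self, provider, api_key):
        self.provider = provider
        self.api_key = api_key


class FakeParser:
    """Stand-in for a parser class that receives the engine as a dependency."""

    def __init__(self, vlm=None):
        self.vlm = vlm


def build_parser_from_args(argv):
    ap = argparse.ArgumentParser()
    ap.add_argument("--use-vlm", action="store_true")
    ap.add_argument("--vlm-provider", default="openai")
    ap.add_argument("--vlm-api-key", default=None)
    args = ap.parse_args(argv)

    # The engine is constructed once, up front, so any VLM
    # initialization error surfaces before the parser is created.
    engine = None
    if args.use_vlm:
        engine = FakeVLMEngine(args.vlm_provider, args.vlm_api_key)
    return FakeParser(vlm=engine)


cli_parser = build_parser_from_args(["--use-vlm", "--vlm-api-key", "your-key"])
```

The key point is that the parser never sees raw VLM flags, only a ready-made engine (or None).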

Documentation Changes

  • README.md: Updated all VLM usage examples and added VLM Engine Configuration section
  • Documentation: Updated all docs files including:
    • VLM integration guide
    • API reference
    • Parser guides
    • Usage examples
    • Quick start guide

Benefits

  1. Clearer API: VLM configuration is now separated from parser logic, making the API more intuitive
  2. Prevents Mixed Configurations: Users can't accidentally mix VLM settings, as they must create a single, pre-configured engine
  3. Engine Reuse: VLM engines can be created once and reused across multiple parsers
  4. Consistency: Matches the pattern already established for OCR engines
  5. Better Error Handling: VLM initialization errors are caught earlier, before parser creation
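The engine-reuse benefit (point 3) can be sketched with stand-in classes. These are illustrative only, not doctra's real API: the real VLMStructuredExtractor and parser classes do far more, but the sharing mechanics are the same.

```python
from typing import Optional


class VLMEngine:
    """Stand-in for VLMStructuredExtractor: configured once, reused everywhere."""

    def __init__(self, provider: str, api_key: str):
        self.provider = provider
        self.api_key = api_key


class Parser:
    """Stand-in for a parser class accepting an optional engine dependency."""

    def __init__(self, name: str, vlm: Optional[VLMEngine] = None):
        self.name = name
        self.vlm = vlm

    def uses_vlm(self) -> bool:
        return self.vlm is not None


# One engine instance, shared by multiple parsers.
engine = VLMEngine(provider="openai", api_key="your-key")
pdf_parser = Parser("pdf", vlm=engine)
docx_parser = Parser("docx", vlm=engine)

assert pdf_parser.vlm is docx_parser.vlm  # same engine object, no duplicated config
```

Because both parsers hold the same engine object, there is a single place where provider and credentials live, which is also what rules out the mixed-configuration problem in point 2.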

Migration Guide

Before:

parser = StructuredPDFParser(
    use_vlm=True,
    vlm_provider="openai",
    vlm_api_key="your-key"
)

After:

from doctra.engines.vlm.service import VLMStructuredExtractor

vlm_engine = VLMStructuredExtractor(
    vlm_provider="openai",
    api_key="your-key"
)

parser = StructuredPDFParser(vlm=vlm_engine)

Replace individual VLM parameters (use_vlm, vlm_provider, vlm_model, vlm_api_key) with a VLMStructuredExtractor instance dependency in all parser classes. Update CLI and UI components to create VLM engines externally before passing them to parsers. This provides a clearer API, prevents configuration conflicts, and enables VLM engine reuse across multiple parsers.
Update all VLM usage examples in README.md to show VLMStructuredExtractor initialization and passing to parsers. Add VLM Engine Configuration section and update all parser examples to reflect the new API.
Update all documentation files to show VLMStructuredExtractor initialization and passing to parsers. Replace all examples using individual VLM parameters (use_vlm, vlm_provider, ...) with the new dependency pattern where VLM engines are created externally.
@AdemBoukhris457 AdemBoukhris457 self-assigned this Nov 9, 2025
@AdemBoukhris457 AdemBoukhris457 added the documentation, enhancement, and refactor labels Nov 9, 2025
@AdemBoukhris457 AdemBoukhris457 merged commit 2b301c9 into main Nov 9, 2025
1 check passed
@AdemBoukhris457 AdemBoukhris457 deleted the refactor/vlm_engine_dependency branch November 9, 2025 15:53