morganross/API_Cost_Multiplier

This project downloads and installs all of the other software it orchestrates.

Note: this README is partly outdated. More current READMEs, under different file names, can be found in the other folders of this repository.

gptr-eval-process

The pipeline works as follows:

  • The process-markdown script gets the files from the process_markdown step.
  • The main script then sends those files to the evaluation step (llm-doc-eval).
  • The main script then writes the resulting file to the correct output location.
  • The input folder and output folder are specified by a config file (see the sketch below).

The process-markdown script is responsible for assembling the query, sending it to gpt-researcher, and obtaining its results.

The process_markdown folder contains the process-markdown script, a config file, and a separate file with the logic for creating output files and directories, because that logic is used from several different places.
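
A minimal sketch of the config-driven input/output handling described above, assuming a JSON config with input_folder and output_folder keys. The file name, format, and keys are assumptions for illustration, and build_query is only a stand-in for the actual query assembly; the real config in the process_markdown folder may differ.

```python
import json
from pathlib import Path

# Hypothetical config file name, format, and keys; the real config in the
# process_markdown folder may use different names.
config = json.loads(Path("process_markdown/config.json").read_text())
input_dir = Path(config["input_folder"])
output_dir = Path(config["output_folder"])
output_dir.mkdir(parents=True, exist_ok=True)


def build_query(markdown_text: str) -> str:
    """Stand-in for the query assembly done by the process-markdown script."""
    return markdown_text.strip()


for md_file in sorted(input_dir.glob("*.md")):
    query = build_query(md_file.read_text(encoding="utf-8"))
    # In the real pipeline the query goes to gpt-researcher and the generated
    # report is evaluated by llm-doc-eval; here we only mirror the I/O shape.
    (output_dir / md_file.name).write_text(query, encoding="utf-8")
```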

This project serves as a central orchestrator for integrating and managing workflows involving:

  • gpt-researcher: For generating research reports.
  • llm-doc-eval: For evaluating documents.
  • process_markdown: A new module for markdown processing utilities.
  • review-revise: A module for future review and revision functionalities.

All project dependencies are managed in the central requirements.txt file located in this root directory.

Configuration Files

Key configuration files within this project include:

  • The config file in the process_markdown folder, which specifies the input and output folders.
  • The multi-agent configuration in task.json.
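
For illustration only, the multi-agent settings can be inspected like any other JSON file; the path below is an assumption and should be adjusted to wherever task.json actually lives in this repository.

```python
import json
from pathlib import Path

# Assumed location of the multi-agent config; adjust to the actual path.
task_path = Path("gptr-eval-process/task.json")
task = json.loads(task_path.read_text(encoding="utf-8"))

# List the configured settings without assuming any specific keys.
for key, value in task.items():
    print(f"{key}: {value!r}")
```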

Documentation

For more detailed information, refer to the README files located in the individual module folders.

Troubleshooting and Solutions

Resolved: RuntimeWarning: coroutine '...' was never awaited in llm-doc-eval CLI

Problem: When running llm-doc-eval CLI commands, particularly run-all-evaluations, a RuntimeWarning: coroutine '...' was never awaited and a RuntimeError: asyncio.run() cannot be called from a running event loop were encountered. This was due to the @sync_command decorator (which internally calls asyncio.run()) being applied to both the top-level command (run_all_evaluations) and its internally called sub-commands (run_single, run_pairwise). This led to an invalid attempt to create nested asyncio event loops.

Solution: The fix involved modifying gptr-eval-process/llm-doc-eval/cli.py. The @sync_command decorator was removed from the run_single and run_pairwise function definitions. The @sync_command decorator was retained only on the top-level run_all_evaluations command. This ensures that asyncio.run() is called only once at the entry point of the CLI command, allowing internal async function calls to proceed within the same event loop without conflict.
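
The resulting structure looks roughly like the sketch below. It is a simplified stand-in for cli.py, reconstructed from the description above (the command bodies are placeholders, not the real evaluation logic); the key point is that asyncio.run() is invoked exactly once, at the top-level command.

```python
import asyncio
import functools


def sync_command(func):
    """Run an async CLI command synchronously by starting a fresh event loop."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return asyncio.run(func(*args, **kwargs))
    return wrapper


# No @sync_command here: these coroutines run inside the caller's event loop.
async def run_single(path: str) -> None:
    await asyncio.sleep(0)  # placeholder for the single-document evaluation


async def run_pairwise(path: str) -> None:
    await asyncio.sleep(0)  # placeholder for the pairwise evaluation


@sync_command  # the only place asyncio.run() is called
async def run_all_evaluations(path: str) -> None:
    await run_single(path)
    await run_pairwise(path)


if __name__ == "__main__":
    run_all_evaluations("../test/finaldocs")
```

Decorating run_single or run_pairwise with @sync_command as well would call asyncio.run() from inside the already-running loop, which is exactly the nested-event-loop error described above.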

Verification: The fix was verified by successfully executing python cli.py run-all-evaluations ../test/finaldocs from the gptr-eval-process/llm-doc-eval directory. The command completed without any RuntimeWarning or RuntimeError, confirming the resolution of the asynchronous execution issue.
