
Commit 97816c8

andhrelja, KernDerKernigeFeuerpfeil, JWittmeyer, and anmarhindi authored
LLM attribute calculation (#287)
* build: init LLM attribute calculation
* chore: submodules/model
* perf: separate exec_env from sample LLM ac
* perf: add additional_config column to attribute table
* chore: update submodules/model
* feat: attribute calculation DB updates
* fix: revert custom llm functions
* adds llm template function for llm attribute calc
* adds llm code wrap to the attribute calculation of exec env
* chore: update submodules
* style: recalculate alembic migration
* feat: LLM_RESPONSE attribute calculation
* style: syntax updates
* chore: development cleanup
* fix: PR review
* perf: avoid variable overrides by users
* adds better LLM config error message
* adds llm connection check before running LLM attribute calc
* adds user prompt validation + code improvements
* adds JSON mode to openai API code
* Adds additional config to id endpoint for attributes
* style: fix flake8 warnings
* fix: error message definition
* perf: llm_response_tmpl LLM kwargs update
* perf: add `run-llm-playground` route
* fix: variable formatting in tmpl + llm_config for playground
* fix: set defaults for endpoint and apiVersion llm_config
* In call changes
* fix: minor fixes
* fix: singular string in f-string expression
* perf: skip setting progress for llm playground
* fix: playground logs + raising exceptions
* error message fix
* Adds str cast only for non-str values in result dict
* Removes logs from non-playground ac return
* style: omit error_message variable definitions
* style: minor fixes
* fix: apiBase and apiVersion as non-default keys
* add single quote stop sequence to ensure json dumps works
* Adds correct default values
* multi-line sys prompt
* feat: add asynchronous chat completion and configurable number of workers
* feat: add retry mechanism for LLM API calls with configurable parameters
* Change to recursive approach for str conversion & async in playground code
* Adds retry exceeded error message
* Return result on error case as error result
* Adds LLM response to text-like datatypes
* perf: llm ac caching
* chore: pending llm cache endpoint
* feat: enhance get_llm_response with caching mechanism
* perf: llm-ac-cache endpoint
* style: llm-ac-cache return payload
* refactor: rename get_llm_config to get_llm_config_a2vybg for clarity
* style: variable rename
* perf: add s3 obj deletion on attr deletion
* perf: simpler conditional in controller/attribute/util.py
  Co-authored-by: anmarhindi <[email protected]>
* style: PR fixes
* fix: apiKey deletion from frontend - attributes
* perf: remove global ref to static var
* chore: pr review
* chore: pr review
* Fix alembic
* Submodule merge

---------

Co-authored-by: Moritz Feuerpfeil <[email protected]>
Co-authored-by: JWittmeyer <[email protected]>
Co-authored-by: anmarhindi <[email protected]>
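The list above mentions both a retry mechanism with configurable parameters and a caching mechanism for `get_llm_response`. A minimal sketch of how those two ideas can combine is shown below; `cached_llm_call`, its parameter names, and the module-level cache are hypothetical illustrations, not the actual implementation in this PR:

```python
import hashlib
import time

# Hypothetical in-memory cache, keyed by a hash of the prompt.
_cache: dict = {}


def cached_llm_call(prompt, call_fn, max_retries=3, backoff=0.01):
    """Call `call_fn(prompt)`, retrying with exponential backoff on failure,
    and cache successful results so repeated prompts skip the API entirely."""
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key in _cache:
        return _cache[key]
    for attempt in range(max_retries):
        try:
            result = call_fn(prompt)
            _cache[key] = result
            return result
        except Exception:
            if attempt == max_retries - 1:
                # mirrors the "retry exceeded" error message idea from the list
                raise RuntimeError("retry limit exceeded for LLM call")
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
```

In the PR itself the cache is backed by object storage (note the `llm-ac-cache` endpoint and the S3 object deletion on attribute deletion), whereas this sketch keeps everything in memory for brevity.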
1 parent b41a1d6 commit 97816c8

File tree

14 files changed: +888 −30 lines

Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
"""adds attribute `additional_config` and llm_config

Revision ID: 10c48793371d
Revises: 0c8eb3ff1c71
Create Date: 2025-01-15 17:08:30.137845

"""
from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = '10c48793371d'
down_revision = '0c8eb3ff1c71'
branch_labels = None
depends_on = None


def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('attribute', sa.Column('additional_config', sa.JSON(), nullable=True, comment='used when data_type == LLM_RESPONSE'))
    # ### end Alembic commands ###


def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_column('attribute', 'additional_config')
    # ### end Alembic commands ###
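The migration adds a nullable JSON column, used only when `data_type == LLM_RESPONSE`. The sketch below illustrates that storage pattern with the standard-library `sqlite3` module rather than the project's actual Postgres/SQLAlchemy stack; the table shape and the config keys (`model`, `temperature`) are illustrative assumptions, not taken from the repo:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified stand-in for the `attribute` table; `additional_config`
# holds serialized JSON and stays NULL for non-LLM data types.
conn.execute(
    "CREATE TABLE attribute ("
    " id INTEGER PRIMARY KEY,"
    " name TEXT,"
    " data_type TEXT,"
    " additional_config TEXT)"
)

config = {"model": "gpt-4o", "temperature": 0.0}  # illustrative keys only
conn.execute(
    "INSERT INTO attribute (name, data_type, additional_config) VALUES (?, ?, ?)",
    ("summary", "LLM_RESPONSE", json.dumps(config)),
)

row = conn.execute(
    "SELECT additional_config FROM attribute WHERE name = ?", ("summary",)
).fetchone()
# Deserialize only when the column is populated.
loaded = json.loads(row[0]) if row[0] is not None else None
```

Keeping the column nullable means existing rows need no backfill, which is why the downgrade is a plain `drop_column`.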
