Releases: Aleph-Alpha/intelligence-layer-sdk
v0.9.0
Breaking Changes
- breaking change: Renamed the field `chunk` of `AnswerSource` to `search_result` for multi-chunk retriever QA.
- breaking change: The implementation of HuggingFace repository creation and deletion moved to `HuggingFaceRepository`.
New Features
- feature: `HuggingFaceDataset`- & `AggregationRepositories` now have an explicit `create_repository` function.
- feature: Add `MultipleChunkRetrieverBasedQa`, a task that performs better and faster on retriever QA, especially with longer-context models.
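The release notes don't spell out how the new task combines chunks, but the general shape of answering over multiple retrieved chunks at once can be sketched as follows. This is an illustration only — `ScoredChunk`, `multiple_chunk_retriever_qa`, and the `retrieve`/`answer` callables are hypothetical names, not the SDK's API:

```python
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class ScoredChunk:
    """A retrieved text chunk with its relevance score (hypothetical shape)."""
    text: str
    score: float


def multiple_chunk_retriever_qa(
    question: str,
    retrieve: Callable[[str], Sequence[ScoredChunk]],
    answer: Callable[[str], str],
    k: int = 3,
) -> str:
    """Answer over the k most relevant chunks with a single combined prompt,
    instead of running one QA pass per chunk — the rough idea behind a
    multi-chunk retriever QA task."""
    chunks = sorted(retrieve(question), key=lambda c: c.score, reverse=True)[:k]
    context = "\n\n".join(c.text for c in chunks)
    return answer(f"Context:\n{context}\n\nQuestion: {question}")
```

A single model call over concatenated top-k chunks is what makes this style of task faster than per-chunk QA, at the cost of needing a longer-context model.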
Full Changelog: v0.8.2...v0.9.0
v0.8.2
New Features
- feature: Add `SearchEvaluationLogic` and `SearchAggregationLogic` to evaluate `Search` use cases.
- feature: Trace viewer and IL Python package are now deployed to Artifactory.
Fixes
- Documentation
- fix: Add missing link to the `issue_classification_user_journey` notebook to the tutorials section of the README.
- fix: Confusion matrix in `issue_classification_user_journey` now has rounded numbers.
Full Changelog: v0.8.1...v0.8.2
v0.8.1
v0.8.0
What's Changed
New Features
- feature: Expose start and end index in `DocumentChunk`.
- feature: Add `sorted_scores` property to `SingleLabelClassifyOutput`.
- feature: Error information is printed to the console on failed runs and evaluations.
- feature: The stack trace of a failed run/evaluation is included in the `FailedExampleRun`/`FailedExampleEvaluation` object.
- feature: `Runner.run_dataset(..)` and `Evaluator.evaluate_run(..)` have an optional flag `abort_on_error` to stop running/evaluating when an error occurs.
- feature: Added `Runner.failed_runs(..)` and `Evaluator.failed_evaluations(..)` to retrieve all failed run/evaluation lineages.
- feature: Added `.successful_example_outputs(..)` and `.failed_example_outputs(..)` to `RunRepository` to match the evaluation repository.
- feature: Added an optional argument to set an id when creating a `Dataset` via `DatasetRepository.create_dataset(..)`.
- feature: Traces now log exceptions using the `ErrorValue` type.
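The error-handling features above follow a common pattern: collect failures per example instead of aborting, unless an abort flag is set. A minimal sketch of that pattern — `FailedExample`, `RunResult`, and this `run_dataset` are simplified stand-ins, not the SDK's actual classes:

```python
from dataclasses import dataclass, field


@dataclass
class FailedExample:
    """Records which example failed and why (simplified stand-in
    for a FailedExampleRun-style object)."""
    example_id: str
    error: str


@dataclass
class RunResult:
    outputs: dict = field(default_factory=dict)
    failures: list = field(default_factory=list)


def run_dataset(task, examples, abort_on_error=False):
    """Run `task` over (id, input) pairs; collect failures instead of
    raising, unless abort_on_error is set — mirroring the flag described
    in the release notes."""
    result = RunResult()
    for example_id, example in examples:
        try:
            result.outputs[example_id] = task(example)
        except Exception as err:
            if abort_on_error:
                raise
            result.failures.append(FailedExample(example_id, repr(err)))
    return result
```

Keeping failures alongside successful outputs is what makes methods like `failed_runs(..)` possible: the failed lineages are data, not just log lines.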
- Documentation:
  - feature: Add info on how to run tests in VSCode.
  - feature: Add `issue_classification_user_journey` notebook.
  - feature: Add documentation of newly added data retrieval methods in `how_to_retrieve_data_for_analysis`.
  - feature: Add documentation of the release workflow.
Fixes
- fix: Fix version number in pyproject.toml in IL
- fix: Fix instructions for installing IL via pip.
Full Changelog: v0.7.0...v0.8.0
v0.7.0
Overview
- Refactoring in Evaluation
  - Many changes to the evaluation repository structure, plus renaming, to make the overall handling more intuitive and consistent
- New How-To’s and improved documentation
- Simplified repository access via data selection methods
- Better text highlighting
- Better trace viewer integration
  - Displaying `InMemoryTracer` objects in a Jupyter notebook will load them into an active trace viewer.
Breaking Changes
- breaking change: `FScores` are now correctly exposed as `FScores` and no longer as `RougeScores`.
- breaking change: `HuggingFaceAggregationRepository` and `HuggingFaceDatasetRepository` now consistently follow the same folder structure as `FileDatasetRepository` when creating datasets. This means that datasets are stored in a folder `datasets` with additional sub-folders named after the respective dataset ID.
- breaking change: Split `run_repository` into `file_run_repository` and `in_memory_run_repository`.
- breaking change: Split `evaluation_repository` into `argilla_evaluation_repository`, `file_evaluation_repository` and `in_memory_evaluation_repository`.
- breaking change: Split `dataset_repository` into `file_dataset_repository` and `in_memory_dataset_repository`.
- breaking change: Split `aggregation_repository` into `file_aggregation_repository` and `in_memory_aggregation_repository`.
- breaking change: Renamed `evaluation/run.py` to `evaluation/run_evaluator.py`.
- breaking change: Split `evaluation/domain` and distributed it across the aggregation, evaluation, dataset and run packages.
- breaking change: Split `evaluation/argilla` and distributed it across the aggregation and evaluation packages.
- breaking change: Split evaluation into separate dataset, run, evaluation and aggregation packages.
- breaking change: Split `evaluation/hugging_face.py` into dataset and aggregation repository files in the `data_storage` package.
- breaking change: `create_dataset` now returns the new `Dataset` type instead of a dataset ID.
- breaking change: Consistent naming for repository root directories when creating evaluations or aggregations: `.../eval` → `.../evaluations` and `.../aggregation` → `.../aggregations`.
- breaking change: Core tasks no longer provide defaults for the applied models.
- breaking change: Methods returning entities from repositories now return the results ordered by their IDs.
- breaking change: Renamed `crashed_during_eval_count` to `crashed_during_evaluation_count` in `AggregationOverview`.
- breaking change: Renamed `create_evaluation_dataset` to `initialize_evaluation` in `EvaluationRepository`.
- breaking change: Renamed `to_explanation_response` to `to_explanation_request` in `ExplainInput`.
- breaking change: Removed `TextHighlight::text` in favor of `TextHighlight::start` and `TextHighlight::end`.
- breaking change: Removed `IntelligenceApp` and `IntelligenceStarterApp`.
- breaking change: `RetrieverBasedQa` now uses `MultiChunkQa` instead of the generic task `SingleChunkQa`.
- breaking change: `EvaluationRepository::failed_example_evaluations` is no longer abstract.
- breaking change: Elo calculation simplified:
  - `Payoff` from the elo package has been removed.
  - `PayoffMatrix` from the elo package has been renamed to `MatchOutcome`.
- breaking change: `SingleChunkQa` uses `logit_bias` to promote not answering for German.
- breaking change: Remove `ChunkOverlap` task.
- breaking change: Rename `Chunk` to `TextChunk`.
- breaking change: Rename `ChunkTask` to `Chunk`.
- breaking change: Rename `EchoTask` to `Echo`.
- breaking change: Rename `TextHighlightTask` to `TextHighlight`.
- breaking change: Rename `ChunkOverlapTask` to `ChunkOverlap`.
New Features
Aggregation:
- feature: `InstructComparisonArgillaAggregationLogic` uses the full evaluation set instead of a sample for aggregation.
Documentation
- feature: Added How-To’s (linked in the README):
- how to define a task
- how to implement a task
- how to create a dataset
- how to run a task on a dataset
- how to perform aggregation
- how to evaluate runs
- feature: Restructured and cleaned up README for more conciseness.
- feature: Add illustrations to Concepts.md.
- feature: Added tutorial for adding a task to a FastAPI app (linked in README).
- feature: Improved and added various DocStrings.
- feature: Added a README section about the client URL.
- feature: Add python naming convention to README
Classify
- feature: `PromptBasedClassify` now supports changing the prompt instruction via the `instruction` parameter.
- feature: Add default model for `PromptBasedClassify`.
- feature: Add default task for `PromptBasedClassify`.
Evaluation
- feature: All repositories now return a `ValueError` when trying to access an entry of a dataset that does not exist. If only the dataset itself is retrieved, `None` is returned instead.
- feature: `ArgillaEvaluationRepository` now handles failed evaluations.
- feature: Added `SingleHuggingfaceDatasetRepository`.
- feature: Added `HighlightCoverageGrader`.
- feature: Added `LanguageMatchesGrader`.
- feature: Added prettier default printing behavior for repository entities by providing overloads to the `__str__` and `__repr__` methods.
- feature: Added abstract `HuggingFace` repository base class.
- feature: Refactoring of the `HuggingFace` repository.
- feature: Added `HuggingFaceAggregationRepository`.
- feature: Added template method to individual repositories.
- feature: Added a `Dataset` model to the dataset repository. This allows storing a short descriptive name for the dataset for easier identification.
- feature: `SingleChunkQa` now internally uses the same model in `TextHighlight` by default.
- feature: `MeanAccumulator` tracks standard deviation and standard error.
- feature: `EloCalculator` now updates the ranking after each match.
- feature: Added data selection methods to repositories:
  - `AggregationRepository::aggregation_overviews`
  - `EvaluationRepository::run_overviews`
  - `EvaluationRepository::run_overview_ids`
  - `EvaluationRepository::example_output`
  - `EvaluationRepository::example_outputs`
  - `EvaluationRepository::example_output_ids`
  - `EvaluationRepository::example_trace`
  - `EvaluationRepository::example_tracer`
  - `RunRepository::run_overviews`
  - `RunRepository::run_overview_ids`
  - `RunRepository::example_output`
  - `RunRepository::example_outputs`
  - `RunRepository::example_output_ids`
  - `RunRepository::example_trace`
  - `RunRepository::example_tracer`
- feature: `Evaluator` continues in case of no successful outputs.
Q & A
- feature: Define default parameters for `LongContextQa` and `SingleChunkQa`.
- feature: Define default task for `RetrieverBasedQa`.
- feature: Define default model for `KeyWordExtract` and `MultiChunkQa`.
- feature: Improved focus of highlights in `TextHighlight` tasks.
- feature: Added filtering for `TextHighlight` tasks.
- feature: Introduce `logit_bias` to `SingleChunkQa`.
Summarize
- feature: Added `RecursiveSummarizeInput`.
- feature: Define defaults for `SteerableSingleChunkSummarize`, `SteerableLongContextSummarize` and `RecursiveSummarize`.
Tracer
- feature: Added better trace viewer integration:
  - Added trace storage to the trace viewer server
  - Added `submit_to_tracer_viewer` method to `InMemoryTracer`
  - UI and navigation improvements for the trace viewer
  - Added exception handling for tracers during log entry writing
Others
- feature: The following classes are now exposed: `DocumentChunk`, `MultipleChunkQaOutput`, `Subanswer`.
- feature: Simplified internal imports.
- feature: Streamlining of the `__init__` parameters of all tasks:
  - Sub-tasks are typically exposed as `__init__` parameters with sensible defaults.
  - Defaults for non-trivial parameters like models or tasks are defined in `__init__`, while the default parameter value is `None`.
  - Instead of exposing parameters that are passed on to sub-tasks, the sub-tasks themselves are exposed.
- feature: Update supported models.
Fixes
- fix: Fixed exception handling in language detection of `LanguageMatchesGrader`.
- fix: Fixed a bug that could lead to cut-off highlight ranges in `TextHighlight` tasks.
- fix: Fixed `list_ids` methods to use `path_to_str`.
- fix: Disallow traces without an end in the trace viewer.
- fix: `ArgillaClient` now correctly uses the provided API URL instead of hard-coded localhost.
Full Changelog: v0.6.0...v0.7.0
v0.6.0
Breaking Changes
- breaking change: The evaluation module is moved from core to evaluation.
- breaking change: RetrieverBasedQa task answers now contain document ids in each subanswer.
- breaking change: LongContextSummarize no longer supports the max_loops parameter.
- breaking change: Rich Model Representation
- The LLM-based tasks no longer accept client, but rather an AlephAlphaModel, which holds the client. The available model classes are AlephAlphaModel and LuminousControlModel.
- The AlephAlphaModel is responsible for its prompt format, tokenizers, complete task and explain task. These responsibilities were moved into the model classes.
- The default client url is now configurable via the environment variable CLIENT_URL.
- breaking change: PromptWithMetadata is removed in favor of RichPrompt. The semantics remain largely unchanged.
- breaking change: The compression-dependent long context summarize classes as well as the few-shot summarize class were removed. Use the better-performing steerable summary classes.
- breaking change: Runner, Evaluator & Aggregation
- The EvaluationRepository has been split up. There are now four repositories: dataset, run, evaluation and aggregation. These repositories save information from their respective steps.
- The evaluation and evaluation aggregation have been split and are now provided by the classes Evaluator and Aggregator, respectively. These two classes have no abstract methods. The evaluation and aggregation logic is provided by implementing the abstract methods of the classes EvaluationLogic and AggregationLogic, which are passed to an instance of the Evaluator and Aggregator class, respectively.
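The Evaluator/Aggregator split described above is a strategy-object pattern: the framework classes drive the loop, while user subclasses of the two logic classes supply the domain-specific parts. A minimal sketch under that reading — the method names and class shapes here are illustrative, not the SDK's exact signatures:

```python
from abc import ABC, abstractmethod
from typing import Iterable


class EvaluationLogic(ABC):
    """User-implemented: how to judge a single output against its expectation."""

    @abstractmethod
    def do_evaluate(self, expected, output):
        ...


class AggregationLogic(ABC):
    """User-implemented: how to fold per-example evaluations into one summary."""

    @abstractmethod
    def aggregate(self, evaluations: Iterable):
        ...


class Evaluator:
    """Framework class with no abstract methods; the logic is injected."""

    def __init__(self, logic: EvaluationLogic) -> None:
        self._logic = logic

    def evaluate_outputs(self, pairs):
        return [self._logic.do_evaluate(expected, output) for expected, output in pairs]


class Aggregator:
    def __init__(self, logic: AggregationLogic) -> None:
        self._logic = logic

    def aggregate(self, evaluations):
        return self._logic.aggregate(evaluations)
```

The payoff of this design is that storage, parallelism, and error handling live once in the framework classes, while a new metric only requires implementing the two small logic interfaces.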
New Features
- Documentation
- feature: Added an intro to the Intelligence Layer concepts in Concepts.md.
- feature: Added documentation on how to execute tasks in parallel. See the performance_tips notebook for more information.
- QA
- feature: RetrieverBasedQa task no longer sources its final answer from all sources, but only the most relevant ones. This performed better in evaluation.
- feature: The notebooks for RetrieverBasedQa have been updated to use SingleChunkQa.
- feature: SingleChunkQa now supports a custom no-answer phrase.
- feature: MultiChunkQA and LongContextQa allow for more configuration of the used qa-task.
- feature: Make the distance metric configurable in QdrantInMemoryRetriever.
- feature: Added list_namespaces to DocumentIndexClient to list all available namespaces in DocumentIndex.
- Evaluation
- feature: The Argilla evaluation now supports splitting a dataset for multiple people via the split_dataset function.
- feature: Utilities for ELO score/ranking calculation
- The build_tournaments utility function has been added to facilitate the computation of ELO scores when evaluating two models. See InstructComparisonArgillaEvaluator for an example how it can be used to compute the ELO scores.
- feature: The Evaluator can run multiple evaluation tasks in parallel.
- Intelligence app
- feature: IntelligenceApp returns 204 if the output is None
- feature: Allow registering tasks with a task dependency in IntelligenceApp.
- Others
- feature: Runner accepts a new parameter num_examples in run_dataset, specifying that only the first n examples should be run.
- feature: Support None as return type in Task
- feature: Added a new task: ChunkOverlapTask splits a longer text into overlapping chunks.
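Overlapping chunking, as introduced by ChunkOverlapTask, can be sketched with a simple character-based version (the real task operates on tokenized text; this function and its parameters are illustrative only):

```python
def overlapping_chunks(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Split `text` into chunks of `chunk_size` characters where each
    chunk shares its first `overlap` characters with the end of the
    previous chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Overlap ensures that a sentence cut at a chunk boundary still appears whole in at least one chunk, which helps downstream retrieval and QA.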
Full Changelog: v0.5.1...v0.6.0
v0.5.1
Fix failed tag
Full Changelog: v0.5.0...v0.5.1
v0.5.0
Breaking Changes
- Document Index search results now properly return `DocumentChunk`s instead of `Document` objects to make it clear that only a portion of the document is returned.
- `Instruct` and `FewShot` tasks now take the model name in the constructor instead of the input.
- `Dataset`s have now been moved to `DatasetRepository`s, which are responsible for loading and storing datasets. This allows for more flexibility in how datasets are loaded and stored.
New Features
- Introduced an `OpenTelemetryTracer` to allow for sending trace spans to an OpenTelemetry collector.
- Notebook walking through how to use Argilla for human evaluation.
- `SteerableLongContextSummarize` task that allows for steering the summarization process by providing a natural language instruction.
- Document Index `SearchResult`s now also return the document ID for each chunk, to make it easier to retrieve the full document.
- Retrievers now supply a way to retrieve the full document by ID.
- Introduced the concept of `Accumulator`s to evaluation for incrementally calculating metrics.
- Added `EloCalculator` metrics for calculating Elo scores in evaluation methods.
- Introduced a new `HuggingFaceDatasetRepository` for loading datasets from the HuggingFace datasets library.
- Made it easier to evaluate two tasks and/or models against each other.
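The core of any Elo calculator is the per-match rating update. As a sketch of the standard formula (the SDK's `EloCalculator` parameters, such as the K-factor, are not documented here, so the values below are assumptions):

```python
def elo_update(
    rating_a: float,
    rating_b: float,
    score_a: float,
    k: float = 20.0,  # assumed K-factor; the real calculator may differ
) -> tuple[float, float]:
    """One Elo update after a single match between players A and B.
    score_a is 1.0 for a win of A, 0.5 for a draw, 0.0 for a loss."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta
```

Because the update is symmetric (A gains exactly what B loses), applying it after every match keeps the rating pool's total constant while rankings converge.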
Fixes
- Argilla client properly handles pagination when retrieving records
- Ensured file-based repositories are writing and reading in UTF-8
Full Changelog: v0.4.1...v0.5.0
v0.4.1
Fix missing version bump in the packages
Full Changelog: v0.4.0...v0.4.1
v0.4.0
Breaking Changes
- `Evaluator` methods changed to support asynchronous processing for human eval. To run everything at once, change `evaluator.evaluate()` calls to `evaluator.run_and_evaluate()`.
  - An evaluation also now returns an `EvaluationOverview`, with much more information about the output of the evaluation.
- `EmbeddingBasedClassify`: init arguments swapped places, from `labels_with_examples, client` to `client, labels_with_examples`.
- `PromptOutput` for `Instruct` tasks now inherits from `CompleteOutput` to make it easier to use more information about the raw completion response.
New Features
- New `IntelligenceApp` builder to quickly spin up a FastAPI server with your `Task`s.
- Integration with Argilla for human evaluation.
- `CompleteOutput` and `PromptOutput` now support getting the `generated_tokens` in the completion for downstream calculations.
- Summarization use cases now allow for overriding the default model.
- New `RecursiveSummarizer` allows for recursively calling one of the `LongContextSummarize` tasks until certain thresholds are reached.
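Recursive summarization as described above boils down to re-applying a summarize step until a threshold is met. A minimal sketch of that loop — the function name, the `max_length`/`max_loops` thresholds, and the injected `summarize` callable are hypothetical, not the `RecursiveSummarizer` API:

```python
from typing import Callable


def recursive_summarize(
    text: str,
    summarize: Callable[[str], str],
    max_length: int,
    max_loops: int = 5,
) -> str:
    """Repeatedly apply `summarize` until the text fits within
    `max_length` characters or `max_loops` passes have been made.
    The loop cap guards against a summarizer that fails to shrink its input."""
    for _ in range(max_loops):
        if len(text) <= max_length:
            break
        text = summarize(text)
    return text
```

The same structure works with token counts instead of character counts; only the threshold check changes.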
Fixes
- `LimitedConcurrencyClient`'s `from_token` method now supports a custom API host.
Full Changelog: v0.3.0...v0.4.0