
Releases: Aleph-Alpha/intelligence-layer-sdk

v0.9.0

16 Apr 07:32

Breaking Changes

  • breaking change: Renamed the field chunk of AnswerSource to search_result for the multi-chunk retriever QA.
  • breaking change: The implementation of HuggingFace repository creation and deletion has been moved to HuggingFaceRepository.

New Features

  • feature: HuggingFaceDataset- & AggregationRepositories now have an explicit create_repository function.
  • feature: Add MultipleChunkRetrieverBasedQa, a task that performs better and faster on retriever QA, especially with long-context models (sketched below).
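
A hypothetical usage sketch tying these two notes together; the import path, constructor, and run signature are assumptions, and only the search_result field rename comes from the notes above:

```python
# Hypothetical sketch; import path, constructor, and run() signature are
# assumptions -- only the `search_result` field rename is from these notes.
from intelligence_layer.use_cases import MultipleChunkRetrieverBasedQa  # assumed path


def print_sources(qa: MultipleChunkRetrieverBasedQa, qa_input, tracer) -> None:
    output = qa.run(qa_input, tracer)  # assumed signature
    for source in output.sources:  # assumed container field
        # Breaking change above: each source now exposes `search_result`
        # instead of the former `chunk` field.
        print(source.search_result)
```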

Full Changelog: v0.8.2...v0.9.0

v0.8.2

11 Apr 12:05

New Features

  • feature: Add SearchEvaluationLogic and SearchAggregationLogic to evaluate search use cases.
  • feature: The trace viewer and the IL Python package are now deployed to Artifactory.

Fixes

  • Documentation
    • fix: Add the missing link to the issue_classification_user_journey notebook to the tutorials section of the README.
    • fix: The confusion matrix in issue_classification_user_journey now has rounded numbers.

Full Changelog: v0.8.1...v0.8.2

v0.8.1

08 Apr 12:45

What's Changed

Fixes

  • fix: Linting for release version

Full Changelog: v0.8.0...v0.8.1

v0.8.0

08 Apr 11:55

What's Changed

New Features

  • feature: Expose start and end index in DocumentChunk

  • feature: Add sorted_scores property to SingleLabelClassifyOutput.

  • feature: Error information is printed to the console on failed runs and evaluations.

  • feature: The stack trace of a failed run/evaluation is included in the FailedExampleRun/FailedExampleEvaluation object

  • feature: Runner.run_dataset(..) and Evaluator.evaluate_run(..) now have an optional flag abort_on_error to stop running/evaluating when an error occurs (see the sketch after this list).

  • feature: Added Runner.failed_runs(..) and Evaluator.failed_evaluations(..) to retrieve all failed run / evaluation lineages

  • feature: Added .successful_example_outputs(..) and .failed_example_outputs(..) to RunRepository to match the evaluation repository

  • feature: Added an optional argument to set an ID when creating a Dataset via DatasetRepository.create_dataset(..).

  • feature: Traces now log exceptions using the ErrorValue type.

  • Documentation:

    • feature: Add info on how to run tests in VSCode
    • feature: Add issue_classification_user_journey notebook.
    • feature: Add documentation of the newly added data retrieval methods in how_to_retrieve_data_for_analysis
    • feature: Add documentation of release workflow
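
A hedged sketch of how the new error-handling pieces might fit together; the runner/evaluator construction and the exact method signatures are assumptions based on the notes above:

```python
# Hedged sketch; `runner`, `evaluator`, and `dataset_id` are placeholders,
# and the method signatures are assumptions based on the release notes.
def run_and_inspect(runner, evaluator, dataset_id: str) -> None:
    # abort_on_error=False keeps going past failing examples ...
    run_overview = runner.run_dataset(dataset_id, abort_on_error=False)

    # ... and the failed lineages (including stack traces via
    # FailedExampleRun / FailedExampleEvaluation) can be inspected afterwards.
    for failed_run in runner.failed_runs(run_overview.id):  # assumed signature
        print(failed_run)

    evaluation_overview = evaluator.evaluate_run(run_overview.id)
    for failed_eval in evaluator.failed_evaluations(  # assumed signature
        evaluation_overview.id
    ):
        print(failed_eval)
```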

Fixes

  • fix: Fix version number in pyproject.toml in IL
  • fix: Fix instructions for installing IL via pip.

Full Changelog: v0.7.0...v0.8.0

v0.7.0

28 Mar 10:23

Overview

  • Refactoring in Evaluation
    • Many changes to the evaluation repository structure, including renaming, to make the overall handling more intuitive and consistent
  • New How-To’s and improved documentation
  • Simplified repository access via data selection methods
  • Better text highlighting
  • Better trace viewer integration
    • Displaying InMemoryTracer objects in a Jupyter notebook will load them into an active trace viewer.

Breaking Changes

  • breaking change: FScores are now correctly exposed as FScores and no longer as RougeScores
  • breaking change: HuggingFaceAggregationRepository and HuggingFaceDatasetRepository now consistently follow the same folder structure as FileDatasetRepository when creating datasets. Datasets are stored in a datasets folder, with additional sub-folders named after the respective dataset ID.
  • breaking change: Split run_repository into file_run_repository and in_memory_run_repository.
  • breaking change: Split evaluation_repository into argilla_evaluation_repository, file_evaluation_repository and in_memory_evaluation_repository.
  • breaking change: Split dataset_repository into file_dataset_repository and in_memory_dataset_repository.
  • breaking change: Split aggregation_repository into file_aggregation_repository and in_memory_aggregation_repository.
  • breaking change: Renamed evaluation/run.py to evaluation/run_evaluator.py
  • breaking change: Split evaluation/domain and distribute it across aggregation, evaluation, dataset and run packages.
  • breaking change: Split evaluation/argilla and distribute it across aggregation and evaluation packages.
  • breaking change: Split evaluation into separate dataset, run, evaluation and aggregation packages.
  • breaking change: Split evaluation/hugging_face.py into dataset and aggregation repository files in data_storage package.
  • breaking change: create_dataset now returns the new Dataset type instead of a dataset ID.
  • breaking change: Consistent naming for repository root directories when creating evaluations or aggregations: .../eval → .../evaluations and .../aggregation → .../aggregations.
  • breaking change: Core tasks no longer provide defaults for the applied models.
  • breaking change: Methods returning entities from repositories now return the results ordered by their IDs.
  • breaking change: Renamed crashed_during_eval_count to crashed_during_evaluation_count in AggregationOverview.
  • breaking change: Renamed create_evaluation_dataset to initialize_evaluation in EvaluationRepository.
  • breaking change: Renamed to_explanation_response to to_explanation_request in ExplainInput.
  • breaking change: Removed TextHighlight::text in favor of TextHighlight::start and TextHighlight::end
  • breaking change: Removed IntelligenceApp and IntelligenceStarterApp
  • breaking change: RetrieverBasedQa now uses MultiChunkQa instead of the generic task SingleChunkQa.
  • breaking change: EvaluationRepository::failed_example_evaluations is no longer abstract.
  • breaking change: Elo calculation simplified (see the concept sketch after this list):
    • Payoff from the elo package has been removed
    • PayoffMatrix from the elo package has been renamed to MatchOutcome
  • breaking change: SingleChunkQa uses logit_bias to promote not answering for German.
  • breaking change: Remove ChunkOverlap task.
  • breaking change: Rename Chunk to TextChunk.
  • breaking change: Rename ChunkTask to Chunk.
  • breaking change: Rename EchoTask to Echo.
  • breaking change: Rename TextHighlightTask to TextHighlight.
  • breaking change: Rename ChunkOverlapTask to ChunkOverlap.
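
For orientation, a concept sketch of the Elo update behind the simplified calculation; this is the standard Elo formula, not the SDK's exact code:

```python
# Standard Elo update after one match; a concept sketch, not SDK code.
def elo_update(
    rating_a: float, rating_b: float, score_a: float, k: float = 20.0
) -> tuple[float, float]:
    """score_a encodes the match outcome for player A:
    1.0 win, 0.5 draw, 0.0 loss."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    delta = k * (score_a - expected_a)
    # Zero-sum update: A gains exactly what B loses.
    return rating_a + delta, rating_b - delta
```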

New Features

Aggregation:

  • feature: InstructComparisonArgillaAggregationLogic uses the full evaluation set instead of a sample for aggregation

Documentation

  • feature: Added How-To’s (linked in the README):
    • how to define a task
    • how to implement a task
    • how to create a dataset
    • how to run a task on a dataset
    • how to perform aggregation
    • how to evaluate runs
  • feature: Restructured and cleaned up the README for more conciseness.
  • feature: Add illustrations to Concepts.md.
  • feature: Added a tutorial for adding a task to a FastAPI app (linked in the README).
  • feature: Improved and added various docstrings.
  • feature: Added a README section about the client URL.
  • feature: Add Python naming conventions to the README.

Classify

  • feature: PromptBasedClassify now supports changing the prompt instruction via the instruction parameter.
  • feature: Add a default model for PromptBasedClassify
  • feature: Add a default task for PromptBasedClassify

Evaluation

  • feature: All repositories now raise a ValueError when accessing an entry of a dataset that does not exist. If only the dataset itself is retrieved, None is returned.
  • feature: ArgillaEvaluationRepository now handles failed evaluations.
  • feature: Added SingleHuggingfaceDatasetRepository.
  • feature: Added HighlightCoverageGrader.
  • feature: Added LanguageMatchesGrader.
  • feature: Added prettier default printing behavior for repository entities by overriding the __str__ and __repr__ methods.
  • feature: Added abstract HuggingFace repository base-class.
  • feature: Refactoring of the HuggingFace repository
  • feature: Added HuggingFaceAggregationRepository.
  • feature: Added a template method to the individual repositories
  • feature: Added a Dataset model to the dataset repository. This allows storing a short, descriptive name for the dataset for easier identification.
  • feature: SingleChunkQa now uses the same model for its internal TextHighlight task by default.
  • feature: MeanAccumulator tracks standard deviation and standard error.
  • feature: EloCalculator now updates ranking after each match.
  • feature: Add data selection methods to repositories (illustrated after this list):
    • AggregationRepository::aggregation_overviews
    • EvaluationRepository::run_overviews
    • EvaluationRepository::run_overview_ids
    • EvaluationRepository::example_output
    • EvaluationRepository::example_outputs
    • EvaluationRepository::example_output_ids
    • EvaluationRepository::example_trace
    • EvaluationRepository::example_tracer
    • RunRepository::run_overviews
    • RunRepository::run_overview_ids
    • RunRepository::example_output
    • RunRepository::example_outputs
    • RunRepository::example_output_ids
    • RunRepository::example_trace
    • RunRepository::example_tracer
  • feature: The Evaluator continues even if there are no successful outputs
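
A hedged sketch of using the new data selection methods for analysis; the repository construction and the example_outputs signature are assumptions, while the method names come from the list above:

```python
# Hedged sketch; FileRunRepository construction and the example_outputs
# signature are assumptions -- only the method names are from the list above.
from intelligence_layer.evaluation import FileRunRepository  # assumed path

run_repository = FileRunRepository("./runs")  # assumed constructor

for run_overview in run_repository.run_overviews():
    # Entities are returned ordered by their IDs (see breaking changes above).
    for example_output in run_repository.example_outputs(  # assumed signature
        run_overview.id, str
    ):
        print(example_output)
```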

Q & A

  • feature: Define default parameters for LongContextQa, SingleChunkQa
  • feature: Define default task for RetrieverBasedQa
  • feature: Define default model for KeyWordExtract, MultiChunkQa
  • feature: Improved focus of highlights in TextHighlight tasks.
  • feature: Added filtering for TextHighlight tasks.
  • feature: Introduce logit_bias to SingleChunkQa

Summarize

  • feature: Added RecursiveSummarizeInput.
  • feature: Define defaults for SteerableSingleChunkSummarize, SteerableLongContextSummarize and RecursiveSummarize

Tracer

  • feature: Added better trace viewer integration:
    • Added trace storage to trace viewer server
    • Added submit_to_tracer_viewer method to InMemoryTracer
    • UI and navigation improvements for trace viewer
    • Add exception handling for tracers during log entry writing

Others

  • feature: The following classes are now exposed:
    • DocumentChunk
    • MultipleChunkQaOutput
    • Subanswer
  • feature: Simplified internal imports.
  • feature: Streamlined __init__ parameters of all tasks (see the sketch after this list):
    • Sub-tasks are typically exposed as __init__ parameters with sensible defaults.
    • Defaults for non-trivial parameters like models or tasks are created inside __init__, while the parameter itself defaults to None.
    • Instead of exposing parameters that are passed on to sub-tasks, the sub-tasks themselves are exposed.
  • feature: Update supported models
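
The streamlined __init__ convention, as a self-contained toy example; the class names here are hypothetical stand-ins, not SDK classes:

```python
from typing import Optional


class ChunkStep:
    """Hypothetical stand-in for a sub-task such as Chunk."""

    def run(self, text: str) -> list[str]:
        return [text]


class SummarizeStep:
    """Hypothetical task following the convention above: the sub-task itself
    is exposed as an __init__ parameter, defaults to None in the signature,
    and the non-trivial default is constructed inside __init__."""

    def __init__(self, chunk_step: Optional[ChunkStep] = None) -> None:
        self._chunk_step = chunk_step or ChunkStep()

    def run(self, text: str) -> str:
        return " ".join(self._chunk_step.run(text))
```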

Fixes

  • fix: Fixed exception handling in language detection of LanguageMatchesGrader.
  • fix: Fixed a bug that could lead to cut-off highlight ranges in TextHighlight tasks.
  • fix: Fixed list_ids methods to use path_to_str.
  • fix: Disallow traces without an end in the trace viewer.
  • fix: ArgillaClient now correctly uses the provided API URL instead of a hard-coded localhost.

Full Changelog: v0.6.0...v0.7.0

v0.6.0

27 Feb 13:09

Breaking Changes

  • breaking change: The evaluation module is moved from core to evaluation.
  • breaking change: RetrieverBasedQa task answers now contain document ids in each subanswer.
  • breaking change: LongContextSummarize no longer supports the max_loops parameter.
  • breaking change: Rich model representation (see the sketch after this list):
    • The LLM-based tasks no longer accept a client but rather an AlephAlphaModel, which holds the client. The available model classes are AlephAlphaModel and LuminousControlModel.
    • The model is responsible for its prompt format, tokenizer, complete task and explain task; these responsibilities were moved into the model classes.
    • The default client URL is now configurable via the environment variable CLIENT_URL.
  • breaking change: PromptWithMetadata is removed in favor of RichPrompt. The semantics remain largely unchanged.
  • breaking change: The compression-dependent long-context summarize classes as well as the few-shot summarize class were removed. Use the better-performing steerable summary classes instead.
  • breaking change: Runner, Evaluator & Aggregator
    • The EvaluationRepository has been split up. There are now four repositories: dataset, run, evaluation and aggregation. These repositories store the information from their respective steps.
    • Evaluation and aggregation have been split and are now provided by the Evaluator and Aggregator classes, respectively. These two classes have no abstract methods; the evaluation and aggregation logic is provided by implementing the abstract methods of EvaluationLogic and AggregationLogic, which are passed to an instance of the Evaluator or Aggregator class.
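
A hedged before/after sketch of the model change; the class names come from the notes above, while the import path, model name, and task wiring are assumptions:

```python
# Hedged sketch; import path, model name, and task signature are assumptions.
from intelligence_layer.core import LuminousControlModel, SingleChunkQa  # assumed path

# Before: tasks took a client, e.g. SingleChunkQa(client=client).
# After: tasks take a model, which holds the client as well as the prompt
# format, tokenizer, and the complete/explain tasks.
model = LuminousControlModel("luminous-base-control")  # illustrative model name
qa = SingleChunkQa(model=model)  # assumed parameter name
```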

New Features

  • Documentation
    • feature: Added an intro to the Intelligence Layer concepts in Concepts.md.
    • feature: Added documentation on how to execute tasks in parallel. See the performance_tips notebook for more information.
  • QA
    • feature: RetrieverBasedQa no longer sources its final answer from all sources but only from the most relevant one. This performed better in evaluation.
    • feature: The notebooks for RetrieverBasedQa have been updated to use SingleChunkQa.
    • feature: SingleChunkQa now supports a custom no-answer phrase.
    • feature: MultiChunkQa and LongContextQa allow for more configuration of the underlying QA task.
    • feature: Make the distance metric configurable in QdrantInMemoryRetriever.
    • feature: Added list_namespaces to DocumentIndexClient to list all available namespaces in the DocumentIndex.
  • Evaluation
    • feature: The Argilla integration now supports splitting a dataset across multiple people via the split_dataset function.
    • feature: Utilities for Elo score/ranking calculation
      • The build_tournaments utility function has been added to facilitate the computation of Elo scores when evaluating two models. See InstructComparisonArgillaEvaluator for an example of how it can be used to compute Elo scores.
    • feature: The Evaluator can run multiple evaluation tasks in parallel.
  • Intelligence app
    • feature: IntelligenceApp returns 204 if the output is None
    • feature: Allow registering tasks with a task dependency in IntelligenceApp.
  • Others
    • feature: Runner.run_dataset accepts a new parameter num_examples that limits a run to the first n examples of the dataset.
    • feature: Support None as a return type in Task
    • feature: Added a new task: ChunkOverlapTask splits a longer text into overlapping chunks (see the concept sketch below).
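
A concept sketch of overlapping chunking as described for ChunkOverlapTask; this illustrates the idea only and is not the SDK implementation:

```python
# Concept sketch of overlapping chunking; not the SDK implementation.
def overlapping_chunks(tokens: list[str], size: int, overlap: int) -> list[list[str]]:
    """Split `tokens` into chunks of `size` items, where consecutive
    chunks share `overlap` items (requires 0 <= overlap < size)."""
    step = size - overlap
    return [tokens[i : i + size] for i in range(0, max(len(tokens) - overlap, 1), step)]


assert overlapping_chunks(list("abcdef"), size=4, overlap=2) == [
    list("abcd"),
    list("cdef"),
]
```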

Full Changelog: v0.5.1...v0.6.0

v0.5.1

10 Jan 12:57

Fix failed tag
Full Changelog: v0.5.0...v0.5.1

v0.5.0

10 Jan 12:47

Breaking Changes

  • Document Index search results now properly return DocumentChunks instead of Document objects, making it clear that each result is only a portion of the document.
  • Instruct and FewShot tasks now take the model name in the constructor instead of the input.
  • Datasets have been moved to DatasetRepositories, which are responsible for loading and storing datasets. This allows for more flexibility in how datasets are loaded and stored.

New Features

  • Introduced an OpenTelemetryTracer to allow sending trace spans to an OpenTelemetry collector (see the sketch after this list).
  • Added a notebook walking through how to use Argilla for human evaluation.
  • Added a SteerableLongContextSummarize task that allows steering the summarization process with a natural-language instruction.
  • Document index SearchResults now also return the document ID for each chunk, to make it easier to retrieve the full document.
  • Retrievers now supply a way to retrieve the full document by ID.
  • Introduced the concept of Accumulators to evaluation for incrementally calculating metrics.
  • Added EloCalculator metrics for calculating Elo scores in evaluation methods.
  • Introduced new HuggingFaceDatasetRepository for loading datasets from the HuggingFace datasets library.
  • Made it easier to evaluate two tasks and/or models against each other.
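
A hedged setup sketch for the new OpenTelemetryTracer; the standard opentelemetry-sdk wiring below is real, but how the SDK tracer wraps it is an assumption:

```python
# Standard opentelemetry-sdk wiring; the OpenTelemetryTracer usage at the
# bottom is an assumption about the Intelligence Layer API.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)

# Assumed: the Intelligence Layer tracer wraps an opentelemetry tracer.
# from intelligence_layer.core import OpenTelemetryTracer  # assumed path
# tracer = OpenTelemetryTracer(trace.get_tracer("my-app"))
```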

Fixes

  • The Argilla client properly handles pagination when retrieving records.
  • Ensured file-based repositories write and read in UTF-8.

Full Changelog: v0.4.1...v0.5.0

v0.4.1

13 Dec 11:32

Fix missing version bump in the packages

Full Changelog: v0.4.0...v0.4.1

v0.4.0

13 Dec 11:29

Breaking Changes

  • Evaluator methods changed to support asynchronous processing for human evaluation. To run everything at once, change evaluator.evaluate() calls to evaluator.run_and_evaluate() (see the migration sketch after this list).
    • An evaluation now returns an EvaluationOverview with much more information about the output of the evaluation.
  • EmbeddingBasedClassify: the init arguments swapped places, from (labels_with_examples, client) to (client, labels_with_examples).
  • PromptOutput for Instruct tasks now inherits from CompleteOutput, making it easier to use more information from the raw completion response.
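
A hedged migration sketch for the two call-site changes above; all variables are placeholders and the run_and_evaluate signature is an assumption:

```python
# Hedged migration sketch; `evaluator`, `classify_cls`, `client`,
# `labels_with_examples`, and `dataset` are placeholders, and the
# run_and_evaluate signature is an assumption based on the note above.
def migrate(evaluator, classify_cls, client, labels_with_examples, dataset):
    # Before (<= v0.3.0): evaluation = evaluator.evaluate(dataset)
    # After (v0.4.0): runs and evaluates in one call, returning an
    # EvaluationOverview with richer information.
    overview = evaluator.run_and_evaluate(dataset)
    print(overview)

    # EmbeddingBasedClassify: argument order swapped in v0.4.0.
    # Before: classify_cls(labels_with_examples, client)
    return classify_cls(client, labels_with_examples)
```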

New Features

  • New IntelligenceApp builder to quickly spin up a FastAPI server with your Tasks
  • Integration with Argilla for human evaluation
  • CompleteOutput and PromptOutput now support getting the generated_tokens in the completion for downstream calculations.
  • Summarization use cases now allow for overriding the default model
  • New RecursiveSummarizer allows for recursively calling one of the LongContextSummarize tasks until certain thresholds are reached

Fixes

  • LimitedConcurrencyClient's from_token method now supports a custom API host

Full Changelog: v0.3.0...v0.4.0