
Releases: CivicDataLab/DataSpaceBackend

SDK v0.4.0

29 Dec 08:58


Updated AI model flow

SDK v0.3.3

27 Nov 13:13


Login fixes and more

SDK v0.3.2

27 Nov 12:59


Login fixes and more

SDK v0.3.1

20 Nov 10:02


Bug fixes and improvements

SDK v0.3.0

20 Nov 09:34


Ability to login with username and password

SDK v0.2.0

12 Nov 16:34
ba9912f


SDK v0.2.0 (Pre-release)

DataSpace SDK v0.2.0 Release Notes

Release Date: November 12, 2025
Type: Minor Release (Feature Addition)


🚀 New Features

AI Model Execution Support

Added comprehensive support for executing AI models directly through the SDK:

  • call_model(model_id, input_text, parameters) - Synchronous model execution
  • call_model_async(model_id, input_text, parameters) - Asynchronous execution (placeholder for Celery integration)

Supported Providers:

  • OpenAI (GPT-3.5, GPT-4, etc.)
  • Llama (Ollama, Together AI, Replicate, Custom endpoints)
  • HuggingFace (local and remote inference)
  • Custom API endpoints

Example:

from dataspace_sdk import DataSpaceClient

client = DataSpaceClient(base_url="https://api.dataspace.example.com")
client.login(keycloak_token="your_token")

result = client.aimodels.call_model(
    model_id="uuid",
    input_text="Hello, world!",
    parameters={"temperature": 0.7, "max_tokens": 100}
)

print(result["output"])  # Model response

HuggingFace Integration

Added comprehensive HuggingFace model support with new fields:

Field                   Type     Description
hf_use_pipeline         Boolean  Use Pipeline inference API
hf_auth_token           String   Authentication token for gated models
hf_model_class          Enum     Model head specification (8 variants)
hf_attn_implementation  String   Attention function (e.g., flash_attention_2)
framework               Enum     PyTorch or TensorFlow

Supported Model Classes:

  • AutoModelForCausalLM
  • AutoModelForSeq2SeqLM
  • AutoModelForSequenceClassification
  • AutoModelForNextSentencePrediction
  • AutoModelForMultipleChoice
  • AutoModelForTokenClassification
  • AutoModelForQuestionAnswering
  • AutoModelForMaskedLM

CRUD Operations

Full model lifecycle management:

# Create
model = client.aimodels.create({
    "name": "my-gpt-model",
    "display_name": "My GPT Model",
    "provider": "OPENAI",
    "provider_model_id": "gpt-4",
    "model_type": "TEXT_GENERATION",
    "is_public": True
})

# Update
client.aimodels.update(model_id, {
    "status": "ACTIVE",
    "is_public": False
})

# Delete
client.aimodels.delete_model(model_id)

📊 API Enhancements

New Endpoints

Method  Endpoint                        Description
POST    /api/aimodels/{id}/call/        Execute model inference
POST    /api/aimodels/{id}/call-async/  Async model execution
POST    /api/aimodels/                  Create new model
PATCH   /api/aimodels/{id}/             Update model
DELETE  /api/aimodels/{id}/             Delete model
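The SDK wraps these endpoints, but they can also be called directly. A minimal standard-library sketch; the JSON body shape and Bearer authentication are assumptions inferred from the SDK examples above, so verify them against your deployment:

```python
import json
import urllib.request

def build_call_url(base_url: str, model_id: str) -> str:
    """URL of the model-execution endpoint from the table above."""
    return f"{base_url.rstrip('/')}/api/aimodels/{model_id}/call/"

def call_model_raw(base_url, model_id, token, input_text, parameters=None):
    """POST to the call endpoint and decode the JSON response."""
    req = urllib.request.Request(
        build_call_url(base_url, model_id),
        data=json.dumps({"input_text": input_text,
                         "parameters": parameters or {}}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```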

Response Format

Standardized response for model execution:

{
  "success": true,
  "output": "Model generated response...",
  "latency_ms": 245.67,
  "status_code": 200,
  "provider": "OpenAI",
  "raw_response": {
    "choices": [...],
    "usage": {...}
  }
}

Error Response:

{
  "success": false,
  "error": "Error message...",
  "latency_ms": 123.45,
  "status_code": 500
}
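Given the two response shapes above, a small helper can normalise handling (a sketch, not part of the SDK):

```python
def extract_output(response: dict) -> str:
    """Return the model output, or raise using the error shape above."""
    if response.get("success"):
        return response["output"]
    raise RuntimeError(
        f"Model call failed (HTTP {response.get('status_code')}): "
        f"{response.get('error')}"
    )
```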

🔐 Security & Access Control

Organization-Based Access

Models can now be restricted to organization members:

from django.core.exceptions import PermissionDenied

# Only organization members can access private models (illustrative)
if not model.is_public and model.organization:
    # Check the user's organization membership
    if user not in model.organization.members:
        raise PermissionDenied  # surfaced to the client as 403 Forbidden

Authentication

  • Full Keycloak integration
  • Automatic token refresh
  • Secure API key storage with encryption

๐Ÿ—๏ธ Backend Services

ModelAPIClient

Features:

  • Multiple authentication methods (Bearer, API Key, Custom)
  • Request template system with placeholder replacement
  • Provider-specific request/response handling
  • Automatic retry with exponential backoff (3 attempts)
  • Endpoint statistics tracking

Supported Patterns:

# OpenAI format
{"model": "gpt-4", "messages": [...]}

# Llama/Ollama format
{"model": "llama2", "prompt": "...", "stream": false}

# Custom template
{"input": "{input}", "params": {...}}
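The retry behaviour above (3 attempts with exponential backoff, implemented with tenacity in the client) follows this general pattern, sketched here with the standard library only:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.5):
    """Call fn, retrying on failure with delays of base_delay * 2**n."""
    for n in range(attempts):
        try:
            return fn()
        except Exception:
            if n == attempts - 1:
                raise                        # all attempts exhausted
            time.sleep(base_delay * 2 ** n)  # 0.5s, 1.0s, ...
```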

ModelHFClient

Features:

  • Pipeline-based inference
  • Manual model loading with custom configurations
  • Automatic GPU/CPU device selection
  • Batch processing support
  • Comprehensive error handling
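The automatic GPU/CPU device selection mentioned above typically reduces to a check like this (a sketch; falls back to CPU when PyTorch is not installed, since torch is an optional dependency):

```python
def pick_device() -> str:
    """Return 'cuda' when a GPU is visible to PyTorch, else 'cpu'."""
    try:
        import torch
    except ImportError:
        return "cpu"  # torch is optional; CPU-only fallback
    return "cuda" if torch.cuda.is_available() else "cpu"
```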

📦 New Enums

AIModelProvider

OPENAI = "OPENAI"
LLAMA_OLLAMA = "LLAMA_OLLAMA"
LLAMA_TOGETHER = "LLAMA_TOGETHER"
LLAMA_REPLICATE = "LLAMA_REPLICATE"
LLAMA_CUSTOM = "LLAMA_CUSTOM"
CUSTOM = "CUSTOM"
HUGGINGFACE = "HUGGINGFACE"  # NEW

AIModelFramework

PYTORCH = "pt"
TENSORFLOW = "tf"

HFModelClass

CAUSAL_LM = "AutoModelForCausalLM"
SEQ2SEQ_LM = "AutoModelForSeq2SeqLM"
SEQUENCE_CLASSIFICATION = "AutoModelForSequenceClassification"
# ... and 5 more
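As a rough sketch, these values could be mirrored client-side as string-valued enums (class names assumed; check the SDK source for the canonical definitions). `str`-valued members serialise directly into JSON payloads:

```python
from enum import Enum

class AIModelFramework(str, Enum):
    """Mirrors the framework codes listed above."""
    PYTORCH = "pt"
    TENSORFLOW = "tf"

# usable anywhere a plain string is expected:
payload = {"framework": AIModelFramework.PYTORCH}
```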

🔧 Technical Improvements

Type Safety

  • Full mypy compatibility
  • Comprehensive type annotations
  • Proper handling of Union types
  • Type ignores for third-party libraries

Error Handling

  • Graceful degradation for missing endpoints
  • Detailed error messages
  • Proper exception hierarchy
  • Async/sync context handling

Performance

  • Connection pooling with httpx
  • Retry logic with exponential backoff
  • Endpoint statistics tracking
  • Async support throughout

📋 Migration Guide

From v0.1.0 to v0.2.0

✅ No Breaking Changes - Fully backward compatible

New Capabilities:

  1. Model Execution

    # Old: Not available
    # New: Execute models directly
    result = client.aimodels.call_model(model_id, "input text")
  2. CRUD Operations

    # Old: Read-only access
    # New: Full CRUD support
    client.aimodels.create({...})
    client.aimodels.update(id, {...})
    client.aimodels.delete_model(id)
  3. HuggingFace Support

    # Old: Limited provider support
    # New: Full HuggingFace integration
    model = client.aimodels.get_by_id_graphql(hf_model_id)
    print(model["hfModelClass"])

🔗 Dependencies

New Dependencies

httpx>=0.28.1          # Async HTTP client
tenacity>=9.1.2        # Retry logic
torch>=2.9.0           # PyTorch (optional, for HuggingFace)
transformers>=4.57.1   # HuggingFace models (optional)
nest-asyncio>=1.6.0    # Nested event loop support

Installation

# Basic installation (quotes stop the shell from treating ">" as a redirect)
pip install "dataspace-sdk>=0.2.0"

# With HuggingFace support
pip install "dataspace-sdk[huggingface]>=0.2.0"

🧪 Testing

Unit Tests

from unittest.mock import patch

# `client` is an authenticated DataSpaceClient (see the examples above)
@patch('dataspace_sdk.resources.aimodels.AIModelClient.call_model')
def test_model_execution(mock_call):
    mock_call.return_value = {
        "success": True,
        "output": "Test response"
    }
    
    result = client.aimodels.call_model("uuid", "test")
    assert result["success"] is True

Integration Tests

# Requires running DataSpace backend
client = DataSpaceClient(base_url="http://localhost:8000")
client.login(keycloak_token="test_token")

models = client.aimodels.list_all()
assert len(models) > 0

๐Ÿ› Bug Fixes

  • Fixed type annotations for mypy compatibility
  • Improved error handling for missing endpoints
  • Corrected async/sync execution context handling
  • Fixed response extraction for various providers
  • Resolved Union type issues with AnonymousUser

🎯 Use Cases

  1. Centralized Model Registry - Manage all AI models in one place
  2. Multi-Provider Execution - Unified interface for different providers
  3. Access Control - Organization-based model access
  4. Monitoring - Track model usage and performance
  5. A/B Testing - Compare different models easily

🔮 Roadmap (v0.3.0)

  • Celery-based async execution
  • Streaming response support
  • Batch inference API
  • Model versioning
  • Cost tracking and analytics
  • Rate limiting per organization
  • Model performance metrics

🙏 Acknowledgments

Special thanks to all contributors who helped make this release possible!


📦 Installation

pip install dataspace-sdk==0.2.0

Requirements:

  • Python >= 3.11
  • Django >= 5.2
  • PostgreSQL (for backend)


Full Changelog: v0.1.0...v0.2.0

Initial Release

12 Nov 16:20
3845bdf


Supports CRUD Operations on Dataset, UseCase, Collaboratives
Ability to create charts for datafiles.