This guide provides comprehensive instructions for migrating between SAFLA versions, upgrading from legacy systems, and ensuring compatibility across different environments. It includes step-by-step procedures, compatibility matrices, and troubleshooting guidance.
- Version Migration
- Legacy System Migration
- Data Migration
- Configuration Migration
- API Migration
- Deployment Migration
- Rollback Procedures
- Testing and Validation
| From Version | To Version | Migration Type | Complexity | Estimated Time |
|---|---|---|---|---|
| 0.7.x | 0.8.x | Major | High | 4-8 hours |
| 0.8.x | 0.9.x | Major | Medium | 2-4 hours |
| 0.9.x | 1.0.0 | Major | Medium | 2-4 hours |
| 1.0.x | 1.1.x | Minor | Low | 30-60 minutes |
| 1.1.x | 1.2.x | Minor | Low | 30-60 minutes |
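Upgrades that skip a version are generally performed as a chain of the single hops listed above (for example 0.8.x → 0.9.x → 1.0.0), since each row of the matrix describes one supported step. Below is a minimal sketch of resolving that chain, assuming this row-by-row policy; `MIGRATION_CHAIN` and `migration_hops` are illustrative helpers, not part of the SAFLA CLI or SDK:

```python
# Illustrative helper: list the (from, to) hops needed for an upgrade, assuming
# migrations are applied one compatibility-matrix row at a time.
MIGRATION_CHAIN = ["0.7", "0.8", "0.9", "1.0", "1.1", "1.2"]

def migration_hops(current: str, target: str) -> list[tuple[str, str]]:
    """Return version pairs to feed into `safla migrate config --from X --to Y`."""
    start, end = MIGRATION_CHAIN.index(current), MIGRATION_CHAIN.index(target)
    if start >= end:
        raise ValueError("target must be newer than the current version")
    return list(zip(MIGRATION_CHAIN[start:end], MIGRATION_CHAIN[start + 1:end + 1]))

print(migration_hops("0.8", "1.0"))  # [('0.8', '0.9'), ('0.9', '1.0')]
```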
Migrating from 0.9.x to 1.0.0 is the most common migration path for current users.
Before migrating:

- Backup your data:

  ```bash
  # Backup configuration
  cp -r config/ config-backup-$(date +%Y%m%d)/

  # Backup memory data
  safla backup create --output backup-$(date +%Y%m%d).tar.gz

  # Backup custom extensions
  cp -r extensions/ extensions-backup-$(date +%Y%m%d)/
  ```

- Check system requirements:

  ```bash
  safla system-check --target-version 1.0.0
  ```

- Review breaking changes (a script for locating affected imports follows this list):
  - Memory API changes
  - Configuration format updates
  - MCP protocol updates
  - Safety constraint modifications
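Before touching any code, it can help to inventory where the deprecated 0.9.x imports appear. The following is a minimal sketch using only the standard library; the two patterns listed are the renames documented in Step 4 of this guide, not an exhaustive set:

```python
# Scan a project tree for SAFLA 0.9.x imports that were renamed in 1.0.0.
# Extend DEPRECATED_PATTERNS with any other pre-1.0 symbols your code uses.
from pathlib import Path

DEPRECATED_PATTERNS = [
    "from safla.memory import MemorySystem",   # 1.0.0: safla.memory.VectorMemory
    "from safla.agents import AgentManager",   # 1.0.0: safla.coordination.AgentCoordinator
]

def find_deprecated_usage(root: str = ".") -> None:
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if any(pattern in line for pattern in DEPRECATED_PATTERNS):
                print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    find_deprecated_usage()
```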
Step 1: Update Dependencies

```bash
# Update Python dependencies
pip install --upgrade safla==1.0.0

# Update Node.js dependencies (if using Node.js components)
npm install safla@1.0.0
```

Step 2: Migrate Configuration
```bash
# Use the migration tool
safla migrate config --from 0.9 --to 1.0 --config-dir ./config

# Manual migration if needed
safla config validate --version 1.0
```

Configuration Changes (0.9.x → 1.0.0):
```yaml
# OLD (0.9.x)
memory:
  vector_db:
    type: "faiss"
    index_type: "flat"
  episodic:
    storage: "sqlite"
    max_size: 1000000

# NEW (1.0.0)
memory:
  vector:
    provider: "faiss"
    index_type: "hnsw"
    dimensions: 768
  episodic:
    provider: "sqlite"
    max_episodes: 1000000
    retention_policy: "time_based"
```

Step 3: Migrate Memory Data
```bash
# Migrate vector memory
safla migrate memory vector --from-version 0.9 --to-version 1.0

# Migrate episodic memory
safla migrate memory episodic --from-version 0.9 --to-version 1.0

# Migrate semantic memory
safla migrate memory semantic --from-version 0.9 --to-version 1.0
```

Step 4: Update Custom Code
API changes that require code updates:
Memory API:

```python
# OLD (0.9.x)
from safla.memory import MemorySystem

memory = MemorySystem()
result = memory.store_vector(data, metadata)

# NEW (1.0.0)
from safla.memory import VectorMemory

memory = VectorMemory()
result = await memory.store(data, metadata=metadata)
```

Agent API:

```python
# OLD (0.9.x)
from safla.agents import AgentManager

manager = AgentManager()
manager.add_agent(agent_config)

# NEW (1.0.0)
from safla.coordination import AgentCoordinator

coordinator = AgentCoordinator()
await coordinator.register_agent(agent_instance)
```
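Because the 1.0.0 memory and coordination APIs are coroutine-based, synchronous 0.9.x call sites need an event loop. Where a full async refactor is not yet practical, a thin wrapper can bridge the gap. The following is a minimal sketch assuming the `VectorMemory.store()` signature shown above; `store_sync` is an illustrative helper, not part of the SAFLA SDK:

```python
# Illustrative bridge for call sites that remain synchronous after the upgrade:
# run the new coroutine-based store() call to completion on a fresh event loop.
# Note: asyncio.run() cannot be nested inside an already running event loop.
import asyncio

from safla.memory import VectorMemory

def store_sync(memory: VectorMemory, data, metadata=None):
    """Synchronous wrapper around the async 1.0.0 store() API."""
    return asyncio.run(memory.store(data, metadata=metadata))

# Usage from legacy synchronous code:
#   memory = VectorMemory()
#   result = store_sync(memory, "text data", metadata={"source": "user_input"})
```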
Step 5: Update MCP Servers

```yaml
# OLD (0.9.x)
mcp:
  servers:
    - name: "custom-server"
      command: ["node", "server.js"]

# NEW (1.0.0)
mcp:
  servers:
    custom-server:
      command: "node"
      args: ["server.js"]
      env:
        LOG_LEVEL: "info"
```

Step 6: Validate Migration
```bash
# Run validation tests
safla validate --comprehensive

# Check memory integrity
safla memory validate --all-types

# Test agent coordination
safla agents test --all
```

For 0.8.x → 0.9.x migrations, the key changes are:

- Agent API Restructure: Complete rewrite of agent coordination
- Memory Consolidation: New consolidation algorithms
- Safety Framework: Enhanced safety constraints
```bash
# 1. Backup and prepare
safla backup create --version 0.8

# 2. Update configuration
safla migrate config --from 0.8 --to 0.9

# 3. Migrate agent definitions
safla migrate agents --from 0.8 --to 0.9

# 4. Update memory settings
safla migrate memory --from 0.8 --to 0.9

# 5. Validate
safla validate --target-version 0.9
```

Migrating from 0.7.x to 0.8.x is a major architectural migration requiring significant changes:
- Complete API Rewrite: All APIs have changed
- Memory System Overhaul: New memory architecture
- Configuration Format: YAML-based configuration
- Agent System: Introduction of MCP-based coordination
Due to the extensive changes, this migration requires a fresh installation approach:
```bash
# 1. Export data from 0.7.x
safla-0.7 export --format json --output legacy-data.json

# 2. Install SAFLA 0.8.x
pip install safla==0.8.0

# 3. Import data using migration tool
safla import legacy --source legacy-data.json --format 0.7

# 4. Rewrite custom code for new APIs
# (Manual process - see API migration guide)
```
The safla.migration module provides helpers for migrating from common legacy ML frameworks and vector databases.

From TensorFlow:

```python
# Migration helper for TensorFlow models
from safla.migration import TensorFlowMigrator
from safla.coordination import AgentCoordinator

migrator = TensorFlowMigrator()

# Convert TensorFlow model to SAFLA agent
agent = migrator.convert_model(
    model_path="path/to/model.pb",
    agent_type="classification",
    capabilities=["data_analysis"]
)

# Register in SAFLA
coordinator = AgentCoordinator()
await coordinator.register_agent(agent)
```
From scikit-learn:

```python
# Migration helper for scikit-learn models
from safla.migration import SklearnMigrator
import joblib

# Load existing model
model = joblib.load('model.pkl')

# Convert to SAFLA agent
migrator = SklearnMigrator()
agent = migrator.convert_model(
    model=model,
    model_type="regression",
    feature_names=feature_names  # feature names from your existing pipeline
)
```
From Pinecone:

```python
from safla.migration import PineconeMigrator

migrator = PineconeMigrator(
    api_key="your-pinecone-key",
    environment="us-west1-gcp"
)

# Migrate vectors to SAFLA
await migrator.migrate_index(
    index_name="source-index",
    target_memory="vector",
    batch_size=1000
)
```
From Weaviate:

```python
from safla.migration import WeaviateMigrator

migrator = WeaviateMigrator(url="http://localhost:8080")

# Migrate schema and data
await migrator.migrate_class(
    class_name="Document",
    target_memory="semantic",
    include_vectors=True
)
```
Vector memory migration:

```python
from safla.migration import VectorMemoryMigrator

async def migrate_vector_data():
    migrator = VectorMemoryMigrator()

    # Configure source and target
    await migrator.configure_source(
        provider="faiss",
        path="./old_vectors.index"
    )
    await migrator.configure_target(
        provider="qdrant",
        url="http://localhost:6333"
    )

    # Perform migration with progress tracking
    async for progress in migrator.migrate():
        print(f"Migration progress: {progress.percentage}%")
        print(f"Vectors migrated: {progress.vectors_migrated}")
        print(f"Estimated time remaining: {progress.eta}")
```
Episodic memory migration:

```python
from safla.migration import EpisodicMemoryMigrator

async def migrate_episodic_data():
    migrator = EpisodicMemoryMigrator()

    # Migrate from SQLite to PostgreSQL
    await migrator.migrate(
        source_db="sqlite:///episodes.db",
        target_db="postgresql://user:pass@localhost/safla",
        batch_size=10000,
        preserve_timestamps=True
    )
```

For datasets larger than available memory:
```python
from safla.migration import StreamingMigrator

async def migrate_large_dataset():
    migrator = StreamingMigrator()

    # Configure streaming migration
    await migrator.configure(
        source_path="./large_dataset/",
        target_memory="vector",
        chunk_size="1GB",
        parallel_workers=4
    )

    # Start streaming migration
    async for chunk_result in migrator.stream_migrate():
        print(f"Chunk {chunk_result.chunk_id} completed")
        print(f"Records processed: {chunk_result.records}")

        # Optional: pause migration if needed
        if chunk_result.chunk_id % 10 == 0:
            await migrator.pause(duration=30)  # 30 second pause
```
```bash
# Use the built-in migration tool
safla config migrate \
  --from-version 0.9 \
  --to-version 1.0 \
  --config-dir ./config \
  --backup-dir ./config-backup \
  --validate
```
Memory configuration:

```yaml
# OLD FORMAT (0.9.x)
memory_config:
  vector_store:
    backend: "faiss"
    index: "flat"
    metric: "cosine"
  episode_store:
    backend: "sqlite"
    file: "episodes.db"

# NEW FORMAT (1.0.0)
memory:
  vector:
    provider: "faiss"
    index_type: "hnsw"
    similarity_metric: "cosine"
    dimensions: 768
  episodic:
    provider: "sqlite"
    database_url: "sqlite:///episodes.db"
    max_episodes: 1000000
```
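Where the CLI migration tool cannot be used, the key mapping above can be applied programmatically. The following is a rough sketch based only on the old and new layouts shown in this example; `convert_memory_config` is an illustrative helper, not a SAFLA API, and it only handles the keys listed here:

```python
# Illustrative converter from the 0.9.x memory_config layout to the 1.0.0 memory layout.
# Only keys shown in this guide are mapped; review anything else by hand.
def convert_memory_config(old: dict) -> dict:
    vector_store = old.get("vector_store", {})
    episode_store = old.get("episode_store", {})
    return {
        "memory": {
            "vector": {
                "provider": vector_store.get("backend"),
                "index_type": "hnsw",                       # the 1.0.0 examples in this guide use HNSW
                "similarity_metric": vector_store.get("metric"),
                "dimensions": 768,                          # new required field; match your embedding size
            },
            "episodic": {
                "provider": episode_store.get("backend"),
                "database_url": f"sqlite:///{episode_store.get('file')}",  # assumes the sqlite backend shown above
                "max_episodes": 1000000,                    # value used in the example above
            },
        }
    }

# Usage (assuming the old config was loaded with a YAML parser):
#   new_cfg = convert_memory_config(old_cfg["memory_config"])
```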
Security configuration:

```yaml
# OLD FORMAT (0.9.x)
security:
  auth_enabled: true
  jwt_secret: "secret"

# NEW FORMAT (1.0.0)
security:
  authentication:
    provider: "jwt"
    secret_key: "${JWT_SECRET}"
    token_expiry: "1h"
    mfa_required: false
  authorization:
    rbac_enabled: true
    default_role: "viewer"
```
Environment variable changes:

```bash
# OLD (0.9.x)
export SAFLA_MEMORY_BACKEND=faiss
export SAFLA_AUTH_SECRET=mysecret

# NEW (1.0.0)
export SAFLA_MEMORY_VECTOR_PROVIDER=faiss
export SAFLA_SECURITY_JWT_SECRET=mysecret
export SAFLA_MEMORY_VECTOR_DIMENSIONS=768
```
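Deployment scripts that template these variables can carry the renames as a small lookup table. This brief sketch covers only the variables listed above; `ENV_RENAMES` and `apply_env_renames` are illustrative, not part of SAFLA:

```python
# Illustrative rename map for the environment variables shown in this guide.
import os

ENV_RENAMES = {
    "SAFLA_MEMORY_BACKEND": "SAFLA_MEMORY_VECTOR_PROVIDER",
    "SAFLA_AUTH_SECRET": "SAFLA_SECURITY_JWT_SECRET",
}

def apply_env_renames() -> None:
    """Copy values from 0.9.x variable names to their 1.0.0 equivalents if not already set."""
    for old_name, new_name in ENV_RENAMES.items():
        if old_name in os.environ and new_name not in os.environ:
            os.environ[new_name] = os.environ[old_name]
```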
API endpoint changes:

| Old Endpoint (0.9.x) | New Endpoint (1.0.0) | Notes |
|---|---|---|
| /api/memory/store | /api/v1/memory/vector/store | Separated by memory type |
| /api/memory/search | /api/v1/memory/vector/search | Enhanced search parameters |
| /api/agents/list | /api/v1/coordination/agents | New coordination API |
| /api/safety/check | /api/v1/safety/validate | Enhanced safety validation |
REST request changes:

```python
# OLD (0.9.x)
import requests

response = requests.post('/api/memory/store', json={
    'data': 'text data',
    'type': 'vector'
})

# NEW (1.0.0)
response = requests.post('/api/v1/memory/vector/store', json={
    'content': 'text data',
    'metadata': {'source': 'user_input'},
    'embedding_model': 'text-embedding-ada-002'
})
```
Python SDK:

```python
# OLD (0.9.x)
from safla import SAFLA

safla = SAFLA()
result = safla.store_memory(data, memory_type='vector')
search_results = safla.search_memory(query, memory_type='vector')

# NEW (1.0.0)
from safla import SAFLA
from safla.memory import VectorMemory

safla = SAFLA()
vector_memory = VectorMemory()
result = await vector_memory.store(data, metadata={'type': 'user_input'})
search_results = await vector_memory.search(query, limit=10, threshold=0.7)
```
JavaScript SDK:

```javascript
// OLD (0.9.x)
import { SAFLA } from 'safla';

const safla = new SAFLA();
const result = await safla.storeMemory(data, 'vector');

// NEW (1.0.0)
import { SAFLA, VectorMemory } from 'safla';

const safla = new SAFLA();
const vectorMemory = new VectorMemory();
const result = await vectorMemory.store(data, {
  metadata: { source: 'user_input' }
});
```
Dockerfile (0.9.x):

```dockerfile
FROM safla/safla:0.9

COPY config.json /app/config.json
EXPOSE 8080
CMD ["safla", "start"]
```

Dockerfile (1.0.0):

```dockerfile
FROM safla/safla:1.0

COPY config/ /app/config/
COPY .env /app/.env
EXPOSE 8080 8081

HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD safla health-check

CMD ["safla", "start", "--config-dir", "/app/config"]
```
```bash
#!/bin/bash
# k8s-migration.sh

# Backup current deployment
kubectl get deployment safla -o yaml > safla-deployment-backup.yaml

# Scale down old deployment
kubectl scale deployment safla --replicas=0

# Apply new configuration
kubectl apply -f k8s/safla-1.0-deployment.yaml

# Wait for rollout
kubectl rollout status deployment/safla

# Verify migration
kubectl exec -it deployment/safla -- safla validate
```
```yaml
# k8s/safla-1.0-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: safla
  labels:
    app: safla
    version: "1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: safla
  template:
    metadata:
      labels:
        app: safla
        version: "1.0"
    spec:
      containers:
        - name: safla
          image: safla/safla:1.0.0
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 8081
              name: metrics
          env:
            - name: SAFLA_CONFIG_DIR
              value: "/app/config"
          volumeMounts:
            - name: config
              mountPath: /app/config
            - name: data
              mountPath: /app/data
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
      volumes:
        - name: config
          configMap:
            name: safla-config
        - name: data
          persistentVolumeClaim:
            claimName: safla-data
```
```bash
# Create rollback point before migration
safla backup create --name "pre-migration-$(date +%Y%m%d)"

# If migration fails, rollback
safla rollback --to "pre-migration-$(date +%Y%m%d)" --force
```
Manual rollback steps:

- Stop SAFLA services:

  ```bash
  safla stop --all-services
  ```

- Restore configuration:

  ```bash
  cp -r config-backup-20250101/ config/
  ```

- Restore data:

  ```bash
  safla restore --backup backup-20250101.tar.gz
  ```

- Downgrade software:

  ```bash
  pip install safla==0.9.5  # Previous version
  ```

- Restart services:

  ```bash
  safla start --validate-config
  ```
Kubernetes rollback:

```bash
# Rollback to previous deployment
kubectl rollout undo deployment/safla

# Rollback to specific revision
kubectl rollout undo deployment/safla --to-revision=2

# Check rollback status
kubectl rollout status deployment/safla
```
Pre-migration testing:

```bash
# Test current system
safla test --comprehensive --output pre-migration-test.json

# Validate data integrity
safla validate data --all-types --checksum

# Performance baseline
safla benchmark --output pre-migration-benchmark.json
```
Post-migration validation:

```bash
# Validate migration success
safla validate migration --from-version 0.9 --to-version 1.0

# Test all functionality
safla test --comprehensive --output post-migration-test.json

# Compare performance
safla benchmark --compare-with pre-migration-benchmark.json

# Validate data integrity
safla validate data --all-types --verify-checksums
```
Custom validation script:

```python
# custom-validation.py
import asyncio

from safla import SAFLA
from safla.testing import ValidationSuite

async def custom_validation():
    safla = SAFLA()
    validator = ValidationSuite()

    # Test memory operations
    memory_results = await validator.test_memory_operations()
    print(f"Memory tests: {memory_results.passed}/{memory_results.total}")

    # Test agent coordination
    agent_results = await validator.test_agent_coordination()
    print(f"Agent tests: {agent_results.passed}/{agent_results.total}")

    # Test safety mechanisms
    safety_results = await validator.test_safety_mechanisms()
    print(f"Safety tests: {safety_results.passed}/{safety_results.total}")

    # Custom business logic tests (user-defined test callables)
    custom_results = await validator.run_custom_tests([
        test_custom_workflow,
        test_integration_points,
        test_performance_requirements
    ])

    return validator.generate_report()

if __name__ == "__main__":
    report = asyncio.run(custom_validation())
    print(report)
```
Issue: Vector migration fails with dimension mismatch

```
Error: Vector dimension mismatch. Source: 512, Target: 768
```

Solution:

```python
# Use dimension transformation
from safla.migration import DimensionTransformer

transformer = DimensionTransformer()
await transformer.transform_vectors(
    source_dim=512,
    target_dim=768,
    method="pad_zeros"  # or "truncate", "interpolate"
)
```
Issue: Invalid configuration after migration

```
Error: Invalid memory configuration. Missing required field: dimensions
```

Solution:

```bash
# Use configuration fixer
safla config fix --auto-fix --backup

# Or manually update
safla config validate --fix-suggestions
```
Issue: Custom agents fail to register after migration

```
Error: Agent capability 'custom_analysis' not recognized
```

Solution:

```python
# Update agent capabilities
from safla.agents import AgentCapabilities

# Register custom capability
AgentCapabilities.register_custom('custom_analysis')

# Or use standard capabilities
class CustomAgent(Agent):
    def __init__(self):
        super().__init__(
            'custom-agent',
            capabilities=[AgentCapabilities.DATA_ANALYSIS]  # Use standard
        )
```

Getting help:

- Documentation: Check migration-specific docs
- Community Forum: Ask migration questions
- GitHub Issues: Report migration bugs
- Professional Support: Enterprise migration assistance
Migration utility commands:

```bash
# Migration health check
safla migration health-check --from 0.9 --to 1.0

# Migration dry run
safla migration dry-run --config ./config --data ./data

# Migration progress monitoring
safla migration status --watch

# Migration cleanup
safla migration cleanup --remove-backups --older-than 30d
```

Following this guide helps ensure smooth transitions between SAFLA versions while maintaining data integrity and system functionality. Always test migrations in a staging environment before applying them to production systems.
Last Updated: January 2025
Version: 1.0.0
Maintained by: SAFLA Migration Team