Python bindings for TensorLogic - Logic-as-Tensor planning layer
Status: Production Ready - ALL HIGH-PRIORITY FEATURES COMPLETE (100%)
Version: 0.1.0-rc.1
Last Updated: 2026-03-06
TensorLogic compiles logical rules (predicates, quantifiers, implications) into tensor equations (einsum graphs) that can be executed on various backends. This Python package provides a comprehensive Pythonic API for researchers and practitioners to use TensorLogic from Jupyter notebooks and Python workflows.
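How a rule becomes an einsum is easiest to see in plain NumPy. The sketch below is illustrative only — the actual reductions TensorLogic emits depend on the chosen compilation strategy — but the shape of the idea is the same: logical variables become tensor axes, conjunction becomes an elementwise/contracted product, and quantifiers become reductions over an axis.

```python
import numpy as np

# Truth values for knows(x, y) over a 4-person domain; entries in [0, 1].
knows = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 0.0],
])

# exists y. knows(x, y): reduce out the y axis (max = hard semantics).
knows_someone = knows.max(axis=1)

# knows(x, y) AND knows(y, z): product t-norm joined on the shared
# variable y, which the einsum contraction sums out.
two_hops = np.einsum("xy,yz->xz", knows, knows)

print(knows_someone)   # [1. 1. 1. 0.]
print(two_hops[0, 2])  # 1.0 -> person 0 reaches person 2 in two hops
```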
- Logical Expression DSL: Build complex logical rules using predicates, quantifiers, and connectives
- Arithmetic & Comparisons: Full support for arithmetic operations and conditional logic
- Multiple Compilation Strategies: 6 preset configurations (soft/hard logic, fuzzy variants, probabilistic)
- NumPy Integration: Seamless bidirectional conversion between NumPy arrays and internal tensors
- Type Safety: Complete type stubs (.pyi) for IDE support and static type checking
- Comprehensive Error Handling: Clear, actionable error messages
- Async Execution: Non-blocking execution, parallel graphs, batch processing, cancellation support
- Backend Selection: Choose between CPU, SIMD, or GPU backends
- Domain Management: SymbolTable, CompilerContext for advanced schema management
- Provenance Tracking: Full RDF*/SHACL integration with confidence-based inference
- SciRS2 Backend: High-performance execution with SIMD acceleration (2-4x speedup)
- Training API: Loss functions (MSE, BCE, cross-entropy), optimizers (SGD, Adam, RMSprop), callbacks
- Model Persistence: Save/load models in JSON and binary formats with pickle support
- Rule Builder DSL: Python-native syntax with operator overloading (&, |, ~, >>)
- Jupyter Integration: Rich HTML display (_repr_html_()) for all major types
- Performance Monitoring: GIL release, profiler, memory tracking
- Streaming Execution: Process large datasets in chunks
- Utility Functions: Context managers, custom exceptions, batch operations
# Install from PyPI (when published)
pip install pytensorlogic

# Install maturin for building Python extensions
pip install maturin
# Build and install in development mode
cd crates/tensorlogic-py
maturin develop
# Or build optimized wheel for distribution
maturin build --release

Requirements:

- Python 3.9+
- NumPy 1.20+
- Rust toolchain 1.90+ (for building from source only)
import pytensorlogic as tl
import numpy as np
# Create logical expressions
x = tl.var("x")
y = tl.var("y")
# Define a predicate: knows(x, y)
knows = tl.pred("knows", [x, y])
# Compile to tensor graph
graph = tl.compile(knows)
# Create input data (100 people, adjacency matrix)
knows_matrix = np.random.rand(100, 100)
# Execute the graph
result = tl.execute(graph, {"knows": knows_matrix})
print(result["output"])

import pytensorlogic as tl
import numpy as np
# exists y. knows(x, y) - "x knows someone"
x = tl.var("x")
y = tl.var("y")
knows = tl.pred("knows", [x, y])
knows_someone = tl.exists("y", "Person", knows)
# Compile and execute
graph = tl.compile(knows_someone)
knows_matrix = np.random.rand(100, 100)
result = tl.execute(graph, {"knows": knows_matrix})
# Result shape: (100,) - one value per person
print(f"Shape: {result['output'].shape}")

import pytensorlogic as tl
# Rule: knows(x,y) AND knows(y,z) -> knows(x,z) (transitivity)
x, y, z = tl.var("x"), tl.var("y"), tl.var("z")
knows_xy = tl.pred("knows", [x, y])
knows_yz = tl.pred("knows", [y, z])
knows_xz = tl.pred("knows", [x, z])
premise = tl.and_(knows_xy, knows_yz)
rule = tl.imply(premise, knows_xz)
# Wrap in universal quantifier
transitivity = tl.forall("y", "Person", rule)
# Compile
graph = tl.compile(transitivity)

import pytensorlogic as tl
# Arithmetic: age(x) + 5
age_x = tl.pred("age", [tl.var("x")])
age_plus_5 = tl.add(age_x, tl.constant(5.0))
# Comparison: age(x) > 18
adult = tl.gt(age_x, tl.constant(18.0))
# Conditional: if age(x) > 18 then mature else young
classification = tl.if_then_else(
adult,
tl.constant(1.0), # mature
tl.constant(0.0) # young
)

TensorLogic provides asynchronous execution capabilities for non-blocking workflows, perfect for Jupyter notebooks and web applications.
import pytensorlogic as tl
import numpy as np
# Compile a graph
expr = tl.not_(tl.pred("data", [tl.var("x")]))
graph = tl.compile(expr)
inputs = {"data": np.random.rand(1000)}
# Execute asynchronously
future = tl.execute_async(graph, inputs)
# Do other work while computation runs in background
print("Computing in background...")
# Check if ready
if future.is_ready():
print("Done!")
# Get result (waits if not ready)
result = future.result()

Execute multiple graphs concurrently for maximum throughput:
# Create multiple graphs
graphs = [tl.compile(expr1), tl.compile(expr2), tl.compile(expr3)]
inputs_list = [inputs1, inputs2, inputs3]
# Execute all in parallel
futures = tl.execute_parallel(graphs, inputs_list)
# Collect results
results = [f.result() for f in futures]

Process multiple inputs through the same graph efficiently:
# Create batch executor
executor = tl.BatchExecutor(graph)
# Process multiple inputs
inputs_list = [
{"data": np.random.rand(100)},
{"data": np.random.rand(100)},
{"data": np.random.rand(100)},
]
# Execute in parallel (2-4x speedup)
results = executor.execute_batch(inputs_list, parallel=True)
# Or sequential
results = executor.execute_batch(inputs_list, parallel=False)

# Start async computation
future = tl.execute_async(graph, inputs)
# Obtain its cancellation token and cancel if needed
token = future.get_cancellation_token()
token.cancel()

Monitor long-running async computations:
import time
future = tl.execute_async(graph, large_inputs)
# Monitor with timeout
while not future.is_ready():
print(".", end="", flush=True)
time.sleep(0.1)
# Or use wait with timeout
if future.wait(timeout_secs=5.0):
result = future.result()
else:
print("Timeout!")

- Async Overhead: ~20% for small tensors, negligible for large tensors (>1000 elements)
- Parallel Speedup: 2-4x for independent graphs on multi-core systems
- Batch Processing: Near-linear scaling with number of batches
- Thread Safety: All operations are thread-safe
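Conceptually, parallel batch execution is just a worker pool over independent graph runs, which is why it scales near-linearly. A stdlib sketch of the pattern (run_graph is a hypothetical stand-in for tl.execute, not part of the API):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def run_graph(inputs):
    # Stand-in for tl.execute(graph, inputs): any independent per-item run.
    return {"output": int(inputs["data"].sum())}

inputs_list = [{"data": np.arange(n)} for n in (10, 100, 1000)]

# Roughly what parallel=True does: fan the items out to a worker pool.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_graph, inputs_list))

print([r["output"] for r in results])  # [45, 4950, 499500]
```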
See examples/async_execution_demo.py for comprehensive examples.
TensorLogic supports multiple logic semantics through compilation configurations:
import pytensorlogic as tl
# Soft differentiable (default) - for neural network training
config = tl.CompilationConfig.soft_differentiable()
# Hard Boolean - discrete logic
config = tl.CompilationConfig.hard_boolean()
# Fuzzy logic variants
config = tl.CompilationConfig.fuzzy_godel()
config = tl.CompilationConfig.fuzzy_product()
config = tl.CompilationConfig.fuzzy_lukasiewicz()
# Probabilistic interpretation
config = tl.CompilationConfig.probabilistic()
# Use custom config
graph = tl.compile_with_config(expr, config)

| Strategy | AND | OR | NOT | Use Case |
|---|---|---|---|---|
| soft_differentiable | Product | Probabilistic sum | Complement | Neural training (default) |
| hard_boolean | Min | Max | Complement | Discrete reasoning |
| fuzzy_godel | Min | Max | Complement | Godel fuzzy logic |
| fuzzy_product | Product | Probabilistic sum | Complement | Product fuzzy logic |
| fuzzy_lukasiewicz | Lukasiewicz | Lukasiewicz | Complement | Lukasiewicz logic |
| probabilistic | Product (indep.) | Probabilistic sum | Complement | Probability theory |
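The connectives in the table are plain elementwise formulas, so the differences between strategies can be previewed in NumPy alone. This is an illustration of the semantics, not TensorLogic's internal kernels:

```python
import numpy as np

a = np.array([0.9, 0.5, 0.2])
b = np.array([0.8, 0.5, 0.9])

and_product = a * b                  # soft_differentiable / fuzzy_product AND
or_prob_sum = a + b - a * b          # probabilistic-sum OR
and_min = np.minimum(a, b)           # hard_boolean / fuzzy_godel AND
or_max = np.maximum(a, b)            # hard_boolean / fuzzy_godel OR
and_luk = np.maximum(a + b - 1, 0)   # fuzzy_lukasiewicz AND (t-norm)
or_luk = np.minimum(a + b, 1)        # fuzzy_lukasiewicz OR (t-conorm)
not_a = 1 - a                        # NOT: complement in every strategy
```

Note that all of these agree on crisp 0/1 inputs; they only diverge on intermediate truth values, which is what makes the soft variants usable as differentiable training objectives.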
Choose the best backend for your hardware:
import pytensorlogic as tl
# Get backend capabilities
caps = tl.get_backend_capabilities(tl.Backend.SCIRS2_CPU)
print(f"Backend: {caps.name} v{caps.version}")
print(f"Devices: {caps.devices}")
print(f"Features: {caps.features}")
# List available backends
backends = tl.list_available_backends()
print(backends) # {'Auto': True, 'SciRS2CPU': True, 'SciRS2SIMD': True, 'SciRS2GPU': False}
# Execute with specific backend (SIMD for 2-4x speedup)
result = tl.execute(graph, inputs, backend=tl.Backend.SCIRS2_SIMD)
# Get system information
info = tl.get_system_info()
print(f"TensorLogic v{info['tensorlogic_version']}")
print(f"Default backend: {info['default_backend']}")

Build rich semantic models with domain metadata:
import pytensorlogic as tl
# Create symbol table
symbol_table = tl.symbol_table()
# Define domains
person_domain = tl.domain_info("Person", cardinality=100)
person_domain.set_description("Domain of all people in the network")
person_domain.set_elements(["alice", "bob", "charlie"])
symbol_table.add_domain(person_domain)
# Define predicates with signatures
knows_pred = tl.predicate_info("knows", ["Person", "Person"])
knows_pred.set_description("Binary relation: x knows y")
symbol_table.add_predicate(knows_pred)
# Bind variables to domains
symbol_table.bind_variable("x", "Person")
# Automatic inference from expressions
expr = tl.pred("knows", [tl.var("x"), tl.var("y")])
symbol_table.infer_from_expr(expr)
# Export/import as JSON
json_data = symbol_table.to_json()
restored_table = tl.SymbolTable.from_json(json_data)

Track the origin and lineage of tensor computations with full RDF* support:
import pytensorlogic as tl
# Create provenance tracker with RDF* support
tracker = tl.provenance_tracker(enable_rdfstar=True)
# Track RDF entities to tensor indices
tracker.track_entity("http://example.org/alice", 0)
tracker.track_entity("http://example.org/bob", 1)
# Track SHACL shapes to logical rules
tracker.track_shape(
"http://example.org/PersonShape",
"Person(x) AND knows(x, y)",
0
)
# Track inferred triples with confidence scores
tracker.track_inferred_triple(
subject="http://example.org/alice",
predicate="http://example.org/knows",
object="http://example.org/bob",
rule_id="social_network_rule_1",
confidence=0.95
)
# Get high-confidence inferences (>= 0.85)
high_conf = tracker.get_high_confidence_inferences(min_confidence=0.85)
for inf in high_conf:
print(f"{inf['subject']} {inf['predicate']} {inf['object']}")
print(f" Confidence: {inf['confidence']}, Rule: {inf['rule_id']}")
# Export to RDF* Turtle format
turtle = tracker.to_rdfstar_turtle()
# Export to JSON for persistence
json_data = tracker.to_json()
restored = tl.ProvenanceTracker.from_json(json_data)
# Extract provenance from compiled graphs
graph = tl.compile(expr)
provenance_list = tl.get_provenance(graph)
metadata_list = tl.get_metadata(graph)

Train neural-symbolic models with familiar ML patterns:
import pytensorlogic as tl
import numpy as np
# Define loss functions
loss_fn = tl.mse_loss() # Mean Squared Error
loss_fn = tl.bce_loss() # Binary Cross-Entropy
loss_fn = tl.cross_entropy_loss() # Multi-class Cross-Entropy
# Define optimizers
opt = tl.sgd(learning_rate=0.01, momentum=0.9)
opt = tl.adam(learning_rate=0.001, beta1=0.9, beta2=0.999)
opt = tl.rmsprop(learning_rate=0.01, alpha=0.99)
# Callbacks
early_stop = tl.early_stopping(patience=10, min_delta=1e-4)
checkpoint = tl.model_checkpoint("model.json")
log = tl.logger(verbosity=1)
# High-level Trainer
trainer = tl.Trainer(graph, loss_fn, opt, callbacks=[early_stop, log])
history = trainer.fit(train_data, epochs=100, validation_data=val_data)
predictions = trainer.predict(test_data)

import pytensorlogic as tl
# Save a compiled graph
tl.save_model(graph, "model.json")
graph = tl.load_model("model.json")
# Save with full metadata
pkg = tl.model_package(graph, config=config, metadata={"author": "alice"})
tl.save_full_model(pkg, "full_model.json")
pkg = tl.load_full_model("full_model.json")
# Pickle support
import pickle
data = pickle.dumps(pkg)
pkg2 = pickle.loads(data)

Python-native rule building with operator overloading:
import pytensorlogic as tl
# Domain-bound variables
x = tl.var_dsl("x", domain="Person")
y = tl.var_dsl("y", domain="Person")
# Callable predicate builders
knows = tl.pred_dsl("knows", arity=2)
adult = tl.pred_dsl("adult", arity=1)
# Operator overloading: &, |, ~, >>
rule = (knows(x, y) & adult(x)) >> adult(y)
# Context manager for rule building
with tl.rule_builder() as rb:
rb.add(knows(x, y) >> knows(y, x))
graph = rb.compile()

import pytensorlogic as tl
# Create source locations
start = tl.SourceLocation("rules.tl", 10, 1)
end = tl.SourceLocation("rules.tl", 15, 40)
span = tl.SourceSpan(start, end)
# Create provenance with source information
prov = tl.Provenance()
prov.set_rule_id("social_network_rule_1")
prov.set_source_file("social_rules.tl")
prov.set_span(span)
prov.add_attribute("author", "alice")
prov.add_attribute("version", "1.0")
# Query attributes
author = prov.get_attribute("author")
all_attrs = prov.get_attributes()

Represents variables and constants in logical expressions.
x = tl.var("x") # Variable
alice = tl.const("alice")  # Constant

Methods:

- name() -> str - Get term name
- is_var() -> bool - Check if variable
- is_const() -> bool - Check if constant
Logical expression with comprehensive operations:
Logical Operations:
- and_(left, right) - Logical AND
- or_(left, right) - Logical OR
- not_(expr) - Logical NOT
Quantifiers:
- exists(var, domain, body) - Existential quantifier
- forall(var, domain, body) - Universal quantifier
Implications:
- imply(premise, conclusion) - Logical implication
Arithmetic:
- add(left, right) - Addition (+)
- sub(left, right) - Subtraction (-)
- mul(left, right) - Multiplication (*)
- div(left, right) - Division (/)
Comparisons:
- eq(left, right) - Equal (==)
- lt(left, right) - Less than (<)
- gt(left, right) - Greater than (>)
- lte(left, right) - Less than or equal (<=)
- gte(left, right) - Greater than or equal (>=)
Conditionals:
- if_then_else(condition, then_expr, else_expr) - Ternary conditional
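Under soft semantics a conditional can be read as a convex blend of its branches — an illustration of the idea, not necessarily the exact kernel TensorLogic emits:

```python
import numpy as np

def soft_if_then_else(cond, then_val, else_val):
    # At cond=1 this returns then_val, at cond=0 else_val, and it
    # interpolates smoothly in between -- hence differentiable.
    return cond * then_val + (1 - cond) * else_val

age = np.array([12.0, 30.0, 70.0])
adult = (age > 18).astype(float)            # hard comparison -> {0.0, 1.0}
label = soft_if_then_else(adult, 1.0, 0.0)
print(label)  # [0. 1. 1.]
```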
Operator overloading (DSL mode):
- __and__, __or__, __invert__, __rshift__
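These dunder methods are what let `&`, `|`, `~`, and `>>` assemble rules. A hypothetical toy sketch of the mechanism — not TensorLogic's actual classes:

```python
# Toy expression class (illustrative only) showing how operator
# overloading builds an expression tree instead of computing a value.
class Expr:
    def __init__(self, op, *args):
        self.op, self.args = op, args

    def __and__(self, other):
        return Expr("and", self, other)

    def __or__(self, other):
        return Expr("or", self, other)

    def __invert__(self):
        return Expr("not", self)

    def __rshift__(self, other):
        return Expr("imply", self, other)

    def __repr__(self):
        if self.op == "atom":
            return self.args[0]
        return f"{self.op}({', '.join(map(repr, self.args))})"

p, q = Expr("atom", "p"), Expr("atom", "q")
print((p & q) >> ~q)  # imply(and(p, q), not(q))
```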
Methods:
- free_vars() -> List[str] - Get list of free variables
Compiled tensor computation graph.
graph = tl.compile(expr)
stats = graph.stats()  # {'num_nodes': 5, 'num_outputs': 1, 'num_tensors': 3}

Properties:
- num_nodes: int - Number of computation nodes
- num_outputs: int - Number of output tensors
Methods:
- stats() -> Dict[str, int] - Get detailed statistics
- _repr_html_() - Rich Jupyter display
Domain representation with metadata.
domain = tl.domain_info("Person", cardinality=100)
domain.set_description("All people in the network")
domain.set_elements(["alice", "bob", "charlie"])

Properties:
- name: str - Domain name
- cardinality: int - Domain size
- description: Optional[str] - Human-readable description
- elements: Optional[List[str]] - Domain elements (for finite domains)
Predicate signature representation.
pred = tl.predicate_info("knows", ["Person", "Person"])
pred.set_description("Binary relation: x knows y")

Properties:
- name: str - Predicate name
- arity: int - Number of arguments
- arg_domains: List[str] - Domain for each argument
- description: Optional[str] - Human-readable description
Complete symbol table for schema management.
table = tl.symbol_table()
table.add_domain(domain_info)
table.add_predicate(predicate_info)
table.bind_variable("x", "Person")
table.infer_from_expr(expr) # Automatic schema inference
# Serialization
json_str = table.to_json()
restored = tl.SymbolTable.from_json(json_str)

Methods:
- add_domain(domain: DomainInfo) - Add domain
- add_predicate(predicate: PredicateInfo) - Add predicate
- bind_variable(var: str, domain: str) - Bind variable to domain
- get_domain(name: str) -> Optional[DomainInfo] - Retrieve domain
- get_predicate(name: str) -> Optional[PredicateInfo] - Retrieve predicate
- get_variable_domain(var: str) -> Optional[str] - Get variable's domain
- list_domains() -> List[str] - List all domains
- list_predicates() -> List[str] - List all predicates
- infer_from_expr(expr: TLExpr) - Automatic inference
- get_variable_bindings() -> Dict[str, str] - Get all bindings
- to_json() -> str - Export as JSON
- from_json(json: str) -> SymbolTable - Import from JSON
Low-level compilation control.
ctx = tl.compiler_context()
ctx.add_domain("Person", 100)
ctx.bind_var("x", "Person")
ctx.assign_axis("x", 0)
temp_name = ctx.fresh_temp()  # Generate unique tensor names

Methods:
- add_domain(name: str, cardinality: int) - Add domain
- bind_var(var: str, domain: str) - Bind variable
- assign_axis(var: str, axis: int) - Assign einsum axis
- fresh_temp() -> str - Generate unique temporary name
- get_domains() -> Dict[str, int] - Get all domains
- get_variable_bindings() -> Dict[str, str] - Get bindings
- get_axis_assignments() -> Dict[str, int] - Get axis assignments
- get_variable_domain(var: str) -> Optional[str] - Get variable's domain
- get_variable_axis(var: str) -> Optional[int] - Get variable's axis
Backend selection enumeration.
# Available backends
tl.Backend.AUTO # Auto-select best backend
tl.Backend.SCIRS2_CPU # CPU backend
tl.Backend.SCIRS2_SIMD # SIMD-accelerated (2-4x faster)
tl.Backend.SCIRS2_GPU   # GPU backend (future)

Backend capability information.
caps = tl.get_backend_capabilities(tl.Backend.SCIRS2_CPU)
print(caps.name) # "SciRS2 Backend"
print(caps.version) # "0.1.0-rc.1"
print(caps.devices) # ["CPU"]
print(caps.dtypes) # ["f64", "f32", "i64", "i32", "bool"]
print(caps.features) # ["Autodiff", "BatchExecution", ...]
print(caps.max_dims) # 16
# Query support
caps.supports_device("CPU") # True
caps.supports_dtype("f64") # True
caps.supports_feature("Autodiff") # True
caps.summary() # Human-readable summary
caps.to_dict()  # Dict representation

Source code location information.
loc = tl.SourceLocation("rules.tl", 10, 5)
print(loc.file) # "rules.tl"
print(loc.line) # 10
print(loc.column) # 5
print(str(loc))  # "rules.tl:10:5"

Source code span (start to end).
start = tl.SourceLocation("rules.tl", 10, 1)
end = tl.SourceLocation("rules.tl", 15, 40)
span = tl.SourceSpan(start, end)
print(span.start.line) # 10
print(span.end.line)  # 15

Provenance metadata for IR nodes.
prov = tl.Provenance()
prov.set_rule_id("rule_1")
prov.set_source_file("social_rules.tl")
prov.set_span(span)
prov.add_attribute("author", "alice")
prov.add_attribute("version", "1.0")
# Query
prov.rule_id # "rule_1"
prov.source_file # "social_rules.tl"
prov.get_attribute("author") # "alice"
prov.get_attributes()  # {"author": "alice", "version": "1.0"}

Full RDF*/SHACL provenance tracking.
tracker = tl.provenance_tracker(enable_rdfstar=True)
# Entity tracking
tracker.track_entity("http://example.org/alice", 0)
tracker.get_entity(0) # "http://example.org/alice"
tracker.get_tensor("http://example.org/alice") # 0
# Shape tracking
tracker.track_shape("http://example.org/PersonShape", "Person(x)", 0)
# RDF* triple tracking with confidence
tracker.track_inferred_triple(
subject="http://example.org/alice",
predicate="http://example.org/knows",
object="http://example.org/bob",
rule_id="rule_1",
confidence=0.95
)
# Query high-confidence inferences
high_conf = tracker.get_high_confidence_inferences(min_confidence=0.85)
# Export
tracker.to_rdf_star() # List of RDF* statements
tracker.to_rdfstar_turtle() # Turtle format
json_str = tracker.to_json() # JSON serialization
restored = tl.ProvenanceTracker.from_json(json_str)
# Get mappings
tracker.get_entity_mappings() # Dict[str, int]
tracker.get_shape_mappings()  # Dict[str, str]

Future result of an async computation.
future = tl.execute_async(graph, inputs)
future.is_ready() # bool
future.result() # Dict[str, np.ndarray] (blocks if not ready)
future.wait(timeout_secs=5.0) # bool - True if completed within timeout
future.cancel() # Request cancellation
future.is_cancelled() # bool
future.get_cancellation_token()  # CancellationToken

Batch graph execution over multiple inputs.
executor = tl.BatchExecutor(graph)
results = executor.execute_batch(inputs_list, parallel=True)

Cooperative cancellation for async operations.
token = tl.cancellation_token()
token.cancel()
token.is_cancelled() # bool
token.reset()

compile(expr: TLExpr) -> EinsumGraph

Compile a logical expression to a tensor computation graph.

compile_with_config(expr: TLExpr, config: CompilationConfig) -> EinsumGraph

Compile with a custom configuration.

compile_with_context(expr: TLExpr, ctx: CompilerContext) -> EinsumGraph

Compile with a low-level compiler context.
execute(
graph: EinsumGraph,
inputs: Dict[str, np.ndarray],
backend: Optional[Backend] = None
) -> Dict[str, np.ndarray]

Execute a graph with NumPy array inputs. Backend defaults to AUTO (best available).
get_backend_capabilities(backend: Optional[Backend] = None) -> BackendCapabilities
list_available_backends() -> Dict[str, bool]
get_default_backend() -> Backend
get_system_info() -> Dict[str, Any]

get_provenance(graph: EinsumGraph) -> List[Optional[Provenance]]
get_metadata(graph: EinsumGraph) -> List[Optional[Dict[str, Any]]]
provenance_tracker(enable_rdfstar: bool = False) -> ProvenanceTracker

save_model(graph: EinsumGraph, path: str) -> None
load_model(path: str) -> EinsumGraph
save_full_model(pkg: ModelPackage, path: str) -> None
load_full_model(path: str) -> ModelPackage
model_package(...) -> ModelPackage

# Adapter creation
domain_info(name: str, cardinality: int) -> DomainInfo
predicate_info(name: str, domains: List[str]) -> PredicateInfo
symbol_table() -> SymbolTable
compiler_context() -> CompilerContext
# DSL
var_dsl(name: str, domain: Optional[str] = None) -> Var
pred_dsl(name: str, arity: int) -> PredicateBuilder
rule_builder() -> RuleBuilder
# Utility
quick_execute(expr: TLExpr, inputs: Dict[str, np.ndarray]) -> Dict[str, np.ndarray]
validate_inputs(graph: EinsumGraph, inputs: Dict[str, np.ndarray]) -> None
batch_compile(exprs: List[TLExpr]) -> List[EinsumGraph]
batch_predict(graph: EinsumGraph, inputs_list: List[Dict]) -> List[Dict]
execution_context() -> ExecutionContext
compilation_context() -> CompilationContext

The examples/ directory contains comprehensive demonstrations:

- basic_usage.py - Complete usage guide with all operations
- arithmetic_operations.py - All arithmetic operations
- comparison_conditionals.py - Comparisons and conditionals
- advanced_symbol_table.py - Domain management and symbol tables
- backend_selection.py - Backend selection and capabilities
- provenance_tracking.py - Complete provenance tracking workflow
- training_workflow.py - Training API (450+ lines, 10 scenarios)
- model_persistence.py - Model persistence (600+ lines, 10 scenarios)
- rule_builder_dsl.py - Rule Builder DSL (550+ lines, 10 examples)
- async_execution_demo.py - Async execution (300+ lines)
- performance_benchmark.py - Performance benchmarks
- memory_profiling.py - Memory profiling and streaming
Run any example:
python examples/basic_usage.py
python examples/provenance_tracking.py

The package includes 300+ comprehensive tests across 7 test suites:
# Install development dependencies
pip install -r requirements-dev.txt
# Run tests
pytest tests/ -v
# Run specific test suite
pytest tests/test_provenance.py -v
# Run with coverage
pytest tests/ --cov=pytensorlogic --cov-report=html

Test suites:

- test_types.py - Core type creation and operations
- test_execution.py - End-to-end execution tests
- test_backend.py - Backend selection and capabilities
- test_provenance.py - Provenance tracking (40+ tests)
- test_training.py - Training API (40+ tests)
- test_persistence.py - Model persistence (20+ tests)
- test_dsl.py - Rule Builder DSL (100+ tests)
TensorLogic Python bindings are built with:
- PyO3 0.23+: Rust-Python interop with abi3 compatibility (Python 3.9+)
- NumPy: Array interface via the numpy crate
- SciRS2: High-performance scientific computing backend
- Maturin: Build system for Python extensions
- Zero-copy where possible for efficiency
pytensorlogic/
├── src/
│ ├── lib.rs # Main module registration
│ ├── types.rs # Core type bindings (PyTerm, PyTLExpr, PyEinsumGraph)
│ ├── compiler.rs # Compilation API and strategies
│ ├── executor.rs # Execution engine bindings
│ ├── numpy_conversion.rs # NumPy interop
│ ├── adapters.rs # Domain and symbol table management
│ ├── backend.rs # Backend selection and capabilities
│ ├── provenance.rs # Provenance tracking
│ ├── training.rs # Training API
│ ├── persistence.rs # Model save/load
│ ├── dsl.rs # Rule Builder DSL
│ ├── jupyter.rs # Rich HTML display
│ ├── performance.rs # Performance monitoring
│ ├── streaming.rs # Streaming execution
│ ├── async_executor.rs # Async execution and cancellation
│ └── utils.rs # Utility functions and context managers
├── tests/ # Python test suites (7 files, 300+ tests)
├── examples/ # Demonstration scripts (12 files)
└── pytensorlogic.pyi # Type stubs for IDE support (1100+ lines)
Phase 1-3: Core Infrastructure
- Core types binding (PyTerm, PyTLExpr, PyEinsumGraph)
- Compilation API with 6 configuration presets
- Execution API with NumPy integration
- Bidirectional NumPy conversion
Phase 4-8: Operations
- Logical operations (AND, OR, NOT, quantifiers, implication)
- Arithmetic operations (add, sub, mul, div)
- Comparison operations (eq, lt, gt, lte, gte)
- Conditional operations (if_then_else)
Phase 9-13: Advanced Features
- Type stubs (.pyi) for IDE support (1100+ lines)
- Comprehensive Python test suite (300+ tests)
- Symbol tables and domain management (SymbolTable, CompilerContext)
- Backend selection API (Backend, BackendCapabilities)
- Provenance tracking with RDF* support (4 classes, 3 functions)
Phase 14-21: Complete Feature Set
- Training API (loss functions, optimizers, callbacks, Trainer class)
- Model Persistence (JSON/binary formats, pickle support)
- Jupyter Integration (rich HTML display for all types)
- Rule Builder DSL (operator overloading, domain validation)
- Performance Monitoring (GIL release, profiler, memory tracking)
- Streaming Execution (StreamingExecutor, ResultAccumulator)
- Async Cancellation (CancellationToken, cancel support)
- Utility Functions (context managers, custom exceptions, helpers)
Documentation & Quality
- Comprehensive docstrings
- Error handling with clear messages
- __repr__ and __str__ implementations
- 12 comprehensive examples
- Zero compilation warnings
- Production-ready code quality
- PyTorch tensor integration
- GPU backend support
- ONNX export
- Tutorial Jupyter notebooks
- Coverage reporting in CI
- Visualization widgets
- Interactive debugging
The SciRS2 backend provides SIMD acceleration for significant speedups:
import pytensorlogic as tl
# Use SIMD backend (2-4x faster for large tensors)
result = tl.execute(graph, inputs, backend=tl.Backend.SCIRS2_SIMD)

Benchmarks (1000x1000 matrices):
- Element-wise operations: 2.3x faster with SIMD
- Matrix operations: 3.8x faster with SIMD
- Reduction operations: 2.1x faster with SIMD
- Build system: Must use maturin (not regular cargo build)
- GPU backend: Not yet implemented (CPU and SIMD only)
- PyTorch integration: Not yet available (NumPy only)
- Zero-copy: Not fully optimized in all paths
# Install Rust and maturin
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
pip install maturin
# Clone and build
git clone https://github.com/cool-japan/tensorlogic.git
cd tensorlogic/crates/tensorlogic-py
maturin develop
# Run tests
cargo test # Rust tests
pytest tests/ -v  # Python tests

All code passes strict quality checks:

- cargo check - Zero warnings
- cargo clippy --all-targets -- -D warnings - Strict linting
- cargo fmt --all -- --check - Consistent formatting
- pytest tests/ - 300+ tests passing
Contributions welcome! See CONTRIBUTING.md for guidelines.
Development workflow:
- Fork the repository
- Create a feature branch
- Make changes with tests
- Ensure all quality checks pass
- Submit a pull request
Problem: error: linker 'cc' not found
# Install build essentials
sudo apt-get install build-essential # Ubuntu/Debian
brew install gcc  # macOS

Problem: ImportError: cannot import name 'pytensorlogic'
# Rebuild with maturin
maturin develop --release

Problem: RuntimeError: Backend not available
# Check available backends
python -c "import pytensorlogic as tl; print(tl.list_available_backends())"

Problem: Shape mismatch in execution
# Check input shapes match expected domains
stats = graph.stats()
print(f"Expected inputs: {stats}")

Apache-2.0 - See LICENSE for details.
- TensorLogic Paper: https://arxiv.org/abs/2510.12269
- COOLJAPAN Ecosystem: https://github.com/cool-japan
- SciRS2: https://github.com/cool-japan/scirs
- PyO3 Documentation: https://pyo3.rs
- Maturin Guide: https://www.maturin.rs
@article{tensorlogic2025,
  title={TensorLogic: Logic-as-Tensor Planning Layer},
  author={COOLJAPAN Team},
  journal={arXiv preprint arXiv:2510.12269},
  year={2025}
}

Status: Production Ready (v0.1.0-rc.1)
Last Updated: 2026-03-06
Completion: 100% of high-priority features (21/21 phases complete)
Tests: 300+ tests passing (7 test suites)
API: 80+ functions, 35+ classes, 5 custom exceptions, 6 compilation strategies, 3 serialization formats
Part of: TensorLogic Ecosystem