| Version | Supported |
|---|---|
| 0.1.x | ✅ |
The COOLJAPAN ecosystem takes security seriously. If you discover a security vulnerability in Tensorlogic, please report it responsibly.
DO NOT open a public GitHub issue for security vulnerabilities.
Instead, please report security issues to:
- Email: security@cool-japan.org (or open a private security advisory on GitHub)
- Subject: [SECURITY] Tensorlogic - Brief description
Please include the following information in your report:
- Description: Clear description of the vulnerability
- Impact: Potential impact and severity assessment
- Reproduction: Step-by-step instructions to reproduce the issue
- Environment: Rust version, OS, and relevant configuration
- Proposed Fix: If you have a suggested solution (optional)
You can expect the following response timeline:
- Initial Response: Within 48 hours
- Status Update: Within 7 days
- Fix Timeline: Depends on severity
  - Critical: Within 7 days
  - High: Within 14 days
  - Medium: Within 30 days
  - Low: Next release cycle
- We follow coordinated disclosure practices
- We will work with you to understand and address the issue
- We will credit reporters in security advisories (unless anonymity is requested)
- Please allow us reasonable time to fix issues before public disclosure
Tensorlogic is a planning layer that compiles logic rules to tensor equations. Security considerations include:
- Input Validation: Malformed logic expressions could cause panics or excessive resource consumption
- Resource Limits: Large graphs may consume excessive memory or CPU
- Dependency Security: We audit dependencies regularly
- Type Safety: Rust's type system provides strong guarantees, but unsafe code requires careful review
The following are out of scope for this policy:
- Theoretical attacks on logic compilation algorithms (research papers welcome)
- Issues in upstream dependencies (report them to those projects)
- Performance issues without security implications
Security measures in the codebase:
- No Unsafe Code: The planning layer (IR, compiler, infer traits) contains no `unsafe` code
- Dependency Auditing: Regular `cargo audit` checks
- Static Analysis: Clippy with strict lints enabled
- Minimal Attack Surface: The planning layer has no network or filesystem access
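The "no unsafe code" guarantee can be enforced by the compiler rather than by convention. A minimal sketch (the attribute placement is an illustration, not the actual crate layout):

```rust
// Placing this at the crate root (e.g. lib.rs or main.rs) turns any
// `unsafe` block or function into a hard compile-time error.
#![forbid(unsafe_code)]

// Ordinary safe code is unaffected.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    assert_eq!(add(2, 2), 4);
    println!("crate compiles with unsafe code forbidden");
}
```

Unlike `#![deny(unsafe_code)]`, `forbid` cannot be overridden by an inner `#[allow(...)]`, which makes it the stronger choice for a security guarantee.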
The SciRS2 backend may use:
- SIMD operations (platform-specific)
- GPU operations (future)
- Numerical computations that could overflow or lose precision
We validate inputs and provide safe abstractions over low-level operations.
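One concrete form such a safe abstraction can take is checked arithmetic on tensor shapes, so that adversarial dimensions cannot silently overflow a size computation. A hypothetical helper (not part of the crate API):

```rust
/// Compute the element count of a tensor shape, returning `None` on
/// integer overflow instead of wrapping or panicking.
fn checked_num_elements(shape: &[usize]) -> Option<usize> {
    shape
        .iter()
        .try_fold(1usize, |acc, &dim| acc.checked_mul(dim))
}

fn main() {
    // A reasonable shape multiplies out fine.
    assert_eq!(checked_num_elements(&[64, 128]), Some(8192));
    // An adversarial shape overflows `usize` and is rejected.
    assert_eq!(checked_num_elements(&[usize::MAX, 2]), None);
}
```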
The OxiRS bridge handles:
- GraphQL query parsing
- RDF/SPARQL execution
- SHACL constraint validation
These components have their own security policies. See the OxiRS documentation for details.
Always validate untrusted input before compiling:

```rust
use tensorlogic_compiler::compile_to_einsum;

// Validate before compilation (`expr_is_too_large` is an
// application-defined size check).
if expr_is_too_large(&expr) {
    return Err("Expression exceeds size limit".into());
}
let graph = compile_to_einsum(&expr)?;
```

Set appropriate limits for production use:
```rust
const MAX_GRAPH_NODES: usize = 10_000;
const MAX_TENSOR_SIZE: usize = 100_000_000;

if graph.nodes.len() > MAX_GRAPH_NODES {
    return Err("Graph too large".into());
}
```

- Keep dependencies up to date
- Run `cargo audit` regularly
- Review security advisories for the COOLJAPAN ecosystem
- Run with minimal privileges
- Isolate execution environments (containers, VMs)
- Monitor resource usage
- Implement timeout mechanisms for long-running operations
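Since Tensorlogic itself does not yet enforce timeouts (see the known limitations below), a caller-side timeout can be sketched with a worker thread and a channel deadline. `work` here is a stand-in closure, not a real Tensorlogic API:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Run a potentially long computation on a worker thread and give up
/// after `timeout`. Note the worker thread is not cancelled; it is
/// detached and its result discarded, which is acceptable for a
/// process that exits or isolates work per request.
fn run_with_timeout<T, F>(work: F, timeout: Duration) -> Option<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Ignore the send error if the receiver already timed out.
        let _ = tx.send(work());
    });
    rx.recv_timeout(timeout).ok()
}

fn main() {
    // A fast computation completes within the deadline.
    let fast = run_with_timeout(|| 21 * 2, Duration::from_secs(1));
    assert_eq!(fast, Some(42));

    // A slow computation is abandoned after the deadline.
    let slow = run_with_timeout(
        || {
            thread::sleep(Duration::from_secs(5));
            0
        },
        Duration::from_millis(50),
    );
    assert_eq!(slow, None);
}
```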
We perform regular security reviews:
- Code Review: All PRs reviewed for security implications
- Dependency Audits: Monthly `cargo audit` checks
- Static Analysis: Continuous Clippy and rustfmt enforcement
- Fuzzing: Planned for Phase 8 (validation & scale)
Current known limitations (not vulnerabilities):
- No Resource Limits: Early versions do not enforce graph size limits
- No Timeout Mechanisms: Long compilations may run indefinitely
- Limited Input Validation: Assumes well-formed expressions
These will be addressed in future releases.
Security advisories will be published:
- As GitHub Security Advisories
- In release notes
- On the COOLJAPAN security mailing list (when available)
For security-related questions:
- Email: security@cool-japan.org
- GitHub: Open a private security advisory
For non-security issues:
- Open a public GitHub issue
- See CONTRIBUTING.md for guidelines
Thank you for helping keep Tensorlogic secure!