Every model is bound to a specific Prisma schema via a SHA-256 hash:
- Normalizes the schema (removes whitespace and comments)
- Computes a deterministic hash
- Records the hash in the model's metadata
- At runtime, rejects predictions if the hash doesn't match
Effect: Prevents silent bugs from schema drift after model compilation.
This check is non-negotiable and causes hard failures (not warnings).
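A minimal sketch of the idea, with a simplified `normalizeSchema` (PrisML's actual normalization rules are internal; this only illustrates the mechanism):

```typescript
import { createHash } from 'node:crypto';
import { readFileSync } from 'node:fs';

// Simplified normalization: strip line comments, collapse whitespace.
// The real rules are internal to PrisML.
function normalizeSchema(schema: string): string {
  return schema
    .replace(/\/\/.*$/gm, '') // drop // comments
    .replace(/\s+/g, ' ')     // collapse whitespace
    .trim();
}

const schema = readFileSync('prisma/schema.prisma', 'utf8');
const schemaHash = createHash('sha256')
  .update(normalizeSchema(schema))
  .digest('hex');

// A model whose recorded hash differs from schemaHash is rejected
// at prediction time with a hard failure, not a warning.
console.log('schema hash:', schemaHash);
```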
Predictions are deterministic within platform guarantees:
- Same entity + same artifacts → same output (always)
- No random number generation in predictions
- No external service calls
- No time-dependent behavior
Effect: Models are reproducible and auditable.
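This property is directly testable. A minimal sketch using Node's built-in assertions, assuming `session`, `model`, `entity`, and `resolvers` are already set up as in the prediction examples below:

```typescript
import assert from 'node:assert/strict';

// Two predictions over the same entity with the same artifacts
// must produce identical output.
const first = await session.predict(model, entity, resolvers);
const second = await session.predict(model, entity, resolvers);
assert.deepStrictEqual(first, second);
```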
Full TypeScript strict mode:
- No implicit any
- All types explicitly annotated
- Type checking at compile time
- Runtime validation for user inputs
Effect: Catches errors before runtime.
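For illustration, a sketch of the split between compile-time and runtime checking; `parseRevenue` is a hypothetical helper, not a PrisML API:

```typescript
// Compile time: under strict mode, `function score(entity)` would be
// rejected with an implicit-any error; the annotation is mandatory.
function score(entity: { revenue: number }): number {
  return entity.revenue * 0.1;
}

// Runtime: user input arrives untyped, so validate before trusting it.
function parseRevenue(input: unknown): number {
  if (typeof input !== 'number' || !Number.isFinite(input)) {
    throw new TypeError('revenue must be a finite number');
  }
  return input;
}

const payload: unknown = JSON.parse('{"revenue": 42}');
const revenue = parseRevenue((payload as { revenue?: unknown }).revenue);
score({ revenue });
```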
No silent failures. All errors are typed:
```typescript
try {
  await session.predict(model, entity, resolvers);
} catch (error) {
  if (error instanceof SchemaDriftError) {
    // Schema changed - FATAL
  } else if (error instanceof UnseenCategoryError) {
    // New category - handle gracefully or retrain
  }
}
```
Effect: Developers can't ignore errors.
Models compile to immutable artifacts:
- Never modified after generation
- Committed to git (auditable)
- No runtime mutations
- Deterministic within numeric bounds
Effect: Models don't change unexpectedly.
PrisML does not:
- Store user data
- Log sensitive information
- Transmit data to external services
- Cache predictions
Data flows:
- Training: Prisma → Python backend (local) → ONNX artifacts
- Prediction: Application → session.predict() → output
During prisml train:
- Data is extracted via your Prisma instance
- Processed locally via Python backend
- Not sent to external services
- Deleted after training completes
At runtime:
- Models execute in-process
- No data leaves your application
- No telemetry or logging
- Feature data is transient (not stored)
Models must pass quality gates before export:
```typescript
qualityGates: [
  {
    metric: 'rmse',
    threshold: 500,
    comparison: 'lte',
  },
]
```
If any gate fails:
- Artifact generation aborts
- Exit code is non-zero
- No model is exported
- You must fix and retrain
Effect: Prevents deploying low-quality models.
Batch predictions validate all entities before any prediction runs:
```typescript
const results = await session.predictBatch(model, entities, resolvers);
```
If any entity fails validation:
- Entire batch is aborted
- No partial results
- Application must handle error
- Caller can retry or skip
Effect: Prevents partial failures and inconsistent state.
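A sketch of handling an aborted batch, assuming the same `session`, `model`, `entities`, and `resolvers` as above (the document doesn't specify the exact error type for batch validation failures, so this catches broadly):

```typescript
let results: unknown[] = [];
try {
  results = await session.predictBatch(model, entities, resolvers);
  // Reaching here means every entity validated; results are complete.
} catch (error) {
  // The whole batch aborted; no partial results exist.
  // Retry after fixing the offending entities, or skip this batch.
  console.error('Batch prediction aborted:', error);
}
```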
If the Prisma schema changes after model compilation:
```typescript
// Schema changed - hash mismatch
await session.predict(model, entity, resolvers);
// → throws SchemaDriftError
// → prediction STOPS
```
No predictions happen if the schema drifts.
Effect: Prevents using stale models with a new schema.
Feature resolvers are analyzed via TypeScript AST:
```typescript
// SAFE: static property access
revenue: (user) => user.revenue

// UNSAFE: dynamic code
value: (user) => eval(user.expression) // Type error! (caught)
```
TypeScript's type system prevents this.
ONNX models are:
- Binary format (not executable code)
- Serialized numerical weights
- No dynamic code generation
- Safe to load from untrusted sources (within reason)
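Loading an artifact deserializes weights and graph structure; nothing executes until you explicitly run inference. A sketch with onnxruntime-node (the artifact path is illustrative):

```typescript
import * as ort from 'onnxruntime-node';

// Deserializes the binary artifact; no code is evaluated.
const inference = await ort.InferenceSession.create('artifacts/model.onnx');
console.log('model inputs:', inference.inputNames);
```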
The Python backend should run:
- In an isolated environment
- Without external network access
- Without dynamic code loading
- With pinned dependency versions
Dependencies should be:
- Pinned to specific versions
- Regularly audited for vulnerabilities
- Updated carefully with testing
- Scanned by npm audit
If using PrisML in a GDPR/CCPA jurisdiction:
- Models don't store personal data
- Feature extraction is transient
- Artifacts are not PII
- Training data is your responsibility
PrisML enforces:
- Reproducible training (git artifacts)
- Version tracking (metadata)
- Quality gates (automatic checks)
- Error reporting (typed errors)
This supports audit trails and compliance requirements.
PrisML does not provide:
- Bias detection
- Fairness metrics
- Protected group testing
- Explainability
These are your responsibility as a developer.
Recommended practices:
- Use clean, representative data
- Set meaningful quality gates
- Review metrics before deploying
- Test on hold-out set
- Log training date and version
- Commit artifacts to git
- Use feature branches for model updates
- Require code review for artifact changes
- Monitor prediction errors in production
- Retrain when schema changes
Track in production:
- Prediction latency
- Error rates
- Unseen categories
- Schema drift failures
- Model version distribution
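A sketch of a thin wrapper that records these signals; the metrics client interface is illustrative, and the import path for PrisML's typed errors is a hypothetical assumption:

```typescript
import { SchemaDriftError, UnseenCategoryError } from 'prisml'; // hypothetical import path

// Illustrative metrics sink: substitute your observability client.
declare const metrics: {
  timing(name: string, ms: number): void;
  increment(name: string): void;
};

async function monitoredPredict<T>(predict: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    const result = await predict();
    metrics.timing('prisml.predict.latency_ms', performance.now() - start);
    return result;
  } catch (error) {
    if (error instanceof SchemaDriftError) {
      metrics.increment('prisml.predict.schema_drift');
    } else if (error instanceof UnseenCategoryError) {
      metrics.increment('prisml.predict.unseen_category');
    } else {
      metrics.increment('prisml.predict.error');
    }
    throw error; // never swallow: callers must handle failures
  }
}

// Usage: await monitoredPredict(() => session.predict(model, entity, resolvers));
```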
When to retrain:
- Schema changes (required)
- Distribution shift (data drift)
- New categories observed
- Quality gate thresholds are tightened
- Algorithm update available
If you find a security issue:
- Do not create public GitHub issue
- Email security@prisml.dev (or project lead)
- Include:
  - Description of the issue
  - Reproduction steps
  - Potential impact
  - Suggested fix
We will:
- Acknowledge within 48 hours
- Investigate and confirm
- Develop fix in private branch
- Release security patch
- Credit discoverer (if desired)
PrisML has not undergone a formal security audit.
Before using in production:
- Review this document
- Review error handling code
- Test error cases
- Consider hiring a security firm
- Monitor for CVEs in dependencies
PrisML depends on:
- @prisma/client — database ORM
- onnxruntime-node — model predictions
- yargs — CLI parsing
- TypeScript & Node.js — runtime
Monitor these for security issues.
V1.0+ roadmap includes:
- Formal security audit
- SBOM (Software Bill of Materials)
- Security policy
- CVE monitoring
- Incident response plan