Agentics is built around a small set of concepts that work together:
- Pydantic types – how you describe structured data
- Transducible functions – LLM-powered, type-safe transformations
- Typed state containers (AGs) – collections of typed rows/documents
- Logical Transduction Algebra (LTA) – the formal backbone
- Map–Reduce – the programming model used to execute large-scale workloads
This page gives you the mental model you need before diving into code.
At the heart of Agentics is the idea that everything is a type.
You describe your data using Pydantic models:
```python
from pydantic import BaseModel
from typing import Optional

class Product(BaseModel):
    id: Optional[str] = None
    title: Optional[str] = None
    description: Optional[str] = None
    price: Optional[float] = None
```

These models serve three roles:
- Schema – they define the fields, types, and optionality
- Validation – they validate inputs and outputs at runtime
- Contract – they act as the contract between your code and the LLM
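The validation role can be seen directly: Pydantic coerces compatible values and rejects incompatible ones at construction time. A minimal sketch using the `Product` model above:

```python
from pydantic import BaseModel, ValidationError
from typing import Optional

class Product(BaseModel):
    id: Optional[str] = None
    title: Optional[str] = None
    description: Optional[str] = None
    price: Optional[float] = None

# Compatible input: the numeric string is coerced to float.
p = Product(title="Desk lamp", price="19.99")
assert p.price == 19.99

# Incompatible input: validation fails with a structured error.
try:
    Product(price="not a number")
except ValidationError as e:
    print("rejected field:", e.errors()[0]["loc"])
```

This is what makes the models usable as a contract with the LLM: any output that does not fit the schema is caught at the boundary.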
In Agentics, any LLM-powered transformation is expressed as:
“Given a `Source` type, produce a `Target` type.”
Instead of prompt engineering around raw strings, you define transformations between types.
A transducible function is the core abstraction in Agentics.
Informally:
A transducible function is an LLM-backed function that maps inputs of type `Source` to outputs of type `Target` under a set of instructions and constraints.
Conceptually:

```
Target << Source
```
Example:
```python
from pydantic import BaseModel
from typing import Optional

class Review(BaseModel):
    text: Optional[str] = None

class ReviewSummary(BaseModel):
    sentiment: Optional[str] = None
    summary: Optional[str] = None
```

A transducible function might be:

```
fn: Review -> ReviewSummary
```

with instructions like:

“Given a review, detect its sentiment (positive/negative/neutral) and produce a one-sentence summary.”
Key properties:
- Typed I/O – the function is bound to `Source` and `Target` Pydantic models.
- Single Source of Truth for Instructions – instructions live alongside the function definition.
- LLM-Agnostic – the function describes what to transform; the underlying model can change.
- Composable – functions can be chained, branched, or merged into larger workflows.
You don’t call the LLM directly; you call the transducible function, which manages LLM calls, validation, retries, and evidence tracking.
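To make the typed contract concrete, here is a plain-Python stand-in for a transducible function. This is not the Agentics API: where the comment says "LLM call", a real transducible function would delegate to a model; the keyword heuristic here exists only to keep the sketch runnable.

```python
from typing import Optional
from pydantic import BaseModel

class Review(BaseModel):
    text: Optional[str] = None

class ReviewSummary(BaseModel):
    sentiment: Optional[str] = None
    summary: Optional[str] = None

# Mock transducible function: Review -> ReviewSummary.
# A real one would issue an LLM call under the function's instructions;
# this stand-in uses a trivial heuristic to illustrate the typed shape.
def summarize_review(review: Review) -> ReviewSummary:
    text = review.text or ""
    sentiment = "positive" if "great" in text.lower() else "neutral"
    return ReviewSummary(sentiment=sentiment, summary=text[:60])

out = summarize_review(Review(text="Great battery life."))
assert isinstance(out, ReviewSummary)
```

The important part is the signature: callers see `Review -> ReviewSummary`, never raw strings, regardless of which model runs underneath.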
Transformations rarely happen on a single object. You typically work with collections of items (rows, documents, events, etc.).
Agentics introduces typed state containers (called AG, short for "Agentics") to:
- Hold a collection of instances of a given Pydantic type
- Preserve that type information across operations
- Provide a uniform interface for Map–Reduce, filtering, joining, etc.
Conceptually, you can think of an `AG[Source]` as a type-aware table:

```
AG[Review]
├─ row 0: Review(text="…")
├─ row 1: Review(text="…")
└─ row n: Review(text="…")
```
Applying a transducible function `Review -> ReviewSummary` over an AG whose `atype` is `Review` conceptually yields an AG of type `ReviewSummary`.
Typed state containers give you:
- Clarity – you always know what type you're holding.
- Safety – operations can check types and schemas instead of guessing.
- Composability – containers can flow between functions and stages.
You can think of state containers (AGs) as the data plane of Agentics.
```python
from agentics import AG  # Recommended alias

movies = AG(atype=Movie)  # Create a typed container
```

Historical Note: In Agentics 1.0, data models and transformations were blended into the same object. Agentics 2.0 separates concerns by introducing transducible functions as first-class citizens, while AG containers focus on data management. The v1.0 API is still supported for backward compatibility.
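The "type-aware table" intuition can be sketched in plain Python. This `TypedContainer` is an illustration only, not the real `AG` implementation; it shows how a container can remember its row type and carry a new type across a map.

```python
from typing import Callable, Generic, List, Optional, Type, TypeVar
from pydantic import BaseModel

S = TypeVar("S", bound=BaseModel)
T = TypeVar("T", bound=BaseModel)

class Item(BaseModel):
    text: Optional[str] = None

class Length(BaseModel):
    n: Optional[int] = None

# Illustrative stand-in for AG: holds rows of one Pydantic type
# and preserves type information across operations.
class TypedContainer(Generic[S]):
    def __init__(self, atype: Type[S], rows: Optional[List[S]] = None):
        self.atype = atype
        self.rows = rows or []

    def map(self, fn: Callable[[S], T], target: Type[T]) -> "TypedContainer[T]":
        # The result carries the new type, mirroring how applying
        # Review -> ReviewSummary turns AG[Review] into AG[ReviewSummary].
        return TypedContainer(target, [fn(r) for r in self.rows])

items = TypedContainer(Item, [Item(text="hello"), Item(text="hi")])
lengths = items.map(lambda i: Length(n=len(i.text or "")), Length)
assert lengths.atype is Length
```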
Transducible functions and typed states are not just coding patterns; they are backed by a formal framework called Logical Transduction Algebra (LTA).
You do not need to understand the full mathematics to use Agentics, but the intuition is important:
- Transductions as Morphisms – each transducible function is treated as a morphism between types: `Source ⟶ Target`.
- Composability – if you have `f: A ⟶ B` and `g: B ⟶ C`, you can form a composite transduction `g ∘ f: A ⟶ C`. Agentics gives you a practical way to do this over LLM-based functions.
- Explainability & Evidence – because transductions are modeled as structured mappings, Agentics can track which fields and which steps contributed to the final outputs. This underpins evidence tracking and traceability.
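The composability property can be illustrated with ordinary typed functions standing in for LLM-backed transductions. The types and functions below are invented for the sketch; only the compositional shape `g ∘ f: A ⟶ C` is the point.

```python
from pydantic import BaseModel

class RawText(BaseModel):
    text: str

class Sentiment(BaseModel):
    label: str

class Flag(BaseModel):
    needs_review: bool

# f: RawText -> Sentiment (stand-in for an LLM transduction)
def f(x: RawText) -> Sentiment:
    return Sentiment(label="negative" if "bad" in x.text else "positive")

# g: Sentiment -> Flag
def g(s: Sentiment) -> Flag:
    return Flag(needs_review=(s.label == "negative"))

# g ∘ f : RawText -> Flag
def compose(outer, inner):
    return lambda x: outer(inner(x))

h = compose(g, f)
assert h(RawText(text="bad packaging")).needs_review is True
```

Because each step is typed, the composite `h` has a well-defined signature `RawText -> Flag` without either step knowing about the other.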
In short:
LTA provides the theoretical foundation
for why your pipelines are composable and explainable,
even though they are powered by probabilistic models.
Once you have:
- Typed collections (`AG[Source]`), and
- Typed transformations (`Source -> Target`),

you need a way to run these at scale. Agentics uses a familiar pattern: Map–Reduce.
The map phase applies a transducible function to each element (or batch) of a collection.
Conceptually:
```
list[Source] --amap(f)--> list[Target]
```

Where `f: Source -> Target`.
Properties:
- Parallelizable – each element can be processed independently.
- Asynchronous – `amap` is designed for async I/O and concurrent execution.
- Typed In/Out – both input and output containers carry their types.
Typical use cases:
- Extracting structured info from documents
- Enriching rows with LLM-derived attributes
- Normalizing or cleaning text fields at scale
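The concurrency behind the map phase can be sketched with `asyncio`. This `amap_sketch` is a simplified stand-in, not the real `amap`, which additionally handles validation, retries, and evidence tracking; the sketch shows only the "apply an async transformation to every element concurrently" core.

```python
import asyncio
from typing import Awaitable, Callable, List, TypeVar

S = TypeVar("S")
T = TypeVar("T")

# Simplified map phase: run an async transformation over every
# element concurrently and collect the typed results in order.
async def amap_sketch(fn: Callable[[S], Awaitable[T]], items: List[S]) -> List[T]:
    return list(await asyncio.gather(*(fn(x) for x in items)))

async def shout(s: str) -> str:
    await asyncio.sleep(0)  # placeholder for an LLM call's I/O wait
    return s.upper()

results = asyncio.run(amap_sketch(shout, ["a", "b", "c"]))
assert results == ["A", "B", "C"]
```

Because each call is independent, the scheduler can overlap the I/O waits of many LLM calls, which is where the throughput gain comes from.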
The reduce phase aggregates a collection back into a smaller structure (often a single summary or global view).
```
list[Target] --areduce(g)--> GlobalSummary
```

Where `g` is a transducible function or aggregation operation that takes many items and produces fewer (often one).
Examples:
- Summarizing a whole dataset into a report object
- Producing global statistics or flags
- Clustering and relation induction
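A reduce step can be sketched as a many-to-one typed aggregation. The types and the `areduce_sketch` function below are illustrative; in Agentics the aggregation could itself be an LLM-backed transducible function rather than plain arithmetic.

```python
from typing import List, Optional
from pydantic import BaseModel

class ReviewSummary(BaseModel):
    sentiment: Optional[str] = None

class GlobalSummary(BaseModel):
    total: int
    positive_ratio: float

# Simplified reduce phase: many ReviewSummary items collapse into
# one GlobalSummary describing the whole collection.
def areduce_sketch(items: List[ReviewSummary]) -> GlobalSummary:
    positive = sum(1 for i in items if i.sentiment == "positive")
    return GlobalSummary(
        total=len(items),
        positive_ratio=positive / max(len(items), 1),
    )

report = areduce_sketch([
    ReviewSummary(sentiment="positive"),
    ReviewSummary(sentiment="negative"),
])
assert report.total == 2
```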
Map–Reduce in Agentics is a logical pattern, not tied to any specific infrastructure:
- `amap` = “apply a typed transformation to many items”
- `areduce` = “aggregate many results into fewer structured outputs”
Together, they define how large-scale reasoning workflows are expressed in Agentics.
A typical workflow looks like this:
1. Define your types
   Use Pydantic to describe your raw data (`Source`) and desired outputs (`Target`, `Report`, etc.).
2. Define transducible functions
   For each logical step, define a transducible function:
   extraction → normalization → classification → enrichment → summarization.
3. Load data into typed state containers (optional)
   Wrap your dataset into a container such as `AG[Source]`. You can also use plain Python lists of objects of the intended type.
4. Apply Map–Reduce
   - Use `amap` to apply transducible functions over the collection.
   - Use `areduce` to build global summaries or reports.
5. Rely on LTA properties
   Because everything is a typed transduction, you can:
   - Compose steps cleanly,
   - Trace outputs back to inputs,
   - Reason about structure and invariants in your pipeline.
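The whole workflow can be condensed into one runnable sketch using mocks in place of LLM calls. Everything here (`classify`, `pipeline`, the types) is invented for illustration; it mirrors the steps above (define types, define a transducible function, map concurrently, reduce to a report) without using the Agentics API.

```python
import asyncio
from typing import List, Optional
from pydantic import BaseModel

# Step 1: types for raw data and outputs.
class Review(BaseModel):
    text: Optional[str] = None

class ReviewSummary(BaseModel):
    sentiment: Optional[str] = None

class Report(BaseModel):
    n_reviews: int
    n_positive: int

# Step 2: a mocked transducible function (an LLM would sit here).
async def classify(r: Review) -> ReviewSummary:
    positive = "good" in (r.text or "").lower()
    return ReviewSummary(sentiment="positive" if positive else "negative")

async def pipeline(reviews: List[Review]) -> Report:
    # Step 4a: map phase, all items processed concurrently.
    summaries = await asyncio.gather(*(classify(r) for r in reviews))
    # Step 4b: reduce phase, aggregate into a single Report.
    n_pos = sum(1 for s in summaries if s.sentiment == "positive")
    return Report(n_reviews=len(summaries), n_positive=n_pos)

report = asyncio.run(pipeline([Review(text="good value"), Review(text="broke fast")]))
assert report.n_reviews == 2
```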
- Pydantic types give you schemas and validation.
- Transducible functions turn LLM calls into typed, reusable transformations.
- Typed state containers hold collections of those types with clear semantics.
- Logical Transduction Algebra (LTA) explains why these transformations compose and remain interpretable.
- Map–Reduce provides the pattern for scaling these transductions to large datasets.
- 👉 Transducible Functions for concrete examples of defining and using transducible functions
- 👉 Index