You don’t train a giant opaque blob of weights. You build explicit knowledge modules — conceptual units that know when they apply, how they compute, and how to interact with other knowledge. This turns intelligence into software again.
No scheduler. No central controller. Each knowledge class is an autonomous agent:
- It decides when to activate
- It maintains its own memory
- It updates itself through experience
- It collaborates by reading/writing shared context
This makes the system composable, extensible, and inherently multi-task.
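As a self-contained sketch of that lifecycle (hypothetical class and method names; the library's Knowledge base class defines the real hooks), an autonomous module might look like:

```python
# Illustrative sketch of an autonomous knowledge module (hypothetical
# names, not the library's actual API).
class CountingKnowledge:
    def __init__(self):
        self.memory = []                         # maintains its own memory

    def applies(self, ctx):
        return isinstance(ctx, (int, float))     # decides when to activate

    def on(self, ctx):
        if not self.applies(ctx):
            return None                          # hand off to other knowledge
        self.memory.append(ctx)                  # updates itself through experience
        return sum(self.memory)                  # contributes a running total

k = CountingKnowledge()
print(k.on(2))       # 2
print(k.on(3))       # 5
print(k.on("skip"))  # None (did not activate)
```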
A single GeneralIntelligence model can contain:
- Mathematical hypothesis testers
- Logical relational modules
- Symbolic rules
- Statistical heuristics
- Tree/graph-based reasoning
- Domain-specific knowledge
- Autonomous background agents
- Prompt/dialog knowledge
- Perception plug-ins (e.g., DL model wrappers)
Modules that don’t apply simply hand off. This creates a parallel hypothesis-testing architecture where exact solutions are found whenever they exist.
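The hand-off pattern can be pictured outside the library: a dispatcher offers the context to every module and yields whatever non-None results come back. This is only a sketch of the idea, with invented module names; `GeneralIntelligence.on` is the library's real entry point and its internals may differ.

```python
# Illustrative dispatch sketch: each module inspects the context and
# either answers or returns None to hand off.
class EvenDoubler:
    def on(self, ctx):
        if isinstance(ctx, int) and ctx % 2 == 0:
            return ctx * 2
        return None  # does not apply: hand off

class OddNegator:
    def on(self, ctx):
        if isinstance(ctx, int) and ctx % 2 == 1:
            return -ctx
        return None

def dispatch(modules, ctx):
    # Offer the context to every module; yield each result that applies.
    for m in modules:
        result = m.on(ctx)
        if result is not None:
            yield result

modules = [EvenDoubler(), OddNegator()]
print(list(dispatch(modules, 4)))  # [8]
print(list(dispatch(modules, 3)))  # [-3]
```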
Instead of forcing every dataset into linear models or trees, this architecture allows a single module to test hundreds or thousands of structured hypotheses, such as:
- numeric relations
- logical compositions
- hybrid numeric-logical rules
- approximate equalities
- relational constraints
- multi-layer rules discovered via nesting
Each hypothesis tracks its own failures and survives only while its failure count stays within tolerance. This is structured conceptual induction, not blind optimization.
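The survive-within-tolerance idea can be sketched in a few lines. The names below (`surviving_hypotheses`, the rule lambdas) are hypothetical; inside the library this bookkeeping lives in modules such as FunctionKnowledge.

```python
# Sketch of hypothesis testing with a failure tolerance: a candidate
# rule is discarded only once it has failed more than `tolerance` times.
def surviving_hypotheses(hypotheses, observations, tolerance=1):
    failures = {name: 0 for name, _ in hypotheses}
    for row, target in observations:
        for name, rule in hypotheses:
            if not rule(row, target):
                failures[name] += 1
    return [name for name, _ in hypotheses if failures[name] <= tolerance]

hypotheses = [
    ("sum01", lambda row, t: row[0] + row[1] == t),  # target = row[0] + row[1]
    ("max",   lambda row, t: max(row) == t),         # target = max(row)
]
observations = [([1, 2], 3), ([4, 4], 8), ([2, 5], 7)]
print(surviving_hypotheses(hypotheses, observations, tolerance=0))  # ['sum01']
```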
Knowledge modules:
- are self-contained blocks of intelligence
- don’t need to be registered in any config
- don’t break when others change
- can be activated reactively or run autonomously
They interoperate with the model and with other knowledge through lifecycle methods and shared context, enabling powerful cooperation patterns. This keeps the system modular, inspectable, and robust.
Knowledge can participate in compositional flows:
gi.compose(ctx, finalizer)
Each module can modify the context during composition, enabling:
- multi-step pipelines
- layered reasoning
- implicit collaboration
- custom “chains of thought”
- tailorable reasoning workflows
This is structural composition, not sequential scripting.
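The flow can be pictured as each module mutating a shared context before a finalizer reads the result. This is a minimal sketch with invented step names, not the library's `compose` implementation:

```python
# Illustrative sketch of structural composition: each step edits a
# shared context object, then a finalizer turns the context into output.
class Ctx:
    pass

def add_greeting(ctx):
    ctx.parts = getattr(ctx, "parts", []) + ["hello"]

def add_name(ctx):
    ctx.parts = getattr(ctx, "parts", []) + ["world"]

def compose(steps, ctx, finalizer):
    for step in steps:
        step(ctx)          # every step may modify the shared context
    return finalizer(ctx)  # the finalizer produces the final result

result = compose([add_greeting, add_name], Ctx(), lambda c: " ".join(c.parts))
print(result)  # hello world
```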
The architecture uses only:
- classes / objects
- small methods
- shared context objects
Zero reliance on Python-only tricks. This makes it trivially portable to other languages:
- Julia
- R
- Rust
- Go
- C/C++
- TypeScript
- Java/Kotlin
- Swift
- C#
- Zig/Others
The entire ecosystem can be replicated across languages and share conceptual knowledge.
This library is not just an API. It defines an ecosystem pattern where people can contribute:
- universal hypothesis testers
- symbolic reasoning modules
- numerical/ML hybrids
- perception plug-ins
- planning/goal modules
- domain knowledge packs
This scales intelligence with contributors rather than compute.
The architecture supports:
- traditional machine learning
- online learning
- multi-subsystem reasoning
- multi-task execution
- hybrid exact + fuzzy learning
- agentic autonomous modules
- continual refinement
- transparent introspection
This combines what symbolic AI and deep learning have each been trying to achieve separately.
GeneralIntelligence is:
A modular, distributed, multi-knowledge cognitive engine designed to build real intelligence by composing explicit, testable, autonomous knowledge modules.
It’s small. It’s simple. And it’s powerful enough that entire ML workflows, symbolic reasoning processes, dialog systems, and autonomous agents can all live in the same model without conflict.
The following example demonstrates how GeneralIntelligence can learn addition rules in tabular data:
from gi import GeneralIntelligence, Knowledge
from itertools import combinations
gi = GeneralIntelligence()
class AdditionKnowledge(Knowledge):
def __init__(self, n_features):
self.n_features = n_features
self.valid_combinations = []
def on(self, ctx, gi):
if hasattr(ctx, "row") and hasattr(ctx, "target"):
row, target = ctx.row, ctx.target
# First time: generate all feature combinations and keep those matching the target
if not self.valid_combinations:
all_combs = []
for r in range(1, self.n_features + 1):
all_combs.extend(combinations(range(self.n_features), r))
self.valid_combinations = [
comb for comb in all_combs if sum(row[i] for i in comb) == target
]
# Keep only combinations that continue to hold
self.valid_combinations = [
comb for comb in self.valid_combinations
if sum(row[i] for i in comb) == target
]
elif hasattr(ctx, "row"):
row = ctx.row
            if self.valid_combinations:
                # Predict with the first combination that survived training
                comb = self.valid_combinations[0]
                return sum(row[i] for i in comb)
additive = AdditionKnowledge(n_features=3)
gi.learn(additive)
# Training
class TrainCtx:
def __init__(self, row, target):
self.row = row
self.target = target
for row, target in [([1, 2, 3], 3), ([0, 3, 1], 3)]:
list(gi.on(TrainCtx(row, target)))
# Prediction
class PredictCtx:
def __init__(self, row):
self.row = row
print(next(gi.on(PredictCtx([2, 1, 0]))))  # Output: 3 (sum over the surviving combination)
from gi import GeneralIntelligence, Knowledge
gi = GeneralIntelligence()
class DialogKnowledge(Knowledge):
def __init__(self):
self.history = []
def on(self, ctx, gi):
if hasattr(ctx, "user"):
self.history.append(ctx.user)
return f"Bot: I heard '{ctx.user}'"
dialog = DialogKnowledge()
gi.learn(dialog)
class MsgCtx:
def __init__(self, user):
self.user = user
for response in gi.on(MsgCtx("Hello")):
print(response)  # Bot: I heard 'Hello'
from gi import GeneralIntelligence, Knowledge
gi = GeneralIntelligence()
import threading, time
class TimerKnowledge(Knowledge):
def __init__(self):
self.count = 0
def on_add(self, knowledge, gi):
if self is knowledge:
self.running = True
self.thread = threading.Thread(target=self.run, args=(gi,), daemon=True)
self.thread.start()
def on_remove(self, knowledge, gi):
if self is knowledge:
self.running = False
def run(self, gi):
while getattr(self, "running", False):
print("Tick:", self.count)
self.count += 1
time.sleep(1)
class TickCtx: pass
timer = TimerKnowledge()
gi.learn(timer)
# Let autonomous timer run a few ticks
time.sleep(5)
gi.unlearn(timer)  # stop autonomous loop
Knowledge can modify shared context and collaborate:
from gi import GeneralIntelligence, Knowledge
gi = GeneralIntelligence()
class AccumulateKnowledge(Knowledge):
def compose(self, ctx, composer, gi):
if not hasattr(ctx, "accum"):
ctx.accum = []
ctx.accum.append("step")
acc = AccumulateKnowledge()
gi.learn(acc)
def final_composer(ctx):
return getattr(ctx, "accum", [])
class DummyCtx: pass
print(gi.compose(DummyCtx(), final_composer))  # ['step']
Multiple knowledge types (ML-style, dialog, autonomous) can coexist in the same model.
FunctionKnowledge is a hypothesis-driven learner that discovers relations between subsets of input features and a right-hand-side (RHS) value. It plugs into a GeneralIntelligence instance and participates in learning and prediction through the standard gi.on(Context(...)) interface.
A hypothesis has the form:
(fn_index, value_index, (rhs_type, rhs_value))
Where:
- fn_index → index into the provided list of functions
- value_index → index into the list of values returned by that function
- rhs_type → "target" | "feature" | "constant"
- rhs_value → None for "target", a feature index for "feature", a literal value for "constant"
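To make the tuple concrete, here is a hypothetical example hypothesis (the indices are invented for illustration):

```python
# A hypothetical concrete hypothesis: "the second value produced by
# function 0 equals the target column".
hypothesis = (0, 1, ("target", None))
fn_index, value_index, (rhs_type, rhs_value) = hypothesis
print(fn_index, value_index, rhs_type, rhs_value)  # 0 1 target None
```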
Each function tuple consists of (row_compute_fn, compare_fn):
# row_compute_fn: computes all subset values at once
row_compute_fn(row) → [value1, value2, ...]
# compare_fn: used during training
compare_fn(computed_value, rhs) → bool
During training (when row[target_index] is not None), FunctionKnowledge checks which hypotheses remain consistent and which fail. Failing hypotheses are allowed to survive up to a configurable tolerance. When the RHS refers to another feature or a constant, a child FunctionKnowledge is spawned to infer deeper dependencies up to max_depth.
During prediction (when row[target_index] is None), any surviving hypothesis will yield predicted outputs through GI.
There is no .train() or .predict() to call. Training and prediction both use the same gi.on() interface, invoked with an instance of Context exported from the FunctionKnowledge module.
from gi.knowledge.functions import FunctionKnowledge, Context
FunctionKnowledge(
functions, # list of (row_compute_fn, compare_fn) tuples
*,
constants=None, # constant RHS candidates
tolerance=2, # how many failures a hypothesis can survive
max_depth=4, # nested hypotheses levels
parent_keys=(), # prevents cycles in the hierarchy
)
FunctionKnowledge provides utility functions to easily create row-level compute functions:
from gi.knowledge.functions import make_combinations, make_permutations
# Create a function that computes over all combinations of subsets
row_compute_fn = make_combinations(
lambda subset: sum(subset), # subset-level compute function
min_size=2,
max_size=2
)
# Create a function that computes over all permutations (order matters)
row_compute_fn = make_permutations(
lambda subset: subset[0] - subset[1], # subset-level compute function
min_size=2,
max_size=2
)
These wrappers handle iterating through all subsets and collecting results, so you only need to define the computation for a single subset.
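To make that behavior concrete, here is an illustrative stand-in for make_combinations. It is not the library's source; it assumes subsets are drawn from the row's values in combination order.

```python
from itertools import combinations

# Illustrative stand-in for make_combinations (assumed semantics, not
# the library's source): build a row-level function that applies
# `subset_fn` to every value subset of the requested sizes.
def make_combinations_sketch(subset_fn, min_size, max_size):
    def row_compute_fn(row):
        return [
            subset_fn(subset)
            for size in range(min_size, max_size + 1)
            for subset in combinations(row, size)
        ]
    return row_compute_fn

pair_sums = make_combinations_sketch(sum, min_size=2, max_size=2)
print(pair_sums([1, 2, 3]))  # [3, 4, 5]  -> sums of (1,2), (1,3), (2,3)
```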
import statistics as stats
from gi.knowledge.functions import make_combinations
def compare_equals(computed, rhs):
return computed == rhs
def compare_greater(computed, rhs):
return computed > rhs
def compare_less(computed, rhs):
return computed < rhs
# Create row-level functions using the utility wrappers
equals_sum = (
make_combinations(lambda s: sum(s), min_size=2, max_size=2),
compare_equals
)
greater_than = (
make_combinations(lambda s: max(s), min_size=1, max_size=3),
compare_greater
)
less_than = (
make_combinations(lambda s: min(s), min_size=1, max_size=3),
compare_less
)
equals_median = (
make_combinations(lambda s: stats.median(s), min_size=1, max_size=3),
compare_equals
)
You can also create custom row-level functions directly:
# Custom function that returns multiple values per row
def custom_compute(row):
results = []
# Your custom logic here
results.append(row[0] + row[1])
results.append(row[0] * row[2])
return results
custom_fn = (custom_compute, compare_equals)
from gi import GeneralIntelligence
from gi.knowledge.functions import FunctionKnowledge, Context, make_combinations
def compare_equals(computed, rhs):
return computed == rhs
equals_sum = (
make_combinations(lambda s: sum(s), min_size=2, max_size=2),
compare_equals
)
gi = GeneralIntelligence()
fk = FunctionKnowledge([equals_sum])
gi.learn(fk)
# Train: third entry is target
gi.on(Context([3, 5, 8], target_index=2))
gi.on(Context([2, 4, 6], target_index=2))
gi.on(Context([10, -1, 9], target_index=2))
list(gi.on(Context([7, 1, None], target_index=2)))
# → [8]
from gi import GeneralIntelligence
from gi.knowledge.functions import FunctionKnowledge, Context, make_combinations
import statistics as stats
def compare_equals(computed, rhs):
return computed == rhs
def compare_greater(computed, rhs):
return computed > rhs
def compare_less(computed, rhs):
return computed < rhs
equals_sum = (make_combinations(lambda s: sum(s), min_size=2, max_size=2), compare_equals)
greater_than = (make_combinations(lambda s: max(s), min_size=1, max_size=3), compare_greater)
less_than = (make_combinations(lambda s: min(s), min_size=1, max_size=3), compare_less)
equals_median = (make_combinations(lambda s: stats.median(s), min_size=1, max_size=3), compare_equals)
fk = FunctionKnowledge(
[equals_sum, greater_than, less_than, equals_median],
constants=[0, 10],
)
gi = GeneralIntelligence()
gi.learn(fk)
# Suppose the target is feature 3
train_rows = [
[2, 3, 5, 5],
[1, 9, 10, 10],
[6, 1, 7, 7],
]
for r in train_rows:
    gi.on(Context(r, target_index=3))
list(gi.on(Context([4, 2, 6, None], target_index=3)))
# may yield:
# [6, 7, 10]
Multiple hypotheses may fire — GI simply yields them all.
from gi import GeneralIntelligence
from gi.knowledge.functions import FunctionKnowledge, Context, make_permutations
def compare_equals(computed, rhs):
return computed == rhs
# Difference function where order matters
equals_difference = (
make_permutations(lambda s: s[0] - s[1], min_size=2, max_size=2),
compare_equals
)
gi = GeneralIntelligence()
fk = FunctionKnowledge([equals_difference])
gi.learn(fk)
# Train: target = difference (order matters!)
gi.on(Context([5, 2, 3], target_index=2)) # 5 - 2 = 3
gi.on(Context([8, 3, 5], target_index=2))  # 8 - 3 = 5
list(gi.on(Context([10, 3, None], target_index=2)))
# → [7] (10 - 3 = 7)
If a hypothesis says:
lhs → feature[k]
then FunctionKnowledge creates a child FunctionKnowledge for that feature, nesting hypotheses much like tree models nest splits.
from gi import GeneralIntelligence
from gi.knowledge.functions import FunctionKnowledge, Context, make_combinations
import statistics as stats
def compare_equals(computed, rhs):
return computed == rhs
equals_sum = (make_combinations(lambda s: sum(s), min_size=1, max_size=3), compare_equals)
equals_median = (make_combinations(lambda s: stats.median(s), min_size=1, max_size=3), compare_equals)
fk = FunctionKnowledge(
[equals_sum, equals_median],
max_depth=3,
)
gi = GeneralIntelligence()
gi.learn(fk)
# Train where target = column 3
gi.on(Context([1, 2, 3, 3], target_index=3))
gi.on(Context([5, 1, 4, 4], target_index=3))
list(gi.on(Context([7, 1, 2, None], target_index=3)))
# possible cascaded predictions from parent + child hypotheses
GeneralIntelligence shifts AI from algorithm-driven to knowledge-driven.
Knowledge is:
- Composable
- Inspectable
- Autonomous
- Extensible across tasks and domains
A single model can host diverse knowledge types that cooperate, compete, or ignore irrelevant contexts. Example applications:
- Hierarchical or multimodal reasoning systems
- Interactive chatbots or agents
- Tabular ML tasks and feature discovery
- Autonomous monitoring or simulation agents
- Hybrid AI systems combining specialized knowledge modules
Directions for contribution:
- Specialized knowledge modules
- Community-built knowledge libraries
- Ports to other languages
- Tutorials demonstrating cross-cutting knowledge interactions
MIT License Copyright (c) 2025