# MATLAB AI Toolbox (Python Port - Core)


This package is a lightweight, **fully vibe-coded**, MATLAB-inspired Python module that implements core neural-network workflows for:

- MLP (classification and regression)
- CNN (image classification and regression)
- RNN (sequence classification and regression, with RNN / GRU / LSTM cells)

It is designed as a practical starting point, not a full reimplementation of the MATLAB Deep Learning Toolbox.

## Install

```bash
pip install ai-toolbox-py
```

For development:

```bash
pip install -e .
```

## Quick Start

```python
import numpy as np
from ai_toolbox import MLPClassifier

X = np.random.randn(200, 10).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("int64")

model = MLPClassifier(hidden_layers=(64, 32), epochs=5)
model.fit(X, y)
pred = model.predict(X[:5])
print(pred)
print(model.evaluate(X, y))
```

## CNN Example

```python
import numpy as np
from ai_toolbox import CNNClassifier

X = np.random.randn(128, 1, 28, 28).astype("float32")
y = np.random.randint(0, 10, size=128)

cnn = CNNClassifier(conv_channels=(16, 32), epochs=2)
cnn.fit(X, y)
print(cnn.predict(X[:4]))
```

## RNN Example

```python
import numpy as np
from ai_toolbox import RNNClassifier

X = np.random.randn(100, 20, 8).astype("float32")  # (batch, time, features)
y = np.random.randint(0, 3, size=100)

rnn = RNNClassifier(hidden_size=32, rnn_type="gru", epochs=3)
rnn.fit(X, y)
print(rnn.evaluate(X, y))
```

## MLP Regression Example

```python
import numpy as np
from ai_toolbox import MLPRegressor

X = np.random.randn(200, 10).astype("float32")
y = (1.5 * X[:, 0] - 0.7 * X[:, 1]).astype("float32")

reg = MLPRegressor(hidden_layers=(64, 32), epochs=5)
reg.fit(X, y)
pred = reg.predict(X[:5])
print(pred)
print(reg.evaluate(X, y))  # loss, mse, mae
```

## CNN Regression Example

```python
import numpy as np
from ai_toolbox import CNNRegressor

X = np.random.randn(64, 1, 28, 28).astype("float32")
y = X.mean(axis=(1, 2, 3)).astype("float32")

cnn_reg = CNNRegressor(conv_channels=(16, 32), epochs=2)
cnn_reg.fit(X, y)
print(cnn_reg.predict(X[:4]))
```

## RNN Regression Example

```python
import numpy as np
from ai_toolbox import RNNRegressor

X = np.random.randn(100, 20, 8).astype("float32")
y = (0.3 * X[:, -1, 0] - 0.2 * X[:, :, 1].mean(axis=1)).astype("float32")

rnn_reg = RNNRegressor(hidden_size=32, rnn_type="lstm", epochs=3)
rnn_reg.fit(X, y)
print(rnn_reg.evaluate(X, y))  # loss, mse, mae
```

## Variable-Length Sequence Example (LSTM / GRU / RNN)

Pad sequences to a common length and pass the valid lengths via `lengths=...`.

```python
import numpy as np
from ai_toolbox import RNNClassifier, pad_sequences

# Example raw variable-length sequences:
seqs = [np.random.randn(np.random.randint(5, 31), 8).astype("float32") for _ in range(64)]
X_padded, lengths = pad_sequences(seqs, value=0.0)
y = np.random.randint(0, 2, size=64)

model = RNNClassifier(rnn_type="lstm", hidden_size=32, aggregate="mean", epochs=3)
model.fit(X_padded, y, lengths=lengths)

pred = model.predict(X_padded[:4], lengths=lengths[:4])
metrics = model.evaluate(X_padded, y, lengths=lengths)
print(pred, metrics)

# Validation with lengths:
# model.fit(X_train, y_train, lengths=train_lengths, val_data=(X_val, y_val, val_lengths))
```
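For intuition, the padding step amounts to something like this pure-NumPy sketch (illustrative only; `pad_sequences_sketch` is a hypothetical stand-in, not the package's actual implementation of `pad_sequences`):

```python
import numpy as np

def pad_sequences_sketch(seqs, value=0.0):
    """Pad a list of (T_i, F) arrays into one (N, T_max, F) array.

    Also returns the original lengths so a downstream RNN can mask padding.
    """
    lengths = np.array([len(s) for s in seqs], dtype="int64")
    t_max = int(lengths.max())
    n_features = seqs[0].shape[1]
    out = np.full((len(seqs), t_max, n_features), value, dtype="float32")
    for i, s in enumerate(seqs):
        out[i, : len(s)] = s  # copy the real time steps; the tail stays at `value`
    return out, lengths

seqs = [np.random.randn(np.random.randint(5, 31), 8).astype("float32") for _ in range(4)]
X_padded, lengths = pad_sequences_sketch(seqs)
print(X_padded.shape, lengths)
```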

## PyTorch DataLoader Collate Helper

```python
from torch.utils.data import DataLoader
from ai_toolbox import sequence_collate_fn

# Dataset items are (sequence, target) pairs, where each sequence has shape (T, F).
loader = DataLoader(dataset, batch_size=32, shuffle=True, collate_fn=sequence_collate_fn)

for x_padded, y, lengths in loader:
    pass  # x_padded: (B, T_max, F); lengths: valid time steps per sequence
```
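Per batch, the collate step can be pictured with a NumPy-only stand-in (the real `sequence_collate_fn` presumably returns torch tensors; `collate_sketch` below is a hypothetical illustration, not the library's code):

```python
import numpy as np

def collate_sketch(batch, value=0.0):
    """Batch (sequence, target) pairs: pad to the batch's longest sequence."""
    seqs, targets = zip(*batch)
    lengths = np.array([len(s) for s in seqs], dtype="int64")
    x = np.full((len(seqs), int(lengths.max()), seqs[0].shape[1]), value, dtype="float32")
    for i, s in enumerate(seqs):
        x[i, : len(s)] = s
    return x, np.asarray(targets), lengths

# Three toy sequences of 4, 7, and 5 steps, each with 3 features.
batch = [(np.ones((t, 3), dtype="float32"), t % 2) for t in (4, 7, 5)]
x_padded, y, lengths = collate_sketch(batch)
print(x_padded.shape)  # (3, 7, 3): padded to the longest sequence in the batch
```

Padding per batch (rather than over the whole dataset) keeps memory proportional to each batch's longest sequence.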

## Resume Training From Checkpoint

```python
model.fit(X_train, y_train, checkpoint_path="artifacts/best_model.pt")

# Continue training later (restores weights + optimizer state + history if present)
model.fit(
    X_train,
    y_train,
    resume_from_checkpoint="artifacts/best_model.pt",
    checkpoint_path="artifacts/best_model.pt",
)
```

## Scheduler, Callbacks, and Weighting

```python
import numpy as np
from ai_toolbox import Callback, MLPClassifier

class StopAtEpoch2(Callback):
    def on_epoch_end(self, epoch, logs=None):
        if epoch >= 2:
            self.stop_training = True

X = np.random.randn(128, 10).astype("float32")
y = (X[:, 0] > 0).astype("int64")
sample_weight = np.where(y == 1, 2.0, 1.0).astype("float32")

model = MLPClassifier(epochs=10)
model.fit(
    X,
    y,
    class_weight={0: 1.0, 1: 1.5},
    sample_weight=sample_weight,
    scheduler="step",                     # or "plateau", "cosine", "exponential"
    scheduler_kwargs={"step_size": 1, "gamma": 0.9},
    callbacks=[StopAtEpoch2()],
)
```
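How `class_weight` and `sample_weight` combine is not documented above; a common convention (an assumption here, not verified against this package) is to multiply the per-class factor into each sample's weight:

```python
import numpy as np

y = np.array([0, 1, 1, 0, 1], dtype="int64")
class_weight = {0: 1.0, 1: 1.5}
sample_weight = np.array([1.0, 2.0, 1.0, 1.0, 2.0], dtype="float32")

# Look up each label's class factor, then combine multiplicatively.
per_class = np.array([class_weight[int(label)] for label in y], dtype="float32")
effective = per_class * sample_weight  # 1.0, 3.0, 1.5, 1.0, 3.0
print(effective)
```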

## MATLAB Mapping (Approximate)

- MATLAB `patternnet` / `feedforwardnet` -> `MLPClassifier`
- MATLAB `fitnet` -> `MLPRegressor`
- MATLAB `trainNetwork` + conv layers -> `CNNClassifier`
- MATLAB `trainNetwork` + regression head -> `CNNRegressor`
- MATLAB sequence models (`rnnLayer`, `lstmLayer`, `gruLayer`) -> `RNNClassifier`
- MATLAB sequence models + regression head -> `RNNRegressor`

## Scope

Implemented core functionality:

- training (`fit`)
- early stopping + best-checkpoint saving during `fit` (`early_stopping`, `checkpoint_path`)
- prediction (`predict`, `predict_proba`)
- evaluation (`evaluate`)
- saving/loading weights (`save`, `load`)
- regression variants (`MLPRegressor`, `CNNRegressor`, `RNNRegressor`)
- variable-length sequence support for `RNNClassifier` / `RNNRegressor` via `lengths=...`
- utilities: `pad_sequences(...)`, `sequence_collate_fn(...)`
- richer metrics: classification (precision, recall, f1) and regression (rmse, r2)
- checkpoint resume support during `fit` (`resume_from_checkpoint=...`)
- scheduler support during `fit` (`scheduler=...`, `scheduler_kwargs=...`)
- callback API (`Callback`, `CallbackList`) with epoch hooks and callback-driven stop
- weighting support in `fit`: classifier `class_weight` + `sample_weight`, regressor `sample_weight`
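The extra regression metrics listed above (rmse, r2) follow the standard definitions; a self-contained NumPy sketch of what `evaluate` is expected to report:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(rmse(y_true, y_pred), r2(y_true, y_pred))  # ~0.158, 0.98
```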

Not included (yet):

- full MATLAB layer graph equivalents
- advanced callbacks/schedulers
- data augmentation pipelines
