
# Ember

This project started as a small attempt to recreate some machine learning primitives from scratch in Kotlin. While researching, I quickly realized that a small set of primitives could be composed into larger components, scaling up in complexity rapidly. And as I learned about different optimization and visualization techniques for my own use, I realized they would make fantastic learning tools for others as well.

Related: https://github.com/Pointyware/AI-Licensing

## ML Primitives

- Tensors
  - Pools to store and reuse tensors by dimension
- Activation Functions
  - ReLU
  - Logistic
  - Tanh
- Layers
  - Linear (Fully Connected)
  - Exp: Convolutional
- Networks
  - Sequential Networks
  - Residual Networks
- Loss Functions
  - Mean Squared Error
  - Cross Entropy
- Optimizers
  - Stochastic (Gradient Descent)
  - Exp: Adam
- Training
  - Sequential Trainer
  - Exp: Organic Trainer
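
The class diagram below shows how these pieces fit together. As a first taste, here is a minimal sketch of the two smallest primitives; the signatures follow the diagram, but the bodies are illustrative assumptions rather than the repository's actual implementations:

```kotlin
import kotlin.math.exp
import kotlin.math.tanh

// Sketch only: signatures mirror the class diagram below; bodies are assumptions.
interface Tensor {
    val dimensions: List<Int>
    operator fun get(indices: List<Int>): Double
}

fun interface ActivationFunction {
    fun calculate(value: Double): Double
}

// The three activations listed above, expressed as single-value functions.
val relu = ActivationFunction { v -> maxOf(0.0, v) }
val logistic = ActivationFunction { v -> 1.0 / (1.0 + exp(-v)) }
val tanhActivation = ActivationFunction { v -> tanh(v) }
```
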
```mermaid
classDiagram
    class Tensor {
        dimensions: List~Int~
        get(indices: List~Int~): Double
    }
    class ActivationFunction {
        calculate(value: Double): Double
    }
    class Layer {
        weights: Tensor
        biases: Tensor
        activation: ActivationFunction
    }
    Layer *--> Tensor
    Layer *--> ActivationFunction

    note for Network "A neural network composed of neurons."
    class Network
    class SequentialNetwork {
        layers: List~Layer~
    }
    SequentialNetwork *--> "1..*" Layer
    Network <|-- SequentialNetwork

    class Loss {
        calculate(expected: Tensor, actual: Tensor): Double
    }
    note for Optimizer "An optimizer is responsible for <br>adjusting the weights and biases <br>of a layer based on the error <br>gradient."
    class Optimizer {
        batch()
        update()
    }

    class EpochStatistics {
        onEpochStart()
        onEpochEnd()
    }
    class BatchStatistics {
        onBatchStart()
        onBatchEnd()
    }
    class SampleStatistics {
        onSampleStart()
        onSampleEnd()
    }
    class LayerStatistics {
        onLayerStart()
        onLayerEnd()
    }
    class GradientDescent
    GradientDescent --|> Optimizer
    GradientDescent --|> SampleStatistics
    class StochasticGradientDescent
    StochasticGradientDescent --|> Optimizer
    StochasticGradientDescent --|> BatchStatistics
    class Adam
    Adam --|> Optimizer
    Adam --|> BatchStatistics

    note for StudyCase "A study case associates an <br>input with an expected output."
    class StudyCase {
        input: Tensor
        output: Tensor
    }

    note for SequentialTrainer "A trainer presents cases to <br>a network and tracks gradients <br>for back-propagation."
    class SequentialTrainer {
        network: SequentialNetwork
        cases: List~StudyCase~
        lossFunction: Loss
        optimizer: Optimizer
    }
    SequentialTrainer *--> SequentialNetwork
    SequentialTrainer *--> "1..*" StudyCase
    SequentialTrainer *--> Loss
    SequentialTrainer *--> Optimizer

    class LearningTensor
    class SimpleTensor
    Tensor <|-- LearningTensor
    Tensor <|-- SimpleTensor

    class ReLU
    class Sigmoid
    class Linear
    ActivationFunction <|-- ReLU
    ActivationFunction <|-- Sigmoid
    ActivationFunction <|-- Linear

    class MeanSquaredError
    Loss <|-- MeanSquaredError

```
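
To make the diagram concrete, here is a minimal forward pass in the shapes it describes. DoubleArray stands in for Tensor to keep the example self-contained; the class and property names come from the diagram, while the method bodies and the main function are illustrative assumptions, not the repository's code.

```kotlin
import kotlin.math.exp

// Sketch matching the diagram's shapes; bodies are assumptions, not repo code.
fun interface ActivationFunction {
    fun calculate(value: Double): Double
}

fun interface Loss {
    fun calculate(expected: DoubleArray, actual: DoubleArray): Double
}

class Layer(
    val weights: Array<DoubleArray>,   // [outputCount][inputCount]
    val biases: DoubleArray,           // [outputCount]
    val activation: ActivationFunction,
) {
    // y_j = activation(b_j + sum_i w_ji * x_i)
    fun forward(input: DoubleArray): DoubleArray =
        DoubleArray(biases.size) { j ->
            var sum = biases[j]
            for (i in input.indices) sum += weights[j][i] * input[i]
            activation.calculate(sum)
        }
}

class SequentialNetwork(val layers: List<Layer>) {
    // Each layer's output feeds the next layer's input.
    fun forward(input: DoubleArray): DoubleArray =
        layers.fold(input) { signal, layer -> layer.forward(signal) }
}

fun main() {
    val relu = ActivationFunction { v -> maxOf(0.0, v) }
    val logistic = ActivationFunction { v -> 1.0 / (1.0 + exp(-v)) }
    val meanSquaredError = Loss { expected, actual ->
        expected.indices.sumOf { i ->
            (expected[i] - actual[i]) * (expected[i] - actual[i])
        } / expected.size
    }

    // A 2 -> 2 -> 1 network with fixed weights and zero biases.
    val network = SequentialNetwork(
        listOf(
            Layer(arrayOf(doubleArrayOf(0.5, -0.3), doubleArrayOf(0.8, 0.1)), DoubleArray(2), relu),
            Layer(arrayOf(doubleArrayOf(1.0, 1.0)), DoubleArray(1), logistic),
        )
    )
    val prediction = network.forward(doubleArrayOf(1.0, 2.0))
    println("prediction=${prediction.toList()} loss=${meanSquaredError.calculate(doubleArrayOf(1.0), prediction)}")
}
```

A SequentialTrainer would then repeatedly present StudyCase inputs, score the outputs with the Loss, and let the Optimizer adjust weights and biases, as sketched in the diagram's notes.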

## Project Structure

```mermaid
graph
    subgraph apps
    :app-android --> :app-shared
    :app-desktop --> :app-shared
    end
    apps --> features

    subgraph features
    :feature-training --> :feature-simulation
    :feature-simulation-training --> :feature-simulation
    :feature-simulation-training --> :feature-training
    :feature-training
    :feature-evolution --> :feature-simulation
    end
    features --> core

    subgraph core
    :core-ui --> :core-viewmodels --> :core-interactors --> :core-data --> :core-entities --> :core-common
    end
```
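
In Gradle terms, the graph above corresponds to a multi-module build. Here is a hypothetical sketch of how the modules might be declared in a Kotlin DSL settings.gradle.kts; only the module names come from the graph, the file contents themselves are an assumption:

```kotlin
// settings.gradle.kts -- hypothetical sketch; module names taken from the graph above.
rootProject.name = "Ember"

include(
    ":app-android", ":app-desktop", ":app-shared",
    ":feature-training", ":feature-simulation",
    ":feature-simulation-training", ":feature-evolution",
    ":core-ui", ":core-viewmodels", ":core-interactors",
    ":core-data", ":core-entities", ":core-common",
)
```

Each arrow in the graph would then become an `implementation(project(":..."))` dependency in the consuming module's build script.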

## Research Citations

1. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin. Attention Is All You Need. arXiv preprint arXiv:1706.03762, 2017.
2. Ravid Schwartz-Ziv, Naftali Tishby. Opening the Black Box of Deep Neural Networks via Information. arXiv preprint arXiv:1703.00810v3, 2017.
3. Author, Author. Title. Publication, Year.

## About

ML primitives have moved to https://github.com/Pointyware/Disco while I focus on my personal research interests in this repo.
