diff --git a/guides/cheatsheets/axon_pytorch.cheatmd b/guides/cheatsheets/axon_pytorch.cheatmd
new file mode 100644
index 000000000..c30589d10
--- /dev/null
+++ b/guides/cheatsheets/axon_pytorch.cheatmd
@@ -0,0 +1,569 @@
+# Axon -> PyTorch
+
+This cheatsheet is designed to assist PyTorch developers in transitioning to Elixir and Axon,
+providing equivalent commands and code examples for common neural network tasks.
+
+## Core Paradigm: Functional vs. Object-Oriented
+
+A key difference between Axon and PyTorch lies in their core design paradigms:
+
+### Axon (Functional)
+Axon follows a functional approach, inspired by libraries like JAX.
+Models are defined as compositions of functions that transform input data and parameters into output data.
+State (like model parameters) is managed explicitly and passed into functions.
+This promotes purity, explicit data flow, and composability, often leveraging Just-In-Time (JIT) compilation
+via Nx backends and compilers (like EXLA) for performance.
+
+### PyTorch (Object-Oriented)
+PyTorch uses an object-oriented approach.
+Models are typically defined as classes inheriting from `torch.nn.Module`.
+These classes encapsulate layers and parameters as internal state.
+The forward pass is defined as a method (`forward`) operating on this internal state.
+This provides a familiar structure for many developers but can sometimes obscure
+data flow and state management compared to the functional style.
+
+This cheatsheet will highlight how common tasks are achieved in both paradigms.
+
+## Model Definition
+
+### Sequential Models
+
+#### Axon
+
+```elixir
+model =
+  Axon.input("input", shape: {nil, 784})
+  |> Axon.dense(128, activation: :relu)
+  |> Axon.dense(10, activation: :softmax)
+```
+
+#### PyTorch
+
+```python
+import torch
+import torch.nn as nn
+
+model = nn.Sequential(
+    nn.Linear(784, 128),
+    nn.ReLU(),
+    nn.Linear(128, 10),
+    nn.Softmax(dim=1)
+)
+```
+
+## Common Layer Types
+
+### Dense / Linear
+
+Applies a linear transformation to the incoming data: `y = xW^T + b`.
+
+#### Axon
+```elixir
+input = Axon.input("features")
+
+# The number of output units is the second argument; activation and name are options
+dense_layer = Axon.dense(input, 128, activation: :relu, name: "my_dense_layer")
+dense_layer = Axon.dense(input, 128)
+```
+
+#### PyTorch
+```python
+dense_layer = nn.Linear(in_features=784, out_features=128)
+relu = nn.ReLU()
+output = relu(dense_layer(x))
+```
+
+### Convolutional (Conv2D)
+
+Applies a 2D convolution over an input signal composed of several input planes.
+
+#### Axon
+```elixir
+# Example: 32 filters, 3x3 kernel, ReLU activation
+x = Axon.input("features")
+Axon.conv(x, 32, kernel_size: 3, activation: :relu, padding: :same, name: "conv1")
+
+# Stride, padding, etc., are options:
+Axon.conv(x, 64, kernel_size: {3, 3}, strides: 2, padding: :valid)
+```
+
+*Note: Axon typically uses NHWC (Batch, Height, Width, Channels) format by default, common in TensorFlow/Keras.*
+
+#### PyTorch
+
+```python
+# Example: 32 filters, 3x3 kernel, ReLU activation
+conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding='same')
+relu = nn.ReLU()
+output = relu(conv1(x))
+
+# Stride, padding, etc., are arguments:
+conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=2, padding=0) # padding=0 is 'valid'
+```
+
+*Note: PyTorch uses NCHW (Batch, Channels, Height, Width) format by default.*
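+
+Because the default layouts differ, image tensors prepared for a PyTorch-style NCHW pipeline usually need
+their axes permuted before they are fed to Axon convolutions. A minimal Nx sketch (shapes are illustrative):
+
+```elixir
+# A batch in NCHW order: {batch, channels, height, width}
+nchw = Nx.iota({8, 3, 32, 32}, type: :f32)
+
+# Reorder to NHWC ({batch, height, width, channels}) for Axon's default layout
+nhwc = Nx.transpose(nchw, axes: [0, 2, 3, 1])
+
+Nx.shape(nhwc)
+#=> {8, 32, 32, 3}
+```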
+
+### Pooling (MaxPool2D)
+
+Applies 2D max pooling over an input signal.
+
+#### Axon
+```elixir
+# Example: 2x2 pool size, stride 2
+Axon.max_pool(previous_layer, kernel_size: 2, strides: 2, name: "pool1")
+
+# Padding can also be specified (default is :valid)
+Axon.max_pool(previous_layer, kernel_size: {3, 3}, strides: 1, padding: :same)
+```
+
+*Note: Operates on NHWC format by default.*
+
+#### PyTorch
+
+```python
+import torch.nn as nn
+
+# Example: 2x2 pool size, stride 2
+pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
+
+# Padding can also be specified (default is 0, i.e. 'valid')
+# To achieve 'same' padding, the amount must be calculated manually or ceil_mode=True used
+pool2 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1) # padding=1 for a 3x3 kernel approximates 'same'
+
+# Example usage:
+# Assuming input tensor `x` with shape (N, C, H, W)
+output = pool1(x)
+```
+
+*Note: Operates on NCHW format by default.*
+
+### Dropout
+
+Randomly zeroes some elements of the input tensor with probability `p` during training. This is a regularization technique.
+
+#### Axon
+
+Applies dropout during training (`mode: :train`). It's a no-op during inference (`mode: :inference`).
+
+```elixir
+# Rate is the probability of an element being zeroed.
+Axon.dropout(previous_layer, rate: 0.5, name: "dropout1")
+
+# Usage is implicit within the model's structure.
+# The mode (:train or :inference) is chosen when the model is built:
+# {init_fn, predict_fn} = Axon.build(model, mode: :train)
+# predict_fn.(params, inputs)
+```
+
+#### PyTorch
+
+Applies dropout during training (`model.train()` mode). It's a no-op during evaluation (`model.eval()` mode).
+
+```python
+# p is the probability of an element being zeroed.
+dropout1 = nn.Dropout(p=0.5)
+
+# Example usage:
+# model.train() # Set model to training mode
+# output = dropout1(x)
+
+# model.eval() # Set model to evaluation mode (dropout becomes identity)
+# output_eval = dropout1(x) # dropout1 has no effect here
+```
+
+### Normalization (LayerNorm)
+
+Applies Layer Normalization over a mini-batch of inputs.
+Normalizes the activations of the previous layer for each given example independently.
+
+#### Axon
+```elixir
+# Typically applied to the feature dimension(s).
+Axon.layer_norm(previous_layer, name: "layernorm1")
+
+# The normalization axis and epsilon can be configured (the default is usually the last axis):
+# Axon.layer_norm(previous_layer, channel_index: -1, epsilon: 1.0e-5)
+```
+
+#### PyTorch
+```python
+# Provide the shape of the features to normalize over.
+# This typically means the last dimension(s) of the tensor.
+# Example 1: Input (N, features_dim), normalize over features_dim
+# features_dim = 128
+# layernorm1 = nn.LayerNorm(features_dim)
+
+# Example 2: Input (N, C, H, W), normalize over C, H, W:
+# normalized_shape = [C, H, W] # Needs actual channel, height, width values
+# layernorm2 = nn.LayerNorm(normalized_shape)
+
+# Example 3: Common case in Transformers (Input: N, SeqLen, EmbedDim):
+embed_dim = 512
+layernorm_transformers = nn.LayerNorm(embed_dim)
+
+# Example usage (assuming input `x` with shape (N, SeqLen, embed_dim)):
+# output = layernorm_transformers(x)
+```
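+
+Conceptually, both APIs normalize each example over its feature axis using per-example statistics. A rough
+Nx sketch of the computation, ignoring the learned scale and offset parameters and assuming an epsilon of
+`1.0e-5`:
+
+```elixir
+x = Nx.iota({2, 4}, type: :f32)
+
+# Per-example statistics over the last (feature) axis
+mean = Nx.mean(x, axes: [-1], keep_axes: true)
+variance = Nx.variance(x, axes: [-1], keep_axes: true)
+
+# Normalize: (x - mean) / sqrt(variance + epsilon)
+normalized = Nx.divide(Nx.subtract(x, mean), Nx.sqrt(Nx.add(variance, 1.0e-5)))
+```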
+
+## Activation Functions
+
+#### Axon
+
+Activations are typically specified as options within layers (like `Axon.dense`) or applied as separate layers in the model definition pipeline.
+
+```elixir
+# Option 1: As layer option
+model = Axon.input("input", shape: {nil, 784})
+  |> Axon.dense(128, activation: :relu)
+
+# Option 2: As separate layer
+model = Axon.input("input", shape: {nil, 10})
+  |> Axon.dense(128)
+  |> Axon.softmax()
+
+# Common activation atoms: :relu, :softmax, :sigmoid, :tanh, :identity, etc.
+# Custom functions can also be used.
+# Axon.activation(layer, :relu) applies an activation given its atom name.
+```
+
+#### PyTorch
+
+PyTorch also supports a variety of activation functions, including built-in ones and custom implementations.
+
+```python
+relu = nn.ReLU()
+softmax = nn.Softmax(dim=1)
+sigmoid = nn.Sigmoid()
+tanh = nn.Tanh()
+
+output = relu(x)
+output = softmax(x)
+output = sigmoid(x)
+output = tanh(x)
+```
+
+## Defining Custom Layers/Models
+
+#### Axon
+
+Axon allows for the definition of custom layers and models.
+`Axon.block/1`, as shown below, allows us to reuse the same parameters
+for an arbitrary Axon subgraph.
+
+The difference between the two examples below is that
+the first has separate weights for the first and second dense layers,
+while the second uses the same weights for both.
+
+```elixir
+# Example:
+defmodule MyCustomLayers do
+  def dense(x) do
+    Axon.dense(x, 128, activation: :relu)
+  end
+
+  def block do
+    Axon.block(&dense/1)
+  end
+end
+
+# Usage:
+input = Axon.input("input", shape: {nil, 784})
+
+# Two independent dense layers, each with its own parameters
+model =
+  input
+  |> MyCustomLayers.dense()
+  |> MyCustomLayers.dense()
+
+# One block applied twice, sharing the same parameters
+dense_block = MyCustomLayers.block()
+model =
+  input
+  |> then(dense_block)
+  |> then(dense_block)
+```
+
+#### PyTorch
+
+PyTorch allows for the definition of custom layers and models.
+
+```python
+# Example:
+class MyCustomLayer(nn.Module):
+    def __init__(self):
+        super(MyCustomLayer, self).__init__()
+        self.dense = nn.Linear(784, 128)
+        self.relu = nn.ReLU()
+
+    def forward(self, x):
+        return self.relu(self.dense(x))
+
+# Usage:
+model = MyCustomLayer()
+```
+
+## Model Initialization
+
+Initialization refers to creating the initial set of parameters (weights, biases) for the model.
+
+#### Axon
+
+Model definition is separate from initialization. `Axon.build/2` compiles the model definition and returns an initialization function (`init_fn`) and a prediction function (`predict_fn`).
+
+```elixir
+# 1. Define the model
+model = Axon.input("input", shape: {nil, 784})
+  |> Axon.dense(128, activation: :relu)
+
+# 2. Build the model to get init_fn
+{init_fn, _predict_fn} = Axon.build(model)
+
+# 3. Initialize parameters using an input template and an (optional) map of existing parameter values.
+# The second argument is useful when loading saved parameters.
+input_template = Nx.template({1, 784}, :f32)
+params = init_fn.(input_template, %{})
+# `params` now holds the initialized parameters (e.g., a nested map)
+```
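+
+Because the parameters are explicit data, they can be persisted and fed back through `init_fn` later. A
+minimal sketch, assuming the parameter container can be serialized with `Nx.serialize/1` (the file name is
+illustrative):
+
+```elixir
+# Save the initialized (or trained) parameters
+File.write!("params.nx", Nx.serialize(params))
+
+# Later: rebuild the model, then seed init_fn with the saved values
+saved_params = File.read!("params.nx") |> Nx.deserialize()
+params = init_fn.(input_template, saved_params)
+```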
+
+#### PyTorch
+
+In PyTorch, basic parameter initialization happens when the model class (an `nn.Module`) is instantiated.
+Layers like `nn.Linear` have default initialization schemes (often Kaiming uniform for weights).
+
+```python
+import torch
+import torch.nn as nn
+
+# 1. Define the model class (or use nn.Sequential)
+class SimpleModel(nn.Module):
+    def __init__(self):
+        super().__init__()
+        self.layer1 = nn.Linear(784, 128)
+        self.relu = nn.ReLU()
+
+    def forward(self, x):
+        return self.relu(self.layer1(x))
+
+# 2. Instantiate the model - parameters are initialized here
+model = SimpleModel()
+# `model.parameters()` now holds tensors with initial values
+
+# Explicit initialization can be done after instantiation if needed
+# def init_weights(m):
+#     if isinstance(m, nn.Linear):
+#         torch.nn.init.xavier_uniform_(m.weight)
+#         m.bias.data.fill_(0.01)
+# model.apply(init_weights)
+```
+
+## Forward Pass / Prediction
+
+#### Axon
+
+Axon's forward pass is defined by the composition of functions.
+
+```elixir
+# Example:
+model = Axon.input("input", shape: {nil, 784}) |> Axon.dense(128, activation: :relu)
+
+# The actual forward pass happens when the built prediction function is called.
+# mode: :inference (the default) is used when not training the model.
+# {init_fn, predict_fn} = Axon.build(model, mode: :inference)
+# predict_fn.(params, inputs)
+```
+
+#### PyTorch
+
+PyTorch's forward pass is defined by the `forward` method of the model class.
+
+```python
+# Example:
+model = nn.Sequential(
+    nn.Linear(784, 128),
+    nn.ReLU(),
+    nn.Linear(128, 10),
+    nn.Softmax(dim=1)
+)
+
+# The actual forward pass happens during the execution of the model.
+output = model(x)
+```
+
+## Loss Functions
+
+#### Axon
+
+Axon provides loss functions in `Axon.Losses` that take the targets (`y_true`) first and the predictions (`y_pred`) second. When using `Axon.Loop`, losses are often specified by atoms.
+
+```elixir
+# Manual calculation (e.g., in evaluation), with targets first and predictions second.
+# Assume `targets` and `predictions` are tensors.
+# Pass reduction: :mean (or :sum) to reduce the per-sample losses to a scalar.
+loss_value = Axon.Losses.mean_squared_error(targets, predictions, reduction: :mean)
+loss_value = Axon.Losses.categorical_cross_entropy(targets, predictions, reduction: :mean)
+
+# Using Axon.Loop (specify loss by atom); trainer returns a loop to be run:
+loop = Axon.Loop.trainer(model, :mean_squared_error, optimizer)
+loop = Axon.Loop.trainer(model, :categorical_cross_entropy, optimizer)
+```
+
+#### PyTorch
+
+PyTorch provides various loss functions.
+
+```python
+# Example:
+import torch
+import torch.nn as nn
+
+criterion = nn.MSELoss()
+output = criterion(y_pred, y_true)
+
+criterion = nn.CrossEntropyLoss()
+output = criterion(y_pred, y_true)
+```
+
+## Optimizers
+
+#### Axon
+
+Optimizers in the Axon ecosystem typically come from the `Polaris` library. They are passed to `Axon.Loop.trainer` or used manually through their init and update functions.
+
+```elixir
+# Optimizers live in the Polaris.Optimizers module
+optimizer = Polaris.Optimizers.sgd(learning_rate: 0.01)
+optimizer = Polaris.Optimizers.adam(learning_rate: 0.001)
+
+# Use with Axon.Loop:
+loop = Axon.Loop.trainer(model, loss_fn, optimizer)
+
+# Manual update step (simplified). An optimizer is a tuple of an init and an update function;
+# grads come from Nx.Defn.value_and_grad (see the training loop below):
+# {opt_init_fn, opt_update_fn} = Polaris.Optimizers.adam(learning_rate: 0.001)
+# opt_state = opt_init_fn.(params)
+# {updates, opt_state} = opt_update_fn.(grads, opt_state, params)
+# params = Polaris.Updates.apply_updates(params, updates)
+```
+
+#### PyTorch
+
+PyTorch provides various optimizers.
+
+```python
+# Example:
+import torch
+import torch.optim as optim
+
+optimizer = optim.SGD(model.parameters(), lr=0.01)
+optimizer = optim.Adam(model.parameters(), lr=0.001)
+```
+
+## Basic Training Loop
+
+#### Axon
+
+Axon supports manual training loops but provides the `Axon.Loop` module for convenient, high-level training.
+
+```elixir
+# High-level approach using Axon.Loop:
+model_state =
+  Axon.Loop.trainer(model, :categorical_cross_entropy, optimizer)
+  |> Axon.Loop.run(train_data, %{}, epochs: 10, compiler: EXLA)
+```
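+
+Metrics can be attached to the loop before it is run. A minimal sketch using the built-in accuracy metric,
+building on the trainer call above:
+
+```elixir
+model_state =
+  Axon.Loop.trainer(model, :categorical_cross_entropy, optimizer)
+  |> Axon.Loop.metric(:accuracy)
+  |> Axon.Loop.run(train_data, %{}, epochs: 10, compiler: EXLA)
+```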
+
+A manual loop makes the functional update step explicit:
+
+```elixir
+# Manual loop structure (simplified).
+# Assumes `model`, `train_data`, and `input_template` are defined as above.
+loss_fn = &Axon.Losses.categorical_cross_entropy(&1, &2, reduction: :mean)
+{opt_init_fn, opt_update_fn} = Polaris.Optimizers.adam(learning_rate: 0.001)
+{init_fn, predict_fn} = Axon.build(model, compiler: EXLA, mode: :train)
+
+params = init_fn.(input_template, %{})
+opt_state = opt_init_fn.(params)
+
+{params, _opt_state} =
+  for _epoch <- 1..10, reduce: {params, opt_state} do
+    {params, opt_state} ->
+      Enum.reduce(train_data, {params, opt_state}, fn {inputs, targets}, {params, opt_state} ->
+        # Differentiate the loss with respect to the parameters
+        {_loss, grads} =
+          Nx.Defn.value_and_grad(params, fn params ->
+            # In :train mode the prediction function returns a map with :prediction and :state
+            %{prediction: preds} = predict_fn.(params, inputs)
+            loss_fn.(targets, preds)
+          end)
+
+        {updates, opt_state} = opt_update_fn.(grads, opt_state, params)
+        params = Polaris.Updates.apply_updates(params, updates)
+        {params, opt_state}
+      end)
+  end
+```
+
+#### PyTorch
+
+A standard PyTorch training loop involves iterating through data, zeroing gradients, performing a forward pass, calculating loss, performing a backward pass, and stepping the optimizer.
+
+```python
+import torch
+import torch.nn as nn
+import torch.optim as optim
+
+# Assume: model, train_dataloader, loss_fn, optimizer are defined
+# model = YourModel()
+# loss_fn = nn.CrossEntropyLoss()
+# optimizer = optim.Adam(model.parameters(), lr=0.001)
+# train_dataloader = ...
+
+num_epochs = 10
+model.train() # Set model to training mode
+
+for epoch in range(num_epochs):
+    for batch_idx, (inputs, targets) in enumerate(train_dataloader):
+        # inputs, targets = inputs.to(device), targets.to(device) # Optional: move to GPU
+
+        # 1. Zero gradients
+        optimizer.zero_grad()
+
+        # 2. Forward pass
+        outputs = model(inputs)
+
+        # 3. Calculate loss
+        loss = loss_fn(outputs, targets)
+
+        # 4. Backward pass (compute gradients)
+        loss.backward()
+
+        # 5. Optimizer step (update weights)
+        optimizer.step()
+
+        if batch_idx % 100 == 0: # Print progress
+            print(f"Epoch {epoch}/{num_epochs}, Batch {batch_idx}, Loss: {loss.item()}")
+```
+
+## Model Inspection / Summary
+
+#### Axon
+
+Axon provides helpers in `Axon.Display` to show model summaries, including layer output shapes and parameter counts. It requires an input template.
+
+```elixir
+# Define model and input template
+model = Axon.input("input", shape: {nil, 784}) |> Axon.dense(10)
+input_template = Nx.template({1, 784}, :f32)
+
+# Print summary table
+Axon.Display.as_table(model, input_template) |> IO.puts()
+```
+
+#### PyTorch
+
+Printing a PyTorch model shows its layers. For more detailed summaries including output shapes and parameter counts (similar to Keras' `model.summary()`), use external libraries like `torchinfo`.
+
+```python
+import torch
+import torch.nn as nn
+# Assume `model` is an instantiated nn.Module
+
+# 1. Basic layer structure
+print(model)
+
+# 2.
Using torchinfo (requires pip install torchinfo) +from torchinfo import summary +batch_size = 64 +summary(model, input_size=(batch_size, 784)) # Provide input size +``` \ No newline at end of file diff --git a/mix.exs b/mix.exs index 52b943579..0c8672e88 100644 --- a/mix.exs +++ b/mix.exs @@ -38,11 +38,13 @@ defmodule Axon.MixProject do {:nx, "~> 0.9", nx_opts()}, {:exla, "~> 0.9", [only: :test] ++ exla_opts()}, {:torchx, "~> 0.9", [only: :test] ++ torchx_opts()}, - {:ex_doc, "~> 0.23", only: :docs}, + {:ex_doc, "~> 0.34", only: :docs}, {:table_rex, "~> 3.1 or ~> 4.1", optional: true}, {:kino, "~> 0.7", optional: true}, {:kino_vega_lite, "~> 0.1.7", optional: true}, - {:polaris, "~> 0.1"} + {:polaris, "~> 0.1"}, + {:makeup, "~> 1.2.1", only: :docs}, + {:makeup_syntect, "~> 0.1", only: :docs} ] end @@ -103,6 +105,7 @@ defmodule Axon.MixProject do "guides/training_and_evaluation/writing_custom_metrics.livemd", "guides/training_and_evaluation/writing_custom_event_handlers.livemd", "guides/serialization/onnx_to_axon.livemd", + "guides/cheatsheets/axon_pytorch.cheatmd", # Examples "notebooks/basics/xor.livemd", "notebooks/vision/mnist.livemd", @@ -114,6 +117,7 @@ defmodule Axon.MixProject do "notebooks/generative/fashionmnist_vae.livemd" ], groups_for_extras: [ + "Guides: Cheatsheets": Path.wildcard("guides/cheatsheets/*.cheatmd"), "Guides: Model Creation": Path.wildcard("guides/model_creation/*.livemd"), "Guides: Model Execution": Path.wildcard("guides/model_execution/*.livemd"), "Guides: Training and Evaluation": diff --git a/mix.lock b/mix.lock index b61c0a55c..bfa9cdc6a 100644 --- a/mix.lock +++ b/mix.lock @@ -1,19 +1,22 @@ %{ + "castore": {:hex, :castore, "1.0.12", "053f0e32700cbec356280c0e835df425a3be4bc1e0627b714330ad9d0f05497f", [:mix], [], "hexpm", "3dca286b2186055ba0c9449b4e95b97bf1b57b47c1f2644555879e659960c224"}, "complex": {:hex, :complex, "0.5.0", "af2d2331ff6170b61bb738695e481b27a66780e18763e066ee2cd863d0b1dd92", [:mix], [], "hexpm", "2683bd3c184466cfb94fad74cbfddfaa94b860e27ad4ca1bffe3bff169d91ef1"}, - "earmark_parser": {:hex, :earmark_parser, "1.4.41", "ab34711c9dc6212dda44fcd20ecb87ac3f3fce6f0ca2f28d4a00e4154f8cd599", [:mix], [], "hexpm", "a81a04c7e34b6617c2792e291b5a2e57ab316365c2644ddc553bb9ed863ebefa"}, + "earmark_parser": {:hex, :earmark_parser, "1.4.44", "f20830dd6b5c77afe2b063777ddbbff09f9759396500cdbe7523efd58d7a339c", [:mix], [], "hexpm", "4778ac752b4701a5599215f7030989c989ffdc4f6df457c5f36938cc2d2a2750"}, "elixir_make": {:hex, :elixir_make, "0.8.4", "4960a03ce79081dee8fe119d80ad372c4e7badb84c493cc75983f9d3bc8bde0f", [:mix], [{:castore, "~> 0.1 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: true]}, {:certifi, "~> 2.0", [hex: :certifi, repo: "hexpm", optional: true]}], "hexpm", "6e7f1d619b5f61dfabd0a20aa268e575572b542ac31723293a4c1a567d5ef040"}, - "ex_doc": {:hex, :ex_doc, "0.34.2", "13eedf3844ccdce25cfd837b99bea9ad92c4e511233199440488d217c92571e8", [:mix], [{:earmark_parser, "~> 1.4.39", [hex: :earmark_parser, repo: "hexpm", optional: false]}, {:makeup_c, ">= 0.1.0", [hex: :makeup_c, repo: "hexpm", optional: true]}, {:makeup_elixir, "~> 0.14 or ~> 1.0", [hex: :makeup_elixir, repo: "hexpm", optional: false]}, {:makeup_erlang, "~> 0.1 or ~> 1.0", [hex: :makeup_erlang, repo: "hexpm", optional: false]}, {:makeup_html, ">= 0.1.0", [hex: :makeup_html, repo: "hexpm", optional: true]}], "hexpm", "5ce5f16b41208a50106afed3de6a2ed34f4acfd65715b82a0b84b49d995f95c1"}, + "ex_doc": {:hex, :ex_doc, "0.37.3", "f7816881a443cd77872b7d6118e8a55f547f49903aef8747dbcb345a75b462f9", 
[:mix], [{:earmark_parser, "~> 1.4.42", [hex: :earmark_parser, repo: "hexpm", optional: false]}, {:makeup_c, ">= 0.1.0", [hex: :makeup_c, repo: "hexpm", optional: true]}, {:makeup_elixir, "~> 0.14 or ~> 1.0", [hex: :makeup_elixir, repo: "hexpm", optional: false]}, {:makeup_erlang, "~> 0.1 or ~> 1.0", [hex: :makeup_erlang, repo: "hexpm", optional: false]}, {:makeup_html, ">= 0.1.0", [hex: :makeup_html, repo: "hexpm", optional: true]}], "hexpm", "e6aebca7156e7c29b5da4daa17f6361205b2ae5f26e5c7d8ca0d3f7e18972233"}, "exla": {:hex, :exla, "0.9.0", "e048c7a3d33917c214774a7ea1a0c626eb9de01e3fb2423cf9e2b89ef6dada3a", [:make, :mix], [{:elixir_make, "~> 0.6", [hex: :elixir_make, repo: "hexpm", optional: false]}, {:nimble_pool, "~> 1.0", [hex: :nimble_pool, repo: "hexpm", optional: false]}, {:nx, "~> 0.9.0", [hex: :nx, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.0 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}, {:xla, "~> 0.8.0", [hex: :xla, repo: "hexpm", optional: false]}], "hexpm", "cbd30b54992d0da01a5aaee361a3160fc29de05a9f6c3dbcbd1fa04b4aa72302"}, "fss": {:hex, :fss, "0.1.1", "9db2344dbbb5d555ce442ac7c2f82dd975b605b50d169314a20f08ed21e08642", [:mix], [], "hexpm", "78ad5955c7919c3764065b21144913df7515d52e228c09427a004afe9c1a16b0"}, "kino": {:hex, :kino, "0.14.1", "c499afb1cd0be462feaf0a75c0631aa65aacc545b1c10f431b439b74f104be22", [:mix], [{:fss, "~> 0.1.0", [hex: :fss, repo: "hexpm", optional: false]}, {:nx, "~> 0.1", [hex: :nx, repo: "hexpm", optional: true]}, {:plug, "~> 1.0", [hex: :plug, repo: "hexpm", optional: true]}, {:table, "~> 0.1.2", [hex: :table, repo: "hexpm", optional: false]}], "hexpm", "090aea1aaa267e42e5ac24ee6bc5ed515aecc0a9edb8619aa4ee839201e704aa"}, "kino_vega_lite": {:hex, :kino_vega_lite, "0.1.13", "03c00405987a2202e4b8014ee55eb7f5727691b3f13d76a3764f6eeccef45322", [:mix], [{:kino, "~> 0.7", [hex: :kino, repo: "hexpm", optional: false]}, {:table, "~> 0.1.0", [hex: :table, repo: "hexpm", optional: false]}, {:vega_lite, "~> 0.1.8", [hex: :vega_lite, repo: "hexpm", optional: false]}], "hexpm", "00c72bc270e7b9d3c339f726cdab0012fd3f2fc75e36c7548e0f250fe420fa10"}, - "makeup": {:hex, :makeup, "1.1.2", "9ba8837913bdf757787e71c1581c21f9d2455f4dd04cfca785c70bbfff1a76a3", [:mix], [{:nimble_parsec, "~> 1.2.2 or ~> 1.3", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "cce1566b81fbcbd21eca8ffe808f33b221f9eee2cbc7a1706fc3da9ff18e6cac"}, - "makeup_elixir": {:hex, :makeup_elixir, "0.16.2", "627e84b8e8bf22e60a2579dad15067c755531fea049ae26ef1020cad58fe9578", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}, {:nimble_parsec, "~> 1.2.3 or ~> 1.3", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "41193978704763f6bbe6cc2758b84909e62984c7752b3784bd3c218bb341706b"}, - "makeup_erlang": {:hex, :makeup_erlang, "1.0.1", "c7f58c120b2b5aa5fd80d540a89fdf866ed42f1f3994e4fe189abebeab610839", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}], "hexpm", "8a89a1eeccc2d798d6ea15496a6e4870b75e014d1af514b1b71fa33134f57814"}, - "nimble_parsec": {:hex, :nimble_parsec, "1.4.0", "51f9b613ea62cfa97b25ccc2c1b4216e81df970acd8e16e8d1bdc58fef21370d", [:mix], [], "hexpm", "9c565862810fb383e9838c1dd2d7d2c437b3d13b267414ba6af33e50d2d1cf28"}, + "makeup": {:hex, :makeup, "1.2.1", "e90ac1c65589ef354378def3ba19d401e739ee7ee06fb47f94c687016e3713d1", [:mix], [{:nimble_parsec, "~> 1.4", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", 
"d36484867b0bae0fea568d10131197a4c2e47056a6fbe84922bf6ba71c8d17ce"}, + "makeup_elixir": {:hex, :makeup_elixir, "1.0.1", "e928a4f984e795e41e3abd27bfc09f51db16ab8ba1aebdba2b3a575437efafc2", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}, {:nimble_parsec, "~> 1.2.3 or ~> 1.3", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "7284900d412a3e5cfd97fdaed4f5ed389b8f2b4cb49efc0eb3bd10e2febf9507"}, + "makeup_erlang": {:hex, :makeup_erlang, "1.0.2", "03e1804074b3aa64d5fad7aa64601ed0fb395337b982d9bcf04029d68d51b6a7", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}], "hexpm", "af33ff7ef368d5893e4a267933e7744e46ce3cf1f61e2dccf53a111ed3aa3727"}, + "makeup_syntect": {:hex, :makeup_syntect, "0.1.3", "ae2c3437f479ea50d08d794acaf02a2f3a8c338dd1f757f6b237c42eb27fcde1", [:mix], [{:makeup, "~> 1.2", [hex: :makeup, repo: "hexpm", optional: false]}, {:rustler, "~> 0.36.1", [hex: :rustler, repo: "hexpm", optional: true]}, {:rustler_precompiled, "~> 0.8.2", [hex: :rustler_precompiled, repo: "hexpm", optional: false]}], "hexpm", "a27bd3bd8f7b87465d110295a33ed1022202bea78701bd2bbeadfb45d690cdbf"}, + "nimble_parsec": {:hex, :nimble_parsec, "1.4.2", "8efba0122db06df95bfaa78f791344a89352ba04baedd3849593bfce4d0dc1c6", [:mix], [], "hexpm", "4b21398942dda052b403bbe1da991ccd03a053668d147d53fb8c4e0efe09c973"}, "nimble_pool": {:hex, :nimble_pool, "1.1.0", "bf9c29fbdcba3564a8b800d1eeb5a3c58f36e1e11d7b7fb2e084a643f645f06b", [:mix], [], "hexpm", "af2e4e6b34197db81f7aad230c1118eac993acc0dae6bc83bac0126d4ae0813a"}, "nx": {:hex, :nx, "0.9.0", "03a622a27d93eaaa2d24ff9b812d9f675cc04eb0340ca3dd065674f3642867d3", [:mix], [{:complex, "~> 0.5", [hex: :complex, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.0 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "3810a5a90db0654b6e538430c0fb473a22bfc11b3d02ea7834db493cf3f56153"}, "polaris": {:hex, :polaris, "0.1.0", "dca61b18e3e801ecdae6ac9f0eca5f19792b44a5cb4b8d63db50fc40fc038d22", [:mix], [{:nx, "~> 0.5", [hex: :nx, repo: "hexpm", optional: false]}], "hexpm", "13ef2b166650e533cb24b10e2f3b8ab4f2f449ba4d63156e8c569527f206e2c2"}, + "rustler_precompiled": {:hex, :rustler_precompiled, "0.8.2", "5f25cbe220a8fac3e7ad62e6f950fcdca5a5a5f8501835d2823e8c74bf4268d5", [:mix], [{:castore, "~> 0.1 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: false]}, {:rustler, "~> 0.23", [hex: :rustler, repo: "hexpm", optional: true]}], "hexpm", "63d1bd5f8e23096d1ff851839923162096364bac8656a4a3c00d1fff8e83ee0a"}, "table": {:hex, :table, "0.1.2", "87ad1125f5b70c5dea0307aa633194083eb5182ec537efc94e96af08937e14a8", [:mix], [], "hexpm", "7e99bc7efef806315c7e65640724bf165c3061cdc5d854060f74468367065029"}, "table_rex": {:hex, :table_rex, "4.1.0", "fbaa8b1ce154c9772012bf445bfb86b587430fb96f3b12022d3f35ee4a68c918", [:mix], [], "hexpm", "95932701df195d43bc2d1c6531178fc8338aa8f38c80f098504d529c43bc2601"}, "telemetry": {:hex, :telemetry, "1.3.0", "fedebbae410d715cf8e7062c96a1ef32ec22e764197f70cda73d82778d61e7a2", [:rebar3], [], "hexpm", "7015fc8919dbe63764f4b4b87a95b7c0996bd539e0d499be6ec9d7f3875b79e6"},