
masked-attention

Here are 5 public repositories matching this topic...


A complete implementation of a Decoder-Only Transformer (GPT-style) built using PyTorch, without relying on high-level abstractions. This implementation includes all core components: token embeddings, positional embeddings, multi-head self-attention, feedforward networks, causal masking, and output logits generation.

  • Updated Feb 18, 2026
  • Python
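The causal masking this repository mentions is the defining feature of the masked-attention topic: each position in a decoder may only attend to itself and earlier positions. A minimal NumPy sketch of the idea (function names here are illustrative, not taken from the repository):

```python
import numpy as np

def causal_mask(seq_len):
    # Lower-triangular boolean mask: position i may attend to positions <= i.
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def masked_softmax(scores, mask):
    # Disallowed (future) positions get -inf, so softmax assigns them zero weight.
    scores = np.where(mask, scores, -np.inf)
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(scores)
    return exp / exp.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 4))          # raw attention scores for 4 tokens
weights = masked_softmax(scores, causal_mask(4))
```

Each row of `weights` sums to 1, and every entry above the diagonal is exactly zero, which is what prevents the model from "seeing the future" during training.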

A complete implementation of the "Attention Is All You Need" Transformer model from scratch using PyTorch. This project focuses on building and training a Transformer for neural machine translation (English-to-Italian) on the OpusBooks dataset.

  • Updated Nov 8, 2025
  • Python
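The attention mechanism at the core of the "Attention Is All You Need" architecture is scaled dot-product attention, softmax(QKᵀ/√d_k)V, with an optional mask for the decoder. A hedged NumPy sketch (the function signature is illustrative, not the repository's API):

```python
import numpy as np

def attention(Q, K, V, mask=None):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    if mask is not None:
        # Mask is boolean; False entries are excluded from the softmax.
        scores = np.where(mask, scores, -np.inf)
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(1)
Q = rng.normal(size=(2, 5, 8))   # (batch, seq_len, d_k)
K = rng.normal(size=(2, 5, 8))
V = rng.normal(size=(2, 5, 8))
out = attention(Q, K, V)
```

The 1/√d_k scaling keeps the dot products from growing with the key dimension, which would otherwise push the softmax into regions with vanishing gradients.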
