
Commit c164f6d

Merge pull request #31 from imohitmayank/april-2024
NLP>Mamba Page Added + Minor fixes
2 parents a1334cc + 3fc1d50 commit c164f6d

File tree

9 files changed: +120 -2 lines changed

docs/imgs/nlp_mamba_archi.png (73.8 KB)
docs/imgs/nlp_mamba_efficiency.png (60.5 KB)
docs/imgs/nlp_mamba_results.png (157 KB)
docs/imgs/nlp_mamba_scaling.png (76.8 KB)
docs/imgs/nlp_mamba_ssm.png (59 KB)
docs/imgs/nlp_mamba_sssm.png (82.8 KB)

docs/natural_language_processing/mamba.md

Lines changed: 117 additions & 0 deletions
@@ -0,0 +1,117 @@
## Introduction

Mamba is a new architecture designed to address a longstanding challenge in sequence modeling: the trade-off between efficiency and accuracy. Sequence modeling tasks involve analyzing ordered sequences of data, such as text, audio, or video. These sequences can vary greatly in length, and processing them effectively requires models that are both powerful and computationally efficient.

Traditionally, [recurrent neural networks (RNNs)](./lstm_gru_rnn.md) were the go-to architecture for sequence modeling. However, RNNs have limitations: they struggle to capture long-range dependencies between elements in a sequence, which leads to accuracy problems.

Transformers emerged as a powerful alternative to RNNs, addressing some of their shortcomings. Transformers employ an attention mechanism that allows them to focus on specific parts of the sequence, improving their ability to capture long-range dependencies. However, Transformers come with their own drawbacks: self-attention is computationally expensive and memory-intensive, especially for very long sequences, since its cost grows quadratically with sequence length.

Mamba builds upon State Space Models (SSMs), a less common type of architecture for sequence modeling. SSMs offer advantages in terms of speed and memory usage compared to Transformers. However, they haven't been able to match the accuracy of Transformers on various tasks. Mamba addresses this accuracy gap by introducing several innovations to SSMs, making them competitive with Transformers while retaining their efficiency benefits.

## State Space Models (SSMs)

<figure markdown>
![](../imgs/nlp_mamba_ssm.png)
<figcaption>View of a continuous, time-invariant SSM *(Source: https://en.wikipedia.org/wiki/State-space_representation)* [3]</figcaption>
</figure>

In operation, SSMs are quite similar to RNNs: they are an architecture designed specifically for sequence modeling tasks. They work in a step-by-step fashion, iteratively processing each element (token) in a sequence. At each step, an SSM considers two pieces of information:

* The previous token's hidden state: This is a compressed representation of all the information processed so far in the sequence. It captures the context of the sequence up to the current token.
* The current input token's embedding: An embedding is a dense vector representation of the token. It encodes the meaning of the individual token within a specific vocabulary.

By combining these two pieces of information, SSMs can learn how the current token relates to the preceding tokens in the sequence. This allows the model to build up a deeper understanding of the sequence as it processes it element by element.

At their core, SSMs rely on four sets of matrices and parameters ($\Delta$, $A$, $B$, and $C$) to process the input sequence. Each plays a specific role in transforming and combining information during the processing steps:

- $\Delta$ (Delta): This parameter controls the discretization step, which is necessary because SSMs are derived from continuous differential equations.
- $A$ and $B$: These matrices determine how much information is propagated from the previous hidden state and the current input embedding to the new hidden state, respectively.
- $C$: This matrix transforms the final hidden state into an output representation that can be used for various tasks.

Here's a breakdown of the processing steps within SSMs *(a minimal code sketch follows the list)*:

* **Discretization Step:** A crucial step in SSMs involves modifying the $A$ and $B$ matrices using a specific formula based on the $\Delta$ parameter. Because SSMs are derived from continuous differential equations, converting them to a discrete form requires adjusting these matrices to account for the change in how information is processed. In simpler terms, discretization essentially chops up the continuous flow of information into discrete chunks that the model can handle more efficiently.

$$
\overline{A} = \exp(\Delta A), \qquad
\overline{B} = (\Delta A)^{-1} \left(\exp(\Delta A) - I\right) \cdot \Delta B
$$

* **Linear RNN-like Processing:** Similar to recurrent neural networks (RNNs), SSMs process tokens one by one. At each step, they use a linear combination of the previous hidden state and the current input embedding to compute a new hidden state. This hidden state captures the essential information about the sequence seen so far. Unlike traditional RNNs, which can struggle with vanishing or exploding gradients in long sequences, SSMs are designed to address these issues and can handle longer sequences more effectively.
* **Final Representation:** The final representation for each token is obtained by multiplying the hidden state with the matrix $C$. This final representation can then be used for various tasks, such as predicting the next word in a sequence or classifying a DNA sequence.
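
To make these steps concrete: at each position $t$, the discretized model computes $h_t = \overline{A}\, h_{t-1} + \overline{B}\, x_t$ followed by $y_t = C\, h_t$. Below is a minimal NumPy/SciPy sketch of a plain (non-selective) SSM with toy, randomly chosen parameters. It is purely illustrative: the dimensions, the step size, and the use of a dense $A$ matrix are assumptions chosen for readability, not how an optimized implementation such as Mamba stores or scans its state.

```python
import numpy as np
from scipy.linalg import expm

# Toy sizes (assumptions): hidden state size N, sequence length L.
N, L = 4, 10
rng = np.random.default_rng(0)

# Continuous-time SSM parameters with illustrative random values.
A = -np.abs(rng.normal(size=(N, N)))   # state transition matrix
B = rng.normal(size=(N, 1))            # input matrix
C = rng.normal(size=(1, N))            # output matrix
delta = 0.1                            # discretization step (Delta)

# Discretization, matching the formula above:
#   A_bar = exp(Delta * A)
#   B_bar = (Delta * A)^{-1} (exp(Delta * A) - I) * Delta * B
A_bar = expm(delta * A)
B_bar = np.linalg.inv(delta * A) @ (A_bar - np.eye(N)) @ (delta * B)

# Linear RNN-like recurrence over a scalar input sequence x_1..x_L:
#   h_t = A_bar h_{t-1} + B_bar x_t,    y_t = C h_t
x = rng.normal(size=L)
h = np.zeros((N, 1))
outputs = []
for t in range(L):
    h = A_bar @ h + B_bar * x[t]       # update hidden state
    outputs.append((C @ h).item())     # final representation for token t
print(outputs)
```

Practical implementations keep $A$ diagonal (or otherwise structured) and replace the Python loop with a parallel scan or a convolution, which is where much of the speed of SSMs comes from.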
While SSMs offer advantages in terms of speed and memory efficiency, particularly when dealing with long sequences, their inflexibility in processing inputs limits their accuracy. Unlike Transformers that can selectively focus on important parts of the sequence using attention mechanisms, regular SSMs treat all tokens equally. This can hinder their ability to capture complex relationships within the sequence data.
## Selective State Space Models (SSSMs)

Mamba builds upon SSMs by introducing Selective SSMs. This innovation allows the model to prioritize specific elements within the sequence. Imagine selectively focusing on important words in a sentence while processing it. Regular SSMs apply the same processing logic *(i.e., the same $\Delta$, $A$, $B$ and $C$)* to every element, while Selective SSMs can learn to pay closer attention to crucial parts of the sequence.

<figure markdown>
![](../imgs/nlp_mamba_sssm.png)
<figcaption>Source: [1]</figcaption>
</figure>

Selective SSMs achieve this selective focus by dynamically adjusting the processing based on the current element. They employ additional trainable parameters that determine how much weight to assign to each element in the sequence. This weighting mechanism can be thought of as an attention mechanism, similar to what is found in Transformers. However, unlike Transformers, which rely on computationally expensive self-attention calculations, Selective SSMs achieve a similar effect through a more efficient linear operation.

Here's a deeper dive into how Selective SSMs work *(a code sketch follows the list)*:

1. **Linear Layers:** Separate linear layers are introduced to compute these element-wise weights. Each element in the sequence is passed through a dedicated linear layer, resulting in a weight specific to that element.
2. **Weighting:** The calculated weights are then used to modulate the influence of each element on the hidden state. Elements deemed more important by the model will have a greater impact on how the hidden state evolves.
3. **Learning the Importance:** Through the training process, the model learns to identify the elements that are most informative for the task at hand. This allows the model to focus its processing power on the crucial parts of the sequence, while efficiently handling less important elements.
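
In code, the "selectivity" amounts to computing $\Delta$, $B$ and $C$ from each token with small linear layers instead of keeping them fixed. The sketch below is a simplified PyTorch illustration: the class name `SelectiveSSMSketch`, the layer names, and the shortcut $\overline{B} \approx \Delta B$ are all assumptions for readability, not the reference Mamba implementation (which uses a fused, hardware-aware scan and additional parameterization tricks).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveSSMSketch(nn.Module):
    """Illustrative sketch only: Delta, B and C become functions of each token,
    so every position gets its own discretized transition."""

    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        # Per-channel state matrix A (made negative for stability).
        self.A_log = nn.Parameter(torch.randn(d_model, d_state))
        # Input-dependent ("selective") parameters, one linear layer each.
        self.to_delta = nn.Linear(d_model, d_model)
        self.to_B = nn.Linear(d_model, d_state)
        self.to_C = nn.Linear(d_model, d_state)

    def forward(self, x):                                   # x: (batch, length, d_model)
        A = -torch.exp(self.A_log)                          # (d_model, d_state)
        delta = F.softplus(self.to_delta(x))                # (b, l, d_model), token-dependent step
        B = self.to_B(x)                                    # (b, l, d_state)
        C = self.to_C(x)                                    # (b, l, d_state)

        # Per-token discretization (simplified: B_bar ~ delta * B).
        A_bar = torch.exp(delta.unsqueeze(-1) * A)          # (b, l, d_model, d_state)
        B_bar = delta.unsqueeze(-1) * B.unsqueeze(2)        # (b, l, d_model, d_state)

        # Sequential scan; the real Mamba replaces this loop with a hardware-aware scan.
        h = x.new_zeros(x.shape[0], x.shape[2], A.shape[1]) # (b, d_model, d_state)
        ys = []
        for t in range(x.shape[1]):
            h = A_bar[:, t] * h + B_bar[:, t] * x[:, t].unsqueeze(-1)
            ys.append((h * C[:, t].unsqueeze(1)).sum(-1))   # (b, d_model)
        return torch.stack(ys, dim=1)                       # (b, l, d_model)

# Usage sketch:
# y = SelectiveSSMSketch(d_model=64)(torch.randn(2, 32, 64))
```

Because $\overline{A}$ and $\overline{B}$ now differ per token, the model can effectively ignore some inputs (small $\Delta$) and strongly incorporate others (large $\Delta$), which is what gives Selective SSMs their attention-like behavior.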
## Mamba's Architecture

A Mamba layer consists of several components that work together to achieve efficient and accurate sequence modeling *(a skeleton sketch follows the list)*:

<figure markdown>
![](../imgs/nlp_mamba_archi.png)
<figcaption>Source: [1]</figcaption>
</figure>

* **Increased Dimensionality:** The input is first projected to a higher dimensional space using a linear layer. This increases the network's capacity to learn complex relationships between elements in the sequence. A higher dimensional space allows for more intricate feature representations, enabling the model to capture richer information from the data.
* **Convolution Layer:** This layer facilitates information flow between different dimensions within the higher-dimensional space. Convolutional operations are adept at capturing local patterns and dependencies between elements. In the context of Mamba, the convolution layer helps the model identify how nearby elements in the sequence relate to each other and influence the hidden state.
* **Selective SSM Module:** This core component processes the sequence using the Selective SSM approach described earlier. The selective SSM module dynamically computes weights for each element, allowing the model to focus on the most informative parts of the sequence. This selective processing contributes to Mamba's efficiency, particularly for long sequences.
* **Gated Multiplication:** This step modulates the influence of the current element on the hidden state based on its similarity to the hidden state itself. A gating mechanism essentially controls the flow of information. In Mamba, the gated multiplication amplifies the impact of elements that are similar to the current state of the model's understanding of the sequence, while reducing the influence of elements that are dissimilar. This helps the model refine its understanding of the sequence in a targeted manner.
* **Dimensionality Reduction:** The final output of a Mamba layer is projected back to the original dimension using another linear layer. This reduces the dimensionality of the representation to a more manageable size for subsequent layers in the network architecture.
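
Putting the pieces together, a single layer might look roughly like the following sketch. It reuses the illustrative `SelectiveSSMSketch` module from above; the expansion factor, kernel size, activation choice and the omission of the surrounding residual connection and normalization are assumptions made for brevity, not the official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MambaBlockSketch(nn.Module):
    """Rough skeleton of one Mamba layer as described above (illustrative only)."""

    def __init__(self, d_model: int, expand: int = 2, conv_kernel: int = 4):
        super().__init__()
        d_inner = expand * d_model
        self.in_proj = nn.Linear(d_model, 2 * d_inner)      # up-projection: SSM branch + gate branch
        self.conv = nn.Conv1d(d_inner, d_inner, conv_kernel,
                              groups=d_inner, padding=conv_kernel - 1)  # causal depthwise conv
        self.ssm = SelectiveSSMSketch(d_inner)               # selective SSM module
        self.out_proj = nn.Linear(d_inner, d_model)          # down-projection

    def forward(self, u):                                    # u: (batch, length, d_model)
        x, gate = self.in_proj(u).chunk(2, dim=-1)           # increased dimensionality
        x = self.conv(x.transpose(1, 2))[..., : u.shape[1]]  # local mixing across nearby tokens
        x = F.silu(x.transpose(1, 2))
        x = self.ssm(x)                                      # selective state space scan
        x = x * F.silu(gate)                                 # gated multiplication
        return self.out_proj(x)                              # back to the original dimension

# Usage sketch:
# out = MambaBlockSketch(d_model=64)(torch.randn(2, 32, 64))
```

In the paper, such blocks are stacked with residual connections and normalization layers to form the full Mamba model.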
## Mamba's Impact and Results
Mamba demonstrates promising results, particularly for long sequences:

* **Speed:** Mamba achieves high inference throughput, and its advantage over Transformers grows with longer sequence lengths and larger batch sizes.

<figure markdown>
![](../imgs/nlp_mamba_efficiency.png)
<figcaption>Source: [1]</figcaption>
</figure>

* **Performance:** Mamba outperforms Transformer-based models *(even 2x bigger ones!)* on various tasks.

<figure markdown>
![](../imgs/nlp_mamba_scaling.png)
<figcaption>Source: [1]</figcaption>
</figure>

<figure markdown>
![](../imgs/nlp_mamba_results.png)
<figcaption>Source: [1]</figcaption>
</figure>

!!! Hint
    As Mamba performs quite well on long sequences *(evident from its performance on the DNA dataset)*, there is an interesting line of work in MambaByte [4], which is a token-free adaptation of the Mamba SSM trained autoregressively on byte sequences. This model is able to achieve state-of-the-art performance on byte-level language modeling tasks.

## Conclusion
Mamba's emergence demonstrates the continuous evolution of deep learning architectures. With its focus on speed, memory efficiency, and scalability for long sequences, Mamba offers a compelling alternative to Transformers and paves the way for further exploration in sequence modeling techniques. That said, Mamba is a relatively new architecture, and further research is needed to fully understand its capabilities compared to Transformers. Nevertheless, Mamba's innovative approach to sequence modeling holds promise for a wide range of applications, particularly those involving long sequences of data.
## References
[1] Original Paper - [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://www.sankshep.co.in/PDFViewer/https%3A%2F%2Farxiv.org%2Fpdf%2F2312.00752.pdf)
[2] Video by [AI Coffee Break with Letitia](https://www.youtube.com/@AICoffeeBreak) -- [MAMBA and State Space Models explained | SSM explained](https://www.youtube.com/watch?v=vrF3MtGwD0Y)
[3] [Introduction to State Space Models (SSM)](https://huggingface.co/blog/lbourdois/get-on-the-ssm-train)
[4] Paper - [MambaByte: Token-free Selective State Space Model](https://arxiv.org/abs/2401.13660)

docs/natural_language_processing/transformer.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ Transformers

  - And thats it :smile: Well at least from 10k feet :airplane:. Looking at the technicalities, the process drills down to,
  - Every token is not used as-it-is, but first converted to key, value and query format using linear projections. We have key, value and query weights denoted as $W_k$, $W_v$ and $W_q$. Each input token's representation is first multipled with these weights to get $k_i$, $v_i$ and $q_i$.
- - Next the query of one token is dot product with the keys of all token. On applying softmax to the output, we get a probability score of importance of every token for the the given token.
+ - Next the query of one token is dot product with the keys of all token. On applying softmax to the output, we get a probability score of importance of every token for the given token.
  - Finally, we do weighted sum of values of all keys with this score and get the vector representation of the current token.
  - It is easy to understand the process while looking at one token at a time, but in reality it is completely vectorized and happens for all the tokens at the same time. The formula for the self-attention is shown below, where Q, K and V are the matrices you get on multiplication of all input tokens with the query, key and value weights.

mkdocs.yml

Lines changed: 2 additions & 1 deletion
@@ -71,7 +71,7 @@ nav:

  - 'Natural Language Processing':
  - 'Interview Questions' : 'natural_language_processing/interview_questions.md'
- - 'Models':
+ - 'Architectures/Models':
  - 'Word2Vec': 'natural_language_processing/word2vec.md'
  - 'LSTM, GRU & RNN': 'natural_language_processing/lstm_gru_rnn.md'
  - 'Transformers': 'natural_language_processing/transformer.md'
@@ -82,6 +82,7 @@ nav:
  - 'natural_language_processing/FlanModels.md'
  # - 'ChatGPT': 'natural_language_processing/chatgpt.md'
  - 'LLaMA': 'natural_language_processing/llama.md'
+ - 'Mamba': 'natural_language_processing/mamba.md'
  - 'Tasks':
  - 'natural_language_processing/paraphraser.md'
  - 'natural_language_processing/text_similarity.md'
