chapters/en/unit3/vision-transformers/vision-transformers-for-image-classification.mdx
As the Transformers architecture scaled well in Natural Language Processing, the same architecture was applied to images by creating small patches of the image and treating them as tokens. The result was a Vision Transformer (ViT). Before we get started with transfer learning / fine-tuning concepts, let's compare Convolutional Neural Networks (CNNs) with Vision Transformers.
## Vision Transformer (ViT): A Summary
To summarize, in a Vision Transformer, images are reorganized into 2D grids of patches, and the model is trained on those patches.
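As a rough illustration, here is a minimal sketch of that reorganization, assuming PyTorch and the usual ViT-Base settings (a 224×224 RGB image, 16×16 patches, and 768-dimensional embeddings): the image is cut into non-overlapping patches, each patch is flattened, and a linear projection turns it into a token embedding.

```python
import torch

image = torch.randn(1, 3, 224, 224)  # (batch, channels, height, width)
patch_size = 16
embed_dim = 768

# Cut the image into non-overlapping 16x16 patches: shape becomes (1, 196, 3*16*16)
patches = image.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
patches = patches.permute(0, 2, 3, 1, 4, 5).flatten(1, 2).flatten(2)

# A linear projection turns each flattened patch into a token embedding,
# analogous to word embeddings in NLP.
projection = torch.nn.Linear(3 * patch_size * patch_size, embed_dim)
tokens = projection(patches)
print(tokens.shape)  # torch.Size([1, 196, 768]) -> a sequence of 196 patch tokens
```

In the full model, a learnable classification token and position embeddings are added to this sequence before it enters the Transformer encoder.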
But there is a catch! Convolutional Neural Networks (CNNs) are designed with assumptions that Vision Transformers lack. These assumptions are based on how we, as humans, perceive objects in images, and they are described in the following section.
## What are the differences between CNNs and Vision Transformers?
### Inductive Bias
Inductive bias is a term used in machine learning to describe the set of assumptions that a learning algorithm uses to make predictions. In simpler terms, inductive bias is like a shortcut that helps a machine learning model make educated guesses based on the information it has seen so far.
Here are a couple of inductive biases we observe in CNNs:
- Translational Equivariance: an object can appear anywhere in the image, and CNNs can detect its features.
- Locality: pixels in an image interact mainly with their surrounding pixels to form features.
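As a small illustration of these two biases, here is a sketch (assuming PyTorch; the toy image and kernel are random) showing that a convolution only looks at a local neighbourhood and that shifting the image simply shifts the resulting feature map:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
image = torch.randn(1, 1, 8, 8)    # a tiny single-channel image
kernel = torch.randn(1, 1, 3, 3)   # locality: each output value only sees a 3x3 neighbourhood

# Circular padding keeps the comparison exact at the image borders.
conv = lambda x: F.conv2d(F.pad(x, (1, 1, 1, 1), mode="circular"), kernel)

# Shift the image two pixels to the right (circularly) and convolve both versions.
shifted = torch.roll(image, shifts=2, dims=3)
out, out_shifted = conv(image), conv(shifted)

# Translational equivariance: the feature map of the shifted image
# is just the shifted feature map of the original image.
print(torch.allclose(out_shifted, torch.roll(out, shifts=2, dims=3)))  # True
```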
CNN models exploit these two biases very well, while ViTs do not build them in. That is why, up to a certain dataset size, CNNs actually perform better than ViTs. But ViTs have another strength!
Because the Transformer architecture is (mostly) composed of different kinds of linear functions, ViTs are highly scalable. That scalability, in turn, allows ViTs to make up for the two missing inductive biases above when trained on massive amounts of data!
### But how can everyone get access to massive datasets?
It's not feasible for everyone to train a Vision Transformer on millions of images to get good performance. Instead, one can use openly available model weights from places such as the [Hugging Face Hub](https://huggingface.co/models?sort=trending).
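For example, here is a minimal sketch of loading a pre-trained ViT checkpoint from the Hub with the `transformers` library and classifying a single image; the checkpoint name and image URL are just illustrative examples, so swap in your own:

```python
import requests
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification

# Download pre-trained weights from the Hugging Face Hub.
model_name = "google/vit-base-patch16-224"
processor = ViTImageProcessor.from_pretrained(model_name)
model = ViTForImageClassification.from_pretrained(model_name)

# Classify an example image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```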
What do you do with the pre-trained model? You can apply transfer learning and fine-tune it!
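As a minimal sketch of that idea (assuming the `transformers` library; the checkpoint name and the number of classes are just example values), you can load the pre-trained backbone with a fresh classification head sized for your own labels, and optionally freeze the backbone so that only the new head is trained:

```python
from transformers import ViTForImageClassification

num_labels = 10  # example value: pretend your own dataset has 10 classes

# Reuse the pre-trained backbone; a new classification head is initialized
# to match the number of labels in your dataset (transfer learning).
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=num_labels,
)

# Optionally freeze the backbone so only the new head is fine-tuned.
for param in model.vit.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```

From here, the model can be trained on your labelled images with a regular PyTorch training loop or the `Trainer` API.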