
Commit 05e8fb2

corrected some spellings in notes
1 parent ae3ad41 commit 05e8fb2

File tree

1 file changed: +58 -57 lines changed


beginner_source/autoencoders_tutorial.py

Lines changed: 58 additions & 57 deletions
@@ -1,16 +1,19 @@
 """
-Autoencoders: A Deep Dive
+Autoencoder: A Deep Dive
 =========================

 Introduction
 ~~~~~~~~~~~~

-Autoencoders are a type of artificial neural network used for
-unsupervised learning. They are designed to learn efficient data codings
-by projecting the input data into a lower-dimensional latent space and
-then reconstructing the original data from this representation. This
-process forces the autoencoder to capture the most important features of
-the input data.
+Autoencoder represent a class of artificial neural networks
+utilized for unsupervised learning tasks. They are engineered
+to learn efficient data encodings by mapping input data into
+a lower-dimensional latent space, subsequently reconstructing
+the original data from this latent representation. This
+methodology compels the autoencoder to encapsulate the most
+salient features of the input data, thereby enhancing the
+efficiency and effectiveness of data compression and feature
+extraction.

 Architecture of an Autoencoder
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -25,37 +28,36 @@
 The goal of training is to minimize the reconstruction error between the
 input and the reconstructed output.

-Types of Autoencoders
+Types of Autoencoder
 ~~~~~~~~~~~~~~~~~~~~~

-There are several variations of autoencoders:
+There are several variations of autoencoder:

-- **Undercomplete Autoencoders:** These have a smaller latent space
-than the input space, forcing the network to learn a compressed
-representation of the data.
-- **Denoising Autoencoders:** These are trained on corrupted input
-data, learning to reconstruct the original clean data.
-- **Variational Autoencoders (VAEs):** These introduce probabilistic
-elements into the encoding process, allowing for generating new data
-samples.
-- **Convolutional Autoencoders (CAEs):** These use convolutional
-layers, making them suitable for image data.
+- **Denoising Autoencoder:** These are trained on corrupted
+input data, learning to reconstruct the original clean data.

-Applications of Autoencoders
+- **Variational Autoencoder:** These introduce
+probabilistic elements into the encoding process, allowing
+for generating new data samples.
+
+- **Convolutional Autoencoder (CAE):** These use convolutional
+layers, making them suitable for image data.
+
+Applications of Autoencoder
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Autoencoders have a wide range of applications:
+Autoencoder have a wide range of applications:

 - **Dimensionality Reduction:** By projecting data into a
-lower-dimensional space, autoencoders can be used for visualization
+lower-dimensional space, autoencoder can be used for visualization
 and feature extraction.
-- **Image Denoising:** Denoising autoencoders can effectively remove
+- **Image Denoising:** Denoising autoencoder can effectively remove
 noise from images.
-- **Anomaly Detection:** Autoencoders can be used to identify unusual
+- **Anomaly Detection:** Autoencoder can be used to identify unusual
 data points by measuring reconstruction errors.
-- **Image Generation:** VAEs can generate new, realistic images based
+- **Image Generation:** Variational Autoencoder can generate new, realistic images based
 on the learned latent space distribution.
-- **Data Compression:** Undercomplete autoencoders can be used for data
+- **Data Compression:** Undercomplete autoencoder can be used for data
 compression.

 PyTorch Implementation
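For readers skimming the changed prose above: the encoder-to-latent-to-decoder structure it describes can be sketched in a few lines of PyTorch. This is a generic illustration, not code from beginner_source/autoencoders_tutorial.py; the class name, layer sizes, and dummy batch below are assumptions made for the example.

import torch
from torch import nn

class TinyAutoencoder(nn.Module):
    # Illustrative only: not the tutorial's actual model.
    def __init__(self, n_features=784, code_size=32):
        super().__init__()
        # Encoder compresses the input into a smaller latent code.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Linear(128, code_size)
        )
        # Decoder maps the latent code back to the original dimensionality.
        self.decoder = nn.Sequential(
            nn.Linear(code_size, 128), nn.ReLU(), nn.Linear(128, n_features), nn.Sigmoid()
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
criterion = nn.MSELoss()  # reconstruction error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(16, 784)        # dummy batch of flattened images in [0, 1]
loss = criterion(model(x), x)  # the target is the input itself (unsupervised)
loss.backward()
optimizer.step()

Training then amounts to repeating this reconstruction step over batches until the error stops improving, which is the objective the tutorial text describes.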
@@ -86,7 +88,7 @@ def make_dataloader(data_, batch_size: int):
 """Helper function to convert datasets to batches."""
 batch_size = 32

-# Make the DataLoader Object
+# Make the Data loader Object
 train_loader = torch.utils.data.DataLoader(
 data_, batch_size=batch_size, shuffle=True, num_workers=2
 )
@@ -142,7 +144,7 @@ def load_cifar_data():
 return load_batch_data("cifar")

 def make_model(model_object, lr_rate=0.001, compress_=None):
-"""Make all of the needed obects for training.
+"""Make all of the needed objects for training.

 Args:
 model_object:
@@ -222,7 +224,7 @@ def train_model(
 # zero the parameter gradients
 optimizer_obj.zero_grad()

-# Find the output of the Nerual Net
+# Find the output of the Neural Net
 # Forward Pass
 logits = model_obj(batches)

@@ -316,7 +318,7 @@ def test_cifar(cifar_model, data_loader_):
 dataiter = iter(data_loader_)
 images, labels = next(dataiter)

-# show images by cinverting batches to grids
+# show images by converting batches to grids
 image_show(images, "Original Image")

 cifar_model.eval()
@@ -506,9 +508,9 @@ def forward(self, x):
 # representation.
 #
 # **Decoder:** \* Takes the latent space representation as input. \*
-# Projects it back to the original feature map size using a linear layer
-# and unflattening. \* Applies a series of transposed convolutional layers
-# with LeakyReLU activations to reconstruct the image. \* Uses a sigmoid
+# Projects it back to the original feature map size using a linear
+# and unflatten layer. \* Applies a series of transposed convolutional layers
+# with LeakyReLU activations to reconstruct the image. \* Uses an
 # activation function to output the reconstructed image with pixel values
 # between 0 and 1.
 #
@@ -575,34 +577,34 @@ def forward(self, x):


 ######################################################################
-# Autoencoders for Data Noise Reduction
+# Autoencoder for Data Noise Reduction
 # -------------------------------------
 #
-# Autoencoders have emerged as a powerful tool for mitigating noise in
+# Autoencoder have emerged as a powerful tool for mitigating noise in
 # various data modalities. By training a neural network to reconstruct
 # clean data from noisy inputs, these models effectively learn to filter
 # out unwanted disturbances.
 #
-# A key advantage of autoencoders lies in their ability to capture
+# A key advantage of autoencoder lies in their ability to capture
 # complex, non-linear relationships within data. This enables them to
 # effectively remove noise while preserving essential features. Moreover,
-# autoencoders are unsupervised learning models, requiring only unlabeled
+# autoencoder are unsupervised learning models, requiring only unlabeled
 # data for training, making them versatile for a wide range of
 # applications.
 #
-# By effectively removing noise, autoencoders can significantly enhance
+# By effectively removing noise, autoencoder can significantly enhance
 # the performance of downstream machine learning models, leading to
 # improved accuracy and robustness.
 #
-# Lets introduce some noise to the picture and see how our model is
-# working to regenrate the output withouht noise.
+# Let's introduce some noise to the image and evaluate how our model
+# performs in reconstructing the output without noise.
 #

 noisy_test(train_loader, model_cnn, linear=False, noise_intensity=0.3)


 ######################################################################
-# We have added a lot of noise to our input data and our model was abe to
+# We have added a lot of noise to our input data and our model was able to
 # reduce many of them and find the general shape of our original image.
 #

@@ -611,11 +613,11 @@ def forward(self, x):
 # CIFAR 10
 # ========
 #
-# We will try to use the autoencoders with CIFAR10 dataset. This dataset
+# We will try to use the autoencoder with CIFAR10 dataset. This dataset
 # consists of color images with 3 channels and 32*32 size.
 #
 # Since the images in this dataset has more variety and also has colors in
-# them we need to use a bigger model to be able to distinguisg between
+# them we need to use a bigger model to be able to distinguish between
 # pattern and also reproduce the given image with a low loss.
 #

@@ -638,10 +640,10 @@ def forward(self, x):
 dataiter = iter(cifar_loader)
 images, labels = next(dataiter)

-# show images by cinverting batches to grids
+# show images by converting batches to grids
 image_show(images, "Original image")

-# We use a similar architectur as before just tweaking some numbers for a bigger model
+# We use a similar architecture as before just tweaking some numbers for a bigger model
 # since these pictures has 3 channels and we need to compress more data in our model
 # We also add some padding to take into account the information that is stored on the edges of the pictures.
 class AutoencoderCNNCIF(nn.Module):
@@ -685,15 +687,14 @@ def forward(self, x):


 ######################################################################
-# Our CNN model has been able to recontsruct mostly many of the details of
-# the pictures, Although the output are a bit blury.
-#
-# We can try and add other layers to the model in order to increase its
-# ability to find the patterns in data and preserve them while compressing
-# the pictures.
-#
-# Another reason that our model is generating blury images could be the
-# ``code layer``, If it is small for this type of data it could lose some
-# details and in recontructing we won't be able to reover that specific
-# data.
-#
+# Our CNN model has successfully reconstructed many details
+# of the images, though the outputs remain somewhat blurry.
+#
+# We should consider adding additional layers to the model
+# to enhance its ability to detect and preserve patterns in
+# the data during compression.
+#
+# Another potential cause of the blurry images is the size
+# of the "code layer." If it is too small for this type of
+# data, it may lose crucial details, making it difficult to
+# recover specific information during reconstruction.
