
Commit e9bbcff

Update and rename readme.md to README.md
1 parent ea6603d commit e9bbcff

File tree

2 files changed: +55 -12 lines changed

Recommendation Models/sentenceAutoCompletion/README.md

Lines changed: 55 additions & 0 deletions
@@ -0,0 +1,55 @@
## Sentence Auto-Completion

This project implements a sentence auto-completion model using a deep learning approach, specifically LSTM (Long Short-Term Memory) networks from the TensorFlow/Keras library. The goal is to predict the next word in a sequence of text, providing automatic sentence-completion functionality.

### Project Structure

```
├── SentenceAutoCompletion.ipynb   # Jupyter notebook containing the entire implementation
├── README.md                      # Project overview and instructions
└── holmes.txt                     # Input text file used for training the model
```

### Model Overview

The project builds a sentence auto-completion model with the following components (a minimal architecture sketch appears below):

- **LSTM-based model**: Uses a recurrent neural network (RNN) with LSTM layers to predict the next word in a sequence of text.
- **Tokenizer and padding**: Text data is tokenized, and sequences are padded so the neural network receives a uniform input size.
- **Bidirectional LSTM**: A bidirectional LSTM captures both past and future context in the text sequences.

The training text is taken from *Project Gutenberg* and is preprocessed to remove special characters, emojis, and extra spaces.
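
A minimal sketch of such a model, assuming illustrative values for the vocabulary size, embedding width, and LSTM units (none of these numbers come from the notebook):

```python
import tensorflow as tf

# Illustrative hyperparameters -- assumptions, not values from the notebook.
vocab_size = 8000     # vocabulary size after tokenization
embedding_dim = 100   # width of the word-embedding vectors
lstm_units = 150      # hidden units in each LSTM direction

# Embedding -> bidirectional LSTM -> softmax over the vocabulary.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm_units)),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
```
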
### Setup and Dependencies

To set up this project, install the following libraries:

```bash
pip install tensorflow nltk pandas
```

### Data Preprocessing

Before training, the data undergoes several preprocessing steps (a sketch of the pipeline follows the list):

- **Loading the dataset**: The text data is read from the `holmes.txt` file.
- **Cleaning the text**: Special characters, emojis, and excessive whitespace are removed.
- **Tokenization**: The text is tokenized into sequences of words, which are then converted to a numerical format.
- **Padding sequences**: Sequences are padded so that every input has a consistent size.
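
A sketch of this pipeline, assuming a simple regex-based cleaner and the common n-gram-prefix framing of next-word prediction (the notebook's exact cleaning rules may differ):

```python
import re
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Load the raw text.
with open("holmes.txt", encoding="utf-8") as f:
    text = f.read()

# Clean: keep letters and sentence punctuation, then collapse whitespace.
text = re.sub(r"[^A-Za-z.?!' ]+", " ", text)
text = re.sub(r"\s+", " ", text).strip().lower()

# Tokenize words into integer IDs.
tokenizer = Tokenizer()
tokenizer.fit_on_texts([text])
vocab_size = len(tokenizer.word_index) + 1

# Build n-gram prefixes: each example is a prefix whose final token is
# the word the model must learn to predict.
sequences = []
for sentence in text.split("."):
    ids = tokenizer.texts_to_sequences([sentence])[0]
    sequences.extend(ids[: i + 1] for i in range(1, len(ids)))

# Pre-pad so every sequence has the same length and the target stays last.
max_len = max(len(s) for s in sequences)
padded = pad_sequences(sequences, maxlen=max_len, padding="pre")
```
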
### Model Training

The model is trained on the cleaned and tokenized dataset using the following process:

1. **Embedding layer**: Converts words into dense vectors of a fixed size.
2. **LSTM layers**: A bidirectional LSTM processes the input text sequence.
3. **Dense layers**: The final layers output a probability distribution over the next word in the sequence.

Training uses the Adam optimizer, and the loss function is `categorical_crossentropy` (a compile-and-fit sketch follows below).
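
Continuing from the sketches above (`model`, `padded`, and `vocab_size`), each padded sequence is split into an input prefix and a one-hot label; the epoch and batch-size values are placeholders:

```python
from tensorflow.keras.utils import to_categorical

# Inputs are all tokens but the last; the label is the final token,
# one-hot encoded for categorical_crossentropy.
X = padded[:, :-1]
y = to_categorical(padded[:, -1], num_classes=vocab_size)

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=50, batch_size=128)
```
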
### Usage

To run the model:

1. Clone the repository or download the Jupyter notebook.
2. Download or prepare a dataset and save it as `holmes.txt` (or any other text file).
3. Run the notebook to preprocess the text, build the model, and train it.
4. After training, use the model to predict the next word given a sequence of words (see the sketch after this list).
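
As an illustration of step 4, a hypothetical helper (`predict_next` is not defined in the notebook) that reuses `tokenizer`, `model`, and `X` from the sketches above and greedily appends the most probable next word:

```python
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

def predict_next(seed_text, n_words=3):
    """Greedily append the model's most probable next word n_words times."""
    for _ in range(n_words):
        ids = tokenizer.texts_to_sequences([seed_text])[0]
        ids = pad_sequences([ids], maxlen=X.shape[1], padding="pre")
        probs = model.predict(ids, verbose=0)[0]
        seed_text += " " + tokenizer.index_word.get(int(np.argmax(probs)), "")
    return seed_text.strip()

print(predict_next("i am going to"))
```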

Recommendation Models/sentenceAutoCompletion/readme.md

Lines changed: 0 additions & 12 deletions
This file was deleted.
