# YTAI - Your personal YouTube AI
Get insights from YouTube videos with YTAI, an LLM-based app that lets you summarize videos or even ask questions about them and receive answers! Check out the project on [GitHub](https://github.com/sudoleg/ytai) for more information! Also, if you like the app, I would be very happy about a star :star:
Feedback and contributions are welcome! This is a small side-project and it's very easy to get started! Here’s the gist to get your changes rolling:
5. **Pull Request**: Push your changes to your fork and submit a pull request (PR) to the main repository. Describe your changes and any relevant details.
6. **Engage**: Respond to feedback on your PR to finalize your contribution.
### Development in a virtual environment
```bash
# create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate
# install requirements
pip install -r requirements.txt
# you'll need an API key
export OPENAI_API_KEY=<your-openai-api-key>
# run chromadb (necessary for chat)
docker-compose up -d chromadb
# run the app
streamlit run main.py
```
## Technologies used
The project is built using some amazing libraries:
"saving_responses": "Whether to save responses in the directory where you run the app. The responses will be saved under '<YT-channel-name>/<video-title>.md'.",
"chunk_size": "A larger chunk size increases the amount of context provided to the model to answer your question. However, it may be less relevant than with a small chunk size, as smaller chunks can encapsulate more semantic meaning. I would recommend using a smaller chunk size for shorter videos and a larger one for longer videos (> 1h).",
"preprocess_checkbox": "By enabling this, the original transcript gets preprocessed. This can greatly improve the results, especially for videos with automatically generated transcripts. However, it results in higher costs, as the whole transcript gets processed by gpt-3.5-turbo. Also, the preprocessing will take a substantial amount of time.",
"selected_video": "Once you process a video, it gets saved in a database. You can chat with it at any time, without processing it again! Tip: you can also search for a video by typing (part of) its title.",
"embeddings": "Embeddings are numerical representations of text that can be used to measure the relatedness between two pieces of text. Embedding models create these numerical representations. Read more at https://platform.openai.com/docs/models/embeddings"
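To illustrate the relatedness measure mentioned in the embeddings tooltip, here is a minimal, self-contained sketch of cosine similarity between two vectors. The toy 3-dimensional vectors are made up for illustration; real embeddings come from an embedding model and have hundreds of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Relatedness of two vectors: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# toy "embeddings" (illustrative only)
video_a = [0.1, 0.9, 0.2]
video_b = [0.1, 0.8, 0.3]
video_c = [0.9, 0.1, 0.0]

print(cosine_similarity(video_a, video_b))  # close to 1.0: similar content
print(cosine_similarity(video_a, video_c))  # noticeably lower
```

A chat-with-video feature typically embeds transcript chunks once, then at question time embeds the question and retrieves the chunks with the highest similarity to pass to the LLM as context.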