@@ -8,7 +8,7 @@ of the `Transformers <https://github.com/huggingface/transformers>`_ library in

Specifically, we'll cover:

-* Setting the cache path for the Transformers library
+* Setting the model cache path for the Transformers library
* Downloading AI models during the application initialization step
* Receiving messages from Nextcloud Talk Chat and sending them to a language model
* Sending the language model's reply back to the Nextcloud Talk Chat
@@ -112,12 +112,12 @@ Finally, we arrive at the core aspect of the application, where we interact with
.. code-block:: python

    def ai_talk_bot_process_request(message: talk_bot.TalkBotMessage):
-        # Process only messages started with ** @ai**
+        # Process only messages starting with "@ai"
        r = re.search(r"@ai\s(.*)", message.object_content["message"], re.IGNORECASE)
        if r is None:
            return
        model = pipeline("text2text-generation", model=MODEL_NAME)
-        # Pass all text after ** @ai** we to the Language model.
+        # Pass all text after "@ai" to the language model.
        response_text = model(r.group(1), max_length=64, do_sample=True)[0]["generated_text"]
        AI_BOT.send_message(response_text, message)

@@ -126,4 +126,6 @@ Simply put, the AI logic is just two lines of code when using Transformers, whic

Messages from the AI model are then sent back to Talk Chat as you would expect from a typical chatbot.

+The `full source code <https://github.com/cloud-py-api/nc_py_api/tree/main/examples/as_app/talk_bot_ai>`_ is available in the nc_py_api repository.
+
That's it for now! Stay tuned: this is merely the start of an exciting journey into the integration of AI and chat functionality in Nextcloud.