perplexity-llamaindex/memory/chat_summary_memory_buffer — 1 file changed, +2 −1

@@ -93,6 +93,7 @@ This implementation solves key LLM conversation challenges:
 The architecture enables production-grade chat applications with Perplexity's Sonar models while maintaining LlamaIndex's powerful memory management capabilities.
 
 Citations:
+``` text
 [1] https://docs.llamaindex.ai/en/stable/examples/agent/memory/summary_memory_buffer/
 [2] https://ai.plainenglish.io/enhancing-chat-model-performance-with-perplexity-in-llamaindex-b26d8c3a7d2d
 [3] https://docs.llamaindex.ai/en/v0.10.34/examples/memory/ChatSummaryMemoryBuffer/
@@ -112,5 +113,5 @@ Citations:
 [17] https://docs.llamaindex.ai/en/stable/understanding/using_llms/using_llms/
 [18] https://apify.com/jons/perplexity-actor/api
 [19] https://docs.llamaindex.ai
-
+```