[Do not merge] [Obs AI Assistant] Adds docs for connecting to a local LLM with the Obs AI Assistant #2536
Conversation
Setup is now complete. You can use the model you’ve loaded in LM Studio to power Elastic’s generative AI features.

::::{note}
While local (open-weight) LLMs offer greater privacy and control, they generally do not match the raw performance and advanced reasoning capabilities of the proprietary models from the LLM providers mentioned [here](/solutions/observability/observability-ai-assistant.md#set-up-the-ai-assistant-obs-ai-set-up).
::::
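As a quick sanity check of the setup described above, you can send a request to the local model by hand. LM Studio's local server speaks the OpenAI-compatible chat API (by default on `http://localhost:1234/v1`), so a minimal sketch looks like this; the model name and URL here are placeholders you'd adjust to your own setup:

```python
import json

def build_chat_request(prompt, model="local-model"):
    """Build the JSON body for an OpenAI-compatible /chat/completions call.

    LM Studio's local server accepts this payload shape; "local-model" is a
    placeholder name, not something mandated by LM Studio or Elastic.
    """
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

# POST the payload to the (assumed default) local endpoint, e.g. with curl:
#   curl http://localhost:1234/v1/chat/completions \
#     -H "Content-Type: application/json" \
#     -d "$(python -c 'print(build_chat_request("Hello"))')"
```

If the server responds with a chat completion, the model is reachable and ready to be wired up behind the connector.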
Added a small disclaimer here - let me know if I should remove it.
I'm waiting for the preview to be generated to double-check the links and formatting.
🔍 Preview links for changed docs
The reverse proxy configuration in the doc I've linked mentions the following. While the user can use alternatives to Nginx for testing purposes, the recommendation is to use Nginx so that telemetry can be collected via Elastic's Nginx integration.
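For illustration, a reverse proxy of the kind discussed here might look like the sketch below. This is a hypothetical minimal example, not the configuration from the linked doc: the hostname, certificate paths, and upstream port (LM Studio's default, 1234) are all placeholders to adjust.

```nginx
# Minimal sketch: terminate TLS and proxy traffic to a local LM Studio server.
server {
    listen 443 ssl;
    server_name llm.example.internal;          # hypothetical hostname

    ssl_certificate     /path/to/cert.pem;     # placeholder paths
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://127.0.0.1:1234;      # LM Studio's default port
        proxy_set_header Host $host;
    }
}
```

Using Nginx here (rather than another proxy) also lets Elastic's Nginx integration collect access and error telemetry from the same process.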
Hi @mdbirnstiehl, let me know if I need to add anything to target this for
Thanks, @viduni94, for adding this documentation. I have created a task to add vLLM documentation: elastic/kibana#232052
@viduni94, should we add the minimum requirements to run llama3.3? WDYT?
It might not be necessary to add this, because it's general information and not specific to Elastic.
Thanks for the review @arturoliduena |
Thank you for putting this together! I've added some comments and suggestions. Let me know if you have any questions.
Co-authored-by: Mike Birnstiehl <[email protected]>
Closes https://github.com/elastic/obs-ai-assistant-team/issues/322
[DO NOT MERGE] This PR can only be merged once `llama` support is complete.

This PR adds documentation about how to connect to a local LLM with the Observability AI Assistant.