This repository was archived by the owner on Jul 22, 2025. It is now read-only.
I am trying to pass RAG context to the Perplexity Sonar API using a LlamaIndex index object, but the index.as_chat_engine call fails with an error saying "chat" is not found.
Any ideas? I searched on Perplexity and YouTube but could not find examples - there are plenty showing how to call Perplexity via the API, but none where the RAG content from a default VectorStoreIndex is passed in.
@james-pplx - I am an Enterprise Pro user trying to build a use case at my company to help my Sales team. Any help would be appreciated.
Here's the code:
```python
from openai import OpenAI

client = OpenAI(api_key=PERPLX_KEY, base_url="https://api.perplexity.ai/chat/completions")
```
(I have tried both "https://api.perplexity.ai/chat/completions" and "https://api.perplexity.ai/".)
My index build - this is the LlamaIndex vector store object:
```python
from llama_index.core import load_index_from_storage  # llama_index >= 0.10 import path

index = load_index_from_storage(storage_context)
chat_engine = index.as_chat_engine(chat_mode="context", verbose=True, temperature=0.0, similarity_top_k=100)
```
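One way to sidestep the as_chat_engine failure is to skip the chat engine entirely: pull the top-k chunks straight from the index with a retriever and assemble the prompt by hand. A minimal sketch, assuming the `index` built above and a placeholder `question` string (only standard LlamaIndex retriever calls; nothing Perplexity-specific yet):

```python
# Retrieve the RAG context manually instead of going through as_chat_engine.
retriever = index.as_retriever(similarity_top_k=100)
nodes = retriever.retrieve(question)  # `question` is a placeholder query string

# Join the retrieved chunks into a single context block for the prompt.
context = "\n\n".join(n.get_content() for n in nodes)
```

The resulting `context` string can then be embedded in the system message of the request below (see the sketch after the traceback).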
Message dict:
```python
simple_message = [
    {
        "role": "system",
        "content": (
            "You are an AI assistant specializing in taking RAG-related input and "
            "responding using only that input and no prior knowledge. "
            "Provide detailed, well-structured answers."
        ),
    },
    {
        "role": "assistant",
        "content": f"{prompt}",
    },
]
```
Now calling the Perplexity model - this call works with OpenAI but not with Perplexity:
```python
response = client.chat.completions.create(
    model="sonar",  # Use the appropriate model for Deep Research
    messages=simple_message,
    temperature=0.0,
    top_p=0.1,
)
```
Error:
```
NotFoundError                             Traceback (most recent call last)
in <cell line: 0>()
     20 ]
     21 print(prompt)
---> 22 response = client.chat.completions.create(
     23     model="sonar",  # Use the appropriate model for Deep Research
     24     messages=simple_message,

4 frames

/usr/local/lib/python3.11/dist-packages/openai/_utils/_utils.py in wrapper(*args, **kwargs)
    277             msg = f"Missing required argument: {quote(missing[0])}"
    278             raise TypeError(msg)
--> 279         return func(*args, **kwargs)
    280
    281     return wrapper  # type: ignore

/usr/local/lib/python3.11/dist-packages/openai/resources/chat/completions/completions.py in create(self, messages, model, audio, frequency_penalty, function_call, functions, logit_bias, logprobs, max_completion_tokens, max_tokens, metadata, modalities, n, parallel_tool_calls, prediction, presence_penalty, reasoning_effort, response_format, seed, service_tier, stop, store, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, web_search_options, extra_headers, extra_query, extra_body, timeout)
    912     ) -> ChatCompletion | Stream[ChatCompletionChunk]:
    913         validate_response_format(response_format)
--> 914         return self._post(
    915             "/chat/completions",
    916             body=maybe_transform(

/usr/local/lib/python3.11/dist-packages/openai/_base_client.py in post(self, path, cast_to, body, options, files, stream, stream_cls)
   1240             method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1241         )
-> 1242         return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
   1243
   1244     def patch(

/usr/local/lib/python3.11/dist-packages/openai/_base_client.py in request(self, cast_to, options, remaining_retries, stream, stream_cls)
    917         retries_taken = 0
    918
--> 919         return self._request(
    920             cast_to=cast_to,
    921             options=options,

/usr/local/lib/python3.11/dist-packages/openai/_base_client.py in _request(self, cast_to, options, retries_taken, stream, stream_cls)
   1021
   1022         log.debug("Re-raising status error")
-> 1023         raise self._make_status_error_from_response(err.response) from None
   1024
   1025         return self._process_response(

NotFoundError: Error code: 404
```
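For what it's worth, the traceback points at a likely cause of the 404: the OpenAI Python client appends the path "/chat/completions" to whatever base_url it is given (see the self._post("/chat/completions", ...) frame above), so a base_url that already ends in /chat/completions sends the request to .../chat/completions/chat/completions, which does not exist. A minimal sketch under that assumption - not a confirmed fix - reusing the `context` string from the retriever snippet above, with `PERPLX_KEY` and `question` as placeholders:

```python
from openai import OpenAI

# The client appends "/chat/completions" itself, so base_url must be the API root only.
client = OpenAI(api_key=PERPLX_KEY, base_url="https://api.perplexity.ai")

messages = [
    {
        "role": "system",
        "content": (
            "You are an AI assistant. Answer using only the context below "
            "and no prior knowledge.\n\nContext:\n" + context
        ),
    },
    # The question goes in a "user" turn; OpenAI-style chat endpoints expect
    # the final message to come from the user rather than the assistant.
    {"role": "user", "content": question},
]

response = client.chat.completions.create(
    model="sonar",
    messages=messages,
    temperature=0.0,
    top_p=0.1,
)
print(response.choices[0].message.content)
```

If the bare https://api.perplexity.ai base was already among the combinations tried and still returned 404, the model name and API key are the next things worth ruling out.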
Replies: 3 comments

- @adaw72 I am trying to include more and more use cases with LlamaIndex in our cookbook; I will be trying to fix this over the weekend. Will share updates soon.
- Thanks James. Looking forward to it. I am not able to pass the index object from LlamaIndex to Perplexity and use it as a query/response engine.
- Ok, thanks for sharing the code. I will give this a try!