Replies: 7 comments 5 replies
-
🤖 Hello again, @dmb0058! Nice to see you back. I hope you've been doing well. Let's take a look at this issue you're having.
Based on the provided context, the langchainjs codebase does support both the Bedrock and OpenAI models. However, the issue you're experiencing might be due to the specific models you're using or how they're being used. In your Bedrock initialization code, you're using the amazon.titan-text-express-v1 model, while your OpenAI code uses a different model. One possible reason for the variance in responses could be the difference between these models: each model has different capabilities and limitations, and they might handle the same input differently.
Another potential issue could be the way you're using the models in your chain setup.
Without more information, it's hard to say for sure what the issue is. I would recommend checking the documentation for the models you're using to see whether there are any known limitations or requirements that could explain the behaviour.
If you're still having trouble, please provide more details about the issue, such as any error messages you're seeing and the exact steps you're taking to reproduce it. This will help me better understand what's going on and provide more targeted assistance.
Sources
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
I tried an alternative ... this gives the same sad response "Sorry - this model is unable to respond to this request.".
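Roughly along these lines (a sketch rather than my exact code; the RetrievalQAChain variant is an assumption, and the region and index path are placeholders):
import { Bedrock } from "langchain/llms/bedrock";
import { BedrockEmbeddings } from "langchain/embeddings/bedrock";
import { FaissStore } from "langchain/vectorstores/faiss";
import { RetrievalQAChain } from "langchain/chains";

// Load the Faiss store with the same Titan model that generated the embeddings.
const embeddings = new BedrockEmbeddings({
  model: "amazon.titan-embed-text-v1",
  region: "us-east-1", // placeholder
});
const vectorStore = await FaissStore.load("./gdpr-index", embeddings); // placeholder path

const model = new Bedrock({
  model: "amazon.titan-text-express-v1",
  region: "us-east-1", // placeholder
});

// Retriever-based chain instead of VectorDBQAChain.
const chain = RetrievalQAChain.fromLLM(model, vectorStore.asRetriever());
const res = await chain.call({ query: "What does Article 25 of the GDPR cover?" });
console.log(res.text);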
-
The result is not reliable anyway. If I directly invoke the model I usually get a sensible result but even that sometimes gives "Sorry - this model is unable to respond to this request.":
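The direct invocation looks roughly like this (a sketch, not the exact code; the region value is a placeholder):
import { Bedrock } from "langchain/llms/bedrock";

const model = new Bedrock({
  model: "amazon.titan-text-express-v1",
  region: "us-east-1", // placeholder
});

// No retrieval involved: if this also returns "Sorry - this model is unable
// to respond to this request." then the problem is upstream of the chain.
try {
  const res = await model.call("What does Article 25 of the GDPR cover?");
  console.log("raw model output:", res);
} catch (err) {
  console.error("Bedrock call failed:", err);
}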
How can I debug this code? I don't know what's flaky here, langchain or the FM.
-
@dosu-bot
Do you know of an active forum for langchain/Bedrock developers?
On Tue, 5 Dec 2023, 14:49 dosubot[bot] wrote:
🤖
To debug the inconsistency in responses from the amazon.titan-text-express-v1 model in the LangChainJS framework, you can use the callbacks option in the ChatOpenAI constructor. This allows you to log the input and output of each call to the model, as well as any errors that occur.
Here's how you can modify your code to debug the amazon.titan-text-express-v1 model:
import { HumanMessage, LLMResult } from "langchain/schema";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { Serialized } from "langchain/load/serializable";

// We can pass in a list of CallbackHandlers to the LLM constructor
// to get callbacks for various events.
const model = new ChatOpenAI({
  callbacks: [
    {
      // Fires before each model call, with the serialized LLM config and the prompts.
      handleLLMStart: async (llm: Serialized, prompts: string[]) => {
        console.log("LLM Start");
        console.log(JSON.stringify(llm, null, 2));
        console.log(JSON.stringify(prompts, null, 2));
      },
      // Fires after each successful call, with the full generation result.
      handleLLMEnd: async (output: LLMResult) => {
        console.log("LLM End");
        console.log(JSON.stringify(output, null, 2));
      },
      // Fires if the underlying call throws.
      handleLLMError: async (err: Error) => {
        console.log("LLM Error");
        console.error(err);
      },
    },
  ],
});

await model.call([
  new HumanMessage(
    "What is a good name for a company that makes colorful socks?"
  ),
]);
This will give you detailed logs of the input, output, and any errors for each call to the model. You can then use these logs to identify any patterns in the inconsistent responses, which can help you debug the issue.
You can find more information about this in the examples/src/models/chat/chat_debugging.ts <https://github.com/langchain-ai/langchainjs/blob/main/examples/src/models/chat/chat_debugging.ts> file in the LangChainJS repository.
As for the expected behavior of the amazon.titan-text-express-v1 model when invoked through the LangChainJS framework, I wasn't able to find specific information within the repository. The behavior might depend on various factors such as the input data, the model configuration, and the specific use case. If the issue persists, it might be helpful to reach out to the maintainers of the amazon.titan-text-express-v1 model for more specific guidance.
Sources
- examples/src/models/chat/chat_debugging.ts
<https://github.com/langchain-ai/langchainjs/blob/main/examples/src/models/chat/chat_debugging.ts>
-
@dosu-bot
"The inconsistency in responses could be due to a variety of factors,
including the specific inputs you're using, the configuration of the model,
or even potential issues with the model itself."
Makes sense to me, but the input and configuration is the same in each
case, and I think it's unlikely that the model is broken. I suspect it's a
different issue to any of these.
I'll try the project teams.
On Tue, 5 Dec 2023, 19:01 dosubot[bot] wrote:
🤖
Based on the code you've shared, it seems like you're using the amazon.titan-text-express-v1 model from Bedrock in three different ways: direct invocation, individual steps, and through the VectorDBQAChain. The inconsistency in responses could be due to a variety of factors, including the specific inputs you're using, the configuration of the model, or even potential issues with the model itself.
However, without more specific information about the inputs you're using and the exact circumstances under which the error occurs, it's difficult to pinpoint the exact cause of the issue.
As for debugging, the callbacks option in the ChatOpenAI constructor can indeed be useful for understanding the internal workings of the model. However, it might not provide enough information to diagnose this specific issue. You might want to consider adding additional logging or error handling code to your application to capture more detailed information about the state of the application when the error occurs.
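For example, a minimal sketch of a logging wrapper (the chain shape and query here are illustrative placeholders, not your actual code):
// Wrap the chain call so both refusals and thrown errors are captured.
async function loggedQuery(
  chain: { call: (values: { query: string }) => Promise<{ text?: string }> },
  query: string
) {
  try {
    const response = await chain.call({ query });
    console.log("chain response:", JSON.stringify(response, null, 2));
    if (response.text?.trim().startsWith("Sorry")) {
      console.warn("Model refused; log the query and retrieved documents for comparison.");
    }
    return response;
  } catch (err) {
    console.error("chain call failed for query:", query, err);
    throw err;
  }
}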
Regarding your question about an active forum for LangChain and Bedrock developers, I'm afraid I don't have any specific recommendations. However, you might find it helpful to reach out to the maintainers of the LangChainJS and Bedrock projects directly. They might be able to provide more specific guidance or point you towards resources that can help you troubleshoot your issue.
I hope this helps, and please don't hesitate to reach out if you have any further questions or if there's anything else I can do to assist you.
Sources
- examples/src/guides/fallbacks/better_model.ts
<https://github.com/langchain-ai/langchainjs/blob/main/examples/src/guides/fallbacks/better_model.ts>
-
Hey @dmb0058, did you find any solution to this?
-
Hi,
No, unfortunately I never solved it, but got diverted onto other projects. I'd like to be able to use Bedrock as an alternative, so hopefully I'll get time at some point to investigate further.
Are you having a similar problem?
-
I'm struggling to understand what's going on here, so just asking for insights or help :)
I've assembled the text of the GDPR regulations, generated embeddings using the Bedrock amazon.titan-embed-text-v1 model, and saved these in a Faiss store. I then do the normal RAG thing of using VectorDBQAChain.fromLLM() to vectorise my prompt using the same model and query the Bedrock amazon.titan-text-express-v1 model. I get the rather disappointing response "Sorry - this model is unable to respond to this request."
However, if I query OpenAI with the same Faiss vector store but using the OpenAI embedding model to vectorise my prompt, I get a sensible answer ("Article 25 of the regulation discusses data protection by design and by default. It requires that controllers implement appropriate technical and organizational measures to protect data subjects' rights and to meet the requirements of the regulation. It also states that a data protection impact assessment may be required in certain circumstances.").
The code is the same, so I don't understand why there's such variance - I'm clearly doing something wrong!
Any ideas very gratefully received :)
This is the Bedrock initialisation code:
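Roughly (a minimal sketch rather than the exact code: the model IDs are the ones above, while the region, index path, and settings are placeholders):
import { Bedrock } from "langchain/llms/bedrock";
import { BedrockEmbeddings } from "langchain/embeddings/bedrock";
import { FaissStore } from "langchain/vectorstores/faiss";

// The store must be loaded with the same Titan model that generated the embeddings.
const embeddings = new BedrockEmbeddings({
  model: "amazon.titan-embed-text-v1",
  region: "us-east-1", // placeholder
});
const vectorStore = await FaissStore.load("./gdpr-index", embeddings); // placeholder path

const model = new Bedrock({
  model: "amazon.titan-text-express-v1",
  region: "us-east-1", // placeholder
});

await runQuery(model, vectorStore); // runQuery as sketched below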
This is the equivalent OpenAI code:
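Again a sketch (placeholder settings; OPENAI_API_KEY is assumed to be in the environment):
import { OpenAI } from "langchain/llms/openai";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { FaissStore } from "langchain/vectorstores/faiss";

// Same Faiss store, but the query is embedded with OpenAI's model instead.
const embeddings = new OpenAIEmbeddings();
const vectorStore = await FaissStore.load("./gdpr-index", embeddings); // placeholder path

const model = new OpenAI({ temperature: 0 }); // placeholder settings

await runQuery(model, vectorStore); // runQuery as sketched below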
And this is the runQuery() function that they both call to set up and call the chain:
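Approximately (the k value, query, and type imports are illustrative assumptions):
import { VectorDBQAChain } from "langchain/chains";
import type { BaseLanguageModel } from "langchain/base_language";
import type { VectorStore } from "langchain/vectorstores/base";

async function runQuery(model: BaseLanguageModel, vectorStore: VectorStore) {
  // Build the chain over the loaded store and ask a single question.
  const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
    k: 4, // illustrative
    returnSourceDocuments: true,
  });
  const response = await chain.call({
    query: "What does Article 25 of the GDPR say?",
  });
  console.log(response.text);
  return response;
}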