[Bug] Lobechat blank response #5423

Open
sunilkumarruby opened this issue Jan 13, 2025 · 4 comments
Labels
🐛 Bug Something isn't working | 缺陷

Comments

@sunilkumarruby

📦 Environment

Docker

📌 Version

v1.36.13

💻 Operating System

Other Linux

🌐 Browser

Chrome

🐛 Bug Description

I am currently testing LobeChat with AWS Bedrock integration. The model we are testing is meta.llama3-70b-instruct-v1:0.
I notice blank responses intermittently, and unfortunately nothing is logged in the lobe-chat Docker logs for this issue.
A screenshot with a blank response is attached for reference.

Just checking whether anyone else has experienced something similar or has any ideas for troubleshooting the issue?

[Screenshot: blank response in the chat window]

📷 Recurrence Steps

The issue appears at random; the only way to replicate it is by continuously asking test questions.

🚦 Expected Behavior

A response should be returned for each question, or at least an error should be shown in the response if the issue is caused by throttling or a lack of resources.

📝 Additional Information

No response

@sunilkumarruby sunilkumarruby added the 🐛 Bug Something isn't working | 缺陷 label Jan 13, 2025
@github-project-automation github-project-automation bot moved this to Roadmap - Chat 1.x in Lobe Chat Routine Jan 13, 2025
@lobehubbot
Member

👀 @sunilkumarruby

Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.


dosubot bot commented Jan 13, 2025

Hello @sunilkumarruby! I'm here to help you with bug resolution, answer your questions, and guide you on how to contribute. Let's tackle this issue together!

To troubleshoot the issue of receiving blank responses when using the meta.llama3-70b-instruct-v1:0 model with AWS Bedrock in Lobe Chat, you can consider the following steps:

  1. Enable Debug Logging: Set the environment variable DEBUG_BEDROCK_CHAT_COMPLETION to "1" to enable detailed logging for AWS Bedrock interactions. This might help capture more information about the issue in the logs [1].

  2. Check Error Handling: Ensure that the error handling mechanisms are correctly implemented. The invokeLlamaModel method in the Lobe Chat codebase is responsible for handling responses and errors from AWS Bedrock. Review this method to ensure it is correctly processing and logging errors [2].

  3. Monitor Resource Usage: Since the issue might be related to throttling or resource limitations, monitor the resource usage on your AWS account to ensure that the model is not hitting any limits.

  4. Review Docker Logs: Although you mentioned that nothing is logged, double-check the Docker logs for any subtle hints or warnings that might have been overlooked.

  5. Test with Different Configurations: Try testing with different configurations or models to see if the issue persists, which might help isolate the problem to a specific model or configuration (a direct-invocation sketch follows after this list).

These steps should help you gather more information and potentially identify the cause of the blank responses.
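
If the debug flag still leaves no clues, one way to narrow things down is to call the model directly through the AWS SDK, outside LobeChat, and log every streamed chunk. The sketch below is an illustration only, not LobeChat's actual code; it assumes @aws-sdk/client-bedrock-runtime is installed, AWS credentials are available in the environment, and the region and model ID match your deployment.

```ts
// Direct Bedrock invocation, bypassing LobeChat, to see whether the model
// itself ever streams back an empty generation or throws a throttling error.
// Assumptions: @aws-sdk/client-bedrock-runtime is installed and AWS
// credentials/region are configured in the environment.
import {
  BedrockRuntimeClient,
  InvokeModelWithResponseStreamCommand,
} from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" }); // adjust to your region

async function probe(prompt: string) {
  const command = new InvokeModelWithResponseStreamCommand({
    modelId: "meta.llama3-70b-instruct-v1:0",
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({ prompt, max_gen_len: 512, temperature: 0.5 }),
  });

  try {
    const response = await client.send(command);
    let full = "";
    for await (const event of response.body ?? []) {
      if (event.chunk?.bytes) {
        const payload = JSON.parse(new TextDecoder().decode(event.chunk.bytes));
        full += payload.generation ?? "";
        console.log("chunk:", JSON.stringify(payload));
      }
    }
    console.log(full.trim() ? "non-empty response" : "EMPTY response");
  } catch (err: any) {
    // A ThrottlingException here would point at Bedrock quotas rather than LobeChat.
    console.error("Bedrock error:", err.name, err.message);
  }
}

probe("Hello, can you hear me?").catch(console.error);
```

If this probe also returns empty generations or throttling errors, the problem is upstream of LobeChat; if it never does, the loss is happening somewhere in the chat pipeline and the DEBUG_BEDROCK_CHAT_COMPLETION logs become the next place to look.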


@ivanberry

I have the same problem with the gpt-4o model via one-api.

@ivanberry

[Screenshot]

Check your API URL.
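
For anyone hitting the same symptom behind one-api, a quick way to rule out a misconfigured base URL is to call the OpenAI-compatible endpoint directly and inspect what comes back. This is a minimal sketch, assuming an OpenAI-compatible /chat/completions route behind one-api; the base URL and key below are placeholders.

```ts
// Sanity check against an OpenAI-compatible endpoint (e.g. one-api).
// Assumption: the proxy lives at BASE_URL below; both env vars are placeholders.
const BASE_URL = process.env.ONE_API_BASE_URL ?? "http://localhost:3000/v1";
const API_KEY = process.env.ONE_API_KEY ?? "";

async function sanityCheck() {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [{ role: "user", content: "Say hello" }],
      stream: false,
    }),
  });

  // A wrong base URL typically surfaces here as a 404 or an HTML error page
  // rather than a JSON completion, which LobeChat then renders as a blank reply.
  console.log("status:", res.status);
  const text = await res.text();
  console.log("raw body:", text.slice(0, 500));
}

sanityCheck().catch(console.error);
```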
