Conversation

@AliRizvi433
This prompt helps evaluate the factual accuracy of LLM outputs based on user input and reference texts. It’s designed for QA systems and retrieval-augmented generation workflows, enabling developers to score LLM reliability in production environments.

Member

@Achanandhi-M Achanandhi-M left a comment


Hey @AliRizvi433, congratulations on raising your first PR! However, I have a few questions — have you tested it on your side? Is it working as expected? Additionally, we kindly ask that you sign the Developer Certificate of Origin (DCO). It’s a simple process that confirms your contribution complies with the necessary legal requirements. Please make these two changes.

… LLM Development

Signed-off-by: Ali Rizvi <alirizvi433@gmail.com>
@AliRizvi433 AliRizvi433 force-pushed the ali-rizvi-lawliet-prompt branch from 8cf9ec9 to 68ef5f5 on June 19, 2025 11:07
@AliRizvi433
Author

"have you tested it on your side? Is it working as expected?"
Yes, it works. It is one of several prompts I have used in my own project, a Law Chatbot.
I've signed the DCO as well.
