How to run only self check facts rail? #1537
Replies: 3 comments 4 replies
-
@drazvan @Pouyanpi Please help me with this if you can. I am using NeMo Guardrails version 0.17.
-
Re the documentation: it is clearly wrong, the correct URL should be https://github.com/NVIDIA-NeMo/Guardrails/tree/develop/nemoguardrails/library/self_check/facts/actions.py

To run input rails or output rails only, you can use the following util:

from typing import List, Optional

def determine_options_from_messages(messages: List[dict]) -> Optional[dict]:
    """Decide which rails to run based on the roles present in the messages."""
    if not messages:
        raise ValueError("messages list cannot be empty")
    roles = {msg.get("role") for msg in reversed(messages)}
    has_user = "user" in roles
    has_assistant = "assistant" in roles
    if not has_user and not has_assistant:
        raise ValueError("not possible, right?")
    if has_user and has_assistant:
        # Both a user and an assistant message: run input and output rails.
        return {"rails": ["input", "output"]}
    if has_user:
        # Only a user message: run input rails.
        return {"rails": ["input"]}
    # Only an assistant message: run output rails.
    return {"rails": ["output"]}
Then pass the options to the generate_async method:

options = determine_options_from_messages(messages)
response = await generate_async(messages=messages, options=options)

Pay close attention to what the messages look like for each of those options. For example, when you want to run output rails only, you must set:

{
    "role": "user",
    "content": ""
}

I hope it helps. BTW, I recommend using version 0.19.0, where a bug related to this feature was fixed. In future releases we will introduce a check_async method which will only run input/output rails without all these tweaks.
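If it helps, here is a minimal sketch of the output-rails-only case (assuming an already configured LLMRails instance called rails and an existing bot reply; the names and config path are illustrative):

import asyncio

from nemoguardrails import LLMRails, RailsConfig

# rails = LLMRails(RailsConfig.from_path("./config"))  # assumed to be configured already

async def check_output_only(rails: LLMRails, bot_reply: str):
    # An empty user turn plus the existing assistant reply, as described above.
    messages = [
        {"role": "user", "content": ""},
        {"role": "assistant", "content": bot_reply},
    ]
    # Run only the output rails over the existing reply.
    return await rails.generate_async(
        messages=messages,
        options={"rails": ["output"]},
    )

# result = asyncio.run(check_output_only(rails, "Paris is the capital of France."))
# print(result)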
-
@VireshAmbardar the error shows the issue is related to how you are using the fact checking, not the input/output rails only option. Use the following.

The action maps context variables to prompt variables (relevant_chunks is used as the evidence and the bot message as the response). Try out the following; note that I have included the user message because fact checking requires it. It uses the openai engine, change it to whatever engine you are using:

import asyncio
from nemoguardrails import LLMRails, RailsConfig
YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: gpt-4

rails:
  output:
    flows:
      - self check facts

prompts:
  - task: self_check_facts
    content: |-
      You are given a task to identify if the hypothesis is grounded and entailed to the evidence.
      You will only use the contents of the evidence and not rely on external knowledge.
      Answer with yes/no. "evidence": {{ evidence }} "hypothesis": {{ response }} "entails":
"""
config = RailsConfig.from_content(yaml_content=YAML_CONFIG)
app = LLMRails(config)
relevant_chunks = """The Treaty of Versailles was signed on June 28, 1919, officially ending
World War I. The principal parties involved were the Allied and Associated Powers
(including France, Britain, and the United States) and Germany. A key provision
required Germany to accept sole responsibility for causing the war and mandated
significant reparations payments."""
factual_response = """The Treaty of Versailles, signed in 1919, officially brought an end
to World War I. Among its most significant terms was the requirement for Germany to accept
responsibility for the conflict and pay substantial reparations to the Allied nations."""
hallucinated_response = """The treaty that ended World War I was the Treaty of Ghent,
which was signed in 1945. This agreement was notable for establishing the League of Nations
as a security council based in New York and mainly addressed territorial disputes between
Russia and the United States."""
async def main():
    # First, check a response that is consistent with the provided evidence.
    messages = [
        {
            "role": "context",
            "content": {
                "relevant_chunks": relevant_chunks,
                "check_facts": True,
            },
        },
        {"role": "user", "content": "What was the Treaty of Versailles?"},
        {"role": "assistant", "content": factual_response},
    ]
    response = await app.generate_async(
        messages=messages,
        options={"rails": ["output"]},
    )
    print("Factual response result:")
    print(response)
    print()

    # Then, check a response that contradicts the evidence.
    messages = [
        {
            "role": "context",
            "content": {
                "relevant_chunks": relevant_chunks,
                "check_facts": True,
            },
        },
        {"role": "user", "content": "What was the Treaty of Versailles?"},
        {"role": "assistant", "content": hallucinated_response},
    ]
    response = await app.generate_async(messages=messages, options={"rails": ["output"]})
    print("Hallucinated response result:")
    print(response)


asyncio.run(main())
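Since you also want to extract the fact-checking score: you can additionally ask for the context variables and the rails log via the generation options. A rough sketch follows; the exact option fields and the variable the self check facts flow stores its score in should be verified against the version you are running:

# Inside an async function, as in the example above.
response = await app.generate_async(
    messages=messages,
    options={
        "rails": ["output"],
        "output_vars": True,               # return context variables with the response
        "log": {"activated_rails": True},  # include details about which rails ran
    },
)
print(response.output_data)  # context variables, e.g. the fact-checking score if the flow stores it there
print(response.log)          # activated rails and their outcomes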
-
So I am trying to implement the NeMo Guardrails framework in my project.
I am calling my main LLM (chat completion) separately, as I have included a fallback mechanism for the main LLM. (Note: this is separate functionality that we had in place before adding NeMo Guardrails.)
Now I want to wrap input and output rails around this functionality.
Where I am stuck is that I want to implement self check facts and extract the score from it.
When I visit
https://docs.nvidia.com/nemo/guardrails/latest/user-guides/guardrails-library.html#fact-checking
there is a GitHub link,
https://github.com/NVIDIA-NeMo/Guardrails/blob/develop/nemoguardrails/library/self_check/output_check/actions.py
which points to self_check_output.
Q1: why is that?
I want to include only self check facts in my output rail, and I already have the LLM response.
And according to the documentation,
this is my prompt:
my_context = "My_context"
llm_response = "response_from_main_llm"
and my config contains
and in my colang
and according to my workflow this is how I am making my rails
How can I just run the rails on this output?
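Following the replies above, a rough sketch of what that call could look like with the variables above (illustrative only; rails is assumed to be an LLMRails instance built from a config that enables the self check facts output rail, and the user question is a placeholder):

import asyncio

# rails = LLMRails(RailsConfig.from_path("./config"))  # assumed to exist

async def check_facts_on_existing_response():
    messages = [
        {
            "role": "context",
            "content": {"relevant_chunks": my_context, "check_facts": True},
        },
        {"role": "user", "content": "the question that produced the response"},
        {"role": "assistant", "content": llm_response},
    ]
    # Run only the output rails (including self check facts) over the existing response.
    return await rails.generate_async(messages=messages, options={"rails": ["output"]})

result = asyncio.run(check_facts_on_existing_response())
print(result)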