There are a couple of places in the code that assume a particular response format from the LLM: the response string is split on a delimiter, and then split[0] or split[1] is referenced without first checking that the string actually contains that delimiter.
extract_atomic_facts handles this the right way. I modified a couple of the places where this was happening; sorry, I don't have them at hand right now, but it's a problem that needs to be resolved. A search for "split" should help locate the remaining assumptions.
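To illustrate the pattern I mean (the delimiter and variable names here are made up, not from the actual code), a minimal sketch:

```python
# Unsafe: assumes the delimiter is always present in the LLM response.
answer = response.split("Answer:")[1]       # IndexError if "Answer:" is missing

# Defensive: check for the delimiter before indexing, and fall back gracefully.
if "Answer:" in response:
    answer = response.split("Answer:", 1)[1].strip()
else:
    answer = response.strip()               # or log a warning and skip this item
```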
[There are other assumptions as well, for example in extract_questions_from_response, that are the source of other failures. I have seen valid question lists that simply weren't formatted exactly as expected. It would save a lot of trouble if there were a test mode that ran the internal prompts against the user's LLM to check whether it regularly fails to format things as expected.]
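Something along these lines is what I have in mind for that test mode (a rough sketch only; all names except extract_questions_from_response are hypothetical): run each internal prompt once against the configured LLM and report which responses the existing parsers cannot handle, instead of failing mid-pipeline.

```python
def self_test_prompts(llm, prompt_parsers, sample_input):
    """llm: callable taking a prompt string and returning the model's text response.
    prompt_parsers: list of (name, prompt_template, parser) tuples, where the parser
    raises on malformed output (e.g. extract_questions_from_response)."""
    failures = []
    for name, template, parser in prompt_parsers:
        response = llm(template.format(**sample_input))
        try:
            parser(response)
        except Exception as exc:             # parser couldn't handle this model's formatting
            failures.append((name, exc, response[:200]))
    return failures                          # empty list means every prompt parsed cleanly
```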