Summary

The provided grammar fits in the context window of most models, yet takes minutes to process in v0.1.23. I tested it with v0.1.16 and it worked fine, so this appears to be a regression introduced with the Earley parser.
Details

A full reproducer is provided in the PoC section below. The resulting grammar is around 70k tokens, and with the models I checked, parsing the grammar took significantly longer than the LLM processing itself, meaning this can be used to DoS model providers.
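To see the slowdown for yourself, a minimal timing harness along the following lines works; this is my sketch (it reuses the enum_schema helper from the PoC below), not code from the original report.

import json
import time

import xgrammar as xgr

schema = enum_schema()  # the pathological schema built in the PoC below

start = time.perf_counter()
grammar = xgr.Grammar.from_json_schema(json.dumps(schema), strict_mode=True)
print(f"grammar compilation took {time.perf_counter() - start:.1f}s")
# Per the report above: minutes on v0.1.23, fast on v0.1.16 and v0.1.24.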
Patch

This problem is caused by the grammar optimizer introduced in v0.1.23 being too slow. It only happens for very large grammars (>100k characters), such as the one below. v0.1.24 fixes this by speeding up the grammar optimizer and disabling some slow optimizations for large grammars.

Thanks to @Seven-Streams
PoC

import string
import random

def enum_schema(size=10000, str_len=10):
    # One enum with 10,000 random 10-character strings, referenced eight times via $ref.
    enum = {"enum": ["".join(random.choices(string.ascii_uppercase, k=str_len)) for _ in range(size)]}
    schema = {
        "definitions": {
            "colorEnum": enum
        },
        "type": "object",
        "properties": {
            "color1": {
                "$ref": "#/definitions/colorEnum"
            },
            "color2": {
                "$ref": "#/definitions/colorEnum"
            },
            "color3": {
                "$ref": "#/definitions/colorEnum"
            },
            "color4": {
                "$ref": "#/definitions/colorEnum"
            },
            "color5": {
                "$ref": "#/definitions/colorEnum"
            },
            "color6": {
                "$ref": "#/definitions/colorEnum"
            },
            "color7": {
                "$ref": "#/definitions/colorEnum"
            },
            "color8": {
                "$ref": "#/definitions/colorEnum"
            }
        },
        "required": [
            "color1",
            "color2"
        ]
    }
    return schema

schema_enum = enum_schema()
print(schema_enum)
print(test_schema(schema_enum, {}))
where:

import json

import xgrammar as xgr
# Assumed import path for xgrammar's internal test helper; adjust for your version.
from xgrammar.testing import _is_grammar_accept_string

def test_schema(schema, instance):
    grammar = xgr.Grammar.from_json_schema(
        json.dumps(schema),
        strict_mode=True
    )
    return _is_grammar_accept_string(grammar, json.dumps(instance))
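For a sense of scale (my arithmetic, not from the report): 10,000 enum entries of 10 characters each, plus JSON quoting and separators, put the serialized schema at roughly 140k characters, comfortably past the >100k-character threshold mentioned in the Patch section:

print(len(json.dumps(enum_schema())))  # roughly 140,000 characters with the default arguments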
Impact
Denial of service (DoS): compiling a crafted grammar takes far longer than the LLM request it accompanies, so model providers that accept untrusted JSON schemas can be tied up by cheap requests.
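For deployments that cannot upgrade to v0.1.24 immediately, a possible stopgap (my sketch, not an official workaround; MAX_SCHEMA_CHARS and compile_schema_guarded are names invented here) is to bound the size of untrusted schemas before compilation, reusing the >100k-character threshold from the Patch section:

import json

import xgrammar as xgr

MAX_SCHEMA_CHARS = 100_000  # hypothetical limit, mirroring the ">100k characters" note above

def compile_schema_guarded(schema: dict) -> xgr.Grammar:
    schema_str = json.dumps(schema)
    if len(schema_str) > MAX_SCHEMA_CHARS:
        # Refuse pathological inputs instead of spending minutes in the optimizer.
        raise ValueError(f"schema too large ({len(schema_str)} chars), refusing to compile")
    return xgr.Grammar.from_json_schema(schema_str, strict_mode=True)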