
Add opt-in TeX compilation test utility for good question fixtures #46

Open

ammarokla123 wants to merge 12 commits into edulinq:main from ammarokla123:main

Conversation

@ammarokla123

This PR addresses #37 by adding a dedicated, opt-in test path that generates and compiles TeX for question fixtures without slowing the default Python test suite.

What changed

  • Added a new script: scripts/test_tex_compilation_questions.sh
  • Script behavior:
    • Discovers all tests/questions/good/**/question.json fixtures
    • Generates TeX per fixture using project converter logic
    • Compiles each TeX file twice (local pdflatex or Docker mode)
    • Verifies PDF output exists and has a minimum size
    • Prints per-fixture pass/fail + summary, exits non-zero on failures
  • Added script options:
    • --use-docker
    • --pdflatex-bin <path>
    • --python-bin <path>
    • --keep-temp
  • Updated README.md:
    • Added a Testing section
    • Documented this as an optional TeX compilation check outside run_tests.py
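The discovery and verification steps described above could be sketched roughly as follows. Note that the actual utility is a bash script; this Python rendering is for illustration only, and the directory layout and the 1024-byte size threshold are assumptions, not values from the PR:

```python
from pathlib import Path

MIN_PDF_SIZE = 1024  # bytes; assumed minimum size for a non-empty PDF

def check_pdf(pdf_path):
    """Return True when the PDF exists and meets the minimum size."""
    path = Path(pdf_path)
    return path.is_file() and (path.stat().st_size >= MIN_PDF_SIZE)

def run_fixtures(base_dir):
    """Walk all good question fixtures and count pass/fail results."""
    num_pass = 0
    num_fail = 0

    for fixture in sorted(Path(base_dir).glob('**/question.json')):
        # The real script generates TeX here and compiles it twice
        # (with local pdflatex or in Docker mode) before checking output.
        pdf_path = fixture.with_name('question.pdf')

        if check_pdf(pdf_path):
            num_pass += 1
        else:
            num_fail += 1

    return (num_pass, num_fail)
```

The double compilation matters for TeX because references and layout can change between passes; the sketch only shows the surrounding bookkeeping.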

Why

  • The existing suite validates parsing/conversion and PDF generation paths, but issue #37 (TeX Compilation Tests) asks for explicit TeX generation and compilation checks per good question fixture.
  • This is intentionally separate from normal test runs to avoid adding significant runtime overhead for regular contributors.

Validation

  • ./scripts/test_tex_compilation_questions.sh --use-docker
  • Result: pass=63, fail=0, total=63

Closes #37.

@eriq-augustine (Collaborator) left a comment:

A commendable first start.

The biggest problem here is complexity.
There are many places where this code is way more complex than it needs to be (I only noted a few).
In this project, we try to embrace KISS whenever we can.
The bash scripts in this project are meant to be simple.
Decide what you need to do to make your test runner more straightforward.

You need to also make sure these tests are run in CI.
One of the biggest reasons we have tests is to run them in CI,
so we don't want to miss out on that.

@ammarokla123 (Author):

Thank you @eriq-augustine for your quick feedback.

I believe I have addressed all the changes. I ran the tests again and they passed.

@eriq-augustine (Collaborator) left a comment:

Looking much better and simpler.

You should also compile the good quizzes.

@eriq-augustine (Collaborator) left a comment:

Getting pretty close.

Try to keep maintainability (which includes simplicity and readability) in mind when working on this.

Also, make sure to run the CI in your fork.

@ammarokla123 (Author):

CI run in my fork: https://github.com/ammarokla123/quiz-composer/actions/runs/22438951696

All tests passed

@eriq-augustine (Collaborator) left a comment:

Getting pretty close.
There are still some complexity issues,
but you can see the design really getting good.

@eriq-augustine (Collaborator) left a comment:

Getting closer.

Still, complexity is the biggest issue in your PR.
Really slow down and ask yourself how direct you are being.

KISS - Keep It Super Simple
(My version is a little nicer than the standard version.)

Comment on lines +59 to +64
if (not os.path.exists(question_path)):
    raise ValueError(f"Provided path '{question_path}' does not exist.")

if (not os.path.isfile(question_path)):
    raise ValueError(f"Provided path '{question_path}' is not a file.")

Collaborator:

Seems like duplicate work with make_from_question_with_args().

I would just do this here (and update make_with_args()/make_with_path()) to follow your pattern.

def make_from_question(question, out_dir = None,
        answer_key = False, skip_tex = False, skip_pdf = False):
    if (out_dir is None):
        out_dir = quizcomp.util.dirent.get_temp_path(prefix = 'quizcomp_pdf_question_', rm = False)
Collaborator:

You have to make some sort of log so the user knows where the output is being written.

    return make_from_question(question, **kwargs)

def make_from_question(question, out_dir = None,
        answer_key = False, skip_tex = False, skip_pdf = False):
Collaborator:

This should accept kwargs (since you are passing a splat elsewhere).
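A minimal sketch of what accepting extra keyword arguments might look like, using names from the review snippets (the bodies here are placeholders, not the project's real logic):

```python
# Sketch only: names come from the review snippets; real bodies differ.

def make_from_question_with_args(question, args):
    # Collect the CLI arguments into a dict and forward them as a splat.
    kwargs = {
        'out_dir': getattr(args, 'out_dir', None),
        'answer_key': getattr(args, 'answer_key', False),
    }
    return make_from_question(question, **kwargs)

def make_from_question(question, out_dir = None,
        answer_key = False, skip_tex = False, skip_pdf = False,
        **kwargs):
    # Accepting **kwargs means callers that splat a larger dict of
    # options do not raise TypeError on keys this function ignores.
    return {
        'question': question,
        'out_dir': out_dir,
        'answer_key': answer_key,
    }
```

The `**kwargs` catch-all keeps call sites that build option dicts decoupled from this function's exact signature.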


    return parser

def set_cli_question_args(parser):
Collaborator:

!

Share code with the existing args function.
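One shape the code-sharing suggestion could take is a common helper that both argument setups call. Only `set_cli_question_args` comes from the snippet above; the helper name and the flags are assumptions for illustration:

```python
import argparse

def _set_common_cli_args(parser):
    # Flags shared by both the quiz and question CLIs (names illustrative).
    parser.add_argument('--outdir', default = None,
        help = 'Directory to write output into.')
    parser.add_argument('--answer-key', action = 'store_true',
        help = 'Also generate an answer key.')
    return parser

def set_cli_question_args(parser):
    # Reuse the common flags instead of re-declaring them here.
    _set_common_cli_args(parser)
    parser.add_argument('path', help = 'Path to a question.json file.')
    return parser
```

With this layout, adding a shared flag later only touches the helper.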

local base_dir="$2"
local case_filename="$3"
local cli_module="$4"
local create_fail_message="$5"
Collaborator:

Don't need this.
The path and case type will tell the user what they need.

local question_pass=0
local question_fail=0
run_tests "question" "${GOOD_QUESTIONS_DIR}" "question.json" "quizcomp.cli.pdf.create_question" \
"question PDF creation failed." "missing PDF output." question_pass question_fail || return $?
Collaborator:

Why exit early?


echo ''
echo "TeX compilation test summary: pass=${num_pass}, fail=${num_fail}, total=${total_count}"
echo "Output directory: ${OUTPUT_DIR}"
Collaborator:

Mention the output directory early, not at the end.
(This is especially important if you are exiting early (which I don't want you to do here).)


### Optional TeX Compilation Tests

This optional script validates end-to-end TeX/PDF generation for good test fixtures.
Collaborator:

s/good //


### Optional TeX Compilation Tests

This optional script validates end-to-end TeX/PDF generation for good test fixtures.
Collaborator:

You mention a script, but don't make it concrete for two more paragraphs.
No need to talk about it as a "script".


### Optional TeX Compilation Tests

This optional script validates end-to-end TeX/PDF generation for good test fixtures.
Collaborator:

"test fixtures" is too abstract.
Remember, all users (regardless of experience) have access to the readme.

