Replies: 2 comments 5 replies
-
Did you try to execute your Python script alone?
4 replies
-
For reference, an issue has been opened about this.
1 reply
-
Description
I'm hitting this problem with a private gitlab-hosted repo containing circa 1500 documents that get rendered with quarto. I'm not able to share this set-up; however, the approach I'm taking is similar to what is shown publicly here, where I'm scraping some document meta-data and pushing it to files in a `data` folder, which are then published as `resources` in the rendered docs. I'm experimenting with alternative ways to work with the document meta-data and wanted to leverage the `pre-render` capability within quarto.

The gitlab-runner that is doing the rendering is based on this docker image, a Debian 12 OS, with quarto 1.5.57 on board. If I comment out the `pre-render` scripts, then things run fine. However, when I enable the `scripts/metadata-scrape.py` on the gitlab repo (similar in pattern to this scripts/metadata-scrape.py, only longer to handle custom meta-data), I get this `Argument list too long` error. Can you shed any light on why this might be happening when quarto is handling `pre-render` scripts?

Per this, I tried to set a longer command-line buffer with `ulimit -s 65536` on the VM running this dockerized gitlab-runner, and also included it in the `.gitlab-ci.yml` so that it gets applied within the runner itself (see image above), but none of these have helped.
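For what it's worth: on Linux, "Argument list too long" (`E2BIG`) means the combined size of the argv and environment passed to `exec()` exceeded `ARG_MAX`, and quarto hands the project's input file list to pre-render scripts through environment variables, so with ~1500 document paths that environment can get large. A minimal diagnostic sketch (run it as a pre-render script or in the CI job; the `QUARTO_PROJECT_INPUT_FILES` variable name is from quarto's project-scripts docs, and the byte accounting is an approximation):

```python
import os

# On Linux, exec() fails with E2BIG ("Argument list too long") when
# argv + environ together exceed ARG_MAX.
arg_max = os.sysconf("SC_ARG_MAX")

# Rough size of the current environment as the kernel sees it:
# each "KEY=VALUE" string plus its terminating NUL byte.
env_bytes = sum(len(k) + len(v) + 2 for k, v in os.environ.items())

print(f"ARG_MAX:          {arg_max} bytes")
print(f"environment size: {env_bytes} bytes")

# Quarto passes the project's input file list to pre-render scripts in an
# environment variable; with ~1500 paths it can dominate the total.
inputs = os.environ.get("QUARTO_PROJECT_INPUT_FILES", "")
print(f"input-file list:  {len(inputs)} bytes")
```

Two caveats on the `ulimit` workaround, if I understand the Linux limits correctly: `ulimit -s` does raise `ARG_MAX` (which is derived from a quarter of the stack rlimit), but only if it is in effect in the process tree that actually spawns the script inside the container, which is worth verifying with `getconf ARG_MAX` in the CI job itself; and Linux also enforces a separate per-string cap (`MAX_ARG_STRLEN`, typically 128 KiB), so if the single input-file-list variable alone exceeds that, no `ulimit` setting will help.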