In my opinion, nyuu suffers from a few problems that would be fixed if it had a "server" (i.e. persistent) mode, instead of what I'd call its current ephemeral nature of being invoked once per nzb one wants to generate:
- one can't tell directly from the file system whether it succeeded, because a failed run leaves a partial nzb behind. One can check the nzb for a closing tag, or monitor the exit code of the process itself, but leaving a partial nzb on failure seems problematic
- one cannot stop it in the middle of processing and continue where one left off. Conceptually (I haven't tried), one could SIGSTOP/SIGCONT the process, but that wouldn't let one reconfigure the "server" settings
- if one loses the nzb, one cannot regenerate it, so the info is lost
- if one runs nyuu in a loop, I've seen it have trouble reconnecting on subsequent iterations (I've had to set my retry count high)
There are possibly other reasons, but those are the ones I've run into.
Therefore, I believe nyuu should have a server mode in which it runs persistently: one can add "jobs" (directory / subject / poster / nzb password / perhaps others), and nyuu stores the info in a DB (say SQLite) and updates it as it goes. Since the DB would hold all the info, it would be easy to generate the nzb file only on completion. If the process has to be stopped mid-job (or crashes), it can be restarted where it left off. And since the data is stored in a local DB, one can regenerate the nzb whenever needed.
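To make the idea concrete, here is a minimal sketch of the kind of state such a server mode might keep. This is purely illustrative: the table and column names are hypothetical and not part of nyuu's actual code; the only assumption is that each posted article gets a Message-ID, which is what an nzb records.

```python
import sqlite3

def open_job_db(path=":memory:"):
    # Hypothetical schema: one row per job, one row per article (file part).
    db = sqlite3.connect(path)
    db.executescript("""
        CREATE TABLE IF NOT EXISTS jobs (
            id        INTEGER PRIMARY KEY,
            directory TEXT NOT NULL,
            subject   TEXT NOT NULL,
            poster    TEXT NOT NULL,
            status    TEXT NOT NULL DEFAULT 'active'  -- 'active' or 'done'
        );
        CREATE TABLE IF NOT EXISTS articles (
            id         INTEGER PRIMARY KEY,
            job_id     INTEGER NOT NULL REFERENCES jobs(id),
            filename   TEXT NOT NULL,
            part       INTEGER NOT NULL,
            message_id TEXT,                          -- filled in once posted
            uploaded   INTEGER NOT NULL DEFAULT 0
        );
    """)
    return db

def mark_uploaded(db, article_id, message_id):
    # Persisting the Message-ID is what lets the nzb be (re)generated
    # from the DB at any time, even after a crash or restart.
    db.execute("UPDATE articles SET uploaded = 1, message_id = ? WHERE id = ?",
               (message_id, article_id))
    db.commit()

def job_complete(db, job_id):
    # The nzb would only be written once every article of the job is uploaded,
    # so a partial nzb never appears on disk.
    (remaining,) = db.execute(
        "SELECT COUNT(*) FROM articles WHERE job_id = ? AND uploaded = 0",
        (job_id,)).fetchone()
    return remaining == 0
```

With state like this, resuming after a stop is just a matter of selecting the articles where `uploaded = 0`.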
And since the nyuu process will be persistent between "jobs", one doesn't have a problem reconnecting between them, as the network connections will be reused, just like they are reused between parts within a single job.
It could also enable new forms of obfuscation. An end user can already obfuscate the files themselves (say, requiring par2 and the nzb to put them back together), but today all files from a single job are posted together (especially when using a single server). With a persistent process and multiple jobs (even hundreds), one could interleave the upload across all the jobs, making it much harder (without the nzb) to associate files with each other.
e.g. upload file 1 of job 1, then file 1 of job 2, then file 1 of job 3, and so on;
or, even better: pick a random job that is still active, then pick a random file not yet uploaded from that job.
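The random variant above can be sketched in a few lines. This is a toy illustration with hypothetical data structures (a dict mapping job id to its set of pending files), not nyuu code:

```python
import random

def pick_next(jobs, rng=random):
    """Pick a random still-active job, then a random not-yet-uploaded file
    from it. `jobs` maps job id -> set of pending filenames. Returns
    (job_id, filename), or None once every job is drained."""
    active = [j for j, pending in jobs.items() if pending]
    if not active:
        return None
    job = rng.choice(active)
    # sorted() only makes the choice reproducible under a seeded RNG
    filename = rng.choice(sorted(jobs[job]))
    jobs[job].discard(filename)
    return job, filename
```

Because each pick is uniform over the active jobs, the posting order on the wire interleaves articles from unrelated jobs, so the order alone no longer groups a job's files together.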