rehaul appliance to use podman containers #37
Conversation
Re the RCE API: it's working as long as TLS is either valid or disabled (as expected). FWIW, if there are any errors you can usually see them in the devtools console in the browser.

Re LE support: the LE confconsole plugin does not like the pod holding port 80, since it doesn't know how to bring it down for the challenge. Should be easy to update the plugin so it knows how to bring down the pod itself.

Re TKLBAM: profile changes won't cut it, as the Postgres-related code is actually in tklbam itself and has some hardcoded assumptions. I looked into "shimming" that but it wasn't very practical... So the options are:

Downsides for 2 are e.g. possible inconsistencies when restoring to different versions, as mentioned before, and 3 is basically 1 but more hacky, so I'm tending towards 1.

Alternatively, we might want to have a compatibility layer for DBs, so that after they are dumped they are treated just like the rest of the FS and not specially as they are currently. This would make supporting different scenarios, like backing up containerized DBs or even remote DBs, easier. But it might be too ambitious. Anyway, just an idea I had.

So it looks like there may be changes to confconsole and tklbam needed, but the Canvas part itself shouldn't need more work, other than maybe porting some trivial improvements from the new Odoo appliance based on feedback there. What do you think?
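To make the "bring down the pod itself" idea above a bit more concrete, here is a minimal sketch of a plugin-side helper. It assumes the plugin shells out to podman and that the pod is named `canvas`; both are assumptions, not something the current appliance defines:

```python
import subprocess

POD_NAME = "canvas"  # hypothetical pod name; the real appliance may use something else

def stop_pod(pod=POD_NAME):
    """Stop the whole pod so port 80 is free for the HTTP-01 challenge."""
    subprocess.run(["podman", "pod", "stop", pod], check=True)

def start_pod(pod=POD_NAME):
    """Bring the pod back up once the certificate has been issued."""
    subprocess.run(["podman", "pod", "start", pod], check=True)

def with_pod_down(get_cert, pod=POD_NAME):
    """Run get_cert() with the pod stopped, restarting it even on failure."""
    stop_pod(pod)
    try:
        return get_cert()
    finally:
        start_pod(pod)
```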
Apologies for the radio silence @a3s7p...
Perfect.
IIRC by default the plugin checks what is listening on port 80. If it's a "known" service/process (currently only Apache, Nginx and I think Tomcat) then it stops it and launches our custom mini server - "add water" :) - to host the challenge. Once that works, it restarts the server that was using the port previously.

So if I'm right and I understand correctly, it should be pretty easy to update it to work with an app in a podman container. I.e. add podman (and perhaps docker too?) to the existing "known" services and it should be able to handle the rest. Although for appliances that are spread across multiple containers, it'd probably be better (quicker?) to just restart the container that is listening on port 80. I imagine that would make it quicker to restart too - although that's a moot point for single "all-in-one" containers.

Another option would be to use a reverse proxy on the host (rather than binding the container to external ports). If we used Nginx on the host as a reverse proxy, it should "just work" without any changes to confconsole, shouldn't it? I think we briefly discussed that previously (although not in the context of TLS certs) and there were some reasons why it was easier to use Nginx in a container. But considering the "web-app" restart time as well, it's worth some thought/discussion?
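For the multi-container case, here is a rough sketch of the "just restart the container that is listening on port 80" variant, again shelling out to podman with simplified output parsing (none of this is confconsole's actual API):

```python
import subprocess

def running_containers():
    """Names of running containers."""
    out = subprocess.run(["podman", "ps", "--format", "{{.Names}}"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()

def publishes_host_port(name, port=80):
    """True if the container publishes the given host port.

    'podman port NAME' prints lines like '80/tcp -> 0.0.0.0:80'.
    """
    out = subprocess.run(["podman", "port", name],
                         capture_output=True, text=True, check=True)
    return any(line.rsplit(":", 1)[-1].strip() == str(port)
               for line in out.stdout.splitlines())

def find_port80_container():
    """Return the name of the container holding port 80, if any."""
    for name in running_containers():
        if publishes_host_port(name):
            return name
    return None

# The plugin would then 'podman stop NAME' before running the challenge
# and 'podman start NAME' afterwards, instead of stopping the whole pod.
```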
Yeah, IMO the way that the TKLBAM DB management works is not ideal. I still need to circle back to the TKLBAM update, but that's a matter for discussion elsewhere, another time. TBH, I'm not completely clear what you mean by shimming, but it doesn't really matter...
The TKLBAM update really needs to happen ASAP. So writing new code for the existing TKLBAM (1) feels like a sideways step and possibly wasted effort in the longer term. Although if it were written with reuse in a future TKLBAM major release in mind, that would be less likely. Perhaps there is a middle ground though? I.e. a hook (3), but written with future reuse in an updated/rewritten TKLBAM in mind as well? FYI hooks only need to be executable and accept the relevant arguments (see the TKLBAM hook docs for more details). So perhaps a hook could be written as 2 separate components: the actual desired functionality, and an interface for the TKLBAM hook requirements - either in a single script, or separate ones.
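As a very rough illustration of that two-component split: reusable dump/restore helpers plus a thin entry point that maps the hook arguments onto them. The container name, dump path, and the exact (operation, phase) argument convention are all assumptions here; the TKLBAM hook docs define the real interface:

```python
#!/usr/bin/env python3
"""Sketch of a TKLBAM hook split into two parts: reusable DB helpers plus a
thin interface that maps the hook arguments onto them."""
import os
import subprocess
import sys

CONTAINER = "canvas-postgres"         # hypothetical container name
DUMP_DIR = "/var/backups/canvas-db"   # hypothetical path, kept inside the backup scope

def dump_db():
    """Dump the containerized Postgres to the host filesystem before backup."""
    os.makedirs(DUMP_DIR, exist_ok=True)
    with open(os.path.join(DUMP_DIR, "canvas.sql"), "w") as f:
        subprocess.run(["podman", "exec", CONTAINER, "pg_dumpall", "-U", "postgres"],
                       stdout=f, check=True)

def restore_db():
    """Feed the dump back into the containerized Postgres after restore."""
    with open(os.path.join(DUMP_DIR, "canvas.sql")) as f:
        subprocess.run(["podman", "exec", "-i", CONTAINER, "psql", "-U", "postgres"],
                       stdin=f, check=True)

def main(operation, phase):
    # Assumed convention: operation in {backup, restore}, phase in {pre, post, ...}
    if (operation, phase) == ("backup", "pre"):
        dump_db()
    elif (operation, phase) == ("restore", "post"):
        restore_db()

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```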
IIRC that is somewhat how it works already. The DB is dumped to the filesystem and it's part of the backup. Although the dump occurs DB by DB and table by table, so specific DBs and/or tables can be backed up/restored without needing to back up/restore everything. I'm not sure how much that is used, but I do know that some users do use it currently - not sure about Canvas specifically. IMO it would be ideal if the existing functionality (i.e. DBs/tables separated) could be retained. We could cut corners for now on specific apps, but if we plan to leverage podman in more appliances in future, then a "proper" solution now might be worth the effort?
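If the per-database granularity is worth keeping, the dump step sketched above could be done database by database rather than with pg_dumpall, so each DB lands as its own file in the backup. Again, the container name and path are hypothetical:

```python
import os
import subprocess

CONTAINER = "canvas-postgres"  # hypothetical container name

def list_databases():
    """Non-template database names, queried inside the container."""
    out = subprocess.run(
        ["podman", "exec", CONTAINER, "psql", "-U", "postgres", "-At",
         "-c", "SELECT datname FROM pg_database WHERE NOT datistemplate;"],
        capture_output=True, text=True, check=True)
    return out.stdout.split()

def dump_each(dump_dir="/var/backups/canvas-db"):
    """One dump file per database, so individual DBs stay restorable."""
    os.makedirs(dump_dir, exist_ok=True)
    for db in list_databases():
        with open(os.path.join(dump_dir, f"{db}.sql"), "w") as f:
            subprocess.run(["podman", "exec", CONTAINER, "pg_dump", "-U", "postgres", db],
                           stdout=f, check=True)
```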
Oops... Apologies for the big reply...
In testing this, my guess is that the exception is too big to display, so it throws a dialog exception, which is then caught and an attempt is made to display it, but that includes the inner exception, and so on recursively.
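If that guess is right, the failure mode would look roughly like the sketch below; the dialog interface here is made up, the point is only the cycle and where it could be broken:

```python
import sys

MAX_LEN = 2000  # arbitrary cap; the real limit depends on the dialog backend

def show_error(dialog, exc):
    """Suspected cycle: the error text is too big, displaying it raises a new
    exception that wraps the old one, the handler tries to display that too,
    and so on. Truncating the text and never re-entering the handler breaks it."""
    text = str(exc)[:MAX_LEN]
    try:
        dialog.msgbox(text)
    except Exception:
        # Last resort: don't feed the display failure back into the dialog.
        print(text, file=sys.stderr)
```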