[FEAT] Allow more granular volume mounting #117

@davidnewcomb

Description

Is this a new feature request?

  • I have searched the existing issues

Wanted change

I wanted to change the volume mounts to:

    volumes:
      - ./ENV/ssh/sshd:/config/sshd
      - ./ENV/ssh/ssh_host_keys:/config/ssh_host_keys
      - ./mounts/ssh/logs:/config/logs

but unfortunately the startup script uses the existence of the directory to decide whether it needs to create the directory's contents.

In init-openssh-server-config/run:42, you check for the existence of /config/sshd/sshd_config, not whether the sshd folder exists. That works here, and ./ENV/ssh/sshd is filled properly. The same is true for the logs.

The problem is with ssh_host_keys. On init-openssh-server-config/run:48, it tests if [[ ! -d /config/ssh_host_keys ]];. Docker has already created the folder, but it is empty, so the test never fires and the keys are never created. The container starts, but sshd does not accept connections.
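As a minimal sketch of why that guard never fires (the path and test are taken from the description above; the real run script has more around this):

```shell
#!/bin/sh
# Simulate the bind mount: docker pre-creates the mount point as an
# empty directory before the init script runs.
CONFIG=$(mktemp -d)
mkdir -p "$CONFIG/ssh_host_keys"

# The current guard keys on the directory itself...
if [ ! -d "$CONFIG/ssh_host_keys" ]; then
    # ...so this branch is never reached: the directory exists (just empty)
    # and no host keys are ever generated.
    echo "generating host keys"
fi

ls -A "$CONFIG/ssh_host_keys"   # prints nothing: the folder is still empty
rm -rf "$CONFIG"
```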

Can you change that bit to test for the existence of ssh_host_rsa_key.pub instead? Or, if you didn't want to touch the key file, create some .dos-created file flag to test for instead of using the directory?

Reason for change

My ENV folders are just configuration that is kept and tweaked. The mounts folders are connected to cloud storage buckets. The two are treated very differently, and having them both in one place means loads of extra work.

Proposed code change

Can you change that bit to test for the existence of ssh_host_rsa_key.pub instead? Or, if you didn't want to touch the key file, create some .dos-created file flag to test for instead of using the directory?
There are probably a couple of options on the table for a solution.
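One way the fix could look, keying on a file inside the mount rather than the directory itself (the key-file name and marker-flag name are just illustrations of the two options above, not the project's actual code):

```shell
#!/bin/sh
# Option 1: test for a generated key file instead of the directory, so an
# empty bind-mounted folder still triggers generation.
CONFIG=$(mktemp -d)
mkdir -p "$CONFIG/ssh_host_keys"          # docker pre-creates the empty mount

if [ ! -f "$CONFIG/ssh_host_keys/ssh_host_rsa_key.pub" ]; then
    : > "$CONFIG/ssh_host_keys/ssh_host_rsa_key.pub"   # stand-in for ssh-keygen
    # Option 2: alternatively, drop a marker flag after generating and test
    # for that flag above instead of touching the key files themselves.
    : > "$CONFIG/ssh_host_keys/.keys-created"
fi

ls -A "$CONFIG/ssh_host_keys"   # the key file (and marker) now exist
rm -rf "$CONFIG"
```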

Metadata

Assignees: none
Labels: enhancement (New feature or request)
Milestone: none
Status: Done