The following subsections from the General section should be performed in this order:
- SSH configuration
- Ubuntu - Synchronize time with chrony
- Update system timezone
- Correct DNS resolution
- Generate Gmail App Password
- Configure Postfix Server to send email through Gmail
- Mail notifications for SSH dial-in
Add the following mount points to /etc/fstab:
192.168.0.114:/mnt/tank1/data /home/sitram/data nfs rw 0 0
192.168.0.114:/mnt/tank2/media /home/sitram/media nfs rw 0 0
For a while I used the Docker Engine from the Ubuntu apt repository, until a container stopped working because it needed the latest version, which was not yet available. I decided to switch from Ubuntu's apt version of Docker to the official one from here and it has been working great so far.
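Before relying on these entries, the field layout can be sanity-checked and the shares mounted. A minimal sketch, assuming the paths above; the awk check only verifies that each line has the six whitespace-separated fields fstab expects, and the mount steps are commented out because they need the NAS to be reachable:

```shell
# Validate the new fstab lines have the six fields mount(8) expects.
printf '%s\n' \
  '192.168.0.114:/mnt/tank1/data /home/sitram/data nfs rw 0 0' \
  '192.168.0.114:/mnt/tank2/media /home/sitram/media nfs rw 0 0' |
awk 'NF != 6 { bad = 1 } END { print (bad ? "malformed" : "ok") }'
# sudo mkdir -p /home/sitram/data /home/sitram/media
# sudo mount -a          # mount everything listed in /etc/fstab
# findmnt -t nfs         # confirm both NFS shares are mounted
```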
I launch all containers from a single docker-compose file called docker-compose.yml which I store in /home/sitram/data. Together with the yaml file, I store a .env file which contains the stack name:
echo COMPOSE_PROJECT_NAME=serenity >> /home/sitram/data/.env
The configuration of each container is stored in /home/sitram/docker, in a folder named after each container. I have a job running on the host server which periodically creates a backup of this VM to a RAID 1 array, so I should be protected in case of some failures.
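For context, the per-container snippets later on this page are fragments of that single docker-compose.yml. A sketch of how they fit together (service bodies elided; the guacamole network is the one referenced by the guacd and guacamole services further down, and the bridge driver is an assumption):

```yaml
# Skeleton of /home/sitram/data/docker-compose.yml; each service
# fragment on this page is pasted under "services:".
services:
  watchtower:
    # ...
  heimdall:
    # ...
  guacd:
    # ...
networks:
  guacamole:
    driver: bridge
```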
The average amount of data reported by Proxmox is between 200 KB and 350 KB a day. Because /home/sitram/docker is located on the Hercules root partition, which is allocated on Serenity's SSD, this causes wear on the SSD over time.
To reduce this wear, the steps below describe how I moved the contents of /home/sitram/docker to the tank1 pool on my NAS.
- Create a backup of the Hercules VM in Proxmox before the entire process starts, in order to be able to undo all changes in case something breaks.
- Create the docker_data share in TrueNAS
cd /mnt/tank1
mkdir docker_data
chown sitram:sitram docker_data
- Mount the docker_data share in a temporary location on the Hercules VM
mkdir /home/sitram/docker_data
sudo mount -t nfs 192.168.0.114:/mnt/tank1/docker_data /home/sitram/docker_data/
- Stop the VMs which depend on services provided by the Hercules VM
ssh -t sitram@wordpress.local sudo shutdown -h now
ssh -t sitram@nextcloud.local sudo shutdown -h now
- Stop the docker service and check that the service is inactive
sudo systemctl stop docker.service
sudo systemctl status docker.service
- Copy the contents of /home/sitram/docker/ to /home/sitram/docker_data/, keeping owners and permissions intact. Depending on the amount of data the containers have accumulated over time, this is the longest step. Progress can be checked periodically with the watch command. Checking how many files are left to be copied can be done with the second-to-last diff command. Once the copy command is done, check differences between the two folders using the final diff command.
sudo cp -rp /home/sitram/docker/. /home/sitram/docker_data/
watch "sudo du -hs docker ; sudo du -hs docker_data"
diff -y --suppress-common-lines <( sudo find ~/docker -maxdepth 1 -type d | while read -r dir; do printf "%s:\t" "$dir"; sudo find "$dir" -type f | wc -l; done | sort -n -r -k2 ) <( sudo find ~/docker_data -maxdepth 1 -type d | while read -r dir; do printf "%s:\t" "$dir"; sudo find "$dir" -type f | wc -l; done | sort -n -r -k2 )
sudo diff -qr --no-dereference /home/sitram/docker/ /home/sitram/docker_data/
- Rename the existing docker folder to docker_bkp_02.10.2024
sudo mv docker docker_bkp_02.10.2024
- Unmount docker_data and check that the folder is no longer mounted.
sudo umount docker_data
- Add the following line to /etc/fstab to mount the share automatically at startup.
...
192.168.0.114:/mnt/tank1/docker_data /home/sitram/docker nfs rw 0 0
...
- Mount the new share
sudo mount -a
- Start the docker service and check the logs of the service. If there are no errors reported, check for errors in the log of every container.
sudo systemctl start docker.service
sudo systemctl status docker.service
Identify which Docker-related packages are installed on your system
dpkg -l | grep -i docker
Remove any Docker-related packages installed from the Ubuntu repository. This will not remove images, containers, volumes, or user-created configuration files on your host.
sudo apt-get purge -y docker-engine docker docker.io docker-ce docker-ce-cli docker-compose-plugin
sudo apt-get autoremove -y --purge docker-engine docker docker.io docker-ce docker-compose-plugin
If you wish to delete all images, containers, and volumes, run the following commands:
sudo rm -rf /var/lib/docker /etc/docker
sudo rm /etc/apparmor.d/docker
sudo groupdel docker
sudo rm -rf /var/run/docker.sock
Update the apt package index and install packages to allow apt to use a repository over HTTPS:
sudo apt-get update
sudo apt-get install \
ca-certificates \
curl \
gnupg \
lsb-release
Add Docker’s official GPG key:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
Use the following command to set up the repository:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Update the apt package index:
sudo apt-get update
Install the latest version of Docker Engine
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
Verify that the Docker Engine installation is successful by running the hello-world image
sudo docker run hello-world
I keep my containers updated using Watchtower. It runs every night and sends notifications over Telegram.
The container has:
- a volume mapped to /var/run/docker.sock, used to access Docker via Unix sockets
Below is the docker-compose I used to launch the container.
#Watchtower every night with telegram notification - https://containrrr.dev/watchtower/
#Cron Expression format - https://pkg.go.dev/github.com/robfig/cron@v1.2.0#hdr-CRON_Expression_Format
watchtower:
image: containrrr/watchtower:latest
container_name: watchtower_schedule_telegram
volumes:
- /var/run/docker.sock:/var/run/docker.sock
restart: unless-stopped
command:
--cleanup
--include-stopped
--notifications shoutrrr
--notification-url "xxxxxxx"
--schedule "0 0 0 * * *"
I use Heimdall as a web portal for managing all the services running on my HomeLab.
The container has:
- a volume mapped to /home/sitram/docker/heimdall, used to store the configuration of the application
Below is the docker-compose I used to launch the container.
#Heimdall - Web portal for managing home lab services - https://hub.docker.com/r/linuxserver/heimdall
heimdall:
image: ghcr.io/linuxserver/heimdall
container_name: heimdall
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Bucharest
volumes:
- /home/sitram/docker/heimdall:/config
ports:
- 81:80
- 441:443
healthcheck:
test: wget --no-verbose --tries=1 --spider http://192.168.0.101:81 || exit 1
interval: 30s
timeout: 10s
retries: 5
restart: unless-stopped
I use Portainer as a web interface for managing my docker containers. It helps me check container logs, log in to a shell inside a container and perform various other debugging activities.
The container has:
- a volume mapped to /home/sitram/docker/portainer, used to store the configuration of the application
- a volume mapped to /var/run/docker.sock, used to access Docker via Unix sockets
Below is the docker-compose I used to launch the container.
#Portainer - a web interface for managing docker containers
portainer:
image: portainer/portainer-ce:alpine
container_name: portainer
command: -H unix:///var/run/docker.sock
restart: always
ports:
- 9000:9000
- 8000:8000
healthcheck:
test: wget --no-verbose --tries=1 --spider http://192.168.0.101:9000 || exit 1
interval: 30s
timeout: 10s
retries: 5
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /home/sitram/docker/portainer:/data
I use Calibre as a book management application. My book library is stored in the mounted volume.
The container has:
- a volume mapped to /home/sitram/docker/calibre, used to store the configuration of the application
Below is the docker-compose I used to launch the container.
#Calibre - Books management application - https://hub.docker.com/r/linuxserver/calibre
calibre:
image: ghcr.io/linuxserver/calibre:latest
container_name: calibre
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Bucharest
- UMASK_SET=022 #optional
volumes:
- /home/sitram/docker/calibre:/config
ports:
- 8080:8080
- 8081:8081
- 9090:9090
healthcheck:
test: wget --no-verbose --tries=1 --spider http://192.168.0.101:8080 || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
restart: unless-stopped
I use Calibre-web as a web app providing a clean interface for browsing, reading and downloading eBooks using an existing Calibre database.
The container has:
- a volume mapped to /home/sitram/docker/calibre-web, used to store the configuration of the application
- a volume mapped to /home/sitram/docker/calibre/Calibre Library, where the Calibre database is located
Below is the docker-compose I used to launch the container.
#Calibre-web - web app providing a clean interface for browsing, reading and downloading eBooks using an existing Calibre database. - https://hub.docker.com/r/linuxserver/calibre-web
calibre-web:
image: ghcr.io/linuxserver/calibre-web
container_name: calibre-web
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Bucharest
- DOCKER_MODS=linuxserver/calibre-web:calibre
volumes:
- /home/sitram/docker/calibre-web:/config
- "/home/sitram/docker/calibre/Calibre Library:/books"
ports:
- 8086:8083
healthcheck:
test: curl -f http://192.168.0.101:8086 || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
restart: unless-stopped
I use qBitTorrent as my main torrent client, accessible to the entire LAN network. I use an older version (14.3.9) because the latest image causes some issues which I couldn't figure out how to solve.
The container has:
- a volume mapped to /home/sitram/docker/qbittorrent, used to store the configuration of the application
- a volume mapped to /home/sitram/media/torrents, where all downloaded torrents are stored to preserve the life of the host SSD
- a volume mapped to /home/sitram/media/torrents/torrents, where I can add any .torrent file and it will be automatically started by the client
Below is the docker-compose I used to launch the container.
#qBitTorrent - Torrent client - https://hub.docker.com/r/linuxserver/qbittorrent/
# 2022.07.08 - Latest docker image caused torrents to go into error and wouldn't download.
qbittorrent:
image: lscr.io/linuxserver/qbittorrent:14.3.9
#image: lscr.io/linuxserver/qbittorrent:latest
container_name: qbittorrent
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Bucharest
- WEBUI_PORT=9093
volumes:
- /home/sitram/docker/qbittorrent:/config
- /home/sitram/media/torrents:/downloads
- /home/sitram/media/torrents/torrents:/watch
ports:
- 51413:51413
- 51413:51413/udp
- 9093:9093
healthcheck:
test: curl -f http://192.168.0.101:9093 || exit 1
interval: 30s
timeout: 10s
retries: 5
restart: unless-stopped
I use Jackett as a proxy server: it translates queries from apps (Sonarr, SickRage, CouchPotato, Mylar, etc.) into tracker-site-specific HTTP queries, parses the HTML response, then sends results back to the requesting software.
The container has:
- a volume mapped to /home/sitram/docker/jackett, used to store the configuration of the application
- a volume mapped to /home/sitram/media/torrents, where all downloaded torrents are stored to preserve the life of the host SSD
Below is the docker-compose I used to launch the container.
# Jackett - works as a proxy server: it translates queries from apps (Sonarr, SickRage, CouchPotato, Mylar, etc) into tracker-site-specific http queries, parses the html response, then sends results back to the requesting software. - https://ghcr.io/linuxserver/jackett
jackett:
image: ghcr.io/linuxserver/jackett
container_name: jackett
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Bucharest
- AUTO_UPDATE=true #optional
volumes:
- /home/sitram/docker/jackett:/config
- /home/sitram/media/torrents:/downloads
ports:
- 9117:9117
healthcheck:
test: curl -f http://192.168.0.101:9117 || exit 1
interval: 30s
timeout: 10s
retries: 5
restart: unless-stopped
I use Sonarr as a web application to monitor multiple sources for favourite TV shows.
The container has:
- a volume mapped to /home/sitram/docker/sonarr, used to store the configuration of the application
- a volume mapped to /home/sitram/media/tvseries, used to store all TV shows
- a volume mapped to /home/sitram/media/torrents, where all downloaded torrents are stored to preserve the life of the host SSD
Below is the docker-compose I used to launch the container.
#Sonarr - Monitor multiple sources for favourite tv shows - https://ghcr.io/linuxserver/sonarr
sonarr:
image: ghcr.io/linuxserver/sonarr
container_name: sonarr
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Bucharest
- UMASK_SET=022 #optional
volumes:
- /home/sitram/docker/sonarr:/config
- /home/sitram/media/tvseries:/tv
- /home/sitram/media/torrents:/downloads
ports:
- 8989:8989
healthcheck:
test: curl -f http://192.168.0.101:8989 || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
depends_on:
- jackett
- qbittorrent
restart: unless-stopped
I use Radarr as a web application to monitor multiple sources for favourite movies.
The container has:
- a volume mapped to /home/sitram/docker/radarr, used to store the configuration of the application
- a volume mapped to /home/sitram/media/movies, used to store all the movies
- a volume mapped to /home/sitram/media/torrents, where all downloaded torrents are stored to preserve the life of the host SSD
Below is the docker-compose I used to launch the container.
#Radarr - A fork of Sonarr to work with movies https://ghcr.io/linuxserver/radarr
radarr:
image: ghcr.io/linuxserver/radarr:nightly-alpine
container_name: radarr
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Bucharest
- UMASK_SET=022 #optional
volumes:
- /home/sitram/docker/radarr:/config
- /home/sitram/media/movies:/movies
- /home/sitram/media/torrents:/downloads
ports:
- 7878:7878
healthcheck:
test: curl -f http://192.168.0.101:7878 || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
depends_on:
- jackett
- qbittorrent
restart: unless-stopped
I use Bazarr as a web application companion to Sonarr and Radarr. It can manage and download subtitles based on your requirements.
The container has:
- a volume mapped to /home/sitram/docker/bazarr, used to store the configuration of the application
- a volume mapped to /home/sitram/media/movies, used to store all the movies
- a volume mapped to /home/sitram/media/tvseries, used to store all TV shows
Below is the docker-compose I used to launch the container.
#Bazarr is a companion application to Sonarr and Radarr. It can manage and download subtitles based on your requirements. - https://hub.docker.com/r/linuxserver/bazarr
bazarr:
image: ghcr.io/linuxserver/bazarr
container_name: bazarr
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Bucharest
volumes:
- /home/sitram/docker/bazarr:/config
- /home/sitram/media/movies:/movies
- /home/sitram/media/tvseries:/tv
ports:
- 6767:6767
healthcheck:
test: curl -f http://192.168.0.101:6767 || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
depends_on:
- radarr
- sonarr
restart: unless-stopped
I use Lidarr as a web application to manage my music collection.
The container has:
- a volume mapped to /home/sitram/docker/lidarr, used to store the configuration of the application
- a volume mapped to /home/sitram/media/music, used to store all the music
- a volume mapped to /home/sitram/media/torrents, where all downloaded torrents are stored to preserve the life of the host SSD
Below is the docker-compose I used to launch the container.
#Lidarr is a music collection manager for Usenet and BitTorrent users. - https://hub.docker.com/r/linuxserver/lidarr
lidarr:
image: ghcr.io/linuxserver/lidarr
container_name: lidarr
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Bucharest
volumes:
- /home/sitram/docker/lidarr:/config
- /home/sitram/media/music:/music
- /home/sitram/media/torrents:/downloads
ports:
- 8686:8686
healthcheck:
test: curl -f http://192.168.0.101:8686 || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
depends_on:
- jackett
- qbittorrent
restart: unless-stopped
I use Overseerr as a free and open source web application for managing requests for my media library. It integrates with existing services such as Sonarr, Radarr, and Plex.
The container has:
- a volume mapped to /home/sitram/docker/overseerr, used to store the configuration of the application
Below is the docker-compose I used to launch the container.
#Overseerr is a free and open source software application for managing requests for your media library.
# It integrates with your existing services, such as Sonarr, Radarr, and Plex!
# https://hub.docker.com/r/sctx/overseerr
overseerr:
image: sctx/overseerr:latest
container_name: overseerr
environment:
- LOG_LEVEL=debug
- PUID=1000
- PGID=1000
- TZ=Europe/Bucharest
ports:
- 5055:5055
healthcheck:
test: wget --no-verbose --tries=1 --spider http://192.168.0.101:5055 || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
volumes:
- /home/sitram/docker/overseerr:/app/config
restart: unless-stopped
I use GoDaddy DNS Updater, a free and open source application that uses curl and a simple shell script to monitor a sub-domain or domain and update GoDaddy DNS records.
The container has:
- a volume mapped to /home/sitram/docker/godaddy_dns_updater, used to store the logs of the application
Below is the docker-compose I used to launch the container.
#GoDaddy DNS Updater - A simple docker image that uses curl, and a simple shell script to monitor a sub-domain or domain and update GoDaddy DNS records - https://github.com/parker-hemphill/godaddy-dns-updater
godaddy_dns_updater:
image: parkerhemphill/godaddy-dns-updater
container_name: godaddy_dns_updater
environment:
- DOMAIN=sitram.eu
- API_KEY=xxx #to be filled with GoDaddy developer API key
- API_SECRET=xxx #to be filled with GoDaddy developer API key secret
- DNS_CHECK=600
- TIME_ZONE=Europe/Bucharest
volumes:
- /home/sitram/docker/godaddy_dns_updater:/tmp
restart: unless-stopped
I use SWAG - Secure Web Application Gateway, a free and open source application that sets up an Nginx webserver and reverse proxy with PHP support and a built-in certbot client that automates free SSL server certificate generation and renewal.
The container has:
- a volume mapped to /home/sitram/docker/swag, used to store the configuration of the application
Below is the docker-compose I used to launch the container.
#SWAG - Secure Web Application Gateway sets up an Nginx webserver and reverse proxy with php support and a built-in certbot client that automates free SSL server certificate generation and renewal processes. - https://ghcr.io/linuxserver/swag
swag:
image: ghcr.io/linuxserver/swag
container_name: swag
cap_add:
- NET_ADMIN
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Bucharest
- URL=xxxxx #to be filled with the subdomain
- SUBDOMAINS=wildcard
- VALIDATION=dns
- DNSPLUGIN=godaddy
- EMAIL=xxx@gmail.com # to be filled with real email
- ONLY_SUBDOMAINS=false
- STAGING=false
- DOCKER_MODS=linuxserver/mods:swag-auto-reload|linuxserver/mods:universal-wait-for-internet
volumes:
- /home/sitram/docker/swag:/config
ports:
- 443:443
- 80:80
healthcheck:
test: wget --no-verbose --tries=1 --spider --no-check-certificate https://sitram.eu || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
depends_on:
- duckdns
restart: unless-stopped
I use Plex to organize video, music and photos from personal media libraries and stream them to smart TVs, streaming boxes and mobile devices.
The container has:
- a volume mapped to /home/sitram/docker/plex/, used to store the configuration of the application
- a volume mapped to /dev/shm, used for transcoding videos. I use this approach to reduce the wear on the host SSD.
- a volume mapped to /home/sitram/media/tvseries, used to store all TV shows
- a volume mapped to /home/sitram/media/movies, used to store all the movies
- a volume mapped to /home/sitram/media/music, used to store all the music
- a volume mapped to /home/sitram/data/photos, used to store family photos
- a volume mapped to /home/sitram/media/trainings, used to store various video training materials
Below is the docker-compose I used to launch the container.
#Plex - Organizes video, music and photos from personal media libraries and streams them to smart TVs, streaming boxes and mobile devices. - https://hub.docker.com/r/plexinc/pms-docker/
plex:
image: plexinc/pms-docker:latest
container_name: plex
network_mode: host
environment:
- TZ=Europe/Bucharest
- PUID=1000
- PGID=1000
- HOSTNAME=Serenity
volumes:
- /home/sitram/docker/plex/:/config
#- /home/sitram/docker/plex/tmp:/transcode
- /dev/shm:/transcode
- /home/sitram/media/tvseries:/tvseries
- /home/sitram/media/movies:/movies
- /home/sitram/media/music:/music
- /home/sitram/data/photos:/photos
- /home/sitram/media/trainings:/trainings
# ports:
# - "32400:32400/tcp"
# - "3005:3005/tcp"
# - "8324:8324/tcp"
# - "32469:32469/tcp"
# - "1900:1900/udp"
# - "32410:32410/udp"
# - "32412:32412/udp"
# - "32413:32413/udp"
# - "32414:32414/udp"
depends_on:
- swag
restart: unless-stopped
I use a PostgreSQL database as the database server for the Guacamole container.
The container has:
- a volume mapped to /home/sitram/docker/postgres, used to store the configuration of the application and databases
Below is the docker-compose I used to launch the container.
#PostgreSQL database - https://github.com/docker-library/docs/blob/master/postgres/README.md
db_postgress:
container_name: db_postgress
image: postgres:13
user: 1000:1000
environment:
- POSTGRES_USER=xxx
- POSTGRES_PASSWORD=xxx
ports:
- 5432:5432
healthcheck:
test: pg_isready -h 192.168.0.101 -p 5432 -U sitram || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
volumes:
- /home/sitram/docker/postgres:/var/lib/postgresql/data:rw
restart: unless-stopped
I use MySQL as an open-source relational database management system to store databases for other containers, such as LibreSpeed.
The container has:
- a volume mapped to /home/sitram/docker/mysql/data, used to store the databases
- a volume mapped to /home/sitram/docker/mysql/conf, used to store custom configurations
- a volume mapped to /home/sitram/docker/mysql/logs, used to access MySQL logs
- a volume mapped to /home/sitram/docker/mysql/run, used to access the pid and sock files
Below is the docker-compose I used to launch the container.
#MySQL database - https://hub.docker.com/_/mysql?tab=description
mysql:
container_name: mysql
image: mysql
user: 1000:1000
command:
--default-authentication-plugin=mysql_native_password
cap_add:
- SYS_NICE
environment:
- MYSQL_ROOT_PASSWORD=xxx
- MYSQL_DATABASE=xxx
- MYSQL_USER=xxx
- MYSQL_PASSWORD=xxx
ports:
- 3306:3306
healthcheck:
test: mysqladmin ping -h 192.168.0.101 || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
volumes:
- /home/sitram/docker/mysql/data:/var/lib/mysql
- /home/sitram/docker/mysql/conf:/etc/mysql/conf.d
- /home/sitram/docker/mysql/logs:/var/log/mysql
- /home/sitram/docker/mysql/run:/var/run/mysqld
restart: unless-stopped
I use Adminer (formerly phpMinAdmin), a full-featured database management tool written in PHP, to connect to the MySQL container.
Below is the docker-compose I used to launch the container.
#Adminer - (formerly phpMinAdmin) is a full-featured database management tool written in PHP - https://hub.docker.com/_/adminer
adminer:
container_name: adminer
image: adminer
environment:
- ADMINER_DESIGN=nette
- ADMINER_DEFAULT_SERVER=192.168.0.101
ports:
- 8082:8080
restart: unless-stopped
I use pgAdmin as a web interface to administer PostgreSQL databases.
The container has:
- a volume mapped to /home/sitram/docker/pgadmin, used to store the configuration of the application
Below is the docker-compose I used to launch the container.
#PGAdmin - Web interface used to administer PostgreSQL - https://www.pgadmin.org/docs/pgadmin4/latest/container_deployment.html#environment-variables
pg_admin:
image: dpage/pgadmin4
container_name: pg_admin
ports:
- 82:80
environment:
- PGADMIN_DEFAULT_EMAIL=xxx@gmail.com
- PGADMIN_DEFAULT_PASSWORD=xxx
- PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION=True
- PGADMIN_CONFIG_LOGIN_BANNER="Authorised users only!"
- PGADMIN_CONFIG_CONSOLE_LOG_LEVEL=10
volumes:
- /home/sitram/docker/pgadmin:/var/lib/pgadmin
healthcheck:
test: wget --no-verbose --tries=1 --spider http://192.168.0.101:82 || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
restart: unless-stopped
I use Guacamole as a web application for accessing internal services over SSH, RDP or other protocols. It consists of two containers, a daemon and the web interface.
Below is the docker-compose I used to launch the Guacamole daemon container.
#Guacamole Daemon - http://guacamole.apache.org/doc/gug/guacamole-docker.html
guacd:
container_name: guacd
image: guacamole/guacd
environment:
- TZ=Europe/Bucharest
- GUACD_LOG_LEVEL=debug
ports:
- 4822:4822
networks:
- guacamole
restart: unless-stopped
The Guacamole web application container has:
- a volume mapped to /home/sitram/docker/guacamole, used to store the configuration of the application
Below is the docker-compose I used to launch the Guacamole web application container.
#Guacamole - web application for accesing internal services over SSH, RDP or other protocols - http://guacamole.apache.org/doc/gug/guacamole-docker.html
# 2022.07.08 - Latest docker image would not boot. I temporarily switched to version 1.4.0
guacamole:
container_name: guacamole
image: guacamole/guacamole:1.4.0
links:
- guacd
environment:
- TZ=Europe/Bucharest
- GUACD_HOSTNAME=192.168.0.101
- GUACD_PORT=4822
- POSTGRES_HOSTNAME=192.168.0.101
- POSTGRES_PORT=5432
- POSTGRES_DATABASE=guacamole_db
- POSTGRES_USER=xxx
- POSTGRES_PASSWORD=xxx
- GUACAMOLE_HOME=/custom-config
volumes:
- /home/sitram/docker/guacamole:/custom-config
networks:
- guacamole
ports:
- 8083:8080
healthcheck:
test: wget --no-verbose --tries=1 --spider http://192.168.0.101:8083/guacamole/ || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
depends_on:
- swag
- guacd
- db_postgress
restart: unless-stopped
I use Redis as an open-source, networked, in-memory key-value data store with optional durability, written in ANSI C.
The container has:
- a volume mapped to /home/sitram/docker/redis, used to store the database
- a volume mapped to /home/sitram/docker/redis/config, used to store custom configuration
Below is the docker-compose I used to launch the container.
#Redis is an open-source, networked, in-memory, key-value data store with optional durability. It is written in ANSI C.- https://hub.docker.com/_/redis
redis:
container_name: redis
image: redis
user: 1000:1000
environment:
- TZ=Europe/Bucharest
command: redis-server /usr/local/etc/redis/redis.conf
ports:
- 6379:6379
healthcheck:
test: redis-cli -h 192.168.0.101 -p 6379 ping | grep PONG
interval: 30s
timeout: 10s
retries: 5
volumes:
- /home/sitram/docker/redis:/data
- /home/sitram/docker/redis/config:/usr/local/etc/redis
restart: unless-stopped
I use LibreSpeed, a very lightweight Speedtest implemented in JavaScript, using XMLHttpRequest and Web Workers. No Flash, No Java, No Websocket, No Bullshit.
The container has:
- a volume mapped to /home/sitram/docker/librespeed, used to store the configuration of the application
Below is the docker-compose I used to launch the container.
#LibreSpeed - very lightweight Speedtest implemented in Javascript, using XMLHttpRequest and Web Workers. No Flash, No Java, No Websocket, No Bullshit. - https://hub.docker.com/r/linuxserver/librespeed
librespeed:
image: ghcr.io/linuxserver/librespeed:latest
container_name: librespeed
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Bucharest
- PASSWORD=xxx
- CUSTOM_RESULTS=true
- DB_TYPE=mysql
- DB_NAME=librespeed
- DB_HOSTNAME=192.168.0.101
- DB_USERNAME=xxx
- DB_PASSWORD=xxx
- DB_PORT=3306
volumes:
- /home/sitram/docker/librespeed:/config
ports:
- 85:80
healthcheck:
test: wget --no-verbose --tries=1 --spider http://192.168.0.101:85 || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
depends_on:
- mysql
restart: unless-stopped
I use Authelia, an open source authentication and authorization server protecting modern web applications by collaborating with reverse proxies.
The container has:
- a volume mapped to /home/sitram/docker/authelia, used to store the configuration of the application
Below is the docker-compose I used to launch the container.
#Authelia - an open source authentication and authorization server protecting modern web applications by collaborating with reverse proxies - https://www.authelia.com/docs/
authelia:
image: authelia/authelia:latest
container_name: authelia
user: 1000:1000
environment:
- TZ=Europe/Bucharest
volumes:
- /home/sitram/docker/authelia:/config
ports:
- 9092:9092
depends_on:
- redis
restart: unless-stopped
I use Portfolio Performance, an open source tool to calculate the overall performance of an investment portfolio. I built the image based on the instructions from here.
The container has:
- a volume mapped to /home/sitram/docker/portfolio-performance/workspace, used to store the application workspace
Below is the docker-compose I used to launch the container.
#Portfolio Performance - An open source tool to calculate the overall performance of an investment portfolio.
#Self build image based on instructions from https://forum.portfolio-performance.info/t/portfolio-performance-in-docker/10062
PortfolioPerformance:
image: portfolio:latest
container_name: PortfolioPerformance
environment:
- TZ=Europe/Bucharest
- USER_ID=1000
- GROUP_ID=1000
- KEEP_APP_RUNNING=1
- DISPLAY_WIDTH=1920
- DISPLAY_HEIGHT=910
labels:
- "com.centurylinklabs.watchtower.enable=false"
volumes:
- /home/sitram/docker/portfolio-performance/workspace:/opt/portfolio/workspace
ports:
- 5800:5800
healthcheck:
test: wget --no-verbose --tries=1 --spider http://192.168.0.101:5800 || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
restart: unless-stopped
I use Audiobookshelf, a self-hosted audiobook and podcast server with mobile applications for Android and iOS, which allows listening to audiobooks remotely from the server.
The container has:
- a volume mapped to /home/sitram/media/audiobooks, used to store all the audiobooks
- a volume mapped to /home/sitram/media/podcasts, used to store all the podcasts
- a volume mapped to /home/sitram/docker/audiobookshelf/config, used to store the configuration of the application
- a volume mapped to /home/sitram/docker/audiobookshelf/metadata, used to store cache, streams, covers, downloads, backups and logs
Below is the docker-compose I used to launch the container.
#Audiobookshelf - a self-hosted audiobook and podcast server - https://www.audiobookshelf.org/
audiobookshelf:
image: ghcr.io/advplyr/audiobookshelf:latest
container_name: audiobookshelf
environment:
- TZ=Europe/Bucharest
- AUDIOBOOKSHELF_UID=1000
- AUDIOBOOKSHELF_GID=1000
volumes:
- /home/sitram/media/audiobooks:/audiobooks
- /home/sitram/media/podcasts:/podcasts
- /home/sitram/docker/audiobookshelf/config:/config
- /home/sitram/docker/audiobookshelf/metadata:/metadata
ports:
- 13378:80
healthcheck:
test: wget --no-verbose --tries=1 --spider http://192.168.0.101:13378 || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
restart: unless-stopped
I use Stirling-PDF as a self-hosted, web-based PDF manipulation tool.
The container has:
- a volume mapped to /home/sitram/docker/stirling-pdf/trainingData, used to store extra OCR languages
- a volume mapped to /home/sitram/docker/stirling-pdf/configs, used to store the configuration of the application
- a volume mapped to /home/sitram/docker/stirling-pdf/customFiles, used to store custom static files such as the app logo, by placing files in the /customFiles/static/ directory. An example of customising the app logo is placing a /customFiles/static/favicon.svg to override the current SVG. This can be used to change any images/icons/css/fonts/js etc. in Stirling-PDF.
Below is the docker-compose I used to launch the container.
#Stirling-PDF - locally hosted web based PDF manipulation tool - https://github.com/Frooodle/Stirling-PDF
stirling-pdf:
image: frooodle/s-pdf:latest
container_name: stirling-pdf
env_file:
- .env
ports:
- 8084:8080
healthcheck:
test: curl -f http://192.168.0.101:8084 || exit 1
interval: 30s
timeout: 10s
retries: 5
volumes:
- /home/sitram/docker/stirling-pdf/trainingData:/usr/share/tesseract-ocr/4.00/tessdata #Required for extra OCR languages
- /home/sitram/docker/stirling-pdf/configs:/configs
- /home/sitram/docker/stirling-pdf/customFiles:/customFiles/
environment:
- DOCKER_ENABLE_SECURITY=false
- PUID=1000
- PGID=1000
- UMASK=022
restart: unless-stopped
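With all the services defined, a tiny pre-flight check before bringing the stack up can save a failed run. A sketch, assuming the stack lives in /home/sitram/data as described above; the compose commands are commented out since they need the Docker Engine:

```shell
# Check the compose file and its .env companion exist before launching.
stack=/home/sitram/data   # directory holding docker-compose.yml and .env
for f in docker-compose.yml .env; do
  if [ -f "$stack/$f" ]; then echo "$f present"; else echo "$f missing"; fi
done
# docker compose --project-directory "$stack" config --quiet  # validate syntax
# docker compose --project-directory "$stack" up -d           # launch the stack
```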