[BUG] UI unavailable during long processing of Caddy logs #63
Description
Love the app so far. Great work 👍
Describe the bug
The UI is often unavailable due to log processing, and the processing doesn't seem to make any progress. This has happened most of the times I've tried to visit the UI over the last 24 hours. I've only had it set up for that long, and it has ingested about 60 MB of Caddy logs. I can't get the database size since I can't access the UI 😅
Expected behavior
The UI should remain available, and if processing is required, it should complete more quickly.
Second check: 7 minutes later, still processing.

Server resources aren't pinned either (htop):
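Since htop reports host-wide load, a single busy container can be hard to spot there; a per-container snapshot from the Docker CLI may be more telling. A sketch (`container_usage` is a hypothetical helper name, and it requires the Docker daemon to be running):

```shell
#!/bin/sh
# Sketch: per-container CPU/memory snapshot via the Docker CLI.
# `container_usage` is a hypothetical helper; container names are
# the ones from the compose file below.
container_usage() {
  docker stats --no-stream "$@"
}
# Example (not run here): container_usage loglynx caddy
```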

Desktop (please complete the following information):
- Client OS: Linux Desktop, Fedora 43
- Client Browser: Firefox
- Version: 147.0.3
Server info:
- Debian 13, loglynx and caddy running dockerized on bare metal
- 32 GB DDR4 RAM
- Intel i5-8500T, 6 cores
Additional context
Caddy logs are written to a shared Docker volume, which is also mounted into the loglynx container:
```yaml
caddy:
  build:
    context: .
    dockerfile: Dockerfile.caddy
  image: caddy-duckdns-ddns-geoip:latest
  container_name: caddy
  env_file:
    - ../env/common.env
    - ../env/permissions.env
    - ../env/duckdns.env # TOKEN defined here
  cap_add:
    - NET_ADMIN
  ports:
    - 80:80
    - 443:443
    - 443:443/udp
  volumes:
    - ../.config/caddy:/etc/caddy
    - ../data/geoip:/usr/share/geoip
    - caddy_data:/data
    - caddy_config:/config
    - caddy_logs:/var/log/caddy
  restart: unless-stopped

loglynx:
  image: k0lin/loglynx:latest
  container_name: loglynx
  restart: unless-stopped
  ports:
    - 17534:8080
  volumes:
    - ../data/loglynx-data:/data
    - ../data/geoip:/app/geoip
    - caddy_logs:/var/log/caddy
  environment:
    - DB_PATH=/data/loglynx.db
    - GEOIP_ENABLED=true
    - GEOIP_CITY_DB=/app/geoip/GeoLite2-City.mmdb
    - GEOIP_COUNTRY_DB=/app/geoip/GeoLite2-Country.mmdb
    - GEOIP_ASN_DB=/app/geoip/GeoLite2-ASN.mmdb
    - CADDY_LOG_PATH=/var/log/caddy/access.log
    - LOG_LEVEL=info
    - SERVER_PRODUCTION=true

volumes:
  caddy_data:
  caddy_config:
  caddy_logs:
```

Caddyfile snippet
```
{
    log {
        format json
        level INFO
    }
}

(json_logging) {
    log {
        output file /var/log/caddy/access.log {
            roll_size 10MiB
            roll_keep 5
        }
        format json
        level INFO
    }
}

...

# example domain, each one follows this pattern
http://loglynx.home.arpa {
    import json_logging
    reverse_proxy 192.168.86.29:17534
}
```
Size of logs at the time of the screenshot:
```
~/docker/network$ docker exec -it caddy ls -lh /var/log/caddy
total 16M
-rw------- 1 root root 1.7M Feb 12 15:46 access-2026-02-12T15-46-57.092.log.gz
-rw------- 1 root root 1.2M Feb 12 17:23 access-2026-02-12T17-23-17.048.log.gz
-rw------- 1 root root 2.0M Feb 12 20:14 access-2026-02-12T20-14-42.407.log.gz
-rw------- 1 root root 1.7M Feb 12 22:36 access-2026-02-12T22-36-11.486.log.gz
-rw------- 1 root root 1.2M Feb 13 00:11 access-2026-02-13T00-11-27.093.log.gz
-rw------- 1 root root 8.1M Feb 13 02:24 access.log
```
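For scale, the ingestion workload is roughly one JSON object per log line, so a line count across the current file plus the rotated `.gz` files gives a rough event count. The pipeline below is a sketch demonstrated on a tiny generated sample; inside the container the same `wc`/`gzip -dc` pipeline would be pointed at `/var/log/caddy` instead:

```shell
#!/bin/sh
# Demonstrate the counting pipeline on a generated two-file sample
# (stand-ins for access.log and a rotated access-*.log.gz).
tmp=$(mktemp -d)
printf '{"msg":"a"}\n{"msg":"b"}\n' > "$tmp/access.log"
printf '{"msg":"c"}\n' | gzip > "$tmp/access-2026-02-12.log.gz"

current=$(wc -l < "$tmp/access.log")
rotated=$(gzip -dc "$tmp"/access-*.log.gz | wc -l)  # gzip -dc == zcat
echo "total log lines: $((current + rotated))"
```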
