chore: add Claude personas, skills, memory, and project docs

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
This commit is contained in:
Marcel
2026-04-14 20:22:39 +02:00
parent e4719b9487
commit 3d3d4b8616
26 changed files with 12123 additions and 0 deletions

# CI with Gitea Actions
This document covers the Gitea Actions CI workflow for Familienarchiv, including the full workflow YAML, differences from GitHub Actions, and self-hosted runner provisioning.
---
## Self-Hosted Runner Provisioning
Gitea Actions requires self-hosted runners. GitHub Actions provides `ubuntu-latest` for free; on Gitea you run the runner yourself.
```bash
# On the VPS — register a Gitea Actions runner
docker run -d --name gitea-runner --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v gitea-runner-data:/data \
  -e GITEA_INSTANCE_URL=https://gitea.example.com \
  -e GITEA_RUNNER_REGISTRATION_TOKEN=<token-from-gitea-settings> \
  -e GITEA_RUNNER_NAME=vps-runner-1 \
  -e GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:20-bullseye \
  gitea/act_runner:latest
```
The runner label `ubuntu-latest` maps to the Docker image it uses -- this is how `runs-on: ubuntu-latest` in the workflow YAML continues to work unchanged.
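If you prefer not to bake labels into the registration command, act_runner also reads them from its generated config file. A sketch, assuming the default config layout -- mount the file into the container and adjust paths to your setup:

```yaml
# act_runner config.yaml sketch -- the labels section mirrors
# GITEA_RUNNER_LABELS; surrounding keys and paths are assumptions
runner:
  labels:
    - "ubuntu-latest:docker://node:20-bullseye"
    - "ubuntu-22.04:docker://ubuntu:22.04"
```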
---
## Gitea vs GitHub Actions Differences
### Context Variable Names
| GitHub Actions | Gitea Actions |
|---|---|
| `github.sha` | `gitea.sha` |
| `github.actor` | `gitea.actor` |
| `github.repository` | `gitea.repository` |
| `github.ref_name` | `gitea.ref_name` |
| `secrets.GITHUB_TOKEN` | `secrets.GITEA_TOKEN` (must be created manually) |
### Token Name Difference
```yaml
# GitHub Actions
password: ${{ secrets.GITHUB_TOKEN }}
# Gitea Actions — use a Gitea access token stored as a secret
password: ${{ secrets.GITEA_TOKEN }}
```
### Container Registry
```yaml
# GitHub Actions — GHCR
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
tags: ghcr.io/${{ github.repository }}/app:${{ github.sha }}
# Gitea Actions — Gitea Package Registry
registry: gitea.example.com
username: ${{ gitea.actor }}
password: ${{ secrets.GITEA_TOKEN }}
tags: gitea.example.com/${{ gitea.repository }}/app:${{ gitea.sha }}
```
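Put together, the login and build steps might look like this in a Gitea workflow -- a sketch using `docker/login-action@v3`, which works on Gitea the same way as the other Docker actions:

```yaml
# Sketch: registry login + image push against the Gitea Package Registry
- name: Log in to Gitea Package Registry
  uses: docker/login-action@v3
  with:
    registry: gitea.example.com
    username: ${{ gitea.actor }}
    password: ${{ secrets.GITEA_TOKEN }}
- name: Build and push image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: gitea.example.com/${{ gitea.repository }}/app:${{ gitea.sha }}
```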
---
## What Works Identically Between GitHub and Gitea Actions
- `uses: actions/checkout@v4` -- works unchanged
- `uses: actions/setup-java@v4` -- works unchanged
- `uses: actions/setup-node@v4` -- works unchanged
- `uses: actions/cache@v4` -- works unchanged
- `uses: docker/build-push-action@v5` -- works unchanged
- `container:` key for running jobs inside a Docker image -- works unchanged
- Secrets syntax `${{ secrets.MY_SECRET }}` -- works unchanged
---
## Full CI Workflow YAML
This is the complete `ci.yml` workflow, updated for Gitea with key changes highlighted.
```yaml
# Updated for Gitea — key changes highlighted
name: CI

on:
  push:
  pull_request:

jobs:
  unit-tests:
    name: Unit & Component Tests
    runs-on: ubuntu-latest # matches runner label registered above
    container:
      image: mcr.microsoft.com/playwright:v1.58.2-noble
    steps:
      - uses: actions/checkout@v4
      - name: Cache node_modules
        id: node-modules-cache # referenced by the cache-hit check below
        uses: actions/cache@v4
        with:
          path: frontend/node_modules
          key: node-modules-${{ hashFiles('frontend/package-lock.json') }}
      - name: Install dependencies
        if: steps.node-modules-cache.outputs.cache-hit != 'true'
        run: npm ci
        working-directory: frontend
      - name: Lint
        run: npm run lint
        working-directory: frontend
      - name: Run unit and component tests
        run: npm test
        working-directory: frontend
      - name: Upload screenshots
        if: always()
        uses: actions/upload-artifact@v4 # ← upgraded from v3
        with:
          name: unit-test-screenshots
          path: frontend/test-results/screenshots/

  backend-unit-tests:
    name: Backend Unit Tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          java-version: '21'
          distribution: temurin
      - name: Cache Maven repository
        uses: actions/cache@v4
        with:
          path: ~/.m2/repository
          key: maven-${{ hashFiles('backend/pom.xml') }}
          restore-keys: maven-
      - name: Run backend tests
        run: |
          chmod +x mvnw
          ./mvnw clean test
        working-directory: backend
      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v4 # ← upgraded from v3
        with:
          name: backend-test-results
          path: backend/target/surefire-reports/

  e2e-tests:
    name: E2E Tests
    runs-on: ubuntu-latest
    env:
      DOCKER_API_VERSION: "1.43"
      POSTGRES_USER: archive_user
      POSTGRES_PASSWORD: ci_db_password
      POSTGRES_DB: family_archive_db
      MINIO_ROOT_USER: minio_admin
      MINIO_ROOT_PASSWORD: ci_minio_password
      MINIO_DEFAULT_BUCKETS: archive-documents
      PORT_DB: 5433
      PORT_MINIO_API: 9100
      PORT_MINIO_CONSOLE: 9101
      PORT_BACKEND: 8080
      PORT_FRONTEND: 3000
    steps:
      - uses: actions/checkout@v4
      - name: Cleanup leftover containers
        run: docker compose -f docker-compose.yml -f docker-compose.ci.yml down --volumes --remove-orphans || true
      - name: Start DB and MinIO
        run: docker compose -f docker-compose.yml -f docker-compose.ci.yml up -d db minio create-buckets
      - name: Wait for DB
        run: |
          timeout 30 bash -c \
            'until docker compose -f docker-compose.yml -f docker-compose.ci.yml exec -T db pg_isready -U archive_user; do sleep 2; done'
      - name: Connect job container to compose network
        run: docker network connect familienarchiv_archive-net $(cat /etc/hostname)
      - uses: actions/setup-java@v4
        with:
          java-version: '21'
          distribution: temurin
      - name: Cache Maven repository
        uses: actions/cache@v4
        with:
          path: ~/.m2/repository
          key: maven-${{ hashFiles('backend/pom.xml') }}
          restore-keys: maven-
      - name: Build backend
        run: |
          chmod +x mvnw
          ./mvnw clean package -DskipTests
        working-directory: backend
      - name: Start backend
        run: |
          java -jar backend/target/*.jar \
            --spring.profiles.active=e2e \
            --SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/family_archive_db \
            --SPRING_DATASOURCE_USERNAME=archive_user \
            --SPRING_DATASOURCE_PASSWORD=ci_db_password \
            --S3_ENDPOINT=http://minio:9000 \
            --S3_ACCESS_KEY=minio_admin \
            --S3_SECRET_KEY=ci_minio_password \
            --S3_BUCKET_NAME=archive-documents \
            --S3_REGION=us-east-1 \
            --APP_ADMIN_USERNAME=admin \
            --APP_ADMIN_PASSWORD=${{ secrets.E2E_ADMIN_PASSWORD }} \
            &
          timeout 90 bash -c \
            'until curl -sf http://localhost:8080/actuator/health | grep -q "UP"; do sleep 3; done'
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Cache node_modules
        id: node-modules-cache
        uses: actions/cache@v4
        with:
          path: frontend/node_modules
          key: node-modules-${{ hashFiles('frontend/package-lock.json') }}
      - name: Install frontend dependencies
        if: steps.node-modules-cache.outputs.cache-hit != 'true'
        run: npm ci
        working-directory: frontend
      - name: Cache Playwright browsers
        id: playwright-cache
        uses: actions/cache@v4
        with:
          path: ~/.cache/ms-playwright
          key: playwright-chromium-${{ hashFiles('frontend/package-lock.json') }}
      - name: Install Playwright Chromium + system deps
        if: steps.playwright-cache.outputs.cache-hit != 'true'
        run: npx playwright install chromium --with-deps
        working-directory: frontend
      - name: Install Playwright system deps only
        if: steps.playwright-cache.outputs.cache-hit == 'true'
        run: npx playwright install-deps chromium
        working-directory: frontend
      - name: Run E2E tests
        run: npm run test:e2e
        working-directory: frontend
        env:
          E2E_BASE_URL: http://localhost:3000
          E2E_USERNAME: admin
          E2E_PASSWORD: ${{ secrets.E2E_ADMIN_PASSWORD }} # ← secret, not hardcoded
          E2E_BACKEND_URL: http://localhost:8080
      - name: Upload E2E results
        if: always()
        uses: actions/upload-artifact@v4 # ← upgraded from v3
        with:
          name: e2e-results
          path: frontend/test-results/e2e/
```
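The workflow assumes `GITEA_TOKEN` already exists as a repository secret. It can be created in the repository settings UI, or -- on recent Gitea versions that expose an actions-secrets API -- with a call like the following (endpoint and version support are assumptions; verify against your instance's API documentation):

```shell
# Hypothetical API call -- create/update the GITEA_TOKEN actions secret
curl -X PUT \
  "https://gitea.example.com/api/v1/repos/org/familienarchiv/actions/secrets/GITEA_TOKEN" \
  -H "Authorization: token <personal-access-token>" \
  -H "Content-Type: application/json" \
  -d '{"data": "<gitea-access-token-value>"}'
```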

# Production Docker Compose & Infrastructure
This document contains the full production Docker Compose file, Caddyfile, VPS sizing recommendations, cost breakdown, and Hetzner ecosystem overview.
---
## Full docker-compose.prod.yml
Usage: `docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d`
```yaml
# docker-compose.prod.yml
# Usage: docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
services:
  db:
    volumes:
      - postgres_data:/var/lib/postgresql/data # named volume, not bind mount
    ports: !reset [] # remove host port exposure in production
    expose:
      - "5432"

  minio:
    profiles: ["dev"] # dev-only; prod uses Hetzner Object Storage
  create-buckets:
    profiles: ["dev"]
  mailpit:
    profiles: ["dev"]

  backend:
    image: gitea.example.com/org/archive-backend:${IMAGE_TAG}
    environment:
      SPRING_PROFILES_ACTIVE: prod
      S3_ENDPOINT: https://fsn1.your-objectstorage.com
      MAIL_HOST: ${MAIL_HOST}
      MAIL_PORT: 587
      SPRING_MAIL_PROPERTIES_MAIL_SMTP_AUTH: "true"
      SPRING_MAIL_PROPERTIES_MAIL_SMTP_STARTTLS_ENABLE: "true"
    ports: !reset []
    expose:
      - "8080"
      - "8081" # management port for Prometheus scraping only

  frontend:
    image: gitea.example.com/org/archive-frontend:${IMAGE_TAG}
    ports: !reset []
    expose:
      - "3000"

  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config

  # ── Observability ──────────────────────────────────────────────────────────
  prometheus:
    image: prom/prometheus:v2.51.0 # pinned
    restart: unless-stopped
    volumes:
      - ./observability/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    expose: ["9090"]

  grafana:
    image: grafana/grafana:10.4.0 # pinned
    restart: unless-stopped
    environment:
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD}
      GF_PATHS_PROVISIONING: /etc/grafana/provisioning
      GF_SERVER_ROOT_URL: https://grafana.example.com
    volumes:
      - ./observability/grafana/provisioning:/etc/grafana/provisioning:ro
      - grafana_data:/var/lib/grafana
    expose: ["3000"]

  loki:
    image: grafana/loki:2.9.0 # pinned
    restart: unless-stopped
    volumes:
      - ./observability/loki-config.yml:/etc/loki/config.yml:ro
      - loki_data:/loki
    expose: ["3100"]

  promtail:
    image: grafana/promtail:2.9.0 # pinned
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./observability/promtail-config.yml:/etc/promtail/config.yml:ro

  alertmanager:
    image: prom/alertmanager:v0.27.0 # pinned
    restart: unless-stopped
    volumes:
      - ./observability/alertmanager.yml:/etc/alertmanager/alertmanager.yml:ro
    expose: ["9093"]

  # ── Uptime monitoring ──────────────────────────────────────────────────────
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    volumes:
      - uptime_kuma_data:/app/data
    expose: ["3001"]

  # ── Error tracking ─────────────────────────────────────────────────────────
  glitchtip-web:
    image: glitchtip/glitchtip:latest
    restart: unless-stopped
    depends_on: [db]
    environment:
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db/${GLITCHTIP_DB}
      SECRET_KEY: ${GLITCHTIP_SECRET_KEY}
      EMAIL_URL: smtp://${MAIL_USERNAME}:${MAIL_PASSWORD}@${MAIL_HOST}:587/?tls=true
      GLITCHTIP_DOMAIN: https://errors.example.com
    expose: ["8000"]

  glitchtip-worker:
    image: glitchtip/glitchtip:latest
    restart: unless-stopped
    command: ./bin/run-celery-with-beat.sh
    depends_on: [glitchtip-web]
    environment:
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db/${GLITCHTIP_DB}
      SECRET_KEY: ${GLITCHTIP_SECRET_KEY}

  # ── Push notifications ─────────────────────────────────────────────────────
  ntfy:
    image: binwiederhier/ntfy:latest
    restart: unless-stopped
    command: serve # the official image requires the serve subcommand
    volumes:
      - ntfy_data:/var/lib/ntfy
      - ./ntfy/server.yml:/etc/ntfy/server.yml:ro
    expose: ["80"]

volumes:
  postgres_data:
  caddy_data:
  caddy_config:
  prometheus_data:
  grafana_data:
  loki_data:
  uptime_kuma_data:
  glitchtip_data:
  ntfy_data:
  frontend_node_modules:
  maven_cache:
```
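Before deploying, it helps to render the merged configuration and confirm the overrides (`profiles`, `!reset` ports) took effect:

```shell
# Render the merged configuration without starting containers;
# db/backend/frontend should show no published ports, only expose entries
docker compose -f docker-compose.yml -f docker-compose.prod.yml config
```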
---
## Full Caddyfile -- All Virtual Hosts
```caddyfile
{
    email admin@example.com
}

# Main application
app.example.com {
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "DENY"
        Referrer-Policy "strict-origin-when-cross-origin"
        -Server
    }

    @api path /api/*
    reverse_proxy @api backend:8080

    @actuator path /actuator/*
    respond @actuator 404

    reverse_proxy frontend:3000
}

# Gitea — source code and CI
git.example.com {
    reverse_proxy gitea:3000
}

# Grafana — observability
grafana.example.com {
    basicauth {
        admin $2a$14$...
    }
    reverse_proxy grafana:3000
}

# Uptime Kuma — public status page (no auth)
status.example.com {
    reverse_proxy uptime-kuma:3001
}

# GlitchTip — error tracking (team access only)
errors.example.com {
    reverse_proxy glitchtip-web:8000
}

# ntfy — push notifications (token auth handled by ntfy itself)
push.example.com {
    reverse_proxy ntfy:80
}
```
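The `$2a$14$...` placeholder in the `basicauth` block is a bcrypt hash; Caddy can generate one for you:

```shell
# Generate a bcrypt hash for the basicauth directive -- runs in the same
# image used in production, so no local Caddy install is needed
docker run --rm caddy:2-alpine caddy hash-password --plaintext 'your-password'
```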
---
## VPS Sizing Recommendations
### Recommended: Hetzner CX32
**Specs**: 4 vCPU, 8 GB RAM, 80 GB SSD
**Cost**: 17 EUR/mo
This runs comfortably:
- SvelteKit (Node)
- Spring Boot (JVM -- needs ~512 MB minimum)
- PostgreSQL 16
- Caddy
- Prometheus + Grafana + Loki + Alertmanager (~2 GB)
- Gitea + Gitea runner
- Uptime Kuma
- GlitchTip + worker
- ntfy
### When to Upgrade: Hetzner CX42
**Cost**: 29 EUR/mo
Upgrade when:
- Loki log retention exceeds 30 days and RAM pressure appears
- GlitchTip error volume grows significantly
- Response times degrade under real user load (check Grafana first)
Never upgrade the VPS tier before profiling with Grafana -- most perceived performance issues are application bugs, not resource constraints.
---
## Monthly Cost Breakdown
| Service | Cost |
|---|---|
| Hetzner CX32 VPS | 17.00 EUR |
| Hetzner Object Storage (~200 GB) | 5.00 EUR |
| Hetzner SMTP relay | ~1.00 EUR |
| Hetzner DNS | 0.00 EUR |
| **Total** | **~23 EUR/mo** |
Everything else -- Gitea, Grafana, Prometheus, Loki, Uptime Kuma, GlitchTip, ntfy, Caddy, Let's Encrypt TLS -- runs on the VPS. Zero additional cost.
Equivalent SaaS stack: 200-300 EUR/mo.
---
## Hetzner Ecosystem Overview
Everything possible runs on Hetzner. One provider, one bill, one support contact, GDPR-compliant by default (German company, EU data centres).
### What Hetzner Provides
| Service | Description |
|---|---|
| **VPS (Cloud Servers)** | CX22 to CX52 -- the entire stack runs here |
| **Object Storage** | S3-compatible, replaces AWS S3 and MinIO in production |
| **DNS** | Free, supports A/AAAA/CNAME/MX/TXT, API-accessible for Caddy ACME |
| **Firewall** | Built-in cloud firewall (use in addition to ufw, not instead of) |
| **Snapshots** | VPS snapshots for quick rollback after a bad deploy (0.013 EUR/GB/mo) |
| **Volumes** | Attachable block storage if the VPS disk fills up (0.048 EUR/GB/mo) |
| **SMTP relay** | Transactional email via your Hetzner account |

# MinIO to Hetzner Object Storage Migration
This document covers the migration from MinIO (used in development and CI) to Hetzner Object Storage in production.
---
## Why Zero Application Code Changes Are Needed
The app uses the S3 API. MinIO implements the S3 API. Hetzner Object Storage implements the S3 API. The only change is in environment variables.
Zero application code changes. Zero Spring Boot changes. One `.env` swap.
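In practice the swap works because Spring reads the endpoint and credentials from the environment, so the same configuration binds against either backend. A sketch of the relevant `application.yml` fragment -- the property names are illustrative assumptions, not this project's verified config:

```yaml
# application.yml sketch (property names are assumptions)
app:
  s3:
    endpoint: ${S3_ENDPOINT}
    access-key: ${S3_ACCESS_KEY}
    secret-key: ${S3_SECRET_KEY}
    bucket-name: ${S3_BUCKET_NAME}
    region: ${S3_REGION}
```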
---
## Environment Variable Swaps
### Application S3 Configuration
```bash
# Development / CI — MinIO
S3_ENDPOINT=http://minio:9000
S3_ACCESS_KEY=${MINIO_ROOT_USER}
S3_SECRET_KEY=${MINIO_ROOT_PASSWORD}
S3_BUCKET_NAME=archive-documents
S3_REGION=us-east-1
# Production — Hetzner Object Storage
S3_ENDPOINT=https://fsn1.your-objectstorage.com # Hetzner S3 endpoint
S3_ACCESS_KEY=<hetzner-access-key>
S3_SECRET_KEY=<hetzner-secret-key>
S3_BUCKET_NAME=archive-documents
S3_REGION=eu-central
```
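Because the endpoint is S3-compatible, the standard AWS CLI can verify the production credentials before the first deploy:

```shell
# Smoke-test the Hetzner credentials and bucket with the AWS CLI
AWS_ACCESS_KEY_ID=<hetzner-access-key> \
AWS_SECRET_ACCESS_KEY=<hetzner-secret-key> \
aws s3 ls s3://archive-documents \
  --endpoint-url https://fsn1.your-objectstorage.com --region eu-central
```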
---
## MinIO in the Production Compose File
Once on Hetzner Object Storage, remove the `minio` and `create-buckets` services from the production Compose file entirely; the backend talks to Hetzner directly. Mailpit is already dev-only, and MinIO becomes dev-only by the same pattern.
```yaml
# docker-compose.prod.yml — production overrides
services:
  minio:
    profiles: ["dev"] # only starts when --profile dev is passed
  create-buckets:
    profiles: ["dev"]
  mailpit:
    profiles: ["dev"]
```
---
## WAL-G Backup Target Configuration
The same environment-variable swap applies to WAL-G database backups. Same scripts, same WAL-G binary, different endpoint and credentials.
```bash
# Development (WAL-G → MinIO)
WALG_S3_PREFIX=s3://backups/wal
AWS_ENDPOINT=http://minio:9000
AWS_ACCESS_KEY_ID=${MINIO_ROOT_USER}
AWS_SECRET_ACCESS_KEY=${MINIO_ROOT_PASSWORD}
# Production (WAL-G → Hetzner Object Storage)
WALG_S3_PREFIX=s3://archive-db-wal/wal
AWS_ENDPOINT=https://fsn1.your-objectstorage.com
AWS_ACCESS_KEY_ID=<hetzner-access-key>
AWS_SECRET_ACCESS_KEY=<hetzner-secret-key>
```
---
## Bucket Setup on Hetzner
Hetzner Object Storage buckets are created via the Hetzner Cloud Console or API -- no `mc` client init step is needed, unlike MinIO's `create-buckets` init container. Create the bucket once, set credentials, done.
### Hetzner Object Storage Configuration
```bash
# Hetzner S3-compatible endpoint (Frankfurt region)
S3_ENDPOINT=https://fsn1.your-objectstorage.com
S3_REGION=eu-central
# Bucket names — create once in Hetzner Console
# archive-documents — application documents
# archive-db-backups — pg_dump logical backups
# archive-db-wal — WAL-G continuous archiving
```
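If you script your setup, an S3-compatible client should also be able to create the buckets -- a sketch, assuming Hetzner accepts `CreateBucket` via the S3 API (verify against their docs; the Console route always works):

```shell
# Sketch: create the three buckets via the S3 API instead of the Console
for b in archive-documents archive-db-backups archive-db-wal; do
  aws s3 mb "s3://$b" \
    --endpoint-url https://fsn1.your-objectstorage.com --region eu-central
done
```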
---
## Production Credentials
In development, using MinIO root credentials for application access is acceptable. In production, create a dedicated Hetzner S3 service account with bucket-scoped permissions. The app should never use root/admin credentials.

# Self-Hosted Service Catalogue
This document catalogues all self-hosted services used in the Familienarchiv infrastructure, including what each replaces, its cost, and configuration.
---
## Self-Hosted Philosophy
The Familienarchiv is a family project. Running costs must stay minimal. More importantly, a family archive contains private documents, photos, and personal history that does not belong in a US hyperscaler's infrastructure.
The default answer to "which service should we use for X?" is always: **can this run as a Docker Compose service on our Hetzner VPS?**
If yes: self-host it.
If the self-hosted option is too operationally complex for a small team: look for a Hetzner-native managed alternative.
If neither works: only then consider third-party SaaS -- and document why.
### Decision Hierarchy
1. Self-hosted open source on the Hetzner VPS (preferred, free)
2. Hetzner managed service (e.g. Hetzner Object Storage, Hetzner DNS, Hetzner SMTP)
3. Open source SaaS with a free tier and GDPR-compliant EU hosting
4. Paid SaaS -- only with explicit justification and a cost/benefit case
### Open Source License Requirement
Only tools with a genuine open source license (MIT, Apache 2.0, AGPL, GPL) are recommended. "Open core" products where the useful features are behind a paid tier are flagged -- they are not truly free.
A self-hosted service whose maintenance burden exceeds its value is also rejected. If it needs weekly manual intervention, it is not free.
---
## Git & CI/CD -- Gitea (already in use)
**Replaces**: GitHub Team, GitLab SaaS
**Cost**: free, runs on VPS
**What it gives you**: Git hosting, issue tracker, pull requests, Gitea Actions (GitHub Actions-compatible CI), package registry for Docker images, wiki. The project already uses this -- no change needed.
---
## Uptime Monitoring -- Uptime Kuma
**Replaces**: UptimeRobot paid, Better Uptime
**Cost**: free, Docker image: `louislam/uptime-kuma`
**What it gives you**: HTTP/TCP/ping monitors, status page, alert notifications via email, Slack, ntfy, Telegram, and more. Lightweight, single container.
### Docker Compose
```yaml
# Add to docker-compose.yml
uptime-kuma:
  image: louislam/uptime-kuma:1
  container_name: archive-uptime-kuma
  restart: unless-stopped
  volumes:
    - uptime_kuma_data:/app/data
  # Internal only — exposed via Caddy with auth
  expose:
    - "3001"
```
### Caddy Configuration
```caddyfile
# Add to Caddyfile
status.example.com {
    basicauth {
        admin $2a$14$...
    }
    reverse_proxy uptime-kuma:3001
}
```
---
## Error Tracking -- GlitchTip
**Replaces**: Sentry (paid tiers), Rollbar
**Cost**: free, AGPL licensed, Docker image: `glitchtip/glitchtip`
**What it gives you**: Sentry-compatible SDK (drop-in replacement -- just change the DSN URL), error grouping, stack traces, performance monitoring. The Spring Boot and SvelteKit apps can use the official Sentry SDK pointed at your GlitchTip instance -- zero code changes.
### Docker Compose
```yaml
glitchtip-web:
  image: glitchtip/glitchtip:latest
  restart: unless-stopped
  depends_on: [db]
  environment:
    DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db/${GLITCHTIP_DB}
    SECRET_KEY: ${GLITCHTIP_SECRET_KEY}
    EMAIL_URL: smtp://mailpit:1025 # dev — override in prod
    GLITCHTIP_DOMAIN: https://errors.example.com
  expose:
    - "8000"

glitchtip-worker:
  image: glitchtip/glitchtip:latest
  restart: unless-stopped
  command: ./bin/run-celery-with-beat.sh
  depends_on: [glitchtip-web]
  environment:
    DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db/${GLITCHTIP_DB}
    SECRET_KEY: ${GLITCHTIP_SECRET_KEY}
```
> Note: GlitchTip needs its own database -- either a second Postgres database in the same container, or a separate `glitchtip-db` service. For a small team, a second database in the same Postgres instance is fine.
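Creating that second database in the existing Postgres container is a one-off command:

```shell
# One-off: create GlitchTip's database inside the existing Postgres container
docker compose exec db psql -U "$POSTGRES_USER" -d postgres \
  -c 'CREATE DATABASE glitchtip;'
```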
---
## Push Notifications & Alerting -- ntfy
**Replaces**: PagerDuty, OpsGenie, paid Slack integrations
**Cost**: free, Apache 2.0, Docker image: `binwiederhier/ntfy` -- or use the ntfy.sh free tier
**What it gives you**: HTTP-based pub/sub push notifications. Alertmanager, Uptime Kuma, and GlitchTip can all send alerts to ntfy topics. Mobile app available. Can be self-hosted or use the free ntfy.sh hosted service.
### Docker Compose
```yaml
ntfy:
  image: binwiederhier/ntfy:latest
  restart: unless-stopped
  command: serve # the official image requires the serve subcommand
  volumes:
    - ntfy_data:/var/lib/ntfy
  expose:
    - "80"
```
### Alertmanager Integration
```yaml
# Alertmanager config — send to self-hosted ntfy
receivers:
  - name: ntfy
    webhook_configs:
      - url: 'http://ntfy/familienarchiv-alerts'
        send_resolved: true
```
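ntfy treats the body of a plain POST as the message text, which makes topics easy to smoke-test. (Note that Alertmanager's webhook sends a JSON payload, so alerts may arrive as raw JSON unless a small bridge reformats them.)

```shell
# Smoke-test the alert topic -- any subscriber (mobile app, web UI)
# should receive this message immediately
curl -d "Familienarchiv test alert" https://push.example.com/familienarchiv-alerts
```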
---
## Dependency Updates -- Renovate (self-hosted)
**Replaces**: Dependabot (GitHub-only), manual updates
**Cost**: free, AGPL-3.0 licensed, Docker image: `renovate/renovate`
**What it gives you**: Automated PR/MR creation for outdated dependencies in `pom.xml`, `package.json`, Docker image tags, GitHub Actions versions. Runs as a scheduled Gitea Actions job -- no separate service needed.
### Gitea Actions Workflow
```yaml
# .gitea/workflows/renovate.yml
name: Renovate
on:
  schedule:
    - cron: '0 3 * * 1' # every Monday at 3am
  workflow_dispatch:
jobs:
  renovate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Renovate
        uses: renovatebot/github-action@v40
        with:
          configurationFile: renovate.json
          token: ${{ secrets.GITEA_TOKEN }}
          renovate-version: latest
```
### Renovate Configuration
```json
// renovate.json (Renovate accepts JSON5, so comments are allowed)
{
  "platform": "gitea",
  "endpoint": "https://gitea.example.com",
  "repositories": ["org/familienarchiv"],
  "automergeType": "pr",
  "packageRules": [
    {
      "matchUpdateTypes": ["patch"],
      "automerge": true
    }
  ]
}
```
Note that `automerge` is set only inside the patch `packageRules` entry; a top-level `"automerge": true` would automerge every update type and make the rule pointless.
---
## Secrets Management -- age + git-crypt
**Replaces**: HashiCorp Vault (overkill), AWS Secrets Manager
**Cost**: free
**What it gives you**: For a small team, encrypted `.env` files committed to the repo are sufficient. Each team member has an `age` keypair; the `.env.encrypted` file is decryptable by all authorised keys. `git-crypt` is the alternative when you want flagged files encrypted transparently on commit rather than via an explicit encrypt step.
### Usage
```bash
# Encrypt for every public key listed in the shared recipients file
age -R ~/.config/age/recipients.txt -o .env.encrypted .env
# Decrypt (each team member, with their own private key)
age -d -i ~/.config/age/key.txt -o .env .env.encrypted
```
Keep `.env` in `.gitignore`. Commit `.env.encrypted` and `.env.example`.
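Key generation is a one-off per team member; the public key line that `age-keygen` prints is what goes into the shared recipients file:

```shell
# One-off per team member
mkdir -p ~/.config/age
age-keygen -o ~/.config/age/key.txt   # prints the corresponding public key
# Append each member's PUBLIC key (age1...) to the shared recipients file
echo "age1<member-public-key>" >> ~/.config/age/recipients.txt
```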
---
## Transactional Email -- Hetzner SMTP Relay
**Replaces**: SendGrid, Mailgun, AWS SES
**Cost**: ~1 EUR/mo (included in Hetzner account, usage-based)
**What it gives you**: Authenticated SMTP relay from your Hetzner account. Simple configuration -- no SPF/DKIM setup nightmare. GDPR-compliant, EU-hosted.
### Configuration
```bash
# Production .env
MAIL_HOST=mail.your-server.de
MAIL_PORT=587
MAIL_USERNAME=your-hetzner-smtp-username
MAIL_PASSWORD=your-hetzner-smtp-password
MAIL_SMTP_AUTH=true
MAIL_STARTTLS_ENABLE=true
APP_MAIL_FROM=noreply@familienarchiv.example.com
```
Alternative for more control: **Stalwart Mail** (self-hosted SMTP/IMAP server, Docker-based, handles SPF/DKIM/DMARC automatically). Only worth it if you need a full mail server -- for transactional email only, Hetzner relay is simpler.