Compare commits

...

154 Commits

Author SHA1 Message Date
Marcel
de19d17b00 docs(adr): add ADR-018 for GlitchTip frontend error tracking via @sentry/sveltekit
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 3m4s
CI / OCR Service Tests (pull_request) Successful in 18s
CI / Backend Unit Tests (pull_request) Successful in 2m39s
CI / fail2ban Regex (pull_request) Successful in 40s
CI / Compose Bucket Idempotency (pull_request) Successful in 59s
Documents the decision to use the Sentry SDK with self-hosted GlitchTip,
sendDefaultPii:false rationale, errorId surfacing to users, and alternatives
considered (Sentry SaaS rejected for data-minimisation reasons).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-17 10:27:46 +02:00
Marcel
b2e31c3c1b refactor(observability): lower trace sample rate, add DSN comment, improve status visibility
- Lower tracesSampleRate from 1.0 to 0.1 in both hooks (errors still captured
  at 100%; trace volume reduced for self-hosted GlitchTip on shared VPS)
- Add comment explaining VITE_SENTRY_DSN is a write-only ingest key, safe in
  client bundle — prevents accidental rotation as if it were a password
- Restore HTTP status code prominence: text-4xl font-bold (was text-xs text-ink-3)
- Add min-w-[44px] to copy button for WCAG 2.2 minimum touch target

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-17 10:27:01 +02:00
Marcel
9e23620072 refactor(observability): add hooks.server.ts to coverage include in vite.config.ts
The handleError callback in hooks.server.ts is now gated by the 80% branch
coverage threshold along with the rest of the server-side logic.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-17 10:26:03 +02:00
Marcel
af42113fca test(observability): add hooks.client.test.ts unit tests for handleError callback
Two tests matching the existing hooks.server.test.ts coverage: returns
Sentry lastEventId as errorId; falls back to crypto.randomUUID when
lastEventId returns undefined.
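The fallback behaviour under test can be sketched in a dependency-free form (function and parameter names here are illustrative; the real hooks call Sentry.lastEventId() and crypto.randomUUID() directly):

```typescript
// Illustrative sketch of the handleError fallback under test; the
// injected callbacks stand in for Sentry.lastEventId and crypto.randomUUID.
type ErrorShape = { message: string; errorId: string };

function buildAppError(
  lastEventId: () => string | undefined,
  randomUUID: () => string
): ErrorShape {
  // Prefer the Sentry event id so the user-visible id matches the
  // GlitchTip event; fall back to a local UUID when none was captured.
  const errorId = lastEventId() ?? randomUUID();
  return { message: "An unexpected error occurred", errorId };
}
```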

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-17 10:25:30 +02:00
Marcel
c779ec59f9 feat(observability): guard navigator.clipboard and handle rejection in copyId
Adds availability guard (navigator.clipboard may be undefined in non-HTTPS
contexts) and a rejection handler so clipboard-denied errors are silently
caught rather than becoming unhandled promise rejections. Tests cover the
success feedback and the silent-failure path.
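The guard and rejection handling might look like this sketch (the clipboard object is injected here for clarity; the component reads navigator.clipboard):

```typescript
// Illustrative sketch of the guarded copyId described above.
type ClipboardLike = { writeText(text: string): Promise<void> };

async function copyId(
  id: string,
  clipboard: ClipboardLike | undefined
): Promise<boolean> {
  // navigator.clipboard may be undefined in non-HTTPS contexts.
  if (!clipboard) return false;
  try {
    await clipboard.writeText(id);
    return true; // caller shows "copied" feedback
  } catch {
    // Clipboard permission denied: swallow the error instead of
    // letting it become an unhandled promise rejection.
    return false;
  }
}
```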

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-17 10:24:35 +02:00
Marcel
2023ea2931 docs(c4): add GlitchTip as external error-tracking system to L1 context diagram
All checks were successful
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-17 09:46:17 +02:00
Marcel
59b18039ed refactor(observability): remove console.log from tags proxy and enforce no-console lint rule
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-17 09:45:49 +02:00
Marcel
96ea7e6815 feat(observability): redesign +error.svelte with errorId display and copy button
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-17 09:44:32 +02:00
Marcel
dff81f7bfb feat(observability): add handleError callback to hooks.client.ts returning errorId
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-17 09:41:58 +02:00
Marcel
a9c82ec481 feat(observability): add handleError callback to hooks.server.ts returning errorId
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-17 09:41:24 +02:00
Marcel
97aa372094 feat(observability): add App.Error interface with errorId to app.d.ts
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-17 09:40:12 +02:00
Marcel
e61409773e docs(c4): fix Tempo OTLP transport in l2-containers diagram
Some checks failed
nightly / deploy-staging (push) Failing after 1m52s
Port 4317 is gRPC; the backend uses HttpExporter (HTTP/1.1) and sends
to port 4318. Update Container description and Rel label to match.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 15:48:06 +02:00
Marcel
7713a03cd5 docs(obs): add OBSERVABILITY.md developer guide and fix stale env var docs
- New docs/OBSERVABILITY.md: developer-facing guide with a "where to look
  for what" table, common LogQL queries, trace exploration workflow,
  log→trace correlation via traceId links, and a signal summary table
- Link from DEPLOYMENT.md §4 (ops section now points to dev guide) and
  from CLAUDE.md Infrastructure section
- Fix stale DEPLOYMENT.md env var table: OTEL_EXPORTER_OTLP_ENDPOINT
  now documents port 4318 (HTTP) not 4317 (gRPC); add the three new
  env vars wired in this PR (OTEL_LOGS_EXPORTER, OTEL_METRICS_EXPORTER,
  MANAGEMENT_METRICS_TAGS_APPLICATION) with their rationale
- Fix stale obs-tempo service description (port 4318, not 4317)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 15:48:06 +02:00
Marcel
cea94ce260 fix(obs): disable OTLP metric export (Prometheus scrapes pull-model)
Tempo only handles traces; sending metrics to /v1/metrics returns 404.
Prometheus already scrapes Spring Boot metrics via the pull model at
/actuator/prometheus, so OTLP metric push is redundant and noisy.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 15:46:45 +02:00
Marcel
45a992f5a8 fix(obs): fix OTLP transport port and add application metrics tag
- Change OTEL default endpoint from port 4317 (gRPC) to 4318 (HTTP) to
  match Spring Boot's HttpExporter; sending HTTP/1.1 to a gRPC listener
  caused "Connection reset" errors
- Add otel.logs.exporter=none: Promtail captures Docker logs via the
  logging driver; sending logs to Tempo's OTLP endpoint (which only
  handles traces) produced 404 errors
- Add management.metrics.tags.application to every metric so Grafana's
  Spring Boot Observability dashboard (ID 17175) can filter by the
  application label_values() template variable
- Add MANAGEMENT_METRICS_TAGS_APPLICATION and OTEL_LOGS_EXPORTER env
  vars to docker-compose.prod.yml; production Tempo endpoint already
  uses 4318
- Add MANAGEMENT_TRACING_SAMPLING_PROBABILITY to prod compose with
  0.1 default to avoid 100% trace sampling in production
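A hedged sketch of how these variables might be wired in docker-compose.prod.yml (the service name and the application tag value are assumptions; variable names and ports come from the commits in this PR):

```yaml
# Illustrative fragment only — not the project's actual compose file.
services:
  backend:
    environment:
      OTEL_EXPORTER_OTLP_ENDPOINT: http://obs-tempo:4318  # HTTP, not gRPC 4317
      OTEL_LOGS_EXPORTER: none        # Promtail already ships Docker logs
      OTEL_METRICS_EXPORTER: none     # Prometheus scrapes /actuator/prometheus
      MANAGEMENT_METRICS_TAGS_APPLICATION: familienarchiv  # tag value assumed
      MANAGEMENT_TRACING_SAMPLING_PROBABILITY: "0.1"
```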

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 15:46:45 +02:00
Marcel
bd57310bbf docs(obs): document promtail job label mapping in DEPLOYMENT.md
The job label (derived from the Docker Compose service name) is what
powers {job="backend"} queries in Loki dashboards and populates the
Grafana "App" variable dropdown. Operators need to know this mapping
when writing custom Loki queries.
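The mapping might be implemented roughly like this in promtail-config.yml (the source label assumes Docker service discovery; the real config may differ):

```yaml
# Sketch of the job-label relabel rule described above.
scrape_configs:
  - job_name: docker
    relabel_configs:
      # Map the Compose service name onto the `job` label so
      # {job="backend"} queries and the Grafana "App" variable work.
      - source_labels:
          - __meta_docker_container_label_com_docker_compose_service
        target_label: job
```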

Addresses @markus non-blocker suggestion from PR #606 review.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 15:46:45 +02:00
Marcel
c2d092f435 docs(adr): add ADR-017 — Spring Boot 4.0 management port shares main security filter chain
Documents the architectural decision behind the dedicated management
SecurityFilterChain, the discovery that SB4+Jetty removed the isolated
management child-context security, and the consequences for actuator
endpoint exposure.

Addresses @markus blocker from PR #606 review.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 15:46:45 +02:00
Marcel
e19bd60984 fix(obs): add management security chain and split Prometheus IT tests
- Add @Order(1) managementFilterChain scoped to /actuator/** with explicit
  401 entry point, blocking all non-public actuator paths without the
  form-login redirect that the main chain uses for browser clients.
- Split single combined test into two focused assertions
  (prometheus_endpoint_returns_200_without_credentials,
   prometheus_endpoint_returns_jvm_metrics).
- Add negative regression test: actuator_metrics_requires_authentication
  verifies that /actuator/metrics returns 401 without credentials.
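The chain described here and in ADR-017 could look roughly like the following Spring Security sketch (bean name and matcher details are assumptions, not the project's actual code):

```java
// Hedged sketch of the @Order(1) management chain; illustrative only.
@Bean
@Order(1)
SecurityFilterChain managementFilterChain(HttpSecurity http) throws Exception {
    http
        .securityMatcher("/actuator/**")
        .authorizeHttpRequests(auth -> auth
            .requestMatchers("/actuator/prometheus").permitAll()
            .anyRequest().authenticated())
        // Plain 401 instead of the main chain's form-login redirect,
        // so Prometheus and other machine clients get a clean status.
        .exceptionHandling(e -> e.authenticationEntryPoint(
            new HttpStatusEntryPoint(HttpStatus.UNAUTHORIZED)));
    return http.build();
}
```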

Addresses reviewer concerns from @sara (missing negative test, split
assertions) and @nora (dedicated management security layer).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 15:46:45 +02:00
Marcel
2aa0ff9e70 fix(obs): wire Prometheus endpoint for Spring Boot 4.0
Four Spring Boot 4.0-specific issues prevented /actuator/prometheus from working:

1. spring-boot-starter-micrometer-metrics missing — Spring Boot 4.0 splits
   Micrometer metrics export (including the Prometheus scrape endpoint) out of
   spring-boot-starter-actuator into its own starter. Added dependency.

2. management.prometheus.metrics.export.enabled not set — Spring Boot 4.0
   defaults metrics export to false (opt-in). Added the property to
   application.yaml.

3. SecurityConfig did not permit /actuator/prometheus — Spring Boot 4.0
   with Jetty serves the management port (8081) via the same security filter
   chain as the main port (8080). The previous commit's exclusion of
   ManagementWebSecurityAutoConfiguration was a no-op (that class no longer
   exists in Spring Boot 4.0); removed it and added the correct permitAll()
   rule. Updated the architecture comment in application.yaml to reflect the
   true filter-chain behaviour.

4. Reverted invalid FamilienarchivApplication.java change from the prior
   commit (ManagementWebSecurityAutoConfiguration import compiled against a
   class that does not exist in the Spring Boot 4.0 BOM).

Also adds ActuatorPrometheusIT — an integration test that asserts the
/actuator/prometheus endpoint returns 200 with jvm_memory_used_bytes without
credentials, serving as regression protection against future Spring Boot
upgrades silently breaking metrics collection.
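Point 2 corresponds to an application.yaml fragment along these lines (surrounding keys are assumptions; only the export.enabled property is confirmed by the commit):

```yaml
# Illustrative application.yaml fragment for the opt-in metrics export.
management:
  prometheus:
    metrics:
      export:
        enabled: true   # Spring Boot 4.0 defaults this to false
```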

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 15:46:45 +02:00
Marcel
5dd74df293 fix(obs): wire Prometheus metrics and Loki job label for Grafana dashboards
Three root causes confirmed via live server investigation (issue #604):

1. ManagementWebSecurityAutoConfiguration applied HTTP Basic auth to the
   management port (8081), causing Prometheus to receive 401 HTML responses
   instead of metrics. Excluded the auto-config — the Docker network
   (archiv-net) provides the security boundary for this internal port.

2. promtail-config.yml had no `job` relabel rule. Grafana's Loki dashboards
   query {job="$app"} which matched nothing; logs were in Loki under
   compose_service but invisible to every dashboard panel.

3. prometheus.yml had a stale comment claiming the spring-boot target would
   be DOWN until micrometer-registry-prometheus was added — it has been
   present in pom.xml for some time.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 15:46:45 +02:00
Marcel
7712180f3a docs(claude): add generation guidance to GRAFANA_ADMIN_PASSWORD env var
All checks were successful
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 11:25:25 +02:00
Marcel
c9a22945c8 docs(claude): add URL format example to GLITCHTIP_DOMAIN env var
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 11:25:01 +02:00
Marcel
9d84ebc4fe docs(deployment): add VITE_SENTRY_DSN to §3.3 Gitea secrets table
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 11:24:37 +02:00
Marcel
58b9204395 docs(deployment): add VITE_SENTRY_DSN to §2 observability env vars table
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 11:15:24 +02:00
Marcel
0d662f3a5e docs(c4): update GlitchTip image tag to 6.1.6 in L2 container diagram
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 11:14:17 +02:00
Marcel
2e864e5b81 docs(infra): remove stale 'observability not yet deployed' note
All checks were successful
Replace with a cross-reference to DEPLOYMENT.md §4 now that the obs
stack shipped as docker-compose.observability.yml.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 10:54:04 +02:00
Marcel
40d9713b79 docs(deployment): fix stale GlitchTip image tags and add SENTRY_DSN to env vars table
- GlitchTip image corrected from glitchtip:v4 to glitchtip:6.1.6 in services table
- Grafana default port corrected from 3001 to 3003 in services table description
- SENTRY_DSN added to backend env vars table (wired in docker-compose.yml and application.yaml)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 10:53:31 +02:00
Marcel
68d07fe961 docs(claude): add observability service table and env var reference
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 10:52:36 +02:00
Marcel
6145a25fe2 fix(obs): correct GlitchTip port and healthcheck for v6.x
Some checks failed (all push checks cancelled)
GlitchTip 6.x moved its internal listen port from 8080 to 8000.
The ports mapping was forwarding to the wrong port (host traffic
never reached the app), and the healthcheck was probing 8080 with
wget (not present in the image), causing the container to stay
permanently unhealthy.

Fix: map to port 8000, check with bash /dev/tcp (no external tools
needed, available in the Python base image).
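The resulting Compose service might look like this sketch (host port mapping and timing values are illustrative; the port move and the /dev/tcp probe are from the commit):

```yaml
services:
  obs-glitchtip:
    ports:
      - "8000:8000"   # GlitchTip 6.x listens on 8000 internally, not 8080
    healthcheck:
      # bash's /dev/tcp pseudo-device opens a TCP connection without
      # needing wget or curl inside the image
      test: ["CMD", "bash", "-c", "exec 3<>/dev/tcp/127.0.0.1/8000"]
      interval: 30s
      timeout: 5s
      retries: 5
```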

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 10:31:07 +02:00
Marcel
c43f45a472 Merge branch 'fix/issue-601-obs-stack-permanent'
Some checks failed (all push checks cancelled)
2026-05-16 10:19:59 +02:00
Marcel
134f1e2ae0 chore(runner): mount /opt/familienarchiv into job containers
The live runner config was missing /opt/familienarchiv in valid_volumes
and options, so deploy steps wrote files into the ephemeral job
container rather than the host — silently discarded on exit.

Updated /root/docker/gitea/runner-config.yaml on the server and
restarted gitea-runner. Repo file now matches the server exactly,
including the network: gitea_gitea setting that was previously
only on the server.

DEPLOYMENT.md: clarifies that /opt/familienarchiv does not need to be
in the runner container's own volumes (DooD spawns job containers from
the host daemon directly); updates restart command from systemctl to
docker restart; narrows the cp-r stale-file note to manual ops only
(CI uses rm -rf before copying).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 10:19:09 +02:00
Marcel
55ccd5f3c0 ci(obs): replace rsync with rm+cp in deploy step
rsync is not present in the act_runner job container image. rm -rf +
cp -r gives identical semantics (including removal of deleted files)
using only coreutils, which are always available.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 10:18:42 +02:00
3658733003 fix(obs): add GlitchTip healthcheck on /_health/ (port 8080)
Some checks failed
2026-05-16 09:37:17 +02:00
0bb0a314ad ci(obs): add obs-glitchtip to health assertion loop (now has /_health/ healthcheck)
Some checks are pending
2026-05-16 09:36:37 +02:00
b194b565f6 ci(obs): reference #603 in keep-in-sync comments; add obs-glitchtip to health assertion
Some checks failed
2026-05-16 09:35:43 +02:00
Marcel
6720a5aeb2 chore(obs): improve deploy maintainability from review feedback
Some checks failed
- Move POSTGRES_USER to obs.env (non-secret, constant across envs)
- Replace cp -r with rsync -a --delete so removed config files are
  purged from /opt/familienarchiv on next deploy instead of lingering
- Document --env-file ordering contract in validate + start steps:
  obs.env first (defaults), obs-secrets.env second (wins on dupes)
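The documented ordering contract corresponds to an invocation along these lines (paths from the surrounding commits; with docker compose, values in later --env-file arguments win on duplicate keys):

```shell
# Sketch: defaults first, secrets second — the later file wins on dupes.
docker compose \
  --env-file /opt/familienarchiv/obs.env \
  --env-file /opt/familienarchiv/obs-secrets.env \
  -f docker-compose.observability.yml config --quiet
```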

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 09:20:08 +02:00
Marcel
a7f60ebed8 docs(obs): add cp-r stale-file cleanup note to DEPLOYMENT.md
Some checks failed
CI / Backend Unit Tests (pull_request) Failing after 9m24s
CI uses 'cp -r', which does not remove deleted files. Documents the
manual cleanup step for config files removed from git.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 09:04:41 +02:00
Marcel
25062be657 ci(obs): quote heredoc delimiter in release obs-secrets.env write
Same fix as nightly.yml: prevents shell expansion of '$' in secret
values after Gitea renders them. Keep in sync with nightly.yml.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 09:04:12 +02:00
Marcel
9662ff5f8c ci(obs): quote heredoc delimiter in nightly obs-secrets.env write
Prevents the shell from expanding '$' in Gitea-rendered secret values.
Without the quote, a password like 'P@$s5w0rd' has '$s5w0rd' silently
expanded to '' — writing a truncated value to obs-secrets.env. Quoting
the delimiter (<<'EOF') suppresses shell expansion; Gitea's '${{ }}'
template rendering has already run before the shell sees the script.
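The effect is easy to demonstrate (the password value is the example from the commit; $s5w0rd is an unset variable here):

```shell
# Unquoted delimiter: the shell expands $-sequences inside the body.
unquoted=$(cat <<EOF
P@$s5w0rd
EOF
)
# Quoted delimiter: the body is taken literally, no expansion.
quoted=$(cat <<'EOF'
P@$s5w0rd
EOF
)
echo "unquoted: $unquoted"   # the $s5w0rd part is expanded away
echo "quoted:   $quoted"     # literal value preserved
```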

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 09:03:46 +02:00
Marcel
f5c7be932b ci(obs): document POSTGRES_HOST derivation from Compose project name
Some checks failed
CI / Backend Unit Tests (pull_request) Failing after 10m48s
The container names archiv-staging-db-1 and archiv-production-db-1 are
derived from the Compose project name + service name. A project rename
silently breaks the obs stack DB connection. Add a comment at the point
of definition so the dependency is obvious when someone changes it.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 08:54:17 +02:00
Marcel
dec0001bd1 ci(obs): chmod 600 obs-secrets.env after creation in both workflows
The heredoc creates the file with default umask permissions (644 —
world-readable). Setting 600 immediately after creation prevents other
processes on the host from reading the Grafana, GlitchTip, and Postgres
credentials. Defence-in-depth for the single-tenant VPS.
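An equivalent way to avoid even the brief world-readable window is to set the umask before creation — a sketch, not the workflows' actual approach (they chmod 600 immediately after the heredoc; file contents here are placeholders):

```shell
# Create the secrets file already restricted to the owner: umask 077
# means the file is 600 from the moment it exists.
write_secrets() {
  local path="$1"
  ( umask 077
    cat > "$path" <<'EOF'
GRAFANA_ADMIN_PASSWORD=example-not-a-real-secret
EOF
  )
}
```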

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 08:53:49 +02:00
Marcel
f628ab6435 ci(obs): add validate + health assertion steps to release.yml
nightly.yml had two observability gates that release.yml lacked:
- "Validate observability compose config" (docker compose config --quiet)
  catches missing env vars and YAML errors before any containers start
- "Assert observability stack health" checks obs-loki/prometheus/grafana/tempo
  are healthy after up --wait, covering services without healthcheck directives

Mirrors the nightly.yml steps verbatim so the production deploy path is at
least as well-verified as the nightly staging path.
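The health assertion can be sketched with the docker call abstracted out (in the workflow, the status would come from something like `docker inspect -f '{{.State.Health.Status}}'`; the callback makes the loop itself testable):

```shell
# Fail the deploy if any named service is not in the "healthy" state.
assert_all_healthy() {
  local get_status="$1"; shift
  local svc state
  for svc in "$@"; do
    state=$("$get_status" "$svc")
    if [ "$state" != "healthy" ]; then
      echo "service $svc is $state" >&2
      return 1
    fi
  done
}
```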

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 08:53:18 +02:00
Marcel
4c5ee96e36 docs(adr): correct ADR-016 Decision section to match two-source env model
The Decision section described an operator-managed /opt/familienarchiv/.env
that CI does not touch. The actual implementation is a two-source model:
obs.env (git-tracked, non-secret config) + obs-secrets.env (CI-written
fresh from Gitea secrets on every deploy). Also updates the Consequences
bullet that incorrectly stated secrets are decoupled from CI.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 08:52:42 +02:00
Marcel
53cf1837b2 fix(obs): set POSTGRES_HOST per environment — staging/prod use compose auto-names not archive-db
All checks were successful
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 00:21:53 +02:00
Marcel
d83ed7254d docs(obs): document obs vs main stack env model, obs.env + obs-secrets.env approach
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 00:20:21 +02:00
Marcel
1ae4bfe325 ci(obs): GitOps obs env split in release — deploy to /opt/familienarchiv/, secrets fresh from Gitea
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 00:19:12 +02:00
Marcel
c5139851b8 ci(obs): GitOps obs env split in nightly — obs.env in git, secrets fresh from Gitea
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 00:18:38 +02:00
Marcel
f9baf02b86 feat(obs): add GF_SERVER_ROOT_URL to Grafana service
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 00:17:47 +02:00
Marcel
b67bd201b2 feat(obs): add obs.env with non-secret config tracked in git
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 00:17:07 +02:00
Marcel
79735e23e0 ci(obs): assert obs-loki/prometheus/grafana/tempo are healthy after stack up
All checks were successful
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 00:01:48 +02:00
Marcel
df37113d38 ci(obs): add compose config dry-run before obs stack up to catch .env substitution errors
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 00:01:17 +02:00
Marcel
c7d2eeb3f0 docs(ci): harden runner-config.yaml security comment for /opt/familienarchiv/ write access
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 00:00:44 +02:00
Marcel
4e94d85d7e docs(adr): add ADR-016 for obs stack co-location and CI-push config sync
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-16 00:00:07 +02:00
Marcel
dec6b8139b docs(c4): update l2-containers obs boundary to show /opt/familienarchiv/ permanent path
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 23:59:11 +02:00
Marcel
7b7d0c92a8 docs(obs): update DEPLOYMENT.md with /opt/familienarchiv/ ops section, env keys, runner restart
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 23:58:42 +02:00
Marcel
448c3cdcdb docs(obs): update .env.example for PORT_GRAFANA 3003, POSTGRES_HOST, $$ escaping
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 23:57:31 +02:00
Marcel
7e52494880 fix(ci): deploy obs configs to /opt/familienarchiv/ before starting stack
All checks were successful
The observability stack's bind-mount sources pointed to workspace-relative
paths. When CI wiped the workspace between runs, containers kept running but
their config files disappeared — causing Docker to auto-create directories
at the missing paths and crash the services on next restart.

Fix: mount /opt/familienarchiv/ into CI job containers via runner-config.yaml,
then copy infra/observability/ and docker-compose.observability.yml there before
docker compose up. Compose runs from the permanent path, so bind mounts resolve
to stable host paths that survive workspace wipes.

Docker Compose reads /opt/familienarchiv/.env automatically (no --env-file flag),
which is managed on the server and persists between CI runs.

Closes #601

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 21:59:23 +02:00
Marcel
1181b97f94 fix(obs): make Postgres host configurable and fix PORT_GRAFANA default
All checks were successful
POSTGRES_HOST variable (default: archive-db) lets the observability stack
connect to a different Postgres container — needed when only the staging
stack is running (container name: archiv-staging-db-1).

PORT_GRAFANA default changed from 3001 to 3003 to avoid collision with
the staging frontend which occupies 3001.
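As an .env fragment, the defaults described above would read (values from the commit message; comments are editorial):

```shell
POSTGRES_HOST=archive-db   # staging stack instead uses archiv-staging-db-1
PORT_GRAFANA=3003          # 3001 is occupied by the staging frontend
```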

Closes #601

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 21:46:11 +02:00
Marcel
458968ded5 fix(obs): remove invalid processors block from tempo metrics_generator
Tempo 2.7.2 removed `processors` from the top-level metrics_generator
config; the field is only valid under `overrides.defaults.metrics_generator`.
The setting was already present there, so this only removes the now-rejected
duplicate at the top level.

Closes part of #601

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 21:45:49 +02:00
Marcel
23515b8542 fix(eslint): remove projectService from Svelte parser — restores fast lint
Some checks failed
nightly / deploy-staging (push) Failing after 2m33s
5646e739 added svelte-kit sync before lint so .svelte-kit/tsconfig.json
always exists. This activated projectService: true for every run, which
builds the full TypeScript language service for all .svelte files and
caused CI lint to take 7+ minutes.

None of the rules in the Svelte-specific block need type information —
they are all AST-selector-based no-restricted-syntax checks. Removing
projectService restores the previous fast path without losing any lint
coverage.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 20:08:52 +02:00
Marcel
e4ac5f08e7 docs(ci): document workspace bind-mount setup for DooD runners
Some checks failed
CI / Unit & Component Tests (pull_request) Failing after 14m44s
Add the /srv/gitea-workspace prerequisite step to DEPLOYMENT.md §3.1
and a new "Workspace bind-mount setup" subsection plus failure mode 4
to ci-gitea.md, covering the root cause, one-time host setup, disk
management, and troubleshooting for the bind-mount resolution fix
introduced in ADR-015.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 19:46:54 +02:00
Marcel
15ef079eff docs(adr): add ADR-015 for DooD workspace bind-mount approach
All checks were successful
Documents the decision to use workdir_parent + identical host<->container
path instead of the overlay2 MergedDir sync that was in the initial fix.
Captures the alternatives (nsenter sync, image-baked configs, path mismatch)
and the operational consequences (prereq directory, out-of-band compose.yaml).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 19:38:18 +02:00
Marcel
56c3e51657 fix(ci): replace overlay2 sync with workspace bind mount for DooD
runner-config.yaml: correct path to /srv/gitea-workspace (VPS, not Synology).
docker-compose.observability.yml: revert 5 bind mounts to plain relative paths;
  OBS_CONFIG_DIR variable is no longer needed.
nightly.yml / release.yml: remove OBS_CONFIG_DIR env injection and the
  "Sync observability configs to host" step from both workflows.

With workdir_parent=/srv/gitea-workspace and an identical host<->container
bind mount, $(pwd) inside job containers resolves to a real host path the
daemon can find — no privileged container, no overlay2 inspection, no nsenter.
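The runner-config.yaml side of this might look roughly as follows (key names follow act_runner's config format; exact values beyond the commit's paths are assumptions):

```yaml
# Illustrative runner-config.yaml fragment for the identical
# host<->container workspace path.
container:
  workdir_parent: /srv/gitea-workspace
  valid_volumes:
    - /srv/gitea-workspace
  options: "-v /srv/gitea-workspace:/srv/gitea-workspace"
```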

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 19:36:55 +02:00
Marcel
2cc8b1174b fix(ci): configure workspace bind mount for DooD bind-mount resolution
Set workdir_parent to /volume1/gitea-workspace so act_runner stores job
workspaces at a real NAS path. Mounting that path at the same absolute
location in job containers means $(pwd) inside any job container resolves
to a host path the daemon can find — no overlay2 tricks needed.

Prerequisite (NAS): mkdir -p /volume1/gitea-workspace and add
  - /volume1/gitea-workspace:/volume1/gitea-workspace
to the runner service volumes in gitea's docker-compose.yml, then restart
the runner.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 19:33:36 +02:00
Marcel
1fc47888d5 fix(ci): sync observability configs to host before docker compose up (#598)
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 3m26s
CI / OCR Service Tests (pull_request) Successful in 18s
CI / Backend Unit Tests (pull_request) Successful in 2m40s
CI / fail2ban Regex (pull_request) Successful in 41s
CI / Compose Bucket Idempotency (pull_request) Successful in 57s
DooD runner only shares /var/run/docker.sock — no workspace directory is
mapped to the host daemon. Relative bind mounts in
docker-compose.observability.yml resolved to paths that didn't exist on
the host; Docker auto-created directories in their place, causing
'not a directory' mount failures for all five config files.

Fix:
- docker-compose.observability.yml: replace hardcoded ./infra/observability/
  prefix with ${OBS_CONFIG_DIR:-./infra/observability} so the path is
  configurable while remaining backwards-compatible for local use.
- nightly.yml / release.yml: add a 'Sync observability configs to host'
  step that finds the job container's overlay2 MergedDir (the container's
  full filesystem as seen from the host mount namespace), then uses the
  existing nsenter/alpine pattern to cp the config tree into a stable host
  path (/srv/familienarchiv-{staging,production}/obs-configs).
  OBS_CONFIG_DIR is injected into the env file so Compose picks it up.
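
The interpolation pattern can be sketched as follows (the service name and config file path under the prefix are illustrative):

```yaml
# docker-compose.observability.yml -- configurable prefix with a default that
# keeps local `docker compose up` working unchanged
services:
  obs-prometheus:
    volumes:
      - ${OBS_CONFIG_DIR:-./infra/observability}/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
```

In CI, OBS_CONFIG_DIR points at the synced host copy; locally the variable is unset and the relative default applies.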

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 19:02:53 +02:00
Marcel
d435b2b0e4 fix(infra): pin GlitchTip image to 6.1.6 (v4 tag never existed)
All checks were successful
CI / Unit & Component Tests (push) Successful in 3m30s
CI / OCR Service Tests (push) Successful in 16s
CI / Backend Unit Tests (push) Successful in 2m34s
CI / fail2ban Regex (push) Successful in 40s
CI / Compose Bucket Idempotency (push) Successful in 57s
glitchtip/glitchtip:v4 is not a real tag — GlitchTip does not use a
v-prefix in its Docker image versioning. Latest stable release is 6.1.6.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 18:07:32 +02:00
Marcel
fed427dc4a fix(infra): set OTEL_EXPORTER_OTLP_ENDPOINT in docker-compose.prod.yml
Some checks failed
CI / Unit & Component Tests (pull_request) Has been cancelled
CI / OCR Service Tests (pull_request) Has been cancelled
CI / Backend Unit Tests (pull_request) Has been cancelled
CI / fail2ban Regex (pull_request) Has been cancelled
CI / Compose Bucket Idempotency (pull_request) Has been cancelled
CI / Unit & Component Tests (push) Has been cancelled
CI / OCR Service Tests (push) Has been cancelled
CI / Backend Unit Tests (push) Has been cancelled
CI / fail2ban Regex (push) Has been cancelled
CI / Compose Bucket Idempotency (push) Has been cancelled
The endpoint belongs in the compose file (hardcoded to the in-network
Tempo service) rather than per-environment workflow files. This covers
both staging (nightly.yml) and production (release.yml) with a single
change and removes the duplicate from the nightly env-file block.
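
As a sketch (backend service name as used elsewhere in this log; surrounding keys illustrative):

```yaml
# docker-compose.prod.yml -- hardcoded to the in-network Tempo service so
# neither nightly.yml nor release.yml needs to inject it per environment
services:
  backend:
    environment:
      OTEL_EXPORTER_OTLP_ENDPOINT: http://tempo:4317
```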

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 17:43:23 +02:00
Marcel
cf78ab2f8e fix(staging): correct backend healthcheck port and OTel endpoint
Some checks failed
CI / OCR Service Tests (pull_request) Has been cancelled
CI / Backend Unit Tests (pull_request) Has been cancelled
CI / fail2ban Regex (pull_request) Has been cancelled
CI / Compose Bucket Idempotency (pull_request) Has been cancelled
CI / Unit & Component Tests (pull_request) Has been cancelled
Two bugs introduced when the management port was split from the app port:

1. Backend healthcheck hit localhost:8080/actuator/health (app port) —
   actuator is on management.server.port=8081, so every probe got a 404
   from the main MVC dispatcher, marking the container permanently unhealthy.
   Fix: change the probe to localhost:8081.

2. OTEL_EXPORTER_OTLP_ENDPOINT was not set in .env.staging, so the exporter
   fell back to http://localhost:4317 (the CI-safe default) instead of
   http://tempo:4317 (the in-network Tempo service). Fix: inject the correct
   endpoint in the nightly env-file generation step.
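
The healthcheck fix can be sketched as (probe command and timing values illustrative):

```yaml
# staging compose -- probe the management port (8081), not the app port (8080),
# since actuator lives on management.server.port
services:
  backend:
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8081/actuator/health"]
      interval: 30s
      retries: 3
```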

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 17:37:15 +02:00
Marcel
c8883d0e40 fix(ci): isolate compose-idempotency network from archiv-net collisions
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 5m40s
CI / OCR Service Tests (pull_request) Successful in 34s
CI / Backend Unit Tests (pull_request) Successful in 7m8s
CI / fail2ban Regex (pull_request) Successful in 1m58s
CI / Compose Bucket Idempotency (pull_request) Successful in 1m41s
CI / Unit & Component Tests (push) Successful in 5m37s
CI / OCR Service Tests (push) Successful in 28s
CI / Backend Unit Tests (push) Successful in 6m59s
CI / fail2ban Regex (push) Successful in 1m59s
CI / Compose Bucket Idempotency (push) Successful in 1m44s
The name: archiv-net declaration (needed so docker-compose.observability.yml
can join the network as external: true) caused the compose-idempotency CI job
to collide with any archiv-net left on the runner from staging or a previous
run. mc would resolve 'minio' to the wrong container and fail with a signature
mismatch.

Make the network name interpolable via COMPOSE_NETWORK_NAME (default: archiv-net
so production/staging behaviour is unchanged). Inject COMPOSE_NETWORK_NAME=
test-idem-archiv-net into the stub env file so the idempotency test always
gets a fully isolated network.
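
A sketch of the interpolable network declaration (file placement as described in this message):

```yaml
# compose file -- the Docker-level network name becomes interpolable while the
# default preserves current production/staging behaviour
networks:
  archiv-net:
    name: ${COMPOSE_NETWORK_NAME:-archiv-net}
```

The idempotency job's stub env file then sets COMPOSE_NETWORK_NAME=test-idem-archiv-net, so its `minio` alias can never resolve into a leftover staging network.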

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 16:33:07 +02:00
Marcel
7154092547 fix(deps): pin opentelemetry-bom to 1.61.0 to fix startup crash
Some checks failed
CI / Unit & Component Tests (pull_request) Successful in 5m34s
CI / OCR Service Tests (pull_request) Successful in 30s
CI / Backend Unit Tests (pull_request) Successful in 7m6s
CI / fail2ban Regex (pull_request) Successful in 1m49s
CI / Compose Bucket Idempotency (pull_request) Failing after 1m26s
opentelemetry-spring-boot-starter:2.27.0 was built against
opentelemetry-api:1.61.0. Spring Boot 4.0.0 only manages 1.55.0,
which is missing GlobalOpenTelemetry.getOrNoop(). The backend crashed
at startup with NoSuchMethodError on the first staging nightly.

Add a <dependencyManagement> import of opentelemetry-bom:1.61.0 before
the Spring Boot BOM applies, so all OTel core artifacts resolve to the
version the instrumentation starter actually requires.
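
The BOM import can be sketched as (standard Maven coordinates for the OpenTelemetry BOM):

```xml
<!-- pom.xml: imported ahead of the Spring Boot BOM so 1.61.0 wins for all
     OTel core artifacts -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-bom</artifactId>
      <version>1.61.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```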

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 16:05:44 +02:00
Marcel
ada3a3ccaf devops(ci): add --remove-orphans to observability stack deploy steps
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 5m27s
CI / OCR Service Tests (pull_request) Successful in 34s
CI / Backend Unit Tests (pull_request) Successful in 7m13s
CI / fail2ban Regex (pull_request) Successful in 1m51s
CI / Compose Bucket Idempotency (pull_request) Successful in 1m47s
CI / Unit & Component Tests (push) Successful in 5m45s
CI / OCR Service Tests (push) Successful in 36s
CI / Backend Unit Tests (push) Successful in 7m12s
CI / fail2ban Regex (push) Successful in 1m54s
CI / Compose Bucket Idempotency (push) Successful in 1m41s
Both nightly and release workflows were missing --remove-orphans on the
observability compose up, while the main app deploy step already had it.
Without it, containers removed from docker-compose.observability.yml
linger as unnamed orphans until manually pruned.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 14:55:28 +02:00
Marcel
8cf3a2a726 devops(caddy): apply full security_headers snippet to GlitchTip vhost
The GlitchTip vhost only had a manual HSTS header; the rest of the
(security_headers) snippet (X-Content-Type-Options, Referrer-Policy,
Permissions-Policy, -Server removal) was missing.
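
A sketch of the snippet and its application, reconstructed from the header names in this message (directive values illustrative):

```caddyfile
(security_headers) {
	header {
		Strict-Transport-Security "max-age=31536000; includeSubDomains"
		X-Content-Type-Options "nosniff"
		Referrer-Policy "strict-origin-when-cross-origin"
		Permissions-Policy "camera=(), microphone=(), geolocation=()"
		-Server
	}
}

glitchtip.archiv.raddatz.cloud {
	import security_headers
	reverse_proxy 127.0.0.1:3002
}
```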

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 14:54:54 +02:00
Marcel
553e2f8898 docs(deployment): add observability secrets to §3.3 Gitea secrets table
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 5m35s
CI / OCR Service Tests (pull_request) Successful in 33s
CI / Backend Unit Tests (pull_request) Successful in 7m10s
CI / fail2ban Regex (pull_request) Successful in 1m54s
CI / Compose Bucket Idempotency (pull_request) Successful in 1m39s
GRAFANA_ADMIN_PASSWORD, GLITCHTIP_SECRET_KEY, and SENTRY_DSN were
referenced in the workflow env files but absent from the secrets table,
leaving the first-run operator without a complete checklist.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 13:46:01 +02:00
Marcel
4a7349543a devops(ci): wire SENTRY_DSN into staging and production env files
Adds SENTRY_DSN as an optional secret (empty by default) so it can be
set after GlitchTip first-run without requiring another code change.
Backend reads it via application.yaml; empty value keeps Sentry disabled.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 13:45:07 +02:00
Marcel
f15e004645 devops(ci): add --wait to observability stack startup
Prometheus, Loki, Tempo, and Grafana all define healthchecks in
docker-compose.observability.yml. Without --wait, the step exits 0
as soon as the containers are created, silently masking startup failures.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 13:44:16 +02:00
Marcel
b137e3e72d devops(caddy): add HSTS to GlitchTip vhost
Caddy does not set Strict-Transport-Security on GlitchTip because the
full security_headers snippet is intentionally omitted (Permissions-Policy
interferes with the Sentry SDK CORS). Adding HSTS alone guarantees
HTTPS enforcement at the Caddy layer without breaking SDK ingestion.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 13:43:35 +02:00
Marcel
4c8a23ff14 devops(caddy): add Grafana and GlitchTip vhosts
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 5m33s
CI / OCR Service Tests (pull_request) Successful in 33s
CI / Backend Unit Tests (pull_request) Successful in 7m10s
CI / fail2ban Regex (pull_request) Successful in 1m55s
CI / Compose Bucket Idempotency (pull_request) Successful in 1m42s
grafana.archiv.raddatz.cloud → 127.0.0.1:3003 (with security headers)
glitchtip.archiv.raddatz.cloud → 127.0.0.1:3002 (no security headers —
  GlitchTip manages its own; the Sentry SDK also POSTs here)

Requires A records for both subdomains pointing at the server before
the next `systemctl reload caddy`.
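
The two vhosts from this message can be sketched as (snippet name assumed to match the existing Caddyfile convention):

```caddyfile
grafana.archiv.raddatz.cloud {
	import security_headers
	reverse_proxy 127.0.0.1:3003
}

# GlitchTip manages its own headers; the Sentry SDK also POSTs here
glitchtip.archiv.raddatz.cloud {
	reverse_proxy 127.0.0.1:3002
}
```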

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 11:27:07 +02:00
Marcel
d7d225af77 devops(observability): wire observability stack into nightly and release deploys
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 4m32s
CI / OCR Service Tests (pull_request) Successful in 17s
CI / Backend Unit Tests (pull_request) Successful in 4m3s
CI / fail2ban Regex (pull_request) Successful in 1m55s
CI / Compose Bucket Idempotency (pull_request) Successful in 1m42s
- docker-compose.prod.yml: add `name: archiv-net` so the network has a
  stable Docker name regardless of compose project name (-p flag).
  Both staging and production share the same host-level network, which
  is correct since the observability stack is a single shared instance.

- nightly.yml / release.yml: add observability env vars (POSTGRES_USER,
  PORT_GRAFANA=3003, PORT_GLITCHTIP=3002, PORT_PROMETHEUS=9090,
  GRAFANA_ADMIN_PASSWORD, GLITCHTIP_SECRET_KEY, GLITCHTIP_DOMAIN) to the
  env file, then `docker compose -f docker-compose.observability.yml up -d`
  after the app deploy step. PORT_GRAFANA=3003 avoids collision with
  staging frontend on 3001.

  Requires two new Gitea secrets: GRAFANA_ADMIN_PASSWORD, GLITCHTIP_SECRET_KEY.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 11:22:37 +02:00
Marcel
4358997482 perf(test): replace DirtiesContext(AFTER_EACH_TEST_METHOD) with @Transactional
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 4m40s
CI / OCR Service Tests (pull_request) Successful in 18s
CI / Backend Unit Tests (pull_request) Successful in 3m20s
CI / fail2ban Regex (pull_request) Successful in 47s
CI / Compose Bucket Idempotency (pull_request) Successful in 1m4s
CI / Unit & Component Tests (push) Successful in 4m20s
CI / OCR Service Tests (push) Successful in 16s
CI / Backend Unit Tests (push) Successful in 3m8s
CI / fail2ban Regex (push) Successful in 44s
CI / Compose Bucket Idempotency (push) Successful in 1m1s
4 integration test classes were restarting the full Spring context (and a new
Postgres Testcontainer, ~75s each) after every test method — 10 unnecessary
container startups adding ~12 minutes to CI. Fixed by:

- PersonServiceIntegrationTest, DocumentSearchPagedIntegrationTest,
  GeschichteServiceIntegrationTest: swap to @Transactional so each test
  rolls back instead of destroying the context.
- AuditServiceIntegrationTest: cannot use @Transactional (logAfterCommit
  hooks into AFTER_COMMIT which requires a real commit); reset state with
  @BeforeEach deleteAll() instead.
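
As a sketch of the two patterns (class names from this message; the repository field and test bodies are illustrative):

```java
import org.junit.jupiter.api.BeforeEach;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.transaction.annotation.Transactional;

// Rollback-per-test: the shared context (and its Postgres Testcontainer)
// survives the whole class instead of being destroyed after every method.
@SpringBootTest
@Transactional
class PersonServiceIntegrationTest {
    // each @Test runs in a transaction that is rolled back afterwards
}

// AFTER_COMMIT hooks need a real commit, which a rolled-back test
// transaction never produces; reset state manually instead.
@SpringBootTest
class AuditServiceIntegrationTest {
    @Autowired
    AuditRepository auditRepository; // repository name illustrative

    @BeforeEach
    void reset() {
        auditRepository.deleteAll();
    }
}
```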

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 10:29:35 +02:00
Marcel
7c2e75facc fix(backend): switch to sentry-spring-boot-4:8.41.0 for Spring Boot 4/SF7 compatibility
Some checks failed
CI / Unit & Component Tests (pull_request) Successful in 6m12s
CI / OCR Service Tests (pull_request) Successful in 42s
CI / Backend Unit Tests (pull_request) Failing after 17m13s
CI / fail2ban Regex (pull_request) Successful in 2m37s
CI / Compose Bucket Idempotency (pull_request) Successful in 2m6s
sentry-spring-boot-starter-jakarta 8.5.0 does not support Spring Boot 4.0 —
it logs an "Incompatible Spring Boot Version" warning and its SentryAutoConfiguration
crashes SF7 bean-name generation. sentry-spring-boot-4 (added in 8.21.0) is the
dedicated Spring Boot 4 module with a fixed auto-configuration class.

- Replace sentry-spring-boot-starter-jakarta:8.5.0 with sentry-spring-boot-4:8.41.0
- Delete SentryConfig.java — workaround no longer needed, auto-config handles init
- Remove spring.autoconfigure.exclude from application.yaml + application-test.yaml
- Delete SentryConfigTest.java — tested the deleted workaround class
- Update ApplicationContextTest: assert Sentry.isEnabled() is false when no DSN set
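
The dependency swap can be sketched as (artifactId as named in this message; groupId assumed to be Sentry's usual io.sentry):

```xml
<!-- pom.xml: dedicated Spring Boot 4 module replaces the jakarta starter -->
<dependency>
  <groupId>io.sentry</groupId>
  <artifactId>sentry-spring-boot-4</artifactId>
  <version>8.41.0</version>
</dependency>
```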

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 09:51:53 +02:00
Marcel
7b05b9d5a0 test(context): assert SentryAutoConfiguration is excluded from Spring context
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 09:45:32 +02:00
Marcel
20edc0474c test(exception): verify handleGeneric captures exception in Sentry and returns 500
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 09:44:10 +02:00
Marcel
fa191b5c05 test(config): unit-test SentryConfig blank-DSN no-op and non-blank init paths
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 09:43:08 +02:00
Marcel
2139d600f5 fix(backend): exclude SentryAutoConfiguration — Spring Boot 4/SF7 bean name incompatibility
Some checks failed
CI / Unit & Component Tests (pull_request) Successful in 6m26s
CI / OCR Service Tests (pull_request) Successful in 43s
CI / fail2ban Regex (pull_request) Has been cancelled
CI / Compose Bucket Idempotency (pull_request) Has been cancelled
CI / Backend Unit Tests (pull_request) Has been cancelled
SentryAutoConfiguration$HubConfiguration$SentrySpanRestClientConfiguration is a triply-
nested @Configuration class conditionally loaded when RestClient is on the classpath
(always true on Spring Framework 7). Spring Framework 7's bean name generator fails
on such deeply-nested @Import-ed classes, crashing every @SpringBootTest context.

Replace the broken auto-configuration with a minimal SentryConfig bean that calls
Sentry.init() with the same properties (DSN, environment, sample rate, PII guard,
DomainException filter). Unexpected 5xx exceptions are forwarded to Sentry via
Sentry.captureException() in GlobalExceptionHandler.handleGeneric().

Also add management.server.port=0 to application-test.yaml to eliminate TIME_WAIT
conflicts from @DirtiesContext restarts on the fixed management port 8081 (see #593).
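
The test-profile fix is a one-liner (sketch):

```yaml
# application-test.yaml -- a random management port per context avoids
# TIME_WAIT collisions across @DirtiesContext restarts
management:
  server:
    port: 0
```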

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 09:25:14 +02:00
Marcel
68e4ff4121 fix(backend): make sentry traces-sample-rate env-configurable
Some checks failed
CI / Unit & Component Tests (pull_request) Successful in 6m4s
CI / OCR Service Tests (pull_request) Successful in 32s
CI / Backend Unit Tests (pull_request) Failing after 7m9s
CI / fail2ban Regex (pull_request) Successful in 2m27s
CI / Compose Bucket Idempotency (pull_request) Successful in 1m59s
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 08:55:40 +02:00
Marcel
0a1d709c5f feat(backend): add sentry-spring-boot-starter-jakarta for GlitchTip error reporting
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 08:55:40 +02:00
Marcel
8a00d66435 fix(ci): set management.server.port=0 in test profile to fix 25-min test timeout
Some checks failed
CI / OCR Service Tests (pull_request) Waiting to run
CI / Backend Unit Tests (pull_request) Waiting to run
CI / fail2ban Regex (pull_request) Waiting to run
CI / Compose Bucket Idempotency (pull_request) Waiting to run
CI / Unit & Component Tests (pull_request) Has been cancelled
CI / Unit & Component Tests (push) Has been cancelled
CI / OCR Service Tests (push) Has been cancelled
CI / Backend Unit Tests (push) Has been cancelled
CI / fail2ban Regex (push) Has been cancelled
CI / Compose Bucket Idempotency (push) Has been cancelled
Port 8081 was fixed by #576. With four @DirtiesContext(AFTER_EACH_TEST_METHOD)
classes (22 context restarts total), the OS TIME_WAIT state holds port 8081
for ~45-60s per cycle — adding ~17 min overhead. All 1601 tests pass but
surefire's 10-min timeout fires before the suite finishes.

Fixes #593.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 08:52:21 +02:00
d2ad623bb8 Merge pull request 'feat(frontend): integrate @sentry/sveltekit for browser and SSR error reporting to GlitchTip' (#591) from feat/issue-579-sentry-sveltekit into main
Some checks failed
CI / Unit & Component Tests (push) Successful in 6m3s
CI / OCR Service Tests (push) Successful in 41s
CI / Backend Unit Tests (push) Failing after 22m19s
CI / fail2ban Regex (push) Successful in 2m12s
CI / Compose Bucket Idempotency (push) Successful in 2m5s
Merge feat/issue-579-sentry-sveltekit: Frontend @sentry/sveltekit integration (Backend Unit Tests failure: surefire RAM timeout only, no Java code in PR)
2026-05-15 08:08:20 +02:00
Marcel
00a8731cdd fix(frontend): add sentrySvelteKit Vite plugin for source map upload
Some checks failed
CI / Unit & Component Tests (pull_request) Successful in 6m19s
CI / OCR Service Tests (pull_request) Successful in 40s
CI / Backend Unit Tests (pull_request) Failing after 25m0s
CI / fail2ban Regex (pull_request) Successful in 2m13s
CI / Compose Bucket Idempotency (pull_request) Successful in 2m3s
Adds the sentrySvelteKit() Vite plugin as the first plugin in vite.config.ts.
When SENTRY_AUTH_TOKEN is set at build time, source maps are uploaded to
GlitchTip so error stack traces show the original TypeScript source and line numbers.
When SENTRY_AUTH_TOKEN is absent (CI, dev builds), upload is disabled via
autoUploadSourceMaps: false — the build succeeds normally.

Resolves Felix's review blocker on PR #591.
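
A sketch of the plugin wiring (plugin options beyond autoUploadSourceMaps are assumptions):

```typescript
// vite.config.ts -- sentrySvelteKit must come before sveltekit()
import { sentrySvelteKit } from '@sentry/sveltekit';
import { sveltekit } from '@sveltejs/kit/vite';
import { defineConfig } from 'vite';

export default defineConfig({
	plugins: [
		sentrySvelteKit({
			// upload source maps only when a token is present at build time;
			// in CI and dev builds the upload is disabled and the build succeeds
			autoUploadSourceMaps: Boolean(process.env.SENTRY_AUTH_TOKEN),
		}),
		sveltekit(),
	],
});
```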

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 06:23:34 +02:00
Marcel
b4e6e4ca2a feat(frontend): integrate @sentry/sveltekit for browser and SSR error reporting
Some checks failed
CI / Unit & Component Tests (pull_request) Successful in 6m37s
CI / OCR Service Tests (pull_request) Successful in 41s
CI / Backend Unit Tests (pull_request) Failing after 24m43s
CI / fail2ban Regex (pull_request) Successful in 2m18s
CI / Compose Bucket Idempotency (pull_request) Successful in 1m57s
Adds @sentry/sveltekit to hooks.client.ts and hooks.server.ts.
When VITE_SENTRY_DSN is unset (default), Sentry is fully disabled.
When set to a GlitchTip JavaScript project DSN, browser exceptions
and SSR handleError events are forwarded automatically.
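
A sketch of the client hook (exact option values in the repo may differ; the server hook mirrors this with `handleErrorWithSentry` on the server side):

```typescript
// hooks.client.ts
import * as Sentry from '@sentry/sveltekit';

const dsn = import.meta.env.VITE_SENTRY_DSN;

// Unset DSN (the default) leaves Sentry fully disabled.
if (dsn) {
	Sentry.init({ dsn, sendDefaultPii: false, tracesSampleRate: 0.1 });
}

// Forwards uncaught errors to GlitchTip; wraps an optional custom handler.
export const handleError = Sentry.handleErrorWithSentry();
```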

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 06:15:34 +02:00
427c3ea537 feat(observability): add GlitchTip error tracking infrastructure
Some checks failed
CI / Unit & Component Tests (push) Successful in 6m2s
CI / OCR Service Tests (push) Successful in 35s
CI / Backend Unit Tests (push) Failing after 25m18s
CI / fail2ban Regex (push) Successful in 2m18s
CI / Compose Bucket Idempotency (push) Successful in 2m0s
2026-05-15 06:12:27 +02:00
Marcel
67004737f6 fix(observability): define obs_glitchtip_worker Container in C4 diagram
Some checks failed
CI / Unit & Component Tests (pull_request) Successful in 5m45s
CI / OCR Service Tests (pull_request) Successful in 36s
CI / Backend Unit Tests (pull_request) Failing after 23m49s
CI / fail2ban Regex (pull_request) Successful in 2m13s
CI / Compose Bucket Idempotency (pull_request) Successful in 1m46s
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 04:43:09 +02:00
Marcel
3ced565aa2 docs(observability): document GlitchTip services in DEPLOYMENT.md and C4 diagram
Some checks failed
CI / Unit & Component Tests (pull_request) Successful in 5m53s
CI / OCR Service Tests (pull_request) Successful in 32s
CI / Backend Unit Tests (pull_request) Failing after 23m39s
CI / fail2ban Regex (pull_request) Successful in 2m13s
CI / Compose Bucket Idempotency (pull_request) Successful in 1m55s
Adds GlitchTip env vars to the observability env var table, extends the
services table, and adds a first-run section with superuser creation and
project setup steps. Updates the C4 L2 container diagram with GlitchTip
and Redis containers and their relationships.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 04:38:47 +02:00
Marcel
cd715029eb feat(observability): add GlitchTip error tracking to observability stack
Adds obs-glitchtip, obs-glitchtip-worker, obs-redis, and obs-glitchtip-db-init
services to docker-compose.observability.yml. The one-shot db-init container
creates the dedicated glitchtip database on the existing archive-db PostgreSQL
instance automatically on first stack start.
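
The one-shot init container can be sketched as (image tag, guard, and network wiring illustrative; POSTGRES_USER/POSTGRES_PASSWORD from the existing env file):

```yaml
# docker-compose.observability.yml
services:
  obs-glitchtip-db-init:
    image: postgres:16-alpine
    restart: "no"
    environment:
      PGPASSWORD: ${POSTGRES_PASSWORD}
    # createdb fails harmlessly when the database already exists, so the
    # stack stays idempotent across restarts
    entrypoint: ["sh", "-c", "createdb -h archive-db -U ${POSTGRES_USER} glitchtip || true"]
    networks:
      - archiv-net
```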

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 04:38:06 +02:00
84f9bbadeb Merge pull request 'feat(observability): add Grafana with provisioned datasources and dashboards' (#589) from feat/issue-577-grafana into main
Some checks failed
CI / Unit & Component Tests (push) Successful in 5m22s
CI / OCR Service Tests (push) Successful in 30s
CI / Backend Unit Tests (push) Failing after 21m45s
CI / fail2ban Regex (push) Successful in 2m2s
CI / Compose Bucket Idempotency (push) Successful in 1m50s
feat(observability): add Grafana with provisioned datasources and dashboards (#589)
2026-05-15 04:35:10 +02:00
Marcel
457c1d3aee fix(observability): add grafana healthcheck and service_healthy depends_on
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 4m19s
CI / OCR Service Tests (pull_request) Successful in 20s
CI / Backend Unit Tests (pull_request) Successful in 5m32s
CI / fail2ban Regex (pull_request) Successful in 48s
CI / Compose Bucket Idempotency (pull_request) Successful in 1m1s
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 04:09:13 +02:00
Marcel
c99321e5cf docs(observability): document Grafana in DEPLOYMENT.md and C4 diagram
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 4m7s
CI / OCR Service Tests (pull_request) Successful in 16s
CI / Backend Unit Tests (pull_request) Successful in 5m41s
CI / fail2ban Regex (pull_request) Successful in 45s
CI / Compose Bucket Idempotency (pull_request) Successful in 1m1s
Add Grafana row to the observability services table, Grafana access details
(URL, credentials, auto-provisioned datasources, pre-loaded dashboards), and
GRAFANA_ADMIN_PASSWORD to the env vars table in DEPLOYMENT.md.
Update C4 l2-containers.puml: replace placeholder Grafana entry with pinned
image version, expand observability boundary with node_exporter and cadvisor
containers, and add Rel() edges for Grafana → Prometheus, Loki, and Tempo.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 04:04:09 +02:00
Marcel
f3f8345b03 feat(observability): add Grafana with provisioned datasources and dashboards
Add obs-grafana service (grafana/grafana-oss:11.6.1) to docker-compose.observability.yml.
Datasources (Prometheus, Loki, Tempo) are auto-provisioned via
infra/observability/grafana/provisioning/datasources/datasources.yml with
cross-datasource linking (Loki traceId → Tempo, Tempo → Loki, service map via Prometheus).
Three dashboards are pre-loaded: Node Exporter Full (1860), Spring Boot Observability (17175),
Loki Logs (13639) — datasource template variables replaced with provisioned UIDs.
GRAFANA_ADMIN_PASSWORD added to .env.example.
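
A sketch of the provisioning file (service hostnames for Prometheus/Tempo and the derivedFields regex are assumptions; only Loki's obs- prefix is attested elsewhere in this log):

```yaml
# infra/observability/grafana/provisioning/datasources/datasources.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    uid: prometheus
    url: http://obs-prometheus:9090
  - name: Loki
    type: loki
    uid: loki
    url: http://obs-loki:3100
    jsonData:
      derivedFields:
        # Loki traceId -> Tempo cross-datasource link
        - name: traceId
          matcherRegex: 'traceId=(\w+)'
          datasourceUid: tempo
          url: '$${__value.raw}'
  - name: Tempo
    type: tempo
    uid: tempo
    url: http://obs-tempo:3200
```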

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 04:03:33 +02:00
c3b477c609 Merge pull request 'devops(backend): expose Prometheus metrics endpoint + OTLP trace export from Spring Boot' (#588) from feat/issue-576-backend-instrumentation into main
Some checks failed
CI / Unit & Component Tests (push) Successful in 3m19s
CI / OCR Service Tests (push) Successful in 17s
CI / Backend Unit Tests (push) Successful in 4m43s
CI / fail2ban Regex (push) Successful in 39s
CI / Compose Bucket Idempotency (push) Successful in 57s
nightly / deploy-staging (push) Failing after 2m6s
devops(backend): expose Prometheus metrics endpoint + OTLP trace export from Spring Boot (#588)
2026-05-15 03:57:14 +02:00
Marcel
3a67f7820e fix(backend): disable OTel SDK in tests + exclude azure-resources to fix semconv conflict
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 3m19s
CI / OCR Service Tests (pull_request) Successful in 16s
CI / Backend Unit Tests (pull_request) Successful in 4m45s
CI / fail2ban Regex (pull_request) Successful in 38s
CI / Compose Bucket Idempotency (pull_request) Successful in 57s
opentelemetry-spring-boot-starter:2.27.0 pulls in AzureAppServiceResourceProvider which
references ServiceAttributes.SERVICE_INSTANCE_ID — a field absent from the semconv version
used by this project. This caused every integration test to fail with NoSuchFieldError during
Spring context startup.

Fix 1 (application-test.yaml): set otel.sdk.disabled=true so the OTel auto-configuration
never runs during tests at all.

Fix 2 (pom.xml): exclude opentelemetry-azure-resources from the starter dependency to remove
the problematic provider from the dependency graph entirely.
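
Fix 2 can be sketched as (starter coordinates from this log; the excluded artifact's groupId is assumed to be the OTel contrib group):

```xml
<!-- pom.xml: drop the Azure resource provider from the starter's graph -->
<dependency>
  <groupId>io.opentelemetry.instrumentation</groupId>
  <artifactId>opentelemetry-spring-boot-starter</artifactId>
  <version>2.27.0</version>
  <exclusions>
    <exclusion>
      <groupId>io.opentelemetry.contrib</groupId>
      <artifactId>opentelemetry-azure-resources</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```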

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 03:45:08 +02:00
Marcel
6ce6122384 docs: add OTEL and tracing env vars to DEPLOYMENT.md
Some checks failed
CI / Unit & Component Tests (pull_request) Successful in 3m22s
CI / OCR Service Tests (pull_request) Successful in 16s
CI / Backend Unit Tests (pull_request) Failing after 2m33s
CI / fail2ban Regex (pull_request) Successful in 38s
CI / Compose Bucket Idempotency (pull_request) Successful in 54s
2026-05-15 03:29:38 +02:00
Marcel
b3e49a9504 devops(backend): expose Prometheus metrics endpoint + OTLP trace export from Spring Boot
Some checks failed
CI / Unit & Component Tests (pull_request) Successful in 3m20s
CI / OCR Service Tests (pull_request) Successful in 16s
CI / Backend Unit Tests (pull_request) Failing after 2m35s
CI / fail2ban Regex (pull_request) Successful in 37s
CI / Compose Bucket Idempotency (pull_request) Successful in 59s
- Add micrometer-registry-prometheus (BOM-managed) to expose /actuator/prometheus
- Add micrometer-tracing-bridge-otel (BOM-managed) for Micrometer → OTel tracing bridge
- Add opentelemetry-spring-boot-starter 2.27.0 (pinned — not in Spring Boot BOM)
- Move management to port 8081 so Prometheus scrapes directly inside archiv-net,
  bypassing both Caddy and Spring Security's session-authenticated filter chain
- Configure otel.service.name and OTLP endpoint (default localhost:4317 for CI safety)
- Set tracing sampling probability to 1.0 in base config; override via env var in compose
- Add OTEL_EXPORTER_OTLP_ENDPOINT + MANAGEMENT_TRACING_SAMPLING_PROBABILITY to docker-compose.yml
- Expose management port 8081 inside archiv-net for Prometheus scraping
- Disable trace export in application-test.yaml (probability: 0.0) for deterministic CI

OTLP export failures are non-fatal; app starts cleanly without Tempo running.
Closes #576
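
The configuration above can be sketched as an application.yaml fragment (property keys follow Spring Boot and OTel starter conventions; the service name and exposed-endpoint list are illustrative):

```yaml
management:
  server:
    port: 8081                      # scraped inside archiv-net, bypasses Caddy
  endpoints:
    web:
      exposure:
        include: health,prometheus  # illustrative subset
  tracing:
    sampling:
      probability: ${MANAGEMENT_TRACING_SAMPLING_PROBABILITY:1.0}
otel:
  service:
    name: archiv-backend            # name illustrative
  exporter:
    otlp:
      endpoint: ${OTEL_EXPORTER_OTLP_ENDPOINT:http://localhost:4317}
```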

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 03:24:35 +02:00
2eff1ab14c Merge pull request 'devops(observability): add Tempo for distributed trace storage (OTLP receiver)' (#587) from feat/issue-575-tempo into main
All checks were successful
CI / Unit & Component Tests (push) Successful in 3m21s
CI / OCR Service Tests (push) Successful in 16s
CI / Backend Unit Tests (push) Successful in 4m38s
CI / fail2ban Regex (push) Successful in 40s
CI / Compose Bucket Idempotency (push) Successful in 57s
devops(observability): add Tempo for distributed trace storage (#587)
2026-05-15 03:21:11 +02:00
Marcel
de08ffe989 devops(observability): add Tempo for distributed trace storage (OTLP receiver)
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 3m22s
CI / OCR Service Tests (pull_request) Successful in 17s
CI / Backend Unit Tests (pull_request) Successful in 4m32s
CI / fail2ban Regex (pull_request) Successful in 38s
CI / Compose Bucket Idempotency (pull_request) Successful in 56s
Closes #575

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 03:01:22 +02:00
5ed24cb6eb Merge pull request 'devops(observability): add Loki + Promtail for centralised container log aggregation' (#586) from feat/issue-574-loki-promtail into main
All checks were successful
CI / Unit & Component Tests (push) Successful in 3m22s
CI / OCR Service Tests (push) Successful in 16s
CI / Backend Unit Tests (push) Successful in 4m36s
CI / fail2ban Regex (push) Successful in 40s
CI / Compose Bucket Idempotency (push) Successful in 57s
devops(observability): add Loki + Promtail for centralised container log aggregation (#586)
2026-05-15 02:58:20 +02:00
Marcel
c1406a32f1 devops(observability): fix C4 diagram, security comment, and add Loki compactor block
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 3m22s
CI / OCR Service Tests (pull_request) Successful in 16s
CI / Backend Unit Tests (pull_request) Successful in 4m33s
CI / fail2ban Regex (pull_request) Successful in 38s
CI / Compose Bucket Idempotency (pull_request) Successful in 56s
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 02:25:34 +02:00
Marcel
22e1b25398 devops(observability): add Loki + Promtail for centralised container log aggregation
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 3m21s
CI / OCR Service Tests (pull_request) Successful in 16s
CI / Backend Unit Tests (pull_request) Successful in 4m31s
CI / fail2ban Regex (pull_request) Successful in 38s
CI / Compose Bucket Idempotency (pull_request) Successful in 57s
- Add obs-loki (grafana/loki:3.4.2) to docker-compose.observability.yml
  with healthcheck (wget /ready), expose-only port 3100, named volume loki_data
- Add obs-promtail (grafana/promtail:3.4.2) bridging archiv-net + obs-net,
  depends_on loki service_healthy, docker.sock:ro, promtail_positions volume
  for restart-safe position tracking
- Create infra/observability/loki/loki-config.yml: single-node TSDB schema v13,
  30-day retention, auth disabled (obs-net only), telemetry off
- Create infra/observability/promtail/promtail-config.yml: Docker SD scrape,
  container_name / compose_service / compose_project / logstream labels
- Update docs/DEPLOYMENT.md §4 with service table and Loki quick-check commands

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 02:18:22 +02:00
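The Docker SD scrape behind those labels can be sketched roughly as below. The `__meta_docker_*` label names come from Promtail's `docker_sd_configs` relabeling; the concrete relabel list lives in infra/observability/promtail/promtail-config.yml and may differ in detail:

```yaml
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      # strip the leading '/' Docker prefixes onto container names
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container_name'
      - source_labels: ['__meta_docker_container_label_com_docker_compose_service']
        target_label: 'compose_service'
      - source_labels: ['__meta_docker_container_label_com_docker_compose_project']
        target_label: 'compose_project'
```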
6a118589c2 Merge pull request 'devops(observability): add Prometheus + Node Exporter + cAdvisor for host and container metrics' (#585) from feat/issue-573-prometheus-metrics into main
All checks were successful
CI / Unit & Component Tests (push) Successful in 3m27s
CI / OCR Service Tests (push) Successful in 18s
CI / Backend Unit Tests (push) Successful in 4m32s
CI / fail2ban Regex (push) Successful in 41s
CI / Compose Bucket Idempotency (push) Successful in 56s
devops(observability): add Prometheus + Node Exporter + cAdvisor (#585)
2026-05-15 02:15:09 +02:00
Marcel
0c66f6298b devops(observability): fix Prometheus port binding, scrape port, and update DEPLOYMENT.md
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 3m21s
CI / OCR Service Tests (pull_request) Successful in 16s
CI / Backend Unit Tests (pull_request) Successful in 4m35s
CI / fail2ban Regex (pull_request) Successful in 38s
CI / Compose Bucket Idempotency (pull_request) Successful in 57s
- Fix spring-boot scrape target from backend:8080 to backend:8081 (actuator/management port)
- Restrict Prometheus host port binding to 127.0.0.1 to prevent unintended external exposure
- Add observability stack (Prometheus, Node Exporter, cAdvisor) to topology description
- Add PORT_PROMETHEUS env var to DEPLOYMENT.md reference table

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 01:52:28 +02:00
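The corrected scrape job might look roughly like this (assumed prometheus.yml shape; the real file sits under infra/observability/prometheus/):

```yaml
scrape_configs:
  - job_name: 'spring-boot'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['backend:8081']   # management/actuator port, not the app port 8080
```

On the host side the binding restriction presumably takes the usual Compose form, something like `127.0.0.1:${PORT_PROMETHEUS}:9090`, so Prometheus stays reachable from the VPS itself but not from the internet.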
Marcel
0c9973fdff devops(observability): add Prometheus + Node Exporter + cAdvisor for host and container metrics
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 3m22s
CI / OCR Service Tests (pull_request) Successful in 16s
CI / Backend Unit Tests (pull_request) Successful in 4m40s
CI / fail2ban Regex (pull_request) Successful in 39s
CI / Compose Bucket Idempotency (pull_request) Successful in 57s
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 01:47:07 +02:00
52508e9dea Merge pull request 'devops(observability): scaffold docker-compose.observability.yml and infra/observability/ structure' (#584) from feat/issue-572-observability-scaffold into main
All checks were successful
CI / Unit & Component Tests (push) Successful in 3m21s
CI / OCR Service Tests (push) Successful in 16s
CI / Backend Unit Tests (push) Successful in 4m34s
CI / fail2ban Regex (push) Successful in 40s
CI / Compose Bucket Idempotency (push) Successful in 57s
devops(observability): scaffold docker-compose.observability.yml and infra/observability/ structure (#584)
2026-05-15 01:45:14 +02:00
Marcel
cf8d22d81b docs: update DEPLOYMENT.md and C4 diagram for observability scaffold
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 3m31s
CI / OCR Service Tests (pull_request) Successful in 16s
CI / Backend Unit Tests (pull_request) Successful in 4m31s
CI / fail2ban Regex (pull_request) Successful in 38s
CI / Compose Bucket Idempotency (pull_request) Successful in 57s
Replace the stale "no monitoring infrastructure in place yet" note in
§4 with a brief description of the observability compose file and a
pointer to issue #581 for full docs.

Add a placeholder System_Boundary block for Prometheus + Loki + Grafana
to l2-containers.puml, showing the stack joins archiv-net.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 01:28:14 +02:00
Marcel
1d42be9882 devops(observability): scaffold docker-compose.observability.yml and infra/observability/ structure
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 3m19s
CI / OCR Service Tests (pull_request) Successful in 16s
CI / Backend Unit Tests (pull_request) Successful in 4m28s
CI / fail2ban Regex (pull_request) Successful in 39s
CI / Compose Bucket Idempotency (pull_request) Successful in 55s
Creates the skeleton observability stack (no running services yet) that all
subsequent Grafana LGTM + GlitchTip issues depend on:

- docker-compose.observability.yml: external archiv-net join, obs-net bridge,
  named volumes for all five services, placeholder comments for each service
  group (Metrics/Logs/Traces/Dashboards/Error Tracking), startup-order note
- infra/observability/{prometheus,loki,promtail,tempo,grafana/provisioning/{datasources,dashboards}}/.gitkeep
- .env.example: new # --- Observability --- section with PORT_GRAFANA,
  PORT_GLITCHTIP, PORT_PROMETHEUS, GLITCHTIP_DOMAIN, GLITCHTIP_SECRET_KEY
  (with generation hint), SENTRY_DSN, VITE_SENTRY_DSN

Verified: docker compose -f docker-compose.observability.yml config exits 0

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-15 01:23:03 +02:00
Marcel
33c738db3b fix(docker): skip postinstall in production image
Some checks failed
CI / Unit & Component Tests (pull_request) Successful in 3m9s
CI / OCR Service Tests (pull_request) Successful in 15s
CI / Backend Unit Tests (pull_request) Successful in 4m31s
CI / fail2ban Regex (pull_request) Successful in 38s
CI / Compose Bucket Idempotency (pull_request) Successful in 59s
CI / OCR Service Tests (push) Has been cancelled
CI / Backend Unit Tests (push) Has been cancelled
CI / fail2ban Regex (push) Has been cancelled
CI / Compose Bucket Idempotency (push) Has been cancelled
CI / Unit & Component Tests (push) Has been cancelled
The production stage runs npm ci --omit=dev to install runtime deps for
the pre-built SvelteKit app. The postinstall script calls patch-package,
which is a devDependency, so it is absent and causes exit code 127.

--ignore-scripts is the correct npm-native fix: no lifecycle scripts are
needed when installing into a pre-built image.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:42:52 +02:00
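The exit code in question is the shell's "command not found" status, reproducible without npm (the binary name below is deliberately made up):

```shell
# A postinstall hook that calls an absent devDependency binary fails exactly
# like this: POSIX shells report 'command not found' as exit code 127.
sh -c 'patch-package-stand-in-binary' 2>/dev/null || echo "exit=$?"

# npm-native fix applied in the production stage (shown, not executed here):
#   npm ci --omit=dev --ignore-scripts
```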
Marcel
62c807b7fe fix(invites): resolve svelte-check warnings in UserGroupsSection and page.server.test
All checks were successful
CI / Unit & Component Tests (push) Successful in 3m11s
CI / OCR Service Tests (push) Successful in 17s
CI / Backend Unit Tests (push) Successful in 4m22s
CI / fail2ban Regex (push) Successful in 39s
CI / Compose Bucket Idempotency (push) Successful in 56s
Use untrack() for intentional one-time prop seed in UserGroupsSection.
Add explicit LoadData type alias in page.server.test to avoid void|Record<string,any> union.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
82f0f7b82c test(invites): verify groupIds are forwarded from request body in InviteController
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
4994d28a20 feat(invites): show empty state when no groups exist in invite form
When groups load successfully but the list is empty, render a quiet
"Keine Gruppen vorhanden." message rather than a blank section that
leaves users uncertain whether groups failed to load.

Adds admin_new_invite_no_groups i18n key to de/en/es.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
15d91da174 docs(invites): explain InviteTokenRepository injection in UserService
Spring Framework 7 prohibits constructor injection cycles. InviteService
already injects UserService, so UserService cannot inject InviteService
for the deleteGroup guard — repository injection is the correct workaround.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
ae6d7a5467 fix(invites): deduplicate groupIds before size check in createInvite
Client-submitted duplicate UUIDs were causing a false GROUP_NOT_FOUND:
size(deduplicated_db_result)==1 != size(submitted)==2. Deduplicate input
with HashSet before calling findGroupsByIds so the size comparison is
always against unique IDs.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
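The size check that misfired can be sketched in plain shell with placeholder IDs (the real code compares the Java Set returned by findGroupsByIds against the submitted list):

```shell
submitted="aaaa aaaa"                                 # client sent the same UUID twice
found=$(printf '%s\n' $submitted | sort -u | wc -l)   # DB lookup is naturally unique: 1
sent=$(echo $submitted | wc -w)                       # raw submitted count: 2
[ "$found" -eq "$sent" ] || echo "false GROUP_NOT_FOUND"

# fix: deduplicate the input first, then compare like for like
unique=$(printf '%s\n' $submitted | sort -u | wc -l)
[ "$found" -eq "$unique" ] && echo "sizes match after dedup"
```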
Marcel
24a398a0d8 fix(invites): i18n legend + touch target in UserGroupsSection
- legend uses m.admin_new_invite_groups() instead of hardcoded "Gruppen"
  so screen readers announce the correct string in en/es locales
- label gets min-h-[44px] for WCAG 2.2 touch target compliance
- add test asserting fieldset accessible name comes from i18n key
- add test documenting empty-groups-no-error renders no checkboxes/banner

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
e2632a556d docs: align ErrorCode 4-step checklist in CLAUDE.md; note frontend sync in ARCHITECTURE.md
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
be741ff9a2 test(invites): add InviteTokenRepository integration tests for existsActiveWithGroupId + V66 group_id index
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
4995c3139e fix(invites): validate groupIds existence in createInvite — throw GROUP_NOT_FOUND
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
0a5d4fb950 feat(errors): add GROUP_NOT_FOUND error code + i18n keys
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
e4303baa40 test(invites): import real +page.server module via vi.mock env
Replace hand-copied load/action replicas with direct imports of the
real module. Mock $env/dynamic/private so the tests cover the actual
production code paths, not a duplicate that can drift.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
46c8d4553b fix(invites): add role="alert" to groups-load-error banner
Screen readers now announce the amber warning when it appears after
the form expands, without requiring the user to navigate to it.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
3fc0ec95ef fix(invites): make group checkboxes writable — $derived → $state
bind:group requires a writable $state variable; $derived is read-only
in Svelte 5, so every click was silently reset to unchecked, making
the group picker non-functional.

Also wraps checkboxes in <fieldset>/<legend> for WCAG 1.3.1 compliance.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
510fa5e398 feat(invites): group picker in new-invite form
- load() fetches /api/groups in parallel with /api/invites; returns
  sorted groups array and groupsLoadError for partial failures
- create action forwards groupIds[] to POST /api/invites so invited
  users are placed in the selected groups on registration
- +page.svelte: group checkboxes via UserGroupsSection inside the form;
  amber warning banner when groups could not be loaded
- page.svelte.test.ts: groups checkboxes + warning banner tests
- page.server.test.ts: parallel fetch, sorting, error fallback,
  groupIds in POST body

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
75453bed51 feat(frontend): add GROUP_HAS_ACTIVE_INVITES error code + i18n keys
Adds the error code to the ErrorCode union and getErrorMessage() switch.
Adds admin_new_invite_groups, admin_invite_groups_load_error, and
error_group_has_active_invites to all three locale files (de/en/es).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
78e3acaeb7 feat(groups): prevent deletion of groups referenced by active invites
Adds GROUP_HAS_ACTIVE_INVITES error code and guards UserService.deleteGroup()
with a 409 conflict when any active (non-revoked, non-expired, non-exhausted)
invite token still holds the group UUID.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:26:53 +02:00
Marcel
0f4c844002 fix(admin/system): address second-round review concerns
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 3m9s
CI / OCR Service Tests (pull_request) Successful in 16s
CI / Backend Unit Tests (pull_request) Successful in 4m29s
CI / fail2ban Regex (pull_request) Successful in 38s
CI / Compose Bucket Idempotency (pull_request) Successful in 55s
CI / Unit & Component Tests (push) Successful in 3m8s
CI / OCR Service Tests (push) Successful in 16s
CI / Backend Unit Tests (push) Successful in 4m25s
CI / fail2ban Regex (push) Successful in 38s
CI / Compose Bucket Idempotency (push) Successful in 55s
- Extract ImportStatus type to types.ts — removes duplication across
  +page.svelte, ImportStatusCard.svelte, and test file (Felix blocker)
- Fix H2 to match CLAUDE.md card pattern: text-xs uppercase tracking-widest
  text-ink-3 mb-5 (Leonie blocker 1)
- Add font-sans to RUNNING and DONE status labels (Leonie blocker 2)
- Add data-testid="processed-count" to count elements in both states
- Replace document.querySelector with locator API in spinner tests
- Tighten getByText('7') to getByTestId('processed-count') (Felix/Sara)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 16:27:33 +02:00
Marcel
4dba268a04 test(import): add IMPORT_DONE statusCode service test
Covers the success path — previously untested per Sara's review.
Creates a minimal empty XLSX via XSSFWorkbook so processRows returns 0.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 16:16:22 +02:00
Marcel
b0cf35cf06 fix(test): replace toBeAttached() with querySelector not-null check for spinner
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 3m10s
CI / OCR Service Tests (pull_request) Successful in 16s
CI / Backend Unit Tests (pull_request) Successful in 4m25s
CI / fail2ban Regex (pull_request) Successful in 40s
CI / Compose Bucket Idempotency (pull_request) Successful in 55s
toBeAttached() is not in the vitest-browser matcher set; toBeVisible() was
previously ruled out because the spinner is 0x0 px. Mirror the querySelector
pattern already used for the negative case in the same file.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 15:24:27 +02:00
Marcel
0d934a1b44 fix(test): use m() calls and toBeAttached() in ImportStatusCard tests
Some checks failed
CI / Unit & Component Tests (pull_request) Failing after 2m36s
CI / OCR Service Tests (pull_request) Successful in 15s
CI / Backend Unit Tests (pull_request) Successful in 4m21s
CI / fail2ban Regex (pull_request) Successful in 37s
CI / Compose Bucket Idempotency (pull_request) Successful in 56s
CI Chromium runs with German locale so hardcoded English strings like
'No spreadsheet file found.' never matched. Use m.admin_system_import_*()
to assert whatever locale the browser resolves to.

Spinner test used toBeVisible() on an empty <span> whose dimensions come
entirely from Tailwind CSS. Without layout CSS the span is 0×0 and fails
the visibility check; toBeAttached() asserts DOM presence, which is the
right semantic here.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 15:14:50 +02:00
Marcel
f4bda546a0 fix(test): update import-status test mocks and imports for statusCode-based i18n
Some checks failed
CI / Unit & Component Tests (pull_request) Failing after 4m6s
CI / OCR Service Tests (pull_request) Successful in 17s
CI / Backend Unit Tests (pull_request) Successful in 4m23s
CI / fail2ban Regex (pull_request) Successful in 38s
CI / Compose Bucket Idempotency (pull_request) Successful in 57s
Three test files were written against the old API shape (raw `message` field) before
the statusCode i18n field was introduced, or used the wrong `expect` import path:

- ImportStatusCard.svelte.test.ts: `@vitest/browser/context` does not export `expect`
  in this project's Vitest setup — use `vitest` like every other test file.
- page.svelte.spec.ts: FAILED mock lacked `statusCode`; assertion matched old German
  raw message instead of the i18n string for IMPORT_FAILED_NO_SPREADSHEET.
- page.svelte.test.ts: same pattern — mock lacked `statusCode`; assertion checked for
  raw backend string "database error" instead of the rendered i18n text.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 14:37:04 +02:00
Marcel
b7744667f2 fix(admin/system): address review concerns in ImportStatusCard
- Remove dead `message` field from both frontend ImportStatus types
  (field is now @JsonIgnore'd on the backend)
- Extract failure message ternary into `$derived` — business logic off
  the template (Felix)
- Add motion-reduce:animate-none to spinner — WCAG 2.1 SC 2.3.3 (Leonie)
- Replace text-green-600 with text-green-800 — WCAG AA contrast 6.1:1
  on bg-green-50 (Leonie)
- Add min-h-[44px] to all three buttons — WCAG 2.2 44px touch target (Leonie)
- Add 6 missing tests: IMPORT_FAILED_INTERNAL path, IDLE state text,
  null importStatus, ontrigger called on DONE/FAILED/IDLE buttons (Sara)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 14:37:04 +02:00
Marcel
3d36c26226 fix(import): exclude message field from API response; add auth boundary tests
- @JsonIgnore on ImportStatus.message — stops internal directory paths and
  raw exception text leaking through the admin import-status endpoint (CWE-209)
- Add importStatus_messageField_notPresentInApiResponse test (red/green verified)
- Add importStatus_returns401/403 auth boundary tests — documents and guards
  the @RequirePermission(ADMIN) protection against configuration drift

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 14:37:04 +02:00
Marcel
375fd3893c feat(admin/system): extract ImportStatusCard — spinner, text-base count, statusCode i18n
Extracts the mass-import block from +page.svelte into ImportStatusCard.svelte.

Changes per the three UX fixes from issue #533:
- RUNNING: animated spinner (animate-spin) + processed count at text-base;
  auto-poll at 2 s was already in place
- DONE: processed count at text-base, label at text-xs uppercase tracking-widest
- FAILED: maps statusCode (IMPORT_FAILED_NO_SPREADSHEET / IMPORT_FAILED_INTERNAL)
  to Paraglide messages — no raw German backend string rendered

Adds vitest-browser tests covering spinner visibility, count display,
and per-statusCode FAILED message selection.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 14:37:04 +02:00
Marcel
c5d482bead feat(i18n): add structured import failure keys; split DONE display
Replaces the {message} interpolation (raw German backend string) with
two distinct error keys: IMPORT_FAILED_NO_SPREADSHEET and
IMPORT_FAILED_INTERNAL. Also removes the {count} parameter from the
done message and adds admin_system_import_status_done_label so the
processed count can be rendered separately at text-base size.

All three locales (de / en / es) updated.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 14:37:04 +02:00
Marcel
31eacb6d06 feat(import): add structured statusCode to ImportStatus — replaces raw German message
Adds a statusCode field (IMPORT_IDLE / IMPORT_RUNNING / IMPORT_DONE /
IMPORT_FAILED_NO_SPREADSHEET / IMPORT_FAILED_INTERNAL) to ImportStatus.
The frontend will map these codes to localized strings via Paraglide
instead of rendering the backend's German message verbatim.

NoSpreadsheetException distinguishes a missing spreadsheet from other
I/O failures so the frontend can show a specific error without raw text.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 14:37:04 +02:00
Marcel
636900110a fix(ci): raise Surefire JVM ceiling 120→600 s — suite takes ~4 min
All checks were successful
CI / Unit & Component Tests (push) Successful in 3m10s
CI / OCR Service Tests (push) Successful in 16s
CI / Backend Unit Tests (push) Successful in 4m25s
CI / fail2ban Regex (push) Successful in 38s
CI / Compose Bucket Idempotency (push) Successful in 57s
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 14:35:49 +02:00
Marcel
d78ee4397b devops(ci): add testTimeout + hookTimeout to browser vitest config
Some checks failed
CI / OCR Service Tests (push) Has been cancelled
CI / Backend Unit Tests (push) Has been cancelled
CI / fail2ban Regex (push) Has been cancelled
CI / Compose Bucket Idempotency (push) Has been cancelled
CI / Unit & Component Tests (push) Has been cancelled
testTimeout: 30_000 causes Vitest to fail a hanging browser test
within 30 s when Chromium crashes mid-load instead of silently
occupying the CI slot for 14+ min.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 14:25:37 +02:00
Marcel
ebdb36b7d0 devops(ci): upload surefire XML reports as CI artifact
Captures all 102 test results independently of log verbosity.
if: always() ensures reports are available on failure — exactly
when they're needed most.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 14:25:37 +02:00
Marcel
93ff6cfb67 devops(ci): add Surefire per-test timeout and JVM ceiling
forkedProcessTimeoutInSeconds=120 caps the JVM on catastrophic hangs.
junit.jupiter.execution.timeout.default=90s times out each hanging
JUnit 5 test individually, letting healthy tests continue — replaces
the deprecated <timeout> alias that conflicted with the JVM ceiling.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 14:25:37 +02:00
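A minimal sketch of how the two settings (with the later 600 s ceiling) might sit together in the Surefire plugin configuration; this is an assumed pom.xml shape, and the project's actual file may structure it differently:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- hard ceiling: kill the forked JVM outright on a catastrophic hang -->
    <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
    <properties>
      <!-- per-test timeout via a JUnit Platform configuration parameter -->
      <configurationParameters>
        junit.jupiter.execution.timeout.default = 90s
      </configurationParameters>
    </properties>
  </configuration>
</plugin>
```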
Marcel
ed4c4a52eb devops(ci): silence Spring Boot INFO noise in test log
Set logging.level.root=WARN + logging.level.org.raddatz=INFO in
backend/src/test/resources/application.properties to keep the full
test run under Gitea's 1.4 MB log cap.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 14:25:37 +02:00
Marcel
2ca8428be4 refactor(test): hoist SubmitFn to file-level type in unsaved-guard specs
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 4m9s
CI / OCR Service Tests (pull_request) Successful in 20s
CI / Backend Unit Tests (pull_request) Successful in 5m8s
CI / fail2ban Regex (pull_request) Successful in 48s
CI / Compose Bucket Idempotency (pull_request) Successful in 1m3s
CI / Unit & Component Tests (push) Successful in 3m24s
CI / OCR Service Tests (push) Successful in 17s
CI / Backend Unit Tests (push) Successful in 4m24s
CI / fail2ban Regex (push) Successful in 40s
CI / Compose Bucket Idempotency (push) Successful in 59s
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 12:12:08 +02:00
Marcel
6fffc06c28 fix(test): allow extra result properties in enhance callback type
Some checks failed
CI / Unit & Component Tests (pull_request) Successful in 3m8s
CI / OCR Service Tests (pull_request) Successful in 17s
CI / fail2ban Regex (pull_request) Successful in 48s
CI / Compose Bucket Idempotency (pull_request) Successful in 58s
CI / Backend Unit Tests (pull_request) Failing after 17m19s
Use a [key: string]: unknown index signature so TS does not reject the
extra fields (location, status) passed to the redirect/failure result
in the spec helpers.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 12:09:14 +02:00
Marcel
ffcb901376 fix(admin): clear unsaved-changes guard before redirect on users/new
Mirror the groups/new fix: replace inline beforeNavigate/isDirty with
createUnsavedWarning() + UnsavedWarningBanner and add an enhance callback
that calls clearOnSuccess() before update() on redirect results.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 12:09:14 +02:00
Marcel
30469e74c9 fix(admin): clear unsaved-changes guard before redirect on groups/new
Use createUnsavedWarning() + UnsavedWarningBanner to replace the inline
beforeNavigate/isDirty pattern, and add an enhance callback that calls
clearOnSuccess() before update() so the guard is disarmed before
SvelteKit's internal goto() fires on a redirect result.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 12:09:14 +02:00
Marcel
5646e739c2 fix(ci): run svelte-kit sync before lint to fix cache-hit tsconfig miss
All checks were successful
CI / Unit & Component Tests (pull_request) Successful in 3m8s
CI / OCR Service Tests (pull_request) Successful in 17s
CI / Backend Unit Tests (pull_request) Successful in 4m25s
CI / fail2ban Regex (pull_request) Successful in 38s
CI / Compose Bucket Idempotency (pull_request) Successful in 57s
CI / Unit & Component Tests (push) Successful in 3m7s
CI / OCR Service Tests (push) Successful in 17s
CI / Backend Unit Tests (push) Successful in 4m15s
CI / fail2ban Regex (push) Successful in 39s
CI / Compose Bucket Idempotency (push) Successful in 58s
When the node_modules cache hits, npm ci is skipped and the prepare
lifecycle (svelte-kit sync) never runs. frontend/tsconfig.json extends
.svelte-kit/tsconfig.json which only exists after svelte-kit sync —
so ESLint fails at tsconfig resolution on every cache-warm run.

Adding an unconditional svelte-kit sync step after Paraglide compile
and before Lint ensures .svelte-kit/tsconfig.json is always present
regardless of cache state.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 12:07:15 +02:00
Marcel
bbbdf8cd09 ci: restrict push trigger to main — eliminate duplicate runs on feature branches
Some checks failed
CI / Unit & Component Tests (push) Failing after 1m5s
CI / OCR Service Tests (push) Successful in 17s
CI / Backend Unit Tests (push) Successful in 4m27s
CI / fail2ban Regex (push) Successful in 40s
CI / Compose Bucket Idempotency (push) Successful in 58s
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 11:12:24 +02:00
Marcel
f727429699 fix(ci): run client coverage even when server coverage fails
Some checks failed
CI / Unit & Component Tests (push) Has been cancelled
CI / OCR Service Tests (push) Has been cancelled
CI / Backend Unit Tests (push) Has been cancelled
CI / fail2ban Regex (push) Has been cancelled
CI / Compose Bucket Idempotency (push) Has been cancelled
Replace && with ; in test:coverage so the client vitest run is not
short-circuited when the server run exits non-zero (e.g. threshold
violation or test failure). Without this the upload-artifact step
only ever sees coverage/server.

Also updates the stale CLAUDE.md comment that said server-only.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 11:07:34 +02:00
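The short-circuit difference can be shown with stand-ins for the two vitest runs (function names are illustrative, not the real scripts):

```shell
server() { echo "server coverage: FAIL"; return 1; }
client() { echo "client coverage: ran"; }

(
  set +e                     # npm run-scripts do not use errexit; mirror that
  echo '--- "server && client" (old) ---'
  server && client           # '&&' short-circuits: client never runs
  echo '--- "server ; client" (new) ---'
  server ; client            # ';' runs client regardless of server exit code
)
```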
Marcel
e268e2dbca fix(tests): use native element clicks in layout dropdown spec
Some checks failed
CI / Compose Bucket Idempotency (push) Has been cancelled
CI / Unit & Component Tests (push) Has been cancelled
CI / OCR Service Tests (push) Has been cancelled
CI / Backend Unit Tests (push) Has been cancelled
CI / fail2ban Regex (push) Has been cancelled
CDP-based Playwright clicks (locator.click()) do not reliably trigger
Svelte 5 onclick handlers — documented in commit 0c765d81 which fixed
13 other specs. The layout dropdown tests were missed in that pass.

Applies the same pattern: ((await locator.element()) as HTMLElement).click()
for button interactions, and native KeyboardEvent dispatch for the Escape
test (dispatched on the button so it bubbles to the parent div's onkeydown).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 11:07:22 +02:00
Marcel
3de0d2f0fe fix(ci): add IMPORT_HOST_DIR stub to compose-idempotency env file
Some checks failed
CI / fail2ban Regex (push) Has been cancelled
CI / Compose Bucket Idempotency (push) Has been cancelled
CI / Unit & Component Tests (push) Has been cancelled
CI / OCR Service Tests (push) Has been cancelled
CI / Backend Unit Tests (push) Has been cancelled
Docker Compose interpolates all variables in the full file even when
only a subset of services is requested. The backend service uses
IMPORT_HOST_DIR with :? (hard-required), causing the idempotency job
to abort before any container starts. A dummy path satisfies the parser;
the backend service is never started in this job so the path need not exist.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 10:58:38 +02:00
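Compose's `${IMPORT_HOST_DIR:?}` reuses POSIX `:?` semantics, which makes the failure easy to reproduce in plain shell (illustrative only, Compose itself is not invoked here):

```shell
# ':?' aborts expansion when the variable is unset or empty, before
# anything starts — the same fail-closed behaviour the compose file relies on.
unset IMPORT_HOST_DIR
( : "${IMPORT_HOST_DIR:?is required}" ) 2>/dev/null || echo "aborts without a value"

# A dummy path satisfies the parser; it never has to exist on disk.
IMPORT_HOST_DIR=/tmp/dummy-import
: "${IMPORT_HOST_DIR:?is required}" && echo "parses with the stub"
```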
95 changed files with 21616 additions and 444 deletions


@@ -26,6 +26,46 @@ PORT_MAILPIT_SMTP=1025
# Generate with: python3 -c "import secrets; print(secrets.token_hex(32))"
OCR_TRAINING_TOKEN=change-me-in-production
# --- Observability ---
# Optional stack — start with: docker compose -f docker-compose.observability.yml up -d
# Requires the main stack to already be running (docker compose up -d creates archiv-net).
# In production the stack is managed from /opt/familienarchiv/ (see docs/DEPLOYMENT.md §4).
# Ports for host access
PORT_GRAFANA=3003
PORT_GLITCHTIP=3002
PORT_PROMETHEUS=9090
# Grafana admin password — change this before exposing Grafana beyond localhost
GRAFANA_ADMIN_PASSWORD=changeme
# GlitchTip domain — production: use https://glitchtip.archiv.raddatz.cloud (must match Caddy vhost)
GLITCHTIP_DOMAIN=http://localhost:3002
# GlitchTip secret key — Django SECRET_KEY equivalent, used to sign sessions and tokens.
# REQUIRED in production — must not be empty or 'changeme'. Fail-closed: GlitchTip will
# refuse to start with an invalid key.
# Generate with: python3 -c "import secrets; print(secrets.token_hex(50))"
GLITCHTIP_SECRET_KEY=changeme-generate-a-real-secret
# PostgreSQL hostname for GlitchTip's db-init job and workers.
# Override when only the staging stack is running (container name differs from archive-db).
# Default (archive-db) is correct for production with the full stack up.
POSTGRES_HOST=archive-db
# $$ escaping note: passwords in /opt/familienarchiv/.env that contain a literal '$' must
# use '$$' so Docker Compose does not expand them as variable references.
# Example: a password 'p@$$word' should be written as 'p@$$$$word' in the .env file.
# Error reporting DSNs — leave empty to disable the SDK (safe default).
# SENTRY_DSN: backend (Spring Boot) — used by the GlitchTip/Sentry Java SDK
SENTRY_DSN=
SENTRY_TRACES_SAMPLE_RATE=
# VITE_SENTRY_DSN: frontend (SvelteKit) — injected at build time via Vite
VITE_SENTRY_DSN=
# Sentry/GlitchTip auth token for source map upload at build time (optional)
SENTRY_AUTH_TOKEN=
# Production SMTP — uncomment and fill in to send real emails instead of catching them
# APP_BASE_URL=https://your-domain.example.com
# MAIL_HOST=smtp.example.com


@@ -2,6 +2,7 @@ name: CI
on:
push:
branches: [main]
pull_request:
jobs:
@@ -32,6 +33,10 @@ jobs:
run: npx @inlang/paraglide-js compile --project ./project.inlang --outdir ./src/lib/paraglide
working-directory: frontend
- name: Sync SvelteKit
run: npx svelte-kit sync
working-directory: frontend
- name: Lint
run: npm run lint
working-directory: frontend
@@ -192,6 +197,14 @@ jobs:
./mvnw clean test
working-directory: backend
- name: Upload surefire reports
if: always()
# Gitea Actions (act_runner) does not implement upload-artifact v4 protocol — pinned per ADR-014. Do NOT upgrade. See #557.
uses: actions/upload-artifact@v3
with:
name: surefire-reports
path: backend/target/surefire-reports/
# ─── fail2ban Regex Regression ────────────────────────────────────────────────
# The filter parses Caddy's JSON access log; a Caddy upgrade that reorders
# the JSON keys would silently break it (fail2ban-regex would return
@@ -291,6 +304,8 @@ jobs:
MAIL_HOST=mailpit
MAIL_PORT=1025
APP_MAIL_FROM=noreply@local
IMPORT_HOST_DIR=/tmp/dummy-import
COMPOSE_NETWORK_NAME=test-idem-archiv-net
EOF
- name: Bring up minio


@@ -30,6 +30,9 @@ name: nightly
# STAGING_OCR_TRAINING_TOKEN
# STAGING_APP_ADMIN_USERNAME
# STAGING_APP_ADMIN_PASSWORD
# GRAFANA_ADMIN_PASSWORD
# GLITCHTIP_SECRET_KEY
# SENTRY_DSN (set after GlitchTip first-run; empty = Sentry disabled)
on:
schedule:
@@ -74,6 +77,8 @@ jobs:
MAIL_STARTTLS_ENABLE=false
APP_MAIL_FROM=noreply@staging.raddatz.cloud
IMPORT_HOST_DIR=/srv/familienarchiv-staging/import
POSTGRES_USER=archiv
SENTRY_DSN=${{ secrets.SENTRY_DSN }}
EOF
- name: Verify backend /import:ro mount is wired
@@ -120,6 +125,77 @@ jobs:
--profile staging \
up -d --wait --remove-orphans
- name: Deploy observability configs
# Copies the compose file and config tree from the workspace checkout
# into /opt/familienarchiv/ — the permanent location that persists
# between CI runs. Containers started in the next step bind-mount
# from there, so a future workspace wipe cannot corrupt a running
# config file.
#
# obs-secrets.env is written fresh from Gitea secrets on every run so
# Gitea is always the single source of truth for secret rotation.
# Non-secret config lives in infra/observability/obs.env (tracked in git).
run: |
rm -rf /opt/familienarchiv/infra/observability
mkdir -p /opt/familienarchiv/infra/observability
cp -r infra/observability/. /opt/familienarchiv/infra/observability/
cp docker-compose.observability.yml /opt/familienarchiv/
cat > /opt/familienarchiv/obs-secrets.env <<'EOF'
GRAFANA_ADMIN_PASSWORD=${{ secrets.GRAFANA_ADMIN_PASSWORD }}
GLITCHTIP_SECRET_KEY=${{ secrets.GLITCHTIP_SECRET_KEY }}
POSTGRES_PASSWORD=${{ secrets.STAGING_POSTGRES_PASSWORD }}
POSTGRES_HOST=archiv-staging-db-1
EOF
# Note: POSTGRES_HOST is derived from the Compose project name (archiv-staging)
# and service name (db). A project rename requires updating this value.
chmod 600 /opt/familienarchiv/obs-secrets.env
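The container-name note above relies on Docker Compose's default naming convention, which can be sketched as follows (the helper is illustrative, not project code):

```python
# Docker Compose names containers "<project>-<service>-<replica index>" by
# default; the POSTGRES_HOST value above is exactly this for project
# "archiv-staging", service "db", first replica.
def compose_container_name(project: str, service: str, index: int = 1) -> str:
    return f"{project}-{service}-{index}"

print(compose_container_name("archiv-staging", "db"))  # archiv-staging-db-1
```

This is why a Compose project rename silently breaks the observability stack's database connection: the hostname is derived, not configured.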
- name: Validate observability compose config
# Dry-run: resolves all variable substitutions and reports any missing
# required keys before containers start. Catches undefined variables and
# YAML errors in config files updated by the previous step.
# --env-file order: obs.env first (git-tracked defaults), obs-secrets.env
# second (CI-written secrets). Later files win on duplicate keys, so
# obs-secrets.env overrides POSTGRES_HOST set in obs.env.
run: |
docker compose \
-f /opt/familienarchiv/docker-compose.observability.yml \
--env-file /opt/familienarchiv/infra/observability/obs.env \
--env-file /opt/familienarchiv/obs-secrets.env \
config --quiet
- name: Start observability stack
# Runs with absolute paths so bind mounts resolve to stable host paths
# that survive workspace wipes between nightly runs (see ADR-016).
# Non-secret config from obs.env (git-tracked); secrets from obs-secrets.env
# (written fresh from Gitea secrets above). --env-file order: obs.env first,
# obs-secrets.env second — later file wins on duplicate keys.
run: |
docker compose \
-f /opt/familienarchiv/docker-compose.observability.yml \
--env-file /opt/familienarchiv/infra/observability/obs.env \
--env-file /opt/familienarchiv/obs-secrets.env \
up -d --wait --remove-orphans
- name: Assert observability stack health
# docker compose up --wait covers services WITH healthcheck directives only.
# obs-promtail, obs-cadvisor, obs-node-exporter, and obs-glitchtip-worker have
# no healthcheck — they are considered "started" as soon as the process runs.
# This step explicitly asserts the five healthchecked critical services are
# healthy before the smoke test proceeds.
run: |
set -e
unhealthy=""
for svc in obs-loki obs-prometheus obs-grafana obs-tempo obs-glitchtip; do
status=$(docker inspect "$svc" --format '{{.State.Health.Status}}' 2>/dev/null || echo "missing")
if [ "$status" != "healthy" ]; then
echo "::error::$svc is not healthy (status: $status)"
unhealthy="$unhealthy $svc"
fi
done
[ -z "$unhealthy" ] || exit 1
echo "All critical observability services are healthy"
- name: Reload Caddy
# Apply any committed Caddyfile changes before smoke-testing the
# public surface. Without this step, a Caddyfile edit lands in the

View File

@@ -34,6 +34,9 @@ name: release
# MAIL_PORT
# MAIL_USERNAME
# MAIL_PASSWORD
# GRAFANA_ADMIN_PASSWORD
# GLITCHTIP_SECRET_KEY
# SENTRY_DSN (set after GlitchTip first-run; empty = Sentry disabled)
on:
push:
@@ -72,6 +75,8 @@ jobs:
MAIL_STARTTLS_ENABLE=true
APP_MAIL_FROM=noreply@raddatz.cloud
IMPORT_HOST_DIR=/srv/familienarchiv-production/import
POSTGRES_USER=archiv
SENTRY_DSN=${{ secrets.SENTRY_DSN }}
EOF
- name: Build images
@@ -93,6 +98,75 @@ jobs:
--env-file .env.production \
up -d --wait --remove-orphans
- name: Deploy observability configs
# Mirrors the nightly approach: copies obs compose file and config tree
# to /opt/familienarchiv/ (permanent path, survives workspace wipes — ADR-016),
# then writes obs-secrets.env fresh from Gitea secrets.
# Non-secret config lives in infra/observability/obs.env (tracked in git).
run: |
rm -rf /opt/familienarchiv/infra/observability
mkdir -p /opt/familienarchiv/infra/observability
cp -r infra/observability/. /opt/familienarchiv/infra/observability/
cp docker-compose.observability.yml /opt/familienarchiv/
cat > /opt/familienarchiv/obs-secrets.env <<'EOF'
GRAFANA_ADMIN_PASSWORD=${{ secrets.GRAFANA_ADMIN_PASSWORD }}
GLITCHTIP_SECRET_KEY=${{ secrets.GLITCHTIP_SECRET_KEY }}
POSTGRES_PASSWORD=${{ secrets.PROD_POSTGRES_PASSWORD }}
POSTGRES_HOST=archiv-production-db-1
EOF
# Note: POSTGRES_HOST is derived from the Compose project name (archiv-production)
# and service name (db). A project rename requires updating this value.
chmod 600 /opt/familienarchiv/obs-secrets.env
- name: Validate observability compose config
# Dry-run: resolves all variable substitutions and reports any missing
# required keys before containers start. Catches undefined variables and
# YAML errors in config files updated by the previous step.
# --env-file order: obs.env first (git-tracked defaults), obs-secrets.env
# second (CI-written secrets). Later files win on duplicate keys, so
# obs-secrets.env overrides POSTGRES_HOST set in obs.env.
# Keep in sync with the equivalent step in nightly.yml (#603).
run: |
docker compose \
-f /opt/familienarchiv/docker-compose.observability.yml \
--env-file /opt/familienarchiv/infra/observability/obs.env \
--env-file /opt/familienarchiv/obs-secrets.env \
config --quiet
- name: Start observability stack
# Runs with absolute paths so bind mounts resolve to stable host paths
# that survive workspace wipes between runs (see ADR-016).
# Non-secret config from obs.env (git-tracked); secrets from obs-secrets.env
# (written fresh from Gitea secrets above). --env-file order: obs.env first,
# obs-secrets.env second — later file wins on duplicate keys.
# Keep in sync with the equivalent step in nightly.yml (#603).
run: |
docker compose \
-f /opt/familienarchiv/docker-compose.observability.yml \
--env-file /opt/familienarchiv/infra/observability/obs.env \
--env-file /opt/familienarchiv/obs-secrets.env \
up -d --wait --remove-orphans
- name: Assert observability stack health
# docker compose up --wait covers services WITH healthcheck directives only.
# obs-promtail, obs-cadvisor, obs-node-exporter, and obs-glitchtip-worker have
# no healthcheck — they are considered "started" as soon as the process runs.
# This step explicitly asserts the five healthchecked critical services are
# healthy before the smoke test proceeds.
# Keep in sync with the equivalent step in nightly.yml (#603).
run: |
set -e
unhealthy=""
for svc in obs-loki obs-prometheus obs-grafana obs-tempo obs-glitchtip; do
status=$(docker inspect "$svc" --format '{{.State.Health.Status}}' 2>/dev/null || echo "missing")
if [ "$status" != "healthy" ]; then
echo "::error::$svc is not healthy (status: $status)"
unhealthy="$unhealthy $svc"
fi
done
[ -z "$unhealthy" ] || exit 1
echo "All critical observability services are healthy"
- name: Reload Caddy
# See nightly.yml — same rationale and mechanism: DooD job containers
# cannot call systemctl directly; nsenter via a privileged sibling

View File

@@ -159,7 +159,7 @@ Input DTOs live flat in the domain package. Response types are the model entitie
→ See [CONTRIBUTING.md §Error handling](./CONTRIBUTING.md#error-handling)
**LLM reminder:** use `DomainException.notFound/forbidden/conflict/internal()` from service methods — never throw raw exceptions. When adding a new `ErrorCode`: (1) add to `ErrorCode.java`, (2) mirror in `frontend/src/lib/shared/errors.ts`, (3) add i18n keys in `messages/{de,en,es}.json`.
**LLM reminder:** use `DomainException.notFound/forbidden/conflict/internal()` from service methods — never throw raw exceptions. When adding a new `ErrorCode`: (1) add to `ErrorCode.java`, (2) add to `ErrorCode` type in `frontend/src/lib/shared/errors.ts`, (3) add a `case` in `getErrorMessage()`, (4) add i18n keys in `messages/{de,en,es}.json`.
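The four-step checklist above invites drift between the Java enum and its TypeScript mirror; a guard of the kind it implies could look like this (sample inputs are inlined, and the regexes are illustrative assumptions, not project tooling):

```python
import re

# Hypothetical consistency check for the checklist above: every ErrorCode
# constant in the Java enum must appear in the TS union, and vice versa.
java_enum = """
    USER_NOT_FOUND,
    GROUP_NOT_FOUND,
    EMAIL_ALREADY_IN_USE,
"""
ts_union = "export type ErrorCode = 'USER_NOT_FOUND' | 'GROUP_NOT_FOUND' | 'EMAIL_ALREADY_IN_USE';"

java_codes = set(re.findall(r"^\s*([A-Z][A-Z0-9_]*),", java_enum, re.MULTILINE))
ts_codes = set(re.findall(r"'([A-Z][A-Z0-9_]*)'", ts_union))

assert java_codes == ts_codes, f"drift: {java_codes ^ ts_codes}"
print("ErrorCode enum and TS union are in sync")
```

Run against the real `ErrorCode.java` and `errors.ts` files, a check like this would fail CI the moment step (1) is done without step (2).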
### Security / Permissions
@@ -274,6 +274,35 @@ Back button pattern — use the shared `<BackButton>` component from `$lib/share
→ See [docs/DEPLOYMENT.md](./docs/DEPLOYMENT.md)
### Observability stack (separate compose file)
Run via `docker-compose.observability.yml` — requires the main stack to be running first. Full setup procedure: [docs/DEPLOYMENT.md §4](./docs/DEPLOYMENT.md#4-logs--observability).
| Service | Container | Default Port | Purpose |
|---------|-----------|-------------|---------|
| Grafana | `obs-grafana` | 3003 | Metrics / logs / traces dashboard |
| Prometheus | `obs-prometheus` | 9090 (dev only — `127.0.0.1` bound) | Metrics store |
| Loki | `obs-loki` | — (internal) | Log store |
| Tempo | `obs-tempo` | — (internal) | Trace store |
| GlitchTip | `obs-glitchtip` | 3002 | Error tracking (Sentry-compatible) |
### Observability env vars
| Variable | Purpose |
|----------|---------|
| `PORT_GRAFANA` | Host port for Grafana UI (default: `3003`) |
| `PORT_GLITCHTIP` | Host port for GlitchTip UI (default: `3002`) |
| `PORT_PROMETHEUS` | Host port for Prometheus UI (default: `9090`) |
| `GRAFANA_ADMIN_PASSWORD` | Grafana `admin` login password — generate with `openssl rand -hex 32` |
| `GLITCHTIP_SECRET_KEY` | Django secret key for GlitchTip — generate with `python3 -c "import secrets; print(secrets.token_hex(32))"` |
| `GLITCHTIP_DOMAIN` | Public-facing base URL for GlitchTip (email links, CORS), e.g. `https://glitchtip.example.com` |
| `SENTRY_DSN` | GlitchTip/Sentry DSN for the backend (Spring Boot) — leave empty to disable |
| `VITE_SENTRY_DSN` | GlitchTip/Sentry DSN for the frontend (SvelteKit) — injected at build time via Vite |
## Observability
→ See [docs/OBSERVABILITY.md](./docs/OBSERVABILITY.md) — where to look for logs, traces, metrics, and errors.
## API Testing
HTTP test files are in `backend/api_tests/` for use with the VS Code REST Client extension.

View File

@@ -29,11 +29,30 @@
<properties>
<java.version>21</java.version>
</properties>
<dependencyManagement>
<dependencies>
<!-- opentelemetry-spring-boot-starter:2.27.0 was built against opentelemetry-api:1.61.0,
but Spring Boot 4.0.0 BOM only manages 1.55.0 (missing GlobalOpenTelemetry.getOrNoop()).
Import the core OTel BOM here to override it before the Spring Boot BOM applies. -->
<dependency>
<groupId>io.opentelemetry</groupId>
<artifactId>opentelemetry-bom</artifactId>
<version>1.61.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<!-- Spring Boot 4.0 splits Micrometer metrics export (incl. Prometheus scrape endpoint) into its own starter -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-micrometer-metrics</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
@@ -197,6 +216,42 @@
<artifactId>jsoup</artifactId>
<version>1.18.1</version>
</dependency>
<!-- Observability: Prometheus metrics scrape endpoint (version managed by Spring Boot BOM) -->
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
<!-- Observability: Micrometer → OpenTelemetry tracing bridge (version managed by Spring Boot BOM) -->
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-tracing-bridge-otel</artifactId>
</dependency>
<!-- Observability: OTel Spring Boot auto-instrumentation — NOT in Spring Boot BOM, pinned explicitly -->
<dependency>
<groupId>io.opentelemetry.instrumentation</groupId>
<artifactId>opentelemetry-spring-boot-starter</artifactId>
<version>2.27.0</version>
<exclusions>
<!-- Excludes AzureAppServiceResourceProvider which references ServiceAttributes.SERVICE_INSTANCE_ID
that does not exist in the semconv version pulled by this project. -->
<exclusion>
<groupId>io.opentelemetry.contrib</groupId>
<artifactId>opentelemetry-azure-resources</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- Sentry error reporting (GlitchTip-compatible) — sentry-spring-boot-4 is the
Spring Boot 4 / Spring Framework 7 compatible module (replaces the jakarta starter
which crashes with SF7 due to bean-name generation for triply-nested @Import classes) -->
<dependency>
<groupId>io.sentry</groupId>
<artifactId>sentry-spring-boot-4</artifactId>
<version>8.41.0</version>
</dependency>
</dependencies>
@@ -273,6 +328,16 @@
</profiles>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
<systemPropertyVariables>
<junit.jupiter.execution.timeout.default>90 s</junit.jupiter.execution.timeout.default>
</systemPropertyVariables>
</configuration>
</plugin>
</plugins>
</build>

View File

@@ -30,6 +30,8 @@ public enum ErrorCode {
// --- Users ---
/** A user with the given ID or username does not exist. 404 */
USER_NOT_FOUND,
/** A group with the given ID does not exist. 404 */
GROUP_NOT_FOUND,
/** The supplied email address is already used by another account. 409 */
EMAIL_ALREADY_IN_USE,
/** The supplied current password does not match the stored hash. 400 */
@@ -52,6 +54,8 @@ public enum ErrorCode {
INVITE_REVOKED,
/** The invite has passed its expiry date. 410 */
INVITE_EXPIRED,
/** A group cannot be deleted because one or more active invites reference it. 409 */
GROUP_HAS_ACTIVE_INVITES,
// --- Auth ---
/** The request is not authenticated. 401 */

View File

@@ -2,6 +2,7 @@ package org.raddatz.familienarchiv.exception;
import java.util.stream.Collectors;
import io.sentry.Sentry;
import jakarta.validation.ConstraintViolationException;
import org.raddatz.familienarchiv.exception.DomainException;
import org.raddatz.familienarchiv.exception.ErrorCode;
@@ -63,6 +64,7 @@ public class GlobalExceptionHandler {
@ExceptionHandler(Exception.class)
public ResponseEntity<ErrorResponse> handleGeneric(Exception ex) {
Sentry.captureException(ex);
log.error("Unhandled exception", ex);
return ResponseEntity.internalServerError()
.body(new ErrorResponse(ErrorCode.INTERNAL_ERROR, "An unexpected error occurred"));

View File

@@ -1,5 +1,6 @@
package org.raddatz.familienarchiv.importing;
import com.fasterxml.jackson.annotation.JsonIgnore;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.apache.poi.ss.usermodel.*;
@@ -52,9 +53,9 @@ public class MassImportService {
public enum State { IDLE, RUNNING, DONE, FAILED }
public record ImportStatus(State state, String message, int processed, LocalDateTime startedAt) {}
public record ImportStatus(State state, String statusCode, @JsonIgnore String message, int processed, LocalDateTime startedAt) {}
private volatile ImportStatus currentStatus = new ImportStatus(State.IDLE, "Kein Import gestartet.", 0, null);
private volatile ImportStatus currentStatus = new ImportStatus(State.IDLE, "IMPORT_IDLE", "Kein Import gestartet.", 0, null);
public ImportStatus getStatus() {
return currentStatus;
@@ -116,20 +117,29 @@ public class MassImportService {
if (currentStatus.state() == State.RUNNING) {
throw DomainException.conflict(ErrorCode.IMPORT_ALREADY_RUNNING, "A mass import is already in progress");
}
currentStatus = new ImportStatus(State.RUNNING, "Import läuft...", 0, LocalDateTime.now());
currentStatus = new ImportStatus(State.RUNNING, "IMPORT_RUNNING", "Import läuft...", 0, LocalDateTime.now());
try {
File spreadsheet = findSpreadsheetFile();
log.info("Starte Massenimport aus: {}", spreadsheet.getAbsolutePath());
int processed = processRows(readSpreadsheet(spreadsheet));
currentStatus = new ImportStatus(State.DONE,
currentStatus = new ImportStatus(State.DONE, "IMPORT_DONE",
"Import abgeschlossen. " + processed + " Dokumente verarbeitet.",
processed, currentStatus.startedAt());
} catch (NoSpreadsheetException e) {
log.error("Massenimport fehlgeschlagen: keine Tabellendatei", e);
currentStatus = new ImportStatus(State.FAILED, "IMPORT_FAILED_NO_SPREADSHEET",
"Fehler: " + e.getMessage(), 0, currentStatus.startedAt());
} catch (Exception e) {
log.error("Massenimport fehlgeschlagen", e);
currentStatus = new ImportStatus(State.FAILED, "Fehler: " + e.getMessage(), 0, currentStatus.startedAt());
currentStatus = new ImportStatus(State.FAILED, "IMPORT_FAILED_INTERNAL",
"Fehler: " + e.getMessage(), 0, currentStatus.startedAt());
}
}
private static class NoSpreadsheetException extends RuntimeException {
NoSpreadsheetException(String message) { super(message); }
}
private File findSpreadsheetFile() throws IOException {
try (Stream<Path> files = Files.list(Paths.get(importDir))) {
return files
@@ -138,7 +148,7 @@ public class MassImportService {
return name.endsWith(".ods") || name.endsWith(".xlsx") || name.endsWith(".xls");
})
.findFirst()
.orElseThrow(() -> new RuntimeException(
.orElseThrow(() -> new NoSpreadsheetException(
"Keine Tabellendatei (.ods/.xlsx/.xls) in " + importDir + " gefunden!"))
.toFile();
}

View File

@@ -3,13 +3,16 @@ package org.raddatz.familienarchiv.security;
import lombok.RequiredArgsConstructor;
import org.raddatz.familienarchiv.user.CustomUserDetailsService;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.annotation.Order;
import org.springframework.core.env.Environment;
import org.springframework.security.authentication.dao.DaoAuthenticationProvider;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configurers.AbstractHttpConfigurer;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.web.SecurityFilterChain;
@@ -34,6 +37,28 @@ public class SecurityConfig {
return authProvider;
}
@Bean
@Order(1)
public SecurityFilterChain managementFilterChain(HttpSecurity http) throws Exception {
http
.securityMatcher("/actuator/**")
.authorizeHttpRequests(auth -> {
// Health and Prometheus are open — Docker health checks and Prometheus scraping need no credentials.
auth.requestMatchers("/actuator/health", "/actuator/prometheus").permitAll();
// All other actuator endpoints (metrics, info, env, heapdump…) require authentication.
auth.anyRequest().authenticated();
})
// Explicitly return 401 for any unauthenticated actuator request.
// Without this override, Spring Security's DelegatingAuthenticationEntryPoint
// would redirect browser-like clients to the form-login page (302 → /login),
// making it impossible to distinguish "not authenticated" from "not found" in tests.
.exceptionHandling(ex -> ex.authenticationEntryPoint(
(req, res, e) -> res.setStatus(HttpServletResponse.SC_UNAUTHORIZED)))
.formLogin(AbstractHttpConfigurer::disable)
.csrf(AbstractHttpConfigurer::disable);
return http.build();
}
@Bean
public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
http
@@ -54,8 +79,10 @@ public class SecurityConfig {
.csrf(csrf -> csrf.disable())
.authorizeHttpRequests(auth -> {
// Health endpoint must be open so CI/Docker health checks work without credentials
auth.requestMatchers("/actuator/health").permitAll();
// Actuator endpoints are governed by managementFilterChain (@Order(1)) above.
// The permitAll() lines here are a belt-and-suspenders fallback in case any
// actuator path escapes that chain's securityMatcher. See docs/adr/017.
auth.requestMatchers("/actuator/health", "/actuator/prometheus").permitAll();
// Password reset endpoints are unauthenticated by nature
auth.requestMatchers("/api/auth/forgot-password", "/api/auth/reset-password").permitAll();
// Invite-based registration endpoints are public

View File

@@ -52,7 +52,11 @@ public class InviteService {
public InviteToken createInvite(CreateInviteRequest dto, AppUser creator) {
Set<UUID> groupIds = new HashSet<>();
if (dto.getGroupIds() != null && !dto.getGroupIds().isEmpty()) {
List<UserGroup> groups = userService.findGroupsByIds(dto.getGroupIds());
Set<UUID> uniqueIds = new HashSet<>(dto.getGroupIds());
List<UserGroup> groups = userService.findGroupsByIds(new ArrayList<>(uniqueIds));
if (groups.size() != uniqueIds.size()) {
throw DomainException.notFound(ErrorCode.GROUP_NOT_FOUND, "One or more group IDs do not exist");
}
groups.forEach(g -> groupIds.add(g.getId()));
}

View File

@@ -24,4 +24,7 @@ public interface InviteTokenRepository extends JpaRepository<InviteToken, UUID>
@Query("SELECT t FROM InviteToken t ORDER BY t.createdAt DESC")
List<InviteToken> findAllOrderedByCreatedAt();
@Query("SELECT CASE WHEN COUNT(t) > 0 THEN true ELSE false END FROM InviteToken t JOIN t.groupIds g WHERE g = :groupId AND t.revoked = false AND (t.expiresAt IS NULL OR t.expiresAt > CURRENT_TIMESTAMP) AND (t.maxUses IS NULL OR t.useCount < t.maxUses)")
boolean existsActiveWithGroupId(@Param("groupId") UUID groupId);
}

View File

@@ -37,6 +37,9 @@ public class UserService {
private final AppUserRepository userRepository;
private final UserGroupRepository groupRepository;
// Injected directly (not via InviteService) to avoid a constructor injection cycle:
// InviteService → UserService → InviteService. Spring Framework 7 forbids such cycles.
private final InviteTokenRepository inviteTokenRepository;
private final PasswordEncoder passwordEncoder;
private final AuditService auditService;
@@ -288,6 +291,10 @@ public class UserService {
@Transactional
public void deleteGroup(UUID id) {
if (inviteTokenRepository.existsActiveWithGroupId(id)) {
throw DomainException.conflict(ErrorCode.GROUP_HAS_ACTIVE_INVITES,
"Cannot delete group " + id + " — referenced by one or more active invites");
}
groupRepository.deleteById(id);
}
}

View File

@@ -45,9 +45,50 @@ server:
forward-headers-strategy: native
management:
server:
# Management port is separate from the app port so that:
# (a) Caddy never proxies /actuator/* (it only routes :8080 → the app port)
# (b) Prometheus scrapes backend:8081 directly inside archiv-net, not via Caddy
# Note: in Spring Boot 4.0 the management port shares the security filter chain; /actuator/health
# and /actuator/prometheus must be explicitly permitted in SecurityConfig — see SecurityConfig.java.
port: 8081
endpoints:
web:
exposure:
include: health,info,prometheus,metrics
endpoint:
prometheus:
enabled: true
# Spring Boot 4.0: metrics export is disabled by default — explicitly opt in for Prometheus
prometheus:
metrics:
export:
enabled: true
metrics:
tags:
# Common tag applied to every metric so Grafana's Spring Boot dashboard can filter by application name.
# Override via MANAGEMENT_METRICS_TAGS_APPLICATION env var.
application: ${spring.application.name}
health:
mail:
enabled: false
tracing:
sampling:
probability: 1.0 # 100% in dev; override via MANAGEMENT_TRACING_SAMPLING_PROBABILITY in prod compose
# OpenTelemetry trace export — failures are non-fatal (app starts cleanly without Tempo running)
# Port 4318 = OTLP HTTP (the default transport for Spring Boot's HttpExporter).
# Port 4317 is gRPC-only; sending HTTP/1.1 to it produces "Connection reset".
otel:
service:
name: familienarchiv-backend
exporter:
otlp:
endpoint: ${OTEL_EXPORTER_OTLP_ENDPOINT:http://localhost:4318}
logs:
exporter: none # Promtail captures Docker logs; disable OTLP log export (Tempo only accepts traces)
metrics:
exporter: none # Prometheus scrapes /actuator/prometheus; disable OTLP metric export to Tempo
springdoc:
api-docs:
@@ -93,3 +134,12 @@ ocr:
sender-model:
activation-threshold: 100
retrain-delta: 50
sentry:
dsn: ${SENTRY_DSN:}
environment: ${SPRING_PROFILES_ACTIVE:dev}
traces-sample-rate: ${SENTRY_TRACES_SAMPLE_RATE:1.0}
send-default-pii: false
enable-tracing: true
ignored-exceptions-for-type:
- org.raddatz.familienarchiv.exception.DomainException
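The management-port comments above imply a Prometheus scrape job of roughly this shape — a hedged sketch; the actual job in infra/observability/ may use different names:

```yaml
# Hypothetical scrape job matching the comments above: scrape the backend's
# management port (8081) directly on the Docker network, bypassing Caddy.
scrape_configs:
  - job_name: 'familienarchiv-backend'
    metrics_path: /actuator/prometheus
    static_configs:
      - targets: ['backend:8081']
```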

View File

@@ -0,0 +1,3 @@
-- The composite PK (invite_token_id, group_id) does not support efficient lookups by group_id alone.
-- Add a dedicated index to support existsActiveWithGroupId queries.
CREATE INDEX idx_itg_group_id ON invite_token_group_ids (group_id);

View File

@@ -0,0 +1,63 @@
package org.raddatz.familienarchiv;
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.server.LocalManagementPort;
import org.springframework.context.annotation.Import;
import org.springframework.http.ResponseEntity;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.springframework.web.client.DefaultResponseErrorHandler;
import org.springframework.web.client.RestTemplate;
import software.amazon.awssdk.services.s3.S3Client;
import java.io.IOException;
import static org.assertj.core.api.Assertions.assertThat;
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles("test")
@Import(PostgresContainerConfig.class)
class ActuatorPrometheusIT {
@LocalManagementPort
private int managementPort;
@MockitoBean
S3Client s3Client;
@Test
void prometheus_endpoint_returns_200_without_credentials() {
ResponseEntity<String> response = noThrowTemplate().getForEntity(
"http://localhost:" + managementPort + "/actuator/prometheus", String.class);
assertThat(response.getStatusCode().value()).isEqualTo(200);
}
@Test
void prometheus_endpoint_returns_jvm_metrics() {
ResponseEntity<String> response = noThrowTemplate().getForEntity(
"http://localhost:" + managementPort + "/actuator/prometheus", String.class);
assertThat(response.getBody()).contains("jvm_memory_used_bytes");
}
@Test
void actuator_metrics_requires_authentication() {
ResponseEntity<String> response = noThrowTemplate().getForEntity(
"http://localhost:" + managementPort + "/actuator/metrics", String.class);
assertThat(response.getStatusCode().value()).isEqualTo(401);
}
private RestTemplate noThrowTemplate() {
RestTemplate template = new RestTemplate();
template.setErrorHandler(new DefaultResponseErrorHandler() {
@Override
public boolean hasError(org.springframework.http.client.ClientHttpResponse response) throws IOException {
return false;
}
});
return template;
}
}

View File

@@ -1,14 +1,18 @@
package org.raddatz.familienarchiv;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Import;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.testcontainers.containers.PostgreSQLContainer;
import software.amazon.awssdk.services.s3.S3Client;
import static org.assertj.core.api.Assertions.assertThat;
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@ActiveProfiles("test")
@Import(PostgresContainerConfig.class)
@@ -17,9 +21,18 @@ class ApplicationContextTest {
@MockitoBean
S3Client s3Client;
@Autowired
ApplicationContext ctx;
@Test
void contextLoads() {
// verifies that the Spring context starts successfully with all beans wired,
// Flyway migrations applied, and no configuration errors
}
@Test
void sentry_is_disabled_when_no_dsn_is_configured() {
// application-test.yaml has no sentry.dsn — SDK must stay inactive so tests are clean
assertThat(io.sentry.Sentry.isEnabled()).isFalse();
}
}

View File

@@ -1,11 +1,11 @@
package org.raddatz.familienarchiv.audit;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.raddatz.familienarchiv.PostgresContainerConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.annotation.Import;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.springframework.transaction.support.TransactionTemplate;
@@ -18,7 +18,6 @@ import static org.awaitility.Awaitility.await;
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@ActiveProfiles("test")
@Import(PostgresContainerConfig.class)
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
class AuditServiceIntegrationTest {
@MockitoBean S3Client s3Client;
@@ -26,6 +25,11 @@ class AuditServiceIntegrationTest {
@Autowired AuditLogRepository auditLogRepository;
@Autowired TransactionTemplate transactionTemplate;
@BeforeEach
void resetAuditLog() {
auditLogRepository.deleteAll();
}
@Test
void logAfterCommit_writes_ANNOTATION_CREATED_row_after_transaction_commits() {
transactionTemplate.execute(status -> {

View File

@@ -12,9 +12,9 @@ import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.annotation.Import;
import org.springframework.data.domain.PageRequest;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.springframework.transaction.annotation.Transactional;
import software.amazon.awssdk.services.s3.S3Client;
import java.time.LocalDate;
@@ -33,7 +33,7 @@ import static org.assertj.core.api.Assertions.assertThat;
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@ActiveProfiles("test")
@Import(PostgresContainerConfig.class)
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@Transactional
class DocumentSearchPagedIntegrationTest {
private static final int FIXTURE_SIZE = 120;

View File

@@ -0,0 +1,33 @@
package org.raddatz.familienarchiv.exception;
import io.sentry.Sentry;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.MockedStatic;
import org.mockito.junit.jupiter.MockitoExtension;
import org.springframework.http.ResponseEntity;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mockStatic;
@ExtendWith(MockitoExtension.class)
class GlobalExceptionHandlerTest {
@InjectMocks
private GlobalExceptionHandler handler;
@Test
void handleGeneric_captures_exception_in_sentry_and_returns_500() {
RuntimeException ex = new RuntimeException("unexpected failure");
try (MockedStatic<Sentry> sentryMock = mockStatic(Sentry.class)) {
ResponseEntity<GlobalExceptionHandler.ErrorResponse> response = handler.handleGeneric(ex);
sentryMock.verify(() -> Sentry.captureException(ex));
assertThat(response.getStatusCode().value()).isEqualTo(500);
assertThat(response.getBody()).isNotNull();
assertThat(response.getBody().code()).isEqualTo(ErrorCode.INTERNAL_ERROR);
}
}
}

View File

@@ -19,9 +19,9 @@ import org.springframework.context.annotation.Import;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.springframework.transaction.annotation.Transactional;
import software.amazon.awssdk.services.s3.S3Client;
import java.util.List;
@@ -32,7 +32,7 @@ import static org.assertj.core.api.Assertions.assertThat;
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@ActiveProfiles("test")
@Import(PostgresContainerConfig.class)
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@Transactional
class GeschichteServiceIntegrationTest {
@MockitoBean

View File

@@ -20,7 +20,10 @@ import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import java.io.File;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.LocalDate;
@@ -70,14 +73,20 @@ class MassImportServiceTest {
assertThat(service.getStatus().state()).isEqualTo(MassImportService.State.IDLE);
}
@Test
void getStatus_hasStatusCode_IMPORT_IDLE_byDefault() {
assertThat(service.getStatus().statusCode()).isEqualTo("IMPORT_IDLE");
}
// ─── runImportAsync ───────────────────────────────────────────────────────
@Test
void runImportAsync_setsFailedStatus_whenImportDirectoryDoesNotExist() {
// /import directory doesn't exist in test environment → findSpreadsheetFile throws
// /import directory doesn't exist in test environment → IOException → IMPORT_FAILED_INTERNAL
service.runImportAsync();
assertThat(service.getStatus().state()).isEqualTo(MassImportService.State.FAILED);
assertThat(service.getStatus().statusCode()).isEqualTo("IMPORT_FAILED_INTERNAL");
}
@Test
@@ -93,10 +102,35 @@ class MassImportServiceTest {
assertThat(service.getStatus().message()).contains(tempDir.toString());
}
@Test
void runImportAsync_setsStatusCode_IMPORT_FAILED_NO_SPREADSHEET_whenDirIsEmpty(@TempDir Path tempDir) {
ReflectionTestUtils.setField(service, "importDir", tempDir.toString());
service.runImportAsync();
assertThat(service.getStatus().statusCode()).isEqualTo("IMPORT_FAILED_NO_SPREADSHEET");
}
@Test
void runImportAsync_setsStatusCode_IMPORT_DONE_whenSpreadsheetHasNoDataRows(@TempDir Path tempDir) throws Exception {
Path xlsx = tempDir.resolve("import.xlsx");
try (XSSFWorkbook wb = new XSSFWorkbook()) {
wb.createSheet("Sheet1");
try (OutputStream out = Files.newOutputStream(xlsx)) {
wb.write(out);
}
}
ReflectionTestUtils.setField(service, "importDir", tempDir.toString());
service.runImportAsync();
assertThat(service.getStatus().statusCode()).isEqualTo("IMPORT_DONE");
}
@Test
void runImportAsync_throwsConflict_whenAlreadyRunning() {
MassImportService.ImportStatus running = new MassImportService.ImportStatus(
MassImportService.State.RUNNING, "Running...", 0, LocalDateTime.now());
MassImportService.State.RUNNING, "IMPORT_RUNNING", "Running...", 0, LocalDateTime.now());
ReflectionTestUtils.setField(service, "currentStatus", running);
assertThatThrownBy(() -> service.runImportAsync())

View File

@@ -8,9 +8,9 @@ import org.raddatz.familienarchiv.person.PersonRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.annotation.Import;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.springframework.transaction.annotation.Transactional;
import software.amazon.awssdk.services.s3.S3Client;
import static org.assertj.core.api.Assertions.assertThat;
@@ -18,7 +18,7 @@ import static org.assertj.core.api.Assertions.assertThat;
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@ActiveProfiles("test")
@Import(PostgresContainerConfig.class)
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@Transactional
class PersonServiceIntegrationTest {
@MockitoBean S3Client s3Client;

View File

@@ -40,6 +40,47 @@ class AdminControllerTest {
@MockitoBean ThumbnailBackfillService thumbnailBackfillService;
@MockitoBean CustomUserDetailsService customUserDetailsService;
// ─── GET /api/admin/import-status ─────────────────────────────────────────
@Test
@WithMockUser(authorities = "ADMIN")
void importStatus_returns200_withStatusCode_whenAdmin() throws Exception {
MassImportService.ImportStatus status = new MassImportService.ImportStatus(
MassImportService.State.IDLE, "IMPORT_IDLE", "Kein Import gestartet.", 0, null);
when(massImportService.getStatus()).thenReturn(status);
mockMvc.perform(get("/api/admin/import-status"))
.andExpect(status().isOk())
.andExpect(jsonPath("$.state").value("IDLE"))
.andExpect(jsonPath("$.statusCode").value("IMPORT_IDLE"))
.andExpect(jsonPath("$.processed").value(0));
}
@Test
@WithMockUser(authorities = "ADMIN")
void importStatus_messageField_notPresentInApiResponse() throws Exception {
MassImportService.ImportStatus status = new MassImportService.ImportStatus(
MassImportService.State.IDLE, "IMPORT_IDLE", "Kein Import gestartet.", 0, null);
when(massImportService.getStatus()).thenReturn(status);
mockMvc.perform(get("/api/admin/import-status"))
.andExpect(status().isOk())
.andExpect(jsonPath("$.message").doesNotExist());
}
@Test
void importStatus_returns401_whenUnauthenticated() throws Exception {
mockMvc.perform(get("/api/admin/import-status"))
.andExpect(status().isUnauthorized());
}
@Test
@WithMockUser(authorities = "READ_ALL")
void importStatus_returns403_whenUserLacksAdminPermission() throws Exception {
mockMvc.perform(get("/api/admin/import-status"))
.andExpect(status().isForbidden());
}
@Test
void backfillVersions_returns401_whenUnauthenticated() throws Exception {
mockMvc.perform(post("/api/admin/backfill-versions"))

View File

@@ -20,10 +20,13 @@ import org.springframework.security.test.context.support.WithMockUser;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.springframework.test.web.servlet.MockMvc;
import org.mockito.ArgumentCaptor;
import java.time.LocalDateTime;
import java.util.List;
import java.util.UUID;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.*;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
@@ -147,6 +150,30 @@ class InviteControllerTest {
.andExpect(jsonPath("$.label").value("Für Familie"));
}
@Test
@WithMockUser(username = "admin@test.com", authorities = {"ADMIN_USER"})
void createInvite_forwardsGroupIdsToService() throws Exception {
UUID groupId = UUID.randomUUID();
AppUser admin = AppUser.builder().id(UUID.randomUUID()).email("admin@test.com").build();
when(userService.findByEmail("admin@test.com")).thenReturn(admin);
InviteToken savedToken = InviteToken.builder()
.id(UUID.randomUUID()).code("ABCDE12345").useCount(0).build();
when(inviteService.createInvite(any(), eq(admin))).thenReturn(savedToken);
when(inviteService.toListItemDTO(any(), anyString()))
.thenReturn(makeInviteDTO(savedToken.getId(), "ABCDE12345"));
String body = "{\"groupIds\":[\"" + groupId + "\"]}";
mockMvc.perform(post("/api/invites")
.contentType(MediaType.APPLICATION_JSON)
.content(body))
.andExpect(status().isCreated());
ArgumentCaptor<CreateInviteRequest> captor = ArgumentCaptor.forClass(CreateInviteRequest.class);
verify(inviteService).createInvite(captor.capture(), eq(admin));
assertThat(captor.getValue().getGroupIds()).containsExactly(groupId);
}
// ─── DELETE /api/invites/{id} ─────────────────────────────────────────────
@Test

View File

@@ -156,6 +156,35 @@ class InviteServiceTest {
assertThat(result.getGroupIds()).contains(g.getId());
}
@Test
void createInvite_throwsGroupNotFound_whenSubmittedGroupIdDoesNotExist() {
UUID unknownGroupId = UUID.randomUUID();
when(userService.findGroupsByIds(anyList())).thenReturn(List.of());
CreateInviteRequest req = new CreateInviteRequest();
req.setGroupIds(List.of(unknownGroupId));
assertThatThrownBy(() -> inviteService.createInvite(req, admin))
.isInstanceOf(DomainException.class)
.extracting(e -> ((DomainException) e).getCode())
.isEqualTo(ErrorCode.GROUP_NOT_FOUND);
}
@Test
void createInvite_doesNotThrowGroupNotFound_whenDuplicateGroupIdsSubmitted() {
UUID groupId = UUID.randomUUID();
UserGroup group = UserGroup.builder().id(groupId).name("Familie").build();
when(inviteTokenRepository.findByCode(anyString())).thenReturn(Optional.empty());
when(userService.findGroupsByIds(anyList())).thenReturn(List.of(group));
when(inviteTokenRepository.save(any())).thenAnswer(inv -> inv.getArgument(0));
CreateInviteRequest req = new CreateInviteRequest();
req.setGroupIds(List.of(groupId, groupId)); // same UUID submitted twice
// before deduplication: size(groups)==1 != size(submitted)==2 → false GROUP_NOT_FOUND
assertThatCode(() -> inviteService.createInvite(req, admin)).doesNotThrowAnyException();
}
// ─── redeemInvite ─────────────────────────────────────────────────────────
@Test
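The deduplication behaviour exercised above can be isolated in a tiny sketch (hypothetical helper, not the actual service code): collapse the submitted IDs into a set before comparing against the resolved-group count, so duplicates cannot masquerade as missing groups.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;

public class DedupCheckSketch {
    // Hypothetical helper: true when every DISTINCT submitted ID resolved to a group.
    // Without the dedup step, List.of(g, g) against one resolved group would
    // compare 2 != 1 and raise a false GROUP_NOT_FOUND.
    static boolean allGroupsResolved(List<UUID> submittedIds, int resolvedGroupCount) {
        Set<UUID> unique = new LinkedHashSet<>(submittedIds); // collapse duplicates first
        return resolvedGroupCount == unique.size();
    }

    public static void main(String[] args) {
        UUID g = UUID.randomUUID();
        System.out.println(allGroupsResolved(List.of(g, g), 1));              // duplicate submission is fine
        System.out.println(allGroupsResolved(List.of(UUID.randomUUID()), 0)); // unknown id is not
    }
}
```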

View File

@@ -0,0 +1,78 @@
package org.raddatz.familienarchiv.user;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.raddatz.familienarchiv.PostgresContainerConfig;
import org.raddatz.familienarchiv.config.FlywayConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.context.annotation.Import;
import java.time.LocalDateTime;
import java.util.Set;
import java.util.UUID;
import static org.assertj.core.api.Assertions.assertThat;
@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@Import({PostgresContainerConfig.class, FlywayConfig.class})
class InviteTokenRepositoryIntegrationTest {
@Autowired InviteTokenRepository inviteTokenRepository;
@Autowired UserGroupRepository userGroupRepository;
@Autowired AppUserRepository appUserRepository;
private UserGroup group;
private AppUser admin;
@BeforeEach
void setUp() {
inviteTokenRepository.deleteAll();
userGroupRepository.deleteAll();
appUserRepository.deleteAll();
admin = appUserRepository.save(AppUser.builder().email("admin@test.com").password("pw").build());
group = userGroupRepository.save(UserGroup.builder().name("Familie").build());
}
// ─── existsActiveWithGroupId ──────────────────────────────────────────────
@Test
void existsActiveWithGroupId_returnsTrueForActiveInviteLinkedToGroup() {
inviteTokenRepository.save(token(t -> t));
assertThat(inviteTokenRepository.existsActiveWithGroupId(group.getId())).isTrue();
}
@Test
void existsActiveWithGroupId_returnsFalseWhenInviteIsRevoked() {
inviteTokenRepository.save(token(t -> t.revoked(true)));
assertThat(inviteTokenRepository.existsActiveWithGroupId(group.getId())).isFalse();
}
@Test
void existsActiveWithGroupId_returnsFalseWhenInviteIsExpired() {
inviteTokenRepository.save(token(t -> t.expiresAt(LocalDateTime.now().minusDays(1))));
assertThat(inviteTokenRepository.existsActiveWithGroupId(group.getId())).isFalse();
}
@Test
void existsActiveWithGroupId_returnsFalseWhenInviteIsExhausted() {
inviteTokenRepository.save(token(t -> t.maxUses(1).useCount(1)));
assertThat(inviteTokenRepository.existsActiveWithGroupId(group.getId())).isFalse();
}
// ─── helpers ─────────────────────────────────────────────────────────────
private InviteToken token(java.util.function.UnaryOperator<InviteToken.InviteTokenBuilder> customizer) {
InviteToken.InviteTokenBuilder builder = InviteToken.builder()
.code(UUID.randomUUID().toString().replace("-", "").substring(0, 10))
.groupIds(new java.util.HashSet<>(Set.of(group.getId())))
.createdBy(admin);
return customizer.apply(builder).build();
}
}
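The `token(customizer)` helper above follows a builder-customizer fixture pattern: a base builder with sensible defaults plus a `UnaryOperator` override, so each test states only the field it cares about. A self-contained sketch of the pattern (simplified `Invite` record, not the real entity):

```java
import java.util.function.UnaryOperator;

public class FixtureSketch {
    record Invite(String code, boolean revoked, int maxUses) {
        static Builder builder() { return new Builder(); }
        static class Builder {
            String code = "BASE"; boolean revoked = false; int maxUses = 10;
            Builder code(String c) { this.code = c; return this; }
            Builder revoked(boolean r) { this.revoked = r; return this; }
            Builder maxUses(int m) { this.maxUses = m; return this; }
            Invite build() { return new Invite(code, revoked, maxUses); }
        }
    }

    // Defaults live in one place; each test overrides exactly one knob.
    static Invite token(UnaryOperator<Invite.Builder> customizer) {
        return customizer.apply(Invite.builder()).build();
    }

    public static void main(String[] args) {
        System.out.println(token(t -> t).revoked());            // all defaults
        System.out.println(token(t -> t.revoked(true)).code()); // only revoked changed
    }
}
```

The payoff in the repository test is readability: `token(t -> t.revoked(true))` reads as "an otherwise-valid invite that is revoked".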

View File

@@ -36,6 +36,7 @@ class UserServiceTest {
@Mock AppUserRepository userRepository;
@Mock UserGroupRepository groupRepository;
@Mock InviteTokenRepository inviteTokenRepository;
@Mock PasswordEncoder passwordEncoder;
@Mock AuditService auditService;
@InjectMocks UserService userService;
@@ -903,6 +904,29 @@ class UserServiceTest {
assertThat(result.getPermissions()).containsExactlyInAnyOrder("READ_ALL", "WRITE_ALL");
}
// ─── deleteGroup ──────────────────────────────────────────────────────────
@Test
void deleteGroup_throwsConflict_whenActiveInviteReferencesGroup() {
UUID groupId = UUID.randomUUID();
when(inviteTokenRepository.existsActiveWithGroupId(groupId)).thenReturn(true);
assertThatThrownBy(() -> userService.deleteGroup(groupId))
.isInstanceOf(DomainException.class)
.extracting(e -> ((DomainException) e).getCode())
.isEqualTo(ErrorCode.GROUP_HAS_ACTIVE_INVITES);
}
@Test
void deleteGroup_deletesGroup_whenNoActiveInviteReferencesGroup() {
UUID groupId = UUID.randomUUID();
when(inviteTokenRepository.existsActiveWithGroupId(groupId)).thenReturn(false);
userService.deleteGroup(groupId);
verify(groupRepository).deleteById(groupId);
}
@Test
void createGroup_withNullPermissions_savesGroupWithEmptyPermissionSet() {
org.raddatz.familienarchiv.user.GroupDTO dto = new org.raddatz.familienarchiv.user.GroupDTO();

View File

@@ -13,3 +13,18 @@ spring:
password: test
mail:
host: localhost
# Disable OTel SDK entirely in tests — prevents auto-configuration from loading resource providers
# (e.g. AzureAppServiceResourceProvider) that fail against the semconv version used here.
otel:
sdk:
disabled: true
# Disable trace export in tests — prevents OTLP connection attempts when no Tempo is running.
# Sampling probability 0.0 means no spans are created, so no export is attempted.
management:
server:
port: 0 # random port per context — prevents TIME_WAIT conflicts when @DirtiesContext restarts the context
tracing:
sampling:
probability: 0.0

View File

@@ -0,0 +1,2 @@
logging.level.root=WARN
logging.level.org.raddatz=INFO

View File

@@ -0,0 +1,266 @@
# Observability stack — Grafana LGTM + GlitchTip
#
# Requires the main stack to be running first:
# docker compose up -d # creates archiv-net
# docker compose -f docker-compose.observability.yml up -d
#
# To validate without starting:
# docker compose -f docker-compose.observability.yml config
services:
# --- Metrics: Prometheus ---
prometheus:
image: prom/prometheus:v3.4.0
container_name: obs-prometheus
restart: unless-stopped
volumes:
- ./infra/observability/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--storage.tsdb.retention.time=30d'
- '--web.enable-lifecycle'
ports:
- "127.0.0.1:${PORT_PROMETHEUS:-9090}:9090"
healthcheck:
test: ["CMD", "wget", "-qO-", "http://localhost:9090/-/healthy"]
interval: 30s
timeout: 5s
retries: 3
networks:
- archiv-net
- obs-net
node-exporter:
image: prom/node-exporter:v1.9.0
container_name: obs-node-exporter
restart: unless-stopped
# pid: host — required for process-level CPU/memory metrics; cgroup isolation applies
pid: host
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
# $$ is YAML Compose escaping for a literal $ in the regex alternation
- '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
expose:
- "9100"
networks:
- obs-net
cadvisor:
image: gcr.io/cadvisor/cadvisor:v0.52.1
container_name: obs-cadvisor
restart: unless-stopped
# privileged: true — required for cgroup and namespace metrics, see cAdvisor docs.
# Accepted risk: cAdvisor is pinned, on Renovate, and not exposed outside obs-net.
privileged: true
volumes:
- /:/rootfs:ro
# /var/run/docker.sock mounted read-only — sufficient for container metadata discovery
- /var/run/docker.sock:/var/run/docker.sock:ro
- /sys:/sys:ro
- /var/lib/docker:/var/lib/docker:ro
expose:
- "8080"
networks:
- obs-net
# --- Logs: Loki + Promtail ---
loki:
image: grafana/loki:3.4.2
container_name: obs-loki
restart: unless-stopped
volumes:
- ./infra/observability/loki/loki-config.yml:/etc/loki/loki-config.yml:ro
- loki_data:/loki
command: -config.file=/etc/loki/loki-config.yml
expose:
- "3100"
healthcheck:
test: ["CMD-SHELL", "wget -qO- http://localhost:3100/ready | grep -q ready || exit 1"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
networks:
- obs-net
promtail:
image: grafana/promtail:3.4.2
container_name: obs-promtail
restart: unless-stopped
volumes:
- ./infra/observability/promtail/promtail-config.yml:/etc/promtail/promtail-config.yml:ro
- /var/lib/docker/containers:/var/lib/docker/containers:ro
# :ro restricts file-system access but NOT Docker API permissions — a compromised Promtail has full daemon access. Accepted risk on single-operator self-hosted archive.
- /var/run/docker.sock:/var/run/docker.sock:ro
- promtail_positions:/tmp # persists positions.yaml across restarts — avoids duplicate log ingestion
command: -config.file=/etc/promtail/promtail-config.yml
networks:
- archiv-net # label discovery from application containers via Docker socket
- obs-net # log shipping to Loki
depends_on:
loki:
condition: service_healthy
# --- Traces: Tempo ---
tempo:
image: grafana/tempo:2.7.2
container_name: obs-tempo
restart: unless-stopped
volumes:
- ./infra/observability/tempo/tempo.yml:/etc/tempo.yml:ro
- tempo_data:/var/tempo
command: -config.file=/etc/tempo.yml
expose:
- "3200" # Grafana queries Tempo on this port (obs-net only)
- "4317" # OTLP gRPC — backend sends traces here (archiv-net)
- "4318" # OTLP HTTP — alternative transport (archiv-net)
healthcheck:
test: ["CMD-SHELL", "wget -qO- http://localhost:3200/ready | grep -q ready || exit 1"]
interval: 10s
timeout: 5s
retries: 5
start_period: 15s
networks:
- archiv-net # backend (archive-backend) reaches tempo:4317 over this network
- obs-net # Grafana reaches tempo:3200 over this network
# --- Dashboards: Grafana ---
obs-grafana:
image: grafana/grafana-oss:11.6.1
container_name: obs-grafana
restart: unless-stopped
ports:
- "127.0.0.1:${PORT_GRAFANA:-3003}:3000"
environment:
GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_ADMIN_PASSWORD:-changeme}
GF_USERS_ALLOW_SIGN_UP: "false"
GF_SERVER_ROOT_URL: ${GF_SERVER_ROOT_URL:-http://localhost:3003}
volumes:
- grafana_data:/var/lib/grafana
- ./infra/observability/grafana/provisioning:/etc/grafana/provisioning:ro
healthcheck:
test: ["CMD-SHELL", "wget -qO- http://localhost:3000/api/health | grep -q ok || exit 1"]
interval: 30s
timeout: 5s
retries: 3
start_period: 30s
depends_on:
prometheus:
condition: service_healthy
loki:
condition: service_healthy
tempo:
condition: service_healthy
networks:
- obs-net
# --- Error Tracking: GlitchTip ---
obs-redis:
image: redis:7-alpine
container_name: obs-redis
restart: unless-stopped
volumes:
- glitchtip_data:/data
expose:
- "6379"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
networks:
- obs-net
obs-glitchtip:
image: glitchtip/glitchtip:6.1.6
container_name: obs-glitchtip
restart: unless-stopped
depends_on:
obs-redis:
condition: service_healthy
obs-glitchtip-db-init:
condition: service_completed_successfully
environment:
DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST:-archive-db}:5432/glitchtip
REDIS_URL: redis://obs-redis:6379/0
SECRET_KEY: ${GLITCHTIP_SECRET_KEY}
GLITCHTIP_DOMAIN: ${GLITCHTIP_DOMAIN:-http://localhost:3002}
DEFAULT_FROM_EMAIL: ${APP_MAIL_FROM:-noreply@familienarchiv.local}
EMAIL_URL: smtp://mailpit:1025
GLITCHTIP_MAX_EVENT_LIFE_DAYS: 90
ports:
- "127.0.0.1:${PORT_GLITCHTIP:-3002}:8000"
healthcheck:
test: ["CMD", "bash", "-c", "echo > /dev/tcp/localhost/8000"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
networks:
- archiv-net
- obs-net
obs-glitchtip-worker:
image: glitchtip/glitchtip:6.1.6
container_name: obs-glitchtip-worker
restart: unless-stopped
command: ./bin/run-celery-with-beat.sh
depends_on:
obs-redis:
condition: service_healthy
environment:
DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST:-archive-db}:5432/glitchtip
REDIS_URL: redis://obs-redis:6379/0
SECRET_KEY: ${GLITCHTIP_SECRET_KEY}
networks:
- archiv-net
- obs-net
obs-glitchtip-db-init:
image: postgres:16-alpine
container_name: obs-glitchtip-db-init
restart: "no"
environment:
PGPASSWORD: ${POSTGRES_PASSWORD}
command: >
sh -c "psql -h ${POSTGRES_HOST:-archive-db} -U ${POSTGRES_USER} -tc
\"SELECT 1 FROM pg_database WHERE datname = 'glitchtip'\" |
grep -q 1 ||
psql -h ${POSTGRES_HOST:-archive-db} -U ${POSTGRES_USER} -c \"CREATE DATABASE glitchtip;\""
networks:
- archiv-net
networks:
# Shared network created by the main docker-compose.yml.
# The observability stack joins as a peer so Prometheus can scrape
# archive-backend by container name. The observability stack must NOT
# attempt to create this network — it will fail with a clear error if
# the main stack is not running yet.
archiv-net:
external: true
# Internal network for observability-service-to-service traffic
# (e.g. Grafana → Prometheus, Grafana → Loki, Grafana → Tempo).
obs-net:
driver: bridge
volumes:
prometheus_data:
loki_data:
promtail_positions:
tempo_data:
grafana_data:
glitchtip_data:

View File

@@ -39,6 +39,7 @@
networks:
archiv-net:
driver: bridge
name: ${COMPOSE_NETWORK_NAME:-archiv-net}
volumes:
postgres-data:
@@ -212,10 +213,15 @@ services:
APP_MAIL_FROM: ${APP_MAIL_FROM:-noreply@raddatz.cloud}
SPRING_MAIL_PROPERTIES_MAIL_SMTP_AUTH: ${MAIL_SMTP_AUTH:-true}
SPRING_MAIL_PROPERTIES_MAIL_SMTP_STARTTLS_ENABLE: ${MAIL_STARTTLS_ENABLE:-true}
OTEL_EXPORTER_OTLP_ENDPOINT: http://tempo:4318
OTEL_LOGS_EXPORTER: none
OTEL_METRICS_EXPORTER: none
MANAGEMENT_METRICS_TAGS_APPLICATION: Familienarchiv
MANAGEMENT_TRACING_SAMPLING_PROBABILITY: ${MANAGEMENT_TRACING_SAMPLING_PROBABILITY:-0.1}
networks:
- archiv-net
healthcheck:
test: ["CMD-SHELL", "wget -qO- http://localhost:8080/actuator/health | grep -q UP || exit 1"]
test: ["CMD-SHELL", "wget -qO- http://localhost:8081/actuator/health | grep -q UP || exit 1"]
interval: 15s
timeout: 5s
retries: 10

View File

@@ -147,8 +147,20 @@ services:
SPRING_MAIL_PROPERTIES_MAIL_SMTP_STARTTLS_ENABLE: ${MAIL_STARTTLS_ENABLE:-false}
APP_OCR_BASE_URL: http://ocr-service:8000
APP_OCR_TRAINING_TOKEN: "${OCR_TRAINING_TOKEN:-}"
SENTRY_DSN: ${SENTRY_DSN:-}
SENTRY_TRACES_SAMPLE_RATE: ${SENTRY_TRACES_SAMPLE_RATE:-1.0}
# Observability: send traces to Tempo inside archiv-net (OTLP gRPC port 4317)
# Tempo is defined in docker-compose.observability.yml (future issue).
# OTLP failures are non-fatal — backend starts cleanly without the observability stack.
OTEL_EXPORTER_OTLP_ENDPOINT: http://tempo:4317
# 10% sampling in this compose (dev + staging) — override locally to 1.0 if needed
MANAGEMENT_TRACING_SAMPLING_PROBABILITY: "0.1"
ports:
- "${PORT_BACKEND}:8080"
# Management port — Prometheus scrapes /actuator/prometheus from inside archiv-net.
# Not exposed to the host; Docker service-name DNS (backend:8081) is sufficient.
expose:
- "8081"
networks:
- archiv-net
healthcheck:

View File

@@ -63,7 +63,7 @@ Members of the cross-cutting layer have no entity of their own, no user-facing C
| `audit` | Append-only event store (`audit_log`) for all domain mutations. Feeds the activity feed and Family Pulse dashboard. | Consumed by 5+ domains; no user-facing CRUD of its own |
| `config` | Infrastructure bean definitions: `MinioConfig`, `AsyncConfig`, `WebConfig` | Framework infra; no business logic |
| `dashboard` | Stats aggregation for the admin dashboard and Family Pulse widget | Aggregates from 3+ domains; no owned entities |
| `exception` | `DomainException`, `ErrorCode` enum, `GlobalExceptionHandler` | Framework infra; consumed by every controller and service |
| `exception` | `DomainException`, `ErrorCode` enum, `GlobalExceptionHandler` | Framework infra; consumed by every controller and service. Adding a new `ErrorCode` requires matching updates in `frontend/src/lib/shared/errors.ts` and all three `messages/*.json` locale files. |
| `filestorage` | `FileService` — MinIO/S3 upload, download, presigned-URL generation | Generic service; consumed by `document` and `ocr` |
| `importing` | `MassImportService` — async ODS/Excel batch import | Orchestrates across `person`, `tag`, `document` |
| `security` | `SecurityConfig`, `Permission` enum, `@RequirePermission` annotation, `PermissionAspect` (AOP) | Framework infra; enforced globally across all controllers |

View File

@@ -43,6 +43,7 @@ graph TD
- SSE notifications transit Caddy (browser → Caddy → backend); the backend is never reachable directly from the public internet. The SvelteKit SSR layer is bypassed for SSE, but Caddy is not.
- The Caddyfile responds `404` on `/actuator/*` (defense in depth). Internal monitoring scrapes the backend on the docker network, not through Caddy.
- Production and staging cohabit on the same host via docker compose project names: `archiv-production` (ports 8080/3000) and `archiv-staging` (ports 8081/3001).
- An optional observability stack (Prometheus, Node Exporter, cAdvisor, Loki, Tempo, Grafana, GlitchTip) runs as a separate compose file. Configuration lives under `infra/observability/`. In production and CI, the stack is managed from `/opt/familienarchiv/` (CI copies it there on every nightly run) so bind mounts survive workspace wipes — see §4 for the ops procedure.
### OCR memory requirements
@@ -106,6 +107,12 @@ All vars are set in `.env` at the repo root (copy from `.env.example`). The back
| `MAIL_SMTP_AUTH` | SMTP auth enabled | `false` (dev) | YES (prod) | — |
| `MAIL_STARTTLS_ENABLE` | STARTTLS enabled | `false` (dev) | YES (prod) | — |
| `SPRING_PROFILES_ACTIVE` | Spring profile | `dev,e2e` (compose) | YES | — |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP HTTP endpoint for distributed traces (Tempo). Port 4318 = HTTP transport; port 4317 is gRPC-only and causes "Connection reset" with Spring Boot's HttpExporter. | `http://localhost:4318` | — | — |
| `OTEL_LOGS_EXPORTER` | Disable OTLP log export — Promtail captures Docker logs via the logging driver; Tempo does not accept logs. | `none` | — | — |
| `OTEL_METRICS_EXPORTER` | Disable OTLP metric export — Prometheus scrapes `/actuator/prometheus` via pull model; Tempo does not accept metrics. | `none` | — | — |
| `MANAGEMENT_METRICS_TAGS_APPLICATION` | Common tag added to every Micrometer metric. Required by Grafana's Spring Boot Observability dashboard (ID 17175) `label_values(application)` template variable. | `Familienarchiv` | — | — |
| `MANAGEMENT_TRACING_SAMPLING_PROBABILITY` | Micrometer tracing sample rate; overridden to `0.0` in test profile. | `0.1` (compose) / `1.0` (dev) | — | — |
| `SENTRY_DSN` | GlitchTip / Sentry DSN for backend error reporting. Leave empty to disable the SDK. Set after GlitchTip first-run (§4). | — | — | YES |
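Through relaxed binding, the `MANAGEMENT_*` and `OTEL_*` env vars in the table above map onto property keys; an equivalent `application.yml` fragment (illustrative only — not a file in the repo, and the `otel.*` keys assume the OpenTelemetry Spring Boot starter is on the classpath) would be:

```yaml
management:
  tracing:
    sampling:
      probability: 0.1               # MANAGEMENT_TRACING_SAMPLING_PROBABILITY
  metrics:
    tags:
      application: Familienarchiv    # MANAGEMENT_METRICS_TAGS_APPLICATION
otel:
  exporter:
    otlp:
      endpoint: http://localhost:4318  # OTEL_EXPORTER_OTLP_ENDPOINT (HTTP transport)
  logs:
    exporter: none                   # OTEL_LOGS_EXPORTER
  metrics:
    exporter: none                   # OTEL_METRICS_EXPORTER
```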
### PostgreSQL container
@@ -134,6 +141,19 @@ All vars are set in `.env` at the repo root (copy from `.env.example`). The back
| `BLLA_MODEL_PATH` | Kraken baseline layout analysis model path | `/app/models/blla.mlmodel` | — | — |
| `OCR_MEM_LIMIT` | Container memory cap for ocr-service in `docker-compose.prod.yml`. Set to `6g` on CX32 hosts; leave unset on CX42+ to use the 12g default | `12g` (prod compose default) | — | — |
### Observability stack (`docker-compose.observability.yml`)
| Variable | Purpose | Default | Required? | Sensitive? |
|---|---|---|---|---|
| `PORT_PROMETHEUS` | Host port for the Prometheus UI (bound to `127.0.0.1` only) | `9090` | — | — |
| `PORT_GRAFANA` | Host port for the Grafana UI (bound to `127.0.0.1` only) | `3003` | — | — |
| `POSTGRES_HOST` | PostgreSQL hostname for GlitchTip's db-init job and workers. Override when only the staging stack is running and `archive-db` is not resolvable by that name. | `archive-db` | — | — |
| `GRAFANA_ADMIN_PASSWORD` | Grafana `admin` user password | `changeme` | YES (prod) | YES |
| `PORT_GLITCHTIP` | Host port for the GlitchTip UI (bound to `127.0.0.1` only) | `3002` | — | — |
| `GLITCHTIP_DOMAIN` | Public-facing base URL for GlitchTip (used in email links and CORS) | `http://localhost:3002` | YES (prod) | — |
| `GLITCHTIP_SECRET_KEY` | Django secret key for GlitchTip — generate with `python3 -c "import secrets; print(secrets.token_hex(32))"` | — | YES | YES |
| `VITE_SENTRY_DSN` | GlitchTip/Sentry DSN for the frontend (SvelteKit) — injected at build time via Vite. Leave empty to disable. Set after GlitchTip first-run (§4). | — | — | YES |
---
## 3. Bootstrap from scratch
@@ -179,6 +199,29 @@ curl -fsSL https://tailscale.com/install.sh | sh && tailscale up
# files to disk during execution (cleaned up unconditionally on completion).
# A multi-tenant runner would need to switch to stdin-piped env files.
# (See https://docs.gitea.com/usage/actions/quickstart for the register step.)
# Runner workspace directory — required for DooD bind-mount resolution (ADR-015).
# act_runner stores job workspaces here so that docker compose bind mounts resolve
# to real host paths. The path must be identical on the host and inside job containers.
mkdir -p /srv/gitea-workspace
# Observability config permanent directory — the nightly CI job copies
# docker-compose.observability.yml and infra/observability/ here on every run.
# The obs stack is always started from this path, not from the workspace.
# See ADR-016 for why this directory is used instead of a server-pull approach.
mkdir -p /opt/familienarchiv/infra
# Both paths must also appear in the runner service volumes in ~/docker/gitea/compose.yaml:
# volumes:
# - /srv/gitea-workspace:/srv/gitea-workspace
# /opt/familienarchiv does NOT need to be in the runner container's volumes — job
# containers are spawned by the host daemon directly (DooD), so the host path is
# accessible to them as long as runner-config.yaml lists it in valid_volumes + options.
# See runner-config.yaml (workdir_parent + valid_volumes + options) and ADR-015/016.
# ⚠ IMPORTANT: after any change to runner-config.yaml (valid_volumes, options, workdir_parent),
# restart the Gitea Act runner for the new config to take effect:
# docker restart gitea-runner
# Until restarted, job containers are spawned with the old config and any new bind mounts
# (e.g. /opt/familienarchiv) will not be available inside job steps.
```
### 3.2 DNS records
@@ -209,6 +252,10 @@ git.raddatz.cloud A <server IP>
| `MAIL_PORT` | release.yml | typically `587` |
| `MAIL_USERNAME` | release.yml | SMTP user |
| `MAIL_PASSWORD` | release.yml | SMTP password |
| `GRAFANA_ADMIN_PASSWORD` | both | Grafana `admin` login — generate a strong password |
| `GLITCHTIP_SECRET_KEY` | both | Django secret key — `openssl rand -hex 32` |
| `SENTRY_DSN` | both | GlitchTip project DSN — set after first-run (§4); leave empty to keep Sentry disabled |
| `VITE_SENTRY_DSN` | both | GlitchTip frontend project DSN — set after first-run (§4); leave empty to keep Sentry disabled |
### 3.4 First deploy
@@ -236,6 +283,9 @@ Before the first deploy: rotate `PROD_APP_ADMIN_PASSWORD` to a strong value. Aft
## 4. Logs + observability
> **Developer guide (where to look for what, LogQL queries, trace exploration) → [docs/OBSERVABILITY.md](./OBSERVABILITY.md).**
> This section covers the ops side: starting the stack, env vars, and CI wiring.
### First-response commands
```bash
@@ -256,9 +306,156 @@ docker compose logs --tail=200 <service>
- **Spring Actuator health**: `http://localhost:8080/actuator/health` (internal only in prod — port 8081 for Prometheus scraping)
- **Prometheus scraping**: management port 8081, path `/actuator/prometheus`. Internal only; Caddy blocks `/actuator/*` externally.
### Future observability
### Observability stack
Phase 7 of the Production v1 milestone adds Prometheus + Loki + Grafana. No monitoring infrastructure is in place yet.
An observability stack is available via `docker-compose.observability.yml`. Configuration lives under `infra/observability/`.
#### Dev — start from the workspace
```bash
docker compose up -d # creates archiv-net
docker compose -f docker-compose.observability.yml up -d
```
#### Why the obs stack is managed differently from the main app stack
The main app stack (`docker-compose.prod.yml`) has no config-file bind mounts — its containers read config from env vars and image defaults. The workspace is wiped after each CI run, but that does not affect running containers, because they hold no references to workspace paths.
The obs stack is different: `prometheus.yml`, `tempo.yml`, the Loki config, Grafana provisioning files, and the Promtail config are all bind-mounted from the host filesystem into their containers. If those source paths disappear (workspace wipe), the containers keep running — and can even restart cleanly — until the next `docker compose up`: at that point Docker tries to re-resolve the bind-mount source and fails, because the workspace path no longer exists.
The fix is to keep the obs compose file and config tree at a **permanent path** that CI copies to on every run but which survives between runs: `/opt/familienarchiv/` (see ADR-016).
#### Production — managed from `/opt/familienarchiv/`
Every CI run (nightly + release) copies `docker-compose.observability.yml` and `infra/observability/` to `/opt/familienarchiv/` before starting the stack. Bind mounts then resolve to `/opt/familienarchiv/infra/observability/…` — a stable path that outlasts any workspace wipe.
**Environment variables** follow the same two-source model as the main stack:
| Source | What it contains | Managed by |
|---|---|---|
| `infra/observability/obs.env` | All non-secret config (ports, URLs, hostnames) | Git — reviewed in PRs |
| `/opt/familienarchiv/obs-secrets.env` | Passwords and secret keys only | CI — written fresh from Gitea secrets on every deploy |
Both files are passed explicitly via `--env-file` to the compose command, so there is no implicit auto-read `.env` and no operator-managed file to keep in sync.
**Non-secret config** (`infra/observability/obs.env`):
| Key | Value | Notes |
|---|---|---|
| `PORT_GRAFANA` | `3003` | Avoids collision with staging frontend on port 3001 |
| `PORT_GLITCHTIP` | `3002` | |
| `PORT_PROMETHEUS` | `9090` | |
| `GF_SERVER_ROOT_URL` | `https://grafana.archiv.raddatz.cloud` | Required for alert email links and OAuth redirects |
| `GLITCHTIP_DOMAIN` | `https://glitchtip.archiv.raddatz.cloud` | Must match the Caddy vhost |
| `POSTGRES_HOST` | `archive-db` | Override if only the staging stack is running |
**Secret keys** (set in Gitea secrets, injected by CI into `obs-secrets.env`):
| Gitea secret | Notes |
|---|---|
| `GRAFANA_ADMIN_PASSWORD` | Strong unique password; shared by nightly and release |
| `GLITCHTIP_SECRET_KEY` | `openssl rand -hex 32`; shared by nightly and release |
| `STAGING_POSTGRES_PASSWORD` / `PROD_POSTGRES_PASSWORD` | Must match the running PostgreSQL container |
To start or restart the obs stack manually on the server (after CI has run at least once):
```bash
docker compose \
-f /opt/familienarchiv/docker-compose.observability.yml \
--env-file /opt/familienarchiv/infra/observability/obs.env \
--env-file /opt/familienarchiv/obs-secrets.env \
up -d --wait --remove-orphans
```
> **Note (manual ops only):** CI clears the destination with `rm -rf` before copying, so deleted files are removed automatically on the next run. If you copy manually with `cp -r` without first removing the directory, stale files from deleted configs will persist until cleaned up:
> ```bash
> rm /opt/familienarchiv/infra/observability/<path-to-removed-file>
> ```
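The stale-file behaviour of a plain recursive copy is easy to reproduce with throwaway directories (a local demonstration only; no server paths involved):

```shell
# Simulate a manual `cp -r` sync: a file deleted from the source
# tree is NOT removed from the destination.
src=$(mktemp -d); dst=$(mktemp -d)
echo new > "$src/keep.yml"       # file that still exists in the repo
echo old > "$dst/stale.yml"      # file already deleted from the repo
cp -r "$src/." "$dst/"
ls "$dst"                        # both keep.yml and stale.yml are present
```

An `rsync -a --delete "$src/." "$dst/"` would mirror the source exactly instead; CI side-steps the issue by `rm -rf`-ing the destination before copying.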
Current services:
| Service | Image | Purpose |
|---|---|---|
| `obs-prometheus` | `prom/prometheus:v3.4.0` | Scrapes metrics from backend management port 8081 (`/actuator/prometheus`), node-exporter, and cAdvisor |
| `obs-node-exporter` | `prom/node-exporter:v1.9.0` | Host-level CPU / memory / disk / network metrics |
| `obs-cadvisor` | `gcr.io/cadvisor/cadvisor:v0.52.1` | Per-container resource metrics |
| `obs-loki` | `grafana/loki:3.4.2` | Log aggregation — receives log streams from Promtail. Port 3100 is `expose`-only (not host-bound). |
| `obs-promtail` | `grafana/promtail:3.4.2` | Log shipping agent — reads all Docker container logs via the Docker socket and forwards them to Loki with `container_name`, `compose_service`, `compose_project`, and `job` labels. The `job` label is mapped from the Docker Compose service name (`com.docker.compose.service`) so that Grafana Loki dashboard queries (`{job="backend"}`, `{job="frontend"}`) work out of the box and the "App" variable dropdown is populated. |
| `obs-tempo` | `grafana/tempo:2.7.2` | Distributed trace storage — OTLP HTTP receiver on port 4318 (`archiv-net`-internal; backend sends traces here). Grafana queries traces on port 3200 (`obs-net`-internal). All ports are `expose`-only (not host-bound). |
| `obs-grafana` | `grafana/grafana-oss:11.6.1` | Unified observability UI — metrics dashboards, log exploration, trace viewer. Bound to `127.0.0.1:${PORT_GRAFANA:-3003}` on the host. |
| `obs-glitchtip` | `glitchtip/glitchtip:6.1.6` | Sentry-compatible error tracker. Receives frontend + backend error events, groups by fingerprint, provides issue UI with stack traces. Bound to `127.0.0.1:${PORT_GLITCHTIP:-3002}`. |
| `obs-glitchtip-worker` | `glitchtip/glitchtip:6.1.6` | Celery + beat worker — processes async GlitchTip tasks (event ingestion, notifications, cleanup). |
| `obs-redis` | `redis:7-alpine` | Celery task broker for GlitchTip. Internal to `obs-net`; no host port exposed. |
| `obs-glitchtip-db-init` | `postgres:16-alpine` | One-shot init container. Creates the `glitchtip` database on the existing `archive-db` PostgreSQL instance if it does not already exist. Runs at stack startup; exits cleanly once done. |
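The `job` label mapping described above for `obs-promtail` corresponds to a Docker service-discovery relabel rule along these lines (a sketch using Promtail's `docker_sd_configs` syntax, not the shipped file; the authoritative config lives under `infra/observability/`):

```yaml
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
    relabel_configs:
      # Map the Compose service name onto the `job` label so that
      # {job="backend"} works identically in dev, staging, and prod
      - source_labels: ['__meta_docker_container_label_com_docker_compose_service']
        target_label: job
      # Strip the leading slash Docker puts on container names
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: container_name
```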
#### Grafana
| Item | Value |
|---|---|
| URL | `http://localhost:3003` (or `http://localhost:$PORT_GRAFANA`) |
| Username | `admin` |
| Password | `$GRAFANA_ADMIN_PASSWORD` (default: `changeme`; **change before exposing to a network**) |
Datasources are auto-provisioned on first start (Prometheus, Loki, Tempo — no manual setup required). Three dashboards are pre-loaded:
| Dashboard | Grafana ID | Purpose |
|---|---|---|
| Node Exporter Full | 1860 | Host CPU, memory, disk, network |
| Spring Boot Observability | 17175 | JVM metrics, HTTP latency, error rate |
| Loki Logs | 13639 | Log exploration and filtering |
Tempo traces are accessible via Grafana Explore → Tempo datasource, and linked from Loki logs via the `traceId` derived field.
**Loki quick checks** (after ~60 s, run from inside the `obs-loki` container):
```bash
# Loki health
docker exec obs-loki wget -qO- http://localhost:3100/ready
# List labels
docker exec obs-loki wget -qO- 'http://localhost:3100/loki/api/v1/labels'
# Query logs by service (stable across dev and prod environments)
docker exec obs-loki wget -qO- \
'http://localhost:3100/loki/api/v1/query_range?query=%7Bcompose_service%3D%22backend%22%7D&limit=5'
```
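The URL-encoded selector in the last query (`%7B…%7D`) can be generated rather than hand-typed; a small helper, assuming `python3` is available on the host:

```shell
# Percent-encode a LogQL selector for use in the Loki HTTP API query string
logql='{compose_service="backend"}'
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' "$logql")
echo "$encoded"   # %7Bcompose_service%3D%22backend%22%7D
```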
**Prefer `compose_service` over `container_name` in LogQL queries**: `container_name` differs between dev (`archive-backend`) and prod (`archiv-production-backend-1`), while `compose_service` is stable (`backend`, `db`, `minio`, etc.).
Prometheus port `9090` and Grafana port `3003` (default; configurable via `PORT_GRAFANA`) are bound to `127.0.0.1` on the host. No other observability ports are host-bound.
#### GlitchTip
| Item | Value |
|---|---|
| URL | `http://localhost:3002` (or `http://localhost:$PORT_GLITCHTIP`) |
**Required env vars** — set in `.env` before first start:
```bash
GLITCHTIP_SECRET_KEY=$(python3 -c "import secrets; print(secrets.token_hex(32))")
GLITCHTIP_DOMAIN=http://localhost:3002 # change to your public URL in prod
PORT_GLITCHTIP=3002 # optional, defaults to 3002
```
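The `python3` one-liner above and the `openssl rand -hex 32` command recommended earlier for `GLITCHTIP_SECRET_KEY` are equivalent: both render 32 random bytes as 64 lowercase hex characters. A quick sanity check:

```shell
# token_hex(32) yields 32 random bytes = 64 hex characters,
# the same strength as `openssl rand -hex 32`
key=$(python3 -c "import secrets; print(secrets.token_hex(32))")
echo "${#key}"    # 64
```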
**Database:** GlitchTip shares the existing `archive-db` PostgreSQL instance. The `obs-glitchtip-db-init` one-shot container creates a dedicated `glitchtip` database on first stack start — no manual step required.
**First-run steps** (one-time, after `docker compose -f docker-compose.observability.yml up -d`):
```bash
# 1. Create the Django superuser (interactive)
docker exec -it obs-glitchtip ./manage.py createsuperuser
# 2. Open the GlitchTip UI and log in
open http://localhost:3002
# 3. Create an organisation (e.g. "Familienarchiv")
# 4. Create two projects:
# - "familienarchiv-frontend" (platform: JavaScript / SvelteKit)
# - "familienarchiv-backend" (platform: Java / Spring Boot)
# 5. Copy each project's DSN from Settings → Projects → <project> → Client Keys
# 6. Wire the DSNs into the backend and frontend via env vars (separate issue)
```
---

`docs/OBSERVABILITY.md` (new file)
# Observability Guide
> **Ops reference (starting the stack, env vars, CI wiring) → [DEPLOYMENT.md §4](./DEPLOYMENT.md#4-logs--observability).**
> This file is for developers: what signal lives where, how to reach it, and what to look for.
## Where to look for what
| I want to… | Go to |
|---|---|
| See the last N log lines from the backend | `docker compose logs --tail=100 backend` |
| Search logs by keyword across time | Grafana → Explore → Loki |
| Understand why an HTTP request failed | Grafana → Explore → Loki → filter by `traceId` → follow link to Tempo |
| See a full distributed trace (DB queries, HTTP calls) | Grafana → Explore → Tempo → search by service or trace ID |
| Check JVM heap / GC / thread count | Grafana → Dashboards → Spring Boot Observability |
| Check HTTP error rate or p95 latency | Grafana → Dashboards → Spring Boot Observability |
| Check host CPU / memory / disk | Grafana → Dashboards → Node Exporter Full |
| See grouped application errors with stack traces | GlitchTip |
| Check if the backend is healthy | `curl http://localhost:8081/actuator/health` (on the server) |
| Check what Prometheus is scraping | `curl http://localhost:9090/api/v1/targets` (on the server) |
## Access
| Tool | External URL | Who it's for |
|---|---|---|
| Grafana | `https://grafana.archiv.raddatz.cloud` | Logs, metrics, traces — the primary observability UI |
| GlitchTip | `https://glitchtip.archiv.raddatz.cloud` | Grouped errors with stack traces and release tracking |
Loki, Tempo, and Prometheus have no external URL. They are internal services, accessible only through Grafana (or via SSH tunnel — see below).
## Logs (Loki)
Logs reach Loki via Promtail, which reads all Docker container logs from the Docker socket and ships them with labels derived from Docker Compose metadata.
### Labels available in every log line
| Label | What it contains | Example |
|---|---|---|
| `job` | Compose service name | `backend`, `frontend`, `db` |
| `compose_service` | Same as `job` | `backend` |
| `compose_project` | Compose project name | `archiv-staging`, `archiv-production` |
| `container_name` | Docker container name | `archiv-staging-backend-1` |
| `filename` | Docker log source | `/var/lib/docker/containers/…` |
**Use `job` in LogQL queries** — it is stable across dev, staging, and production. `container_name` changes between environments.
### Common LogQL queries in Grafana Explore
```logql
# All backend logs
{job="backend"}
# Backend ERROR and WARN lines only
{job="backend"} |~ "ERROR|WARN"
# All logs for a specific request (paste a traceId from a log line)
{job="backend"} |= "3fa85f64-5717-4562-b3fc-2c963f66afa6"
# Log lines containing a specific exception class
{job="backend"} |~ "DomainException|NullPointerException"
# Frontend logs
{job="frontend"}
# Database (slow query log, if enabled)
{job="db"}
```
### Log → Trace correlation
Spring Boot writes the active `traceId` into every log line when a request is being processed:
```
2026-05-16 ... INFO [Familienarchiv,3fa85f64...,1b2c3d4e] o.r.f.document.DocumentService : ...
```
In Grafana Explore → Loki, log lines with a `traceId` field show a **Tempo** link. Clicking it opens the full trace in Explore → Tempo without copying and pasting IDs.
This linking is configured in the Loki datasource provisioning via the `traceId` derived field regex. No manual setup required.
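For reference, a Loki datasource provisioning fragment with such a derived field looks roughly like this (an illustrative sketch; the actual regex and datasource UID are defined in the provisioning files under `infra/observability/`):

```yaml
datasources:
  - name: Loki
    type: loki
    jsonData:
      derivedFields:
        - name: traceId
          # Illustrative regex for the "[app,traceId,spanId]" log pattern
          matcherRegex: '\[\w+,(\w+),\w+\]'
          datasourceUid: tempo        # must equal the Tempo datasource UID
          url: '$${__value.raw}'      # $$ escapes Grafana env interpolation
```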
## Traces (Tempo)
The backend sends traces to Tempo via OTLP HTTP (port 4318). Every inbound HTTP request and every JPA query produces a span. Spans are linked into traces by the propagated `traceId` header.
### Finding a trace in Grafana
**Option A — from a log line:**
1. Grafana → Explore → select *Loki* datasource
2. Query `{job="backend"}` and find the failing request
3. Click the **Tempo** link in the log line (appears when `traceId` is present)
**Option B — by service:**
1. Grafana → Explore → select *Tempo* datasource
2. Query type: **Search**
3. Service name: `familienarchiv-backend`
4. Filter by HTTP status, duration, or operation name as needed
**Option C — by trace ID:**
1. Grafana → Explore → select *Tempo* datasource
2. Query type: **TraceQL** or **Trace ID**
3. Paste the trace ID
### What each span type tells you
| Root span name pattern | What it covers |
|---|---|
| `GET /api/documents`, `POST /api/documents` | Full HTTP request lifecycle |
| `SELECT archiv.*` | A single JPA/JDBC query inside that request |
| `HikariPool.getConnection` | Connection pool wait time |
A slow `SELECT` span inside an otherwise fast HTTP span pinpoints a missing index. A slow `HikariPool.getConnection` span indicates connection pool exhaustion.
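Both situations can also be hunted directly with TraceQL (Explore → Tempo → query type **TraceQL**). The first query below finds slow database spans, the second finds requests that waited on the connection pool; the duration thresholds are illustrative:

```traceql
{ name =~ "SELECT.*" && duration > 200ms }

{ name = "HikariPool.getConnection" && duration > 100ms }
```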
### Sampling rate
- **Dev**: 100% of requests are traced (`management.tracing.sampling.probability: 1.0` in `application.yaml`)
- **Staging / Production**: 10% (`MANAGEMENT_TRACING_SAMPLING_PROBABILITY=0.1` in `docker-compose.prod.yml`)
To find a trace for a specific request in staging/production, either increase the sampling rate temporarily or trigger the request multiple times.
## Metrics (Prometheus → Grafana)
Prometheus scrapes the backend management endpoint every 15 s:
```
Target: backend:8081/actuator/prometheus
Labels: job="spring-boot", application="Familienarchiv"
```
All Spring Boot metrics carry the `application="Familienarchiv"` tag, which is how the Grafana Spring Boot Observability dashboard (ID 17175) filters to this service.
### Useful Prometheus queries (run on the server or via Grafana Explore → Prometheus)
```promql
# HTTP error rate (5xx) as a fraction of all requests
sum(rate(http_server_requests_seconds_count{status=~"5.."}[5m]))
/ sum(rate(http_server_requests_seconds_count[5m]))
# p95 response time
histogram_quantile(0.95, sum by (le) (
rate(http_server_requests_seconds_bucket[5m])
))
# JVM heap used
jvm_memory_used_bytes{area="heap", application="Familienarchiv"}
# Active DB connections
hikaricp_connections_active
```
## Errors (GlitchTip)
GlitchTip receives errors from both the backend (via Sentry Java SDK) and the frontend (via Sentry JavaScript SDK). It groups events by fingerprint, tracks first/last seen times, and links to the release that introduced the error.
GlitchTip complements Loki: use GlitchTip when you need **grouped, de-duplicated errors with stack traces and release attribution**; use Loki when you need **raw log lines with full context** or want to search across all log levels.
## Direct API access (debugging only)
Loki and Tempo bind no host ports. To reach them directly from your laptop, use an SSH tunnel through the server:
```bash
# Loki API on localhost:3100 (then query via curl or logcli)
ssh -L 3100:172.20.0.x:3100 root@raddatz.cloud
# Replace 172.20.0.x with the obs-loki container IP:
# docker inspect obs-loki --format '{{.NetworkSettings.Networks.archiv-obs-net.IPAddress}}'
# Tempo API on localhost:3200 (then query via curl or tempo-cli)
ssh -L 3200:172.20.0.x:3200 root@raddatz.cloud
```
In practice, Grafana Explore covers all common debugging workflows without needing direct API access.
## Signal summary
| Signal | Source | Transport | Storage | UI |
|---|---|---|---|---|
| Application logs | Spring Boot stdout → Docker log driver | Promtail → Loki push API | Loki | Grafana Explore → Loki |
| Distributed traces | Spring Boot OTel agent | OTLP HTTP → Tempo:4318 | Tempo | Grafana Explore → Tempo |
| JVM + HTTP metrics | Spring Actuator `/actuator/prometheus` | Prometheus pull (15 s) | Prometheus | Grafana dashboards |
| Host metrics | node-exporter | Prometheus pull | Prometheus | Grafana → Node Exporter Full |
| Container metrics | cAdvisor | Prometheus pull | Prometheus | Grafana (via Prometheus datasource) |
| Application errors | Sentry SDK | HTTP POST → GlitchTip ingest | GlitchTip DB | GlitchTip UI |

---
# ADR-015: DooD workspace bind mount for Compose file bind-mount resolution
## Status
Accepted
## Context
The deploy workflows (`.gitea/workflows/nightly.yml`, `release.yml`) run job steps inside Docker containers via Docker-out-of-Docker (DooD): the Gitea runner mounts the host Docker socket, and act_runner spawns sibling containers for each job.
When a job step calls `docker compose -f docker-compose.observability.yml up`, Docker Compose resolves relative bind-mount sources against `$(pwd)` inside the job container and passes the resulting absolute paths to the **host** daemon. For example, `./infra/observability/prometheus/prometheus.yml` becomes `/some/path/infra/observability/prometheus/prometheus.yml`, and the host daemon tries to bind-mount that path from the **host filesystem**.
In the default DooD setup (`runner-config.yaml` with only `valid_volumes: ["/var/run/docker.sock"]`), job container workspaces live in the act_runner overlay2 layer. The host has no corresponding directory at the job container's `$(pwd)` path, so the daemon auto-creates an empty directory in its place. The container then fails to start because the mount target was expected to be a file, not a directory:
```
error mounting "…/prometheus/prometheus.yml" to rootfs at "/etc/prometheus/prometheus.yml": not a directory
```
This affected all five config file bind mounts in `docker-compose.observability.yml`.
## Decision
Configure act_runner to store job workspaces on a real host path (`/srv/gitea-workspace`) and mount that path into both the runner container and every job container at the **same absolute path**. The identity of the host path and container path is the key constraint: Compose resolves to an absolute path and hands it to the host daemon, which looks for that exact path on the host filesystem.
**runner-config.yaml changes:**
```yaml
container:
workdir_parent: /srv/gitea-workspace
valid_volumes:
- "/var/run/docker.sock"
- "/srv/gitea-workspace"
options: "-v /srv/gitea-workspace:/srv/gitea-workspace"
```
**Runner compose.yaml change** (host side — not in this repo):
```yaml
runner:
volumes:
- /srv/gitea-workspace:/srv/gitea-workspace
```
With this in place, `$(pwd)` inside a job container resolves to `/srv/gitea-workspace/<owner>/<repo>/`, which is a real directory on the host. Compose-managed bind mounts from that directory work without any additional steps.
## Alternatives Considered
| Alternative | Why rejected |
|---|---|
| **overlay2 `MergedDir` sync via privileged nsenter** (the previous approach, see PR #599 v1) | Required `--privileged --pid=host` (effective root on the host) plus fragile overlay2 driver assumption. Introduced stale-file risk on the host and a second stable path (`/srv/familienarchiv-*/obs-configs`) to maintain separately from the source tree. Replaced by this ADR. |
| **Build configs into a dedicated Docker image** (pattern used for MinIO bootstrap, see `infra/minio/Dockerfile`) | Viable for static files that change infrequently. Requires a build step and an image rebuild every time a config changes. Appropriate for bootstrap scripts; too heavy for frequently-tuned observability configs. |
| **Add workspace directory to runner-config `valid_volumes` only** (without `workdir_parent`) | `valid_volumes` whitelists paths that workflow steps may reference, but does not change where act_runner stores workspaces. Without `workdir_parent`, the workspace would still be in overlay2 and the bind-mount resolution problem would remain. |
| **Map workspace under a different host path than container path** (e.g. host `/srv/workspace`, container `/workspace`) | Compose resolves to the container-internal path (e.g. `/workspace/…`) and passes that to the host daemon. The host daemon interprets the source as a host path. If host `/workspace` does not exist, the daemon creates an empty directory — the original bug. The paths must be identical. |
## Consequences
- `/srv/gitea-workspace` must exist on the VPS before the runner starts. The directory was created as part of this change; it is not created automatically.
- The runner container's `compose.yaml` (maintained outside this repo at `~/docker/gitea/compose.yaml` on the VPS) must include the `- /srv/gitea-workspace:/srv/gitea-workspace` volume line. This is an out-of-band operational dependency; the prerequisite is documented in `runner-config.yaml`.
- `workdir_parent` applies to all jobs on this runner. Any future workflow that calls `docker compose` with relative bind mounts benefits automatically without further configuration.
- Job workspaces persist across runs under `/srv/gitea-workspace`. act_runner manages per-run subdirectory cleanup. Orphaned directories from interrupted runs should be cleaned up manually if disk space becomes a concern.
- Workflows that previously relied on `OBS_CONFIG_DIR` env var or the `obs-configs` stable path on the host no longer need those. Both were removed in this PR.
- This pattern does **not** apply to the `nsenter`-based Caddy reload step (ADR-012), which manages a host systemd service — a different problem class with no bind-mount equivalent.
## References
- ADR-011 — single-tenant runner trust model
- ADR-012 — nsenter via privileged container for host service management
- Issue #598 — original observability stack bind-mount failure
- `runner-config.yaml``workdir_parent`, `valid_volumes`, `options`

---
# ADR-016: Observability stack co-location at `/opt/familienarchiv/` with CI-push config sync
## Status
Accepted
## Context
Issue #601 established that the observability stack must survive Gitea CI workspace wipes between nightly runs. When the nightly job completes, act_runner deletes the job workspace. Any Docker container that bind-mounts a config file from a workspace path (`/srv/gitea-workspace/…/infra/observability/prometheus/prometheus.yml`) then references a path that no longer exists on the host. On the next nightly run, Docker Compose either auto-creates an empty directory in its place (causing the container to fail to start because a file mount receives a directory) or finds a stale file from a previous run if the workspace happened to land at the same path.
ADR-015 solved the workspace bind-mount resolution problem: job workspaces are stored at `/srv/gitea-workspace` so `$(pwd)` inside the job container maps to a real host path. But it did not address persistence: the workspace is still wiped after the job, so bind mounts from workspace-relative paths remain fragile across runs.
### Decision drivers
1. Bind-mount sources must point to a host path that persists indefinitely, not to a path that disappears after each CI run.
2. Config files must reflect the committed state of the repo after every nightly run (no manual sync steps).
3. Secrets must not be written to the workspace or to any path managed by CI; they must survive independently of deployments.
4. The solution must not introduce new infrastructure dependencies (no SSH access from CI, no external registry, no additional server-side daemon).
### Alternatives considered
**A: Server-pull model** — a systemd timer or cron job on the server does `git pull` from the repo into `/opt/familienarchiv/` and then runs `docker compose up`. Rejected because: (1) requires git credentials on the server and a registered deploy key, (2) adds a second deployment mechanism that diverges from the CI-push model used for the main app stack, (3) timing coupling — the server pull must complete before CI's health checks run, requiring polling or a webhook.
**B: Separate directory (e.g. `/opt/obs/`)** — keeps obs configs isolated from the app stack. Rejected because: (1) the main app compose files are already in `/opt/familienarchiv/` (managed the same way), and (2) GlitchTip shares the `archive-db` PostgreSQL instance and `archiv-net` Docker network — it is architecturally part of the same deployment unit, not a separate one. Co-location reflects the actual coupling.
**C: Named Docker configs (Swarm)** — Docker Swarm supports first-class config objects that persist in the cluster. Rejected because the project does not use Swarm and introducing it solely for config persistence is a disproportionate dependency.
## Decision
The observability stack is co-located with the main application deployment at `/opt/familienarchiv/`:
- `docker-compose.observability.yml``/opt/familienarchiv/docker-compose.observability.yml`
- `infra/observability/``/opt/familienarchiv/infra/observability/`
Both the nightly CI job (`nightly.yml`) and the release job (`release.yml`) copy these files from the workspace checkout to `/opt/familienarchiv/` using `cp -r` on every run (CI-push model). Containers always read config from the permanent location; a workspace wipe has no effect on running containers.
Environment variables follow a two-source model:
- `infra/observability/obs.env` (git-tracked, non-secret): all non-sensitive config — host ports, public URLs (`GLITCHTIP_DOMAIN`, `GF_SERVER_ROOT_URL`), and the default `POSTGRES_HOST`. Changes go through PR review. No credentials.
- `/opt/familienarchiv/obs-secrets.env` (CI-written, per-deploy): passwords and secret keys only (`GRAFANA_ADMIN_PASSWORD`, `GLITCHTIP_SECRET_KEY`, `POSTGRES_USER`, `POSTGRES_PASSWORD`, `POSTGRES_HOST`), injected fresh from Gitea secrets on every nightly and release deploy. Gitea is the single source of truth for secrets — rotating a secret takes effect on the next deploy without manual server action.
Both files are passed explicitly via `--env-file` to every obs compose command (config dry-run and `up`). There is no implicit auto-read `.env`. The required key inventory is documented in `docs/DEPLOYMENT.md §4`.
The CI runner mounts `/opt/familienarchiv` as a bind mount into job containers (see `runner-config.yaml`). This requires a one-time `mkdir -p /opt/familienarchiv/infra` on the server and a runner restart after updating `runner-config.yaml` (see ADR-015 and `docs/DEPLOYMENT.md §3.1`).
## Consequences
**Positive:**
- Bind-mount sources survive workspace wipes by definition — they are on a persistent host path.
- Config is always in sync with the repo after each nightly run.
- No new infrastructure dependencies; the CI-push model mirrors how the main app stack is deployed.
- Secret rotation requires no manual server action — Gitea secrets are the authoritative store; `obs-secrets.env` is rewritten from scratch on every deploy so a secret change takes effect on the next nightly or release run.
**Negative:**
- `cp -r` does not remove deleted files; a config file removed from the repo persists in `/opt/familienarchiv/infra/observability/` until manually deleted. Acceptable for this project's change frequency. A `rsync -a --delete` would give a clean mirror if this becomes a problem.
- Mounting `/opt/familienarchiv/` into CI job containers expands the blast radius of a compromised workflow step — a malicious step could overwrite app compose files and Caddy config. Acceptable because the runner is single-tenant (trusted code only). See `runner-config.yaml` security comment.
- Runner must be restarted (`systemctl restart gitea-runner`) after any change to `runner-config.yaml` for the new mount to take effect.

---
# ADR-017: Spring Boot 4.0 management port shares the main security filter chain
## Status
Accepted
## Context
The Familienarchiv backend runs Spring Boot Actuator on a dedicated management port (8081) so that Caddy never proxies `/actuator/*` requests and Prometheus can reach the scrape endpoint directly inside `archiv-net`.
In earlier Spring Boot versions (< 4.0), the management server ran in an isolated child application context whose security was governed independently by `ManagementWebSecurityAutoConfiguration`. The main app's `SecurityConfig` filter chain (port 8080) never intercepted requests arriving on port 8081.
In Spring Boot 4.0 with Jetty, this isolation was removed. The management server now traverses the **same** Spring Security `FilterChainProxy` as the main application. Concretely:
- Any `SecurityFilterChain` bean in the application context is evaluated for requests arriving on the management port.
- There is no longer a separate "management security" child context.
This was discovered when Prometheus began receiving HTTP 401 responses from `/actuator/prometheus` despite the endpoint being exposed and the `micrometer-registry-prometheus` dependency being present. Prometheus rejected these responses with `received unsupported Content-Type "text/html"` because the main filter chain's form-login `DelegatingAuthenticationEntryPoint` was redirecting unauthenticated requests to `/login` (302 → HTML).
A secondary issue: Spring Boot 4.0 no longer auto-enables Prometheus metrics export — `management.prometheus.metrics.export.enabled` must be set explicitly, and the Prometheus scrape endpoint requires `spring-boot-starter-micrometer-metrics` (a new starter that was split out in Spring Boot 4.0).
## Decision
1. **Dedicated management `SecurityFilterChain`** scoped to `/actuator/**` at `@Order(1)` (highest precedence). This chain:
- `permitAll()` for `/actuator/health` and `/actuator/prometheus` — required for Docker health checks and unauthenticated Prometheus scraping.
- `authenticated()` for all other actuator endpoints — blocks `/actuator/metrics`, `/actuator/info`, etc. without credentials.
- Uses an explicit `401` entry point (not form-login redirect) so that API clients — including Prometheus — receive a machine-readable status code rather than an HTML redirect.
- No CSRF, no form login.
2. **Belt-and-suspenders `permitAll()` in the main `SecurityFilterChain`** for `/actuator/health` and `/actuator/prometheus`, in case a future configuration change causes these paths to escape the management chain's `securityMatcher`.
3. **Network isolation as the outer defense boundary.** Port 8081 is not published in `docker-compose.yml` and is not routed through Caddy. Only services inside `archiv-net` (primarily Prometheus and the Docker health checker) can reach the management port.
## Alternatives rejected
- **Exclude `ManagementWebSecurityAutoConfiguration`:** This auto-configuration no longer exists in Spring Boot 4.0. Exclusion is not applicable.
- **Keep `SecurityConfig` as the sole filter chain without `@Order(1)` management chain:** The main chain's form-login `DelegatingAuthenticationEntryPoint` redirects browser-like clients to `/login` (302). Prometheus and automated health check clients cannot follow this redirect, so the endpoint would be unreachable without a dedicated chain that returns plain 401 or 200.
- **Per-endpoint `@Order(1)` filter chain using `EndpointRequest.toAnyEndpoint()`:** The `spring-boot-security` artifact that provides `EndpointRequest` is not a transitive dependency of `spring-boot-starter-actuator` in Spring Boot 4.0. Using a path-based `securityMatcher("/actuator/**")` achieves the same scoping without an extra dependency.
## Consequences
- All actuator endpoints on port 8081 that are not explicitly `permitAll()`-ed require HTTP Basic credentials. Without valid credentials, the response is 401 (not a redirect).
- Adding a new actuator endpoint to `management.endpoints.web.exposure.include` implicitly protects it via `anyRequest().authenticated()` in the management chain — no additional `permitAll()` needed unless intentional.
- A regression test (`ActuatorPrometheusIT`) verifies:
- `/actuator/prometheus` returns 200 without credentials.
- `/actuator/metrics` returns 401 without credentials.
- Prometheus metric names are present in the response body.
- If port 8081 is ever accidentally published in `docker-compose.yml`, actuator endpoints other than health and prometheus are still protected by HTTP Basic. This reduces (but does not eliminate) the risk of inadvertent exposure.


@@ -0,0 +1,86 @@
# ADR-018: GlitchTip frontend error tracking via @sentry/sveltekit
**Date:** 2026-05-17
**Status:** Accepted
**Deciders:** Marcel Raddatz
---
## Context
The Familienarchiv had no client-side error reporting. When a user encountered a crash
or unhandled error in the SvelteKit frontend, there was no way for the operator to
observe it — errors were invisible until a user manually reported them. A GlitchTip
instance (self-hosted, Sentry-compatible) was already running as part of the
observability stack (`docker-compose.observability.yml`). The backend already reported
server-side errors to it.
We needed a way to:
1. Capture frontend errors automatically and route them to GlitchTip.
2. Give users a visible error identifier they can include in a support message.
3. Do this without leaking personally identifiable information (PII) from the family
archive — documents contain personal histories, names, and relationships.
---
## Decision
Use `@sentry/sveltekit` (the official Sentry SDK for SvelteKit) to:
- Initialise the SDK with `sendDefaultPii: false` in both `hooks.server.ts` and `hooks.client.ts`.
- Pass a callback to `Sentry.handleErrorWithSentry()` that returns
`{ message, errorId }` where `errorId` is `Sentry.lastEventId()` when Sentry
captured the event, or a fresh `crypto.randomUUID()` as fallback.
- Display the `errorId` on the `+error.svelte` page so users can include it in a
report to the operator.
The SDK is initialised with `enabled: !!import.meta.env.VITE_SENTRY_DSN` so that
development and CI builds without a DSN configured do not send any events.
`VITE_SENTRY_DSN` is a write-only ingest key — it can POST events to GlitchTip but
cannot read them. It is safe to include in the client bundle per the Sentry security
model; it does not require rotation like a password.
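The errorId derivation described above can be sketched as a small pure function (a minimal model under stated assumptions — `deriveErrorId`, `handleErrorResult`, and the injected `lastEventId` parameter are illustrative names, not the actual hook file contents; the real hooks wrap this logic in `Sentry.handleErrorWithSentry()`):

```typescript
// Minimal sketch of the errorId logic shared by hooks.server.ts and
// hooks.client.ts. Names here are illustrative, not the real implementation.
import { randomUUID } from 'node:crypto';

// Stand-in for Sentry.lastEventId(): returns the id of the last captured
// event, or undefined when nothing was captured (e.g. DSN not configured).
type LastEventId = () => string | undefined;

function deriveErrorId(lastEventId: LastEventId): string {
  // Prefer the Sentry event id so the user-visible errorId maps 1:1 to a
  // GlitchTip event; otherwise fall back to a fresh UUID so the error page
  // always has an id to show (that UUID will not exist in GlitchTip).
  return lastEventId() ?? randomUUID();
}

function handleErrorResult(lastEventId: LastEventId) {
  return {
    message: 'An unexpected error occurred', // hardcoded English, see trade-offs
    errorId: deriveErrorId(lastEventId)
  };
}
```

Injecting `lastEventId` rather than calling the SDK directly is what makes the two unit tests (captured event vs. UUID fallback) trivial to write without a live Sentry client.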
---
## Alternatives considered
**Sentry SaaS** — rejected. The archive contains private family documents and personal
history. Sending error events with stack traces to a US-hosted third party is
inconsistent with the project's data-minimisation posture. Self-hosted GlitchTip on
the same Hetzner VPS keeps all data on infrastructure the operator controls.
**Custom error logging endpoint** — rejected. The `@sentry/sveltekit` SDK handles
SvelteKit's hook lifecycle, source-map upload, and event grouping automatically.
Reimplementing this would cost significant engineering time for no benefit.
**Log-only (no user-visible errorId)** — rejected. Without a visible error ID, users
can only describe what happened in natural language, making it hard to correlate a
report with a specific GlitchTip event. The `errorId` closes this gap at negligible UI
cost.
---
## Consequences
**Positive:**
- Frontend errors are now observable without requiring user reports.
- Users can provide an `errorId` that maps directly to a GlitchTip event.
- `sendDefaultPii: false` ensures names, IPs, and cookie values are not included in
captured events.
- `tracesSampleRate: 0.1` limits trace volume to 10% of transactions, keeping
GlitchTip load low on the shared VPS.
**Negative / trade-offs:**
- The `@sentry/sveltekit` SDK is now a production dependency. SDK updates must be
reviewed for changes to the default PII scrubbing behaviour.
- The `handleError` callback in both hooks returns a hardcoded English message
(`'An unexpected error occurred'`). This bypasses Paraglide i18n — the error page
will always show English text when the hooks are active, regardless of the user's
locale. This is acceptable because: (a) the error page is a last-resort fallback
not part of normal UX, (b) the `errorId` is the actionable information, not the
message text. A future ADR may address this if internationalised error messages
become a requirement.
- `Sentry.lastEventId()` returns `undefined` when Sentry did not capture the event
(e.g. DSN not configured). The `crypto.randomUUID()` fallback guarantees an `errorId`
is always present, but that UUID will not appear in GlitchTip.


@@ -8,9 +8,11 @@ Person(member, "Family Member", "Access by administrator invite. Searches, brows
System(familienarchiv, "Familienarchiv", "Web application for digitising, organising, and searching family documents")
System_Ext(mail, "Email Service", "SMTP server. Delivers notification emails (mentions, replies) and password-reset links.")
System_Ext(glitchtip, "GlitchTip", "Self-hosted error tracking (Sentry-compatible). Receives frontend and backend error events with stack traces.")
Rel(admin, familienarchiv, "Manages via browser", "HTTPS")
Rel(member, familienarchiv, "Searches, reads, and transcribes via browser", "HTTPS")
Rel(familienarchiv, mail, "Sends notification and password-reset emails (optional)", "SMTP")
Rel(familienarchiv, glitchtip, "Sends error events with errorId and stack trace", "HTTPS")
@enduml


@@ -17,6 +17,19 @@ System_Boundary(archiv, "Familienarchiv (Docker Compose)") {
Container(mc, "Bucket / Service-Account Init", "MinIO Client (mc)", "One-shot container on startup. Idempotent: creates the archive bucket, the archiv-app service account, and attaches the readwrite policy.")
}
System_Boundary(observability, "Observability Stack (/opt/familienarchiv/docker-compose.observability.yml)") {
Container(prometheus, "Prometheus", "prom/prometheus:v3.4.0", "Scrapes metrics from backend management port 8081 (/actuator/prometheus), node-exporter, and cAdvisor. Retention: 30 days.")
Container(node_exporter, "Node Exporter", "prom/node-exporter:v1.9.0", "Host-level CPU, memory, disk, and network metrics.")
Container(cadvisor, "cAdvisor", "gcr.io/cadvisor/cadvisor:v0.52.1", "Per-container resource metrics.")
Container(loki, "Loki", "grafana/loki:3.4.2", "Stores log streams from all containers.")
Container(promtail, "Promtail", "grafana/promtail:3.4.2", "Ships Docker container logs to Loki via Docker SD.")
Container(tempo, "Tempo", "grafana/tempo:2.7.2", "Distributed trace storage. OTLP HTTP receiver on port 4318 (archiv-net). Grafana queries traces on port 3200 (obs-net). All ports internal only.")
Container(grafana, "Grafana", "grafana/grafana-oss:11.6.1", "Unified observability UI — dashboards, logs, traces. Datasources (Prometheus, Loki, Tempo) and three dashboards are auto-provisioned.")
Container(glitchtip, "GlitchTip", "glitchtip/glitchtip:6.1.6", "Sentry-compatible error tracker — web process. Receives frontend + backend error events, groups by fingerprint, provides issue UI with stack traces.")
Container(obs_glitchtip_worker, "GlitchTip Worker", "glitchtip/glitchtip:6.1.6", "Celery + beat worker — async event ingestion, notifications, cleanup.")
Container(obs_redis, "Redis", "redis:7-alpine", "Celery task queue for GlitchTip async workers.")
}
Rel(user, caddy, "HTTPS", "TLS 1.2/1.3")
Rel(caddy, frontend, "Reverse proxies non-/api requests", "HTTP / loopback:3000")
Rel(caddy, backend, "Reverse proxies /api/*", "HTTP / loopback:8080")
@@ -28,5 +41,12 @@ Rel(backend, ocr, "OCR job requests with presigned MinIO URL", "HTTP / REST / JS
Rel(backend, mail, "Sends notification and password-reset emails (optional)", "SMTP")
Rel(ocr, storage, "Fetches PDF via presigned URL", "HTTP / S3 presigned")
Rel(mc, storage, "Bootstraps bucket + service account on startup", "MinIO Client CLI")
Rel(promtail, loki, "Pushes log streams", "HTTP/Loki push API")
Rel(backend, tempo, "Sends distributed traces via OTLP", "HTTP / OTLP / port 4318 (archiv-net)")
Rel(grafana, prometheus, "Queries metrics", "HTTP 9090")
Rel(grafana, loki, "Queries logs", "HTTP 3100")
Rel(grafana, tempo, "Queries traces", "HTTP 3200")
Rel(glitchtip, db, "Stores error events in glitchtip DB", "PostgreSQL / archiv-net")
Rel(obs_glitchtip_worker, obs_redis, "Processes Celery tasks", "Redis / obs-net")
@enduml


@@ -19,6 +19,39 @@ Both containers live in the `gitea_gitea` Docker network on the VPS. The runner
The `gitea-runner` container mounts the host Docker socket (`/var/run/docker.sock`). When a workflow job runs, act_runner spawns a **sibling container** for each job. That job container also gets the Docker socket mounted (via `valid_volumes` in `runner-config.yaml`), enabling `docker compose` calls in workflow steps.
### Workspace bind-mount setup (DooD path resolution)
When a workflow step calls `docker compose up` with relative bind-mount sources (e.g. `./infra/observability/prometheus/prometheus.yml`), Compose resolves them against `$(pwd)` inside the job container and passes the resulting **absolute path** to the host Docker daemon. The host daemon then tries to bind-mount that path from the **host filesystem**.
In the default DooD setup the job container's workspace lives in the act_runner overlay2 layer — the host has no directory at that path, so the Docker daemon auto-creates an empty one, and the container fails with:
```
error mounting "…/prometheus/prometheus.yml" to rootfs at "/etc/prometheus/prometheus.yml": not a directory
```
**Solution (ADR-015):** store job workspaces on a real host path and mount it at the **same absolute path** inside the runner and every job container. `runner-config.yaml` configures this via `workdir_parent`, `valid_volumes`, and `options`.
**One-time host setup** (required on any fresh VPS):
```bash
mkdir -p /srv/gitea-workspace
# Then add to the runner service in ~/docker/gitea/compose.yaml:
# volumes:
# - /srv/gitea-workspace:/srv/gitea-workspace
# Restart the runner container for the change to take effect.
```
The path `/srv/gitea-workspace` is the canonical workspace root. It must be identical on the host and inside job containers — if the paths differ, Compose still resolves to the container-internal path, which the host daemon cannot find (the original bug).
**Disk management:** act_runner cleans per-run subdirectories on completion. Orphaned directories from interrupted runs accumulate under `/srv/gitea-workspace` and should be pruned manually if disk space becomes a concern:
```bash
# List workspace directories older than 7 days
find /srv/gitea-workspace -mindepth 3 -maxdepth 3 -type d -mtime +7
# After reviewing the list, delete with the same command plus: -exec rm -rf {} +
```
---
### Running host-level commands from CI (nsenter pattern)
Job containers are unprivileged and do not share the host's PID/mount/network namespaces. Commands like `systemctl` that target the host daemon are therefore unavailable by default. When a workflow step needs to manage a host service (e.g. `systemctl reload caddy`), it uses the Docker socket to spin up a **privileged sibling container** in the host PID namespace:
@@ -108,6 +141,33 @@ nsenter: failed to execute /bin/systemctl: No such file or directory
The first error means the Docker socket is not mounted into the job container — check `valid_volumes` in `/root/docker/gitea/runner-config.yaml` on the VPS. The second means the Alpine image is running but cannot enter the host mount namespace; verify `--privileged` and `--pid=host` are both present in the workflow step.
**Failure mode 4 — workspace bind-mount not configured (observability stack or any compose-with-file-mounts job)**
Symptom in CI log:
```
Error response from daemon: error while creating mount source path "…/prometheus/prometheus.yml": mkdir …: not a directory
```
Or the service starts but immediately crashes because a config file was mounted as an empty directory.
Cause: `/srv/gitea-workspace` does not exist on the host, or the runner container's `compose.yaml` is missing the `- /srv/gitea-workspace:/srv/gitea-workspace` volume line.
Diagnosis:
```bash
ssh root@<vps>
ls -la /srv/gitea-workspace # must exist and be a directory
docker inspect gitea-runner | grep -A5 Mounts # must show /srv/gitea-workspace
```
Recovery:
```bash
mkdir -p /srv/gitea-workspace
# Add volume line to runner compose.yaml, then:
docker compose -f ~/docker/gitea/compose.yaml up -d gitea-runner
```
See `docs/DEPLOYMENT.md §3.1` and ADR-015 for the full setup rationale.
---
## Gitea vs GitHub Actions Differences


@@ -12,11 +12,11 @@ The original spec in this doc proposed an overlay pattern (`docker compose -f do
---
## Observability stack — not yet deployed
## Observability stack
Prometheus, Loki, Grafana, Alertmanager, Uptime Kuma, GlitchTip and ntfy are **not** part of the production deployment that #497 landed. They are tracked as follow-up issue #498.
The observability stack (Prometheus, Loki, Grafana, Tempo, GlitchTip) ships as a separate `docker-compose.observability.yml` alongside the main stack. Configuration lives under `infra/observability/`.
When that lands the observability containers will join `docker-compose.prod.yml` under a dedicated profile so they can be operated alongside the application stack without affecting the application containers' restart cycle.
→ See [docs/DEPLOYMENT.md §4](../DEPLOYMENT.md#4-logs--observability) for the full setup procedure, service URLs, first-run steps, and env var reference.
---


@@ -165,7 +165,7 @@ npm run check # svelte-check (type checking)
```bash
npm run test # Vitest unit + server tests (headless)
npm run test:coverage # Coverage report (server project only)
npm run test:coverage # Coverage report (server + client)
npm run test:e2e # Playwright E2E tests
npm run test:e2e:headed # Playwright E2E with visible browser
npm run test:e2e:ui # Playwright UI mode


@@ -29,6 +29,6 @@ ENV NODE_ENV=production
COPY --from=build /app/build ./build
COPY --from=build /app/package.json ./package.json
COPY --from=build /app/package-lock.json ./package-lock.json
RUN npm ci --omit=dev
RUN npm ci --omit=dev --ignore-scripts
EXPOSE 3000
CMD ["node", "build"]


@@ -38,14 +38,16 @@ export default defineConfig(
'no-undef': 'off',
// This rule is designed for Svelte 5's own routing system using resolve().
// In SvelteKit, <a href> and goto() from $app/navigation are the correct patterns — resolve() is not needed.
'svelte/no-navigation-without-resolve': 'off'
'svelte/no-navigation-without-resolve': 'off',
// Prevents accidental console.log left in source. console.warn and console.error
// are still permitted for intentional server-side logging (e.g. hooks.server.ts).
'no-console': ['error', { allow: ['warn', 'error'] }]
}
},
{
files: ['**/*.svelte', '**/*.svelte.ts', '**/*.svelte.js'],
languageOptions: {
parserOptions: {
projectService: true,
extraFileExtensions: ['.svelte'],
parser: ts.parser,
svelteConfig
@@ -72,6 +74,13 @@ export default defineConfig(
]
}
},
{
// E2E tests use console.log for diagnostic output — allow it there.
files: ['e2e/**'],
rules: {
'no-console': 'off'
}
},
{
files: ['**/*.spec.ts', '**/*.test.ts'],
rules: {


@@ -345,8 +345,11 @@
"admin_system_import_btn_retry": "Erneut starten",
"admin_system_import_status_idle": "Kein Import gestartet.",
"admin_system_import_status_running": "Import läuft…",
"admin_system_import_status_done": "Import abgeschlossen {count} Dokumente verarbeitet.",
"admin_system_import_status_failed": "Fehler: {message}",
"admin_system_import_status_done": "Import abgeschlossen",
"admin_system_import_status_done_label": "Dokumente verarbeitet",
"admin_system_import_status_failed": "Import fehlgeschlagen",
"admin_system_import_failed_no_spreadsheet": "Keine Tabellendatei gefunden.",
"admin_system_import_failed_internal": "Interner Fehler beim Import.",
"admin_system_thumbnails_heading": "Thumbnails erzeugen",
"admin_system_thumbnails_description": "Erzeugt Vorschaubilder für Dokumente ohne Thumbnail (z. B. nach dem Massenimport).",
"admin_system_thumbnails_btn_start": "Thumbnails erzeugen",
@@ -470,7 +473,7 @@
"dashboard_reader_stats_persons_short": "Pers.",
"dashboard_reader_stats_stories_short": "Gesch.",
"dashboard_reader_draft_meta": "Entwurf · zuletzt bearbeitet {relative}",
"dashboard_resume_label": "Zuletzt geöffnet:",
"dashboard_resume_label": "Weiter, wo du aufgehört hast",
"dashboard_resume_fallback": "Unbekanntes Dokument",
"doc_status_placeholder": "Platzhalter",
"doc_status_uploaded": "Hochgeladen",
@@ -703,6 +706,8 @@
"error_invite_exhausted": "Dieser Einladungslink wurde bereits vollständig verwendet.",
"error_invite_revoked": "Dieser Einladungslink wurde deaktiviert.",
"error_invite_expired": "Dieser Einladungslink ist abgelaufen.",
"error_group_has_active_invites": "Diese Gruppe kann nicht gelöscht werden, da sie in einer aktiven Einladung verwendet wird.",
"error_group_not_found": "Die angegebene Gruppe existiert nicht.",
"register_heading": "Konto erstellen",
"register_subtext": "Du wurdest eingeladen, dem Familienarchiv beizutreten.",
"register_label_first_name": "Vorname",
@@ -762,22 +767,21 @@
"admin_new_invite_prefill_last": "Nachname vorausfüllen (optional)",
"admin_new_invite_prefill_email": "E-Mail vorausfüllen (optional)",
"admin_new_invite_expires": "Ablaufdatum (optional)",
"admin_new_invite_groups": "Gruppen (optional)",
"admin_new_invite_no_groups": "Keine Gruppen vorhanden.",
"admin_invite_groups_load_error": "Gruppen konnten nicht geladen werden. Die Einladung kann ohne Gruppenauswahl erstellt werden.",
"admin_invite_created_title": "Einladung erstellt",
"admin_invite_created_desc": "Teile diesen Link mit der einzuladenden Person:",
"admin_invite_revoke_confirm": "Einladung wirklich widerrufen?",
"greeting_morning": "Guten Morgen, {name}.",
"greeting_day": "Hallo, {name}.",
"greeting_evening": "Guten Abend, {name}.",
"dashboard_resume_label": "Weiter, wo du aufgehört hast",
"dashboard_blocks": "{count} Abschnitte",
"dashboard_resume_cta": "Weitertranskribieren",
"dashboard_resume_other": "oder anderen Brief wählen",
"dashboard_empty_title": "Noch kein Dokument begonnen",
"dashboard_empty_body": "Wähle ein Dokument aus dem Archiv, um mit der Transkription zu beginnen.",
"dashboard_empty_cta": "Zum Archiv",
"dashboard_mission_caption": "Offene Aufgaben",
"queue_segment": "Segmentieren",
"queue_segment_blurb": "Seiten aufteilen",
@@ -787,7 +791,6 @@
"queue_review_blurb": "Texte kontrollieren",
"queue_n_open": "{n} offen",
"queue_show_all": "Alle anzeigen →",
"pulse_eyebrow": "Diese Woche",
"pulse_headline": "Ihr habt {pages} Seiten bearbeitet.",
"pulse_you": "Du selbst hast {pages} davon bearbeitet.",
@@ -795,19 +798,15 @@
"pulse_transcribed": "Textstellen markiert",
"pulse_reviewed": "Textstellen transkribiert",
"pulse_uploaded": "Dokumente hochgeladen",
"feed_caption": "Kommentare & Aktivität",
"feed_show_all": "Alle anzeigen",
"feed_for_you": "für dich",
"audit_action_text_saved": "hat Text gespeichert in",
"audit_action_file_uploaded": "hat eine Datei hochgeladen:",
"audit_action_annotation_created": "hat eine Markierung erstellt in",
"audit_action_comment_added": "hat kommentiert:",
"audit_action_mention_created": "hat dich erwähnt in",
"dropzone_release": "Loslassen zum Hochladen",
"chronik_page_title": "Aktivitäten",
"chronik_for_you_caption": "Für dich",
"chronik_for_you_count": "{count} neu",
@@ -851,9 +850,7 @@
"pagination_page_of": "Seite {page} von {total}",
"pagination_nav_label": "Seitennavigation",
"pagination_page_button": "Seite {page}",
"common_opens_new_tab": "(öffnet in neuem Tab)",
"transcribe_coach_title": "Erste Transkription?",
"transcribe_coach_preamble": "Unser Kurrent-Erkenner lernt noch. Jede Transkription, die Sie zum Training freigeben, bringt ihm die Schrift bei — so funktioniert's:",
"transcribe_coach_step_1_title": "Rahmen ziehen.",
@@ -863,10 +860,8 @@
"transcribe_coach_step_3_title": "Speichert automatisch.",
"transcribe_coach_footer_kurrent": "Hilfe zu Kurrent ↗",
"transcribe_coach_footer_richtlinien": "Transkriptions-Richtlinien ↗",
"transcription_mode_help_label": "Lese- und Bearbeitungsmodus",
"transcription_mode_help_body": "Lesen zeigt die Transkription als fließenden Text. Bearbeiten öffnet die Textfelder für jede Passage.",
"richtlinien_title": "Transkriptions-Richtlinien",
"richtlinien_intro": "Damit alle Briefe einheitlich transkribiert werden — egal wer tippt — hier unsere Regeln. Die Seite wächst mit: sobald wir eine neue Konvention beschließen, landet sie hier.",
"richtlinien_wiki_text": "Kurrent- und Sütterlin-Alphabete sind bei Wikipedia gut erklärt. Hier stehen nur unsere eigenen Vereinbarungen für dieses Archiv.",
@@ -940,12 +935,9 @@
"bulk_edit_all_x_failed": "Filter konnte nicht abgerufen werden — bitte erneut versuchen.",
"bulk_edit_topbar_title": "Massenbearbeitung",
"bulk_edit_count_pill": "{count} werden bearbeitet",
"nav_stammbaum": "Stammbaum",
"nav_geschichten": "Geschichten",
"error_geschichte_not_found": "Die Geschichte wurde nicht gefunden.",
"geschichten_index_title": "Geschichten",
"geschichten_new_button": "Neue Geschichte",
"geschichten_filter_all_pill": "Alle",
@@ -965,7 +957,6 @@
"geschichten_card_attach_action": "+ Geschichte anhängen",
"geschichten_card_show_all_for_person": "Alle Geschichten zu {name}",
"geschichten_card_show_all": "Alle anzeigen",
"geschichte_editor_title_placeholder": "Titel der Geschichte",
"geschichte_editor_body_placeholder": "Schreibe hier deine Geschichte…",
"geschichte_editor_status_draft": "ENTWURF",
@@ -992,14 +983,11 @@
"geschichte_editor_toolbar_h3": "Unterüberschrift",
"geschichte_editor_toolbar_ul": "Aufzählung",
"geschichte_editor_toolbar_ol": "Nummerierte Liste",
"geschichte_delete_confirm_title": "Geschichte löschen?",
"geschichte_delete_confirm_body": "Diese Aktion kann nicht rückgängig gemacht werden. Die Geschichte wird dauerhaft gelöscht und aus allen verlinkten Personen- und Dokumentseiten entfernt.",
"error_relationship_not_found": "Die Beziehung wurde nicht gefunden.",
"error_circular_relationship": "Diese Beziehung würde einen Kreis erzeugen.",
"error_duplicate_relationship": "Diese Beziehung gibt es bereits.",
"relation_parent_of": "Elternteil von",
"relation_child_of": "Kind von",
"relation_spouse_of": "Ehegatte",
@@ -1010,7 +998,6 @@
"relation_doctor": "Arzt",
"relation_neighbor": "Nachbar",
"relation_other": "Sonstige",
"relation_inferred_parent": "Elternteil",
"relation_inferred_child": "Kind",
"relation_inferred_spouse": "Ehegatte",
@@ -1028,9 +1015,7 @@
"relation_inferred_sibling_inlaw": "Schwager/Schwägerin",
"relation_inferred_cousin_1": "Cousin/Cousine",
"relation_inferred_distant": "Weitläufige Verwandtschaft",
"doc_details_field_relationship": "Verwandtschaft",
"stammbaum_empty_heading": "Noch keine Familienmitglieder",
"stammbaum_empty_body": "Markiere Personen auf ihrer Bearbeitungsseite als Familienmitglied, damit sie hier erscheinen.",
"stammbaum_empty_link": "→ Zur Personenliste",
@@ -1042,7 +1027,6 @@
"stammbaum_zoom_in": "Vergrößern",
"stammbaum_zoom_out": "Verkleinern",
"stammbaum_generations": "Generationen",
"relation_error_duplicate": "Diese Beziehung gibt es bereits.",
"relation_error_circular": "Diese Beziehung würde einen Kreis erzeugen.",
"relation_error_self": "Eine Person kann nicht mit sich selbst verbunden werden.",
@@ -1065,14 +1049,15 @@
"relation_form_field_from_year": "Von Jahr",
"relation_form_field_to_year": "Bis Jahr",
"relation_form_year_placeholder": "z.B. 1920",
"person_relationships_heading": "Beziehungen",
"person_relationships_empty": "Noch keine Beziehungen bekannt.",
"timeline_aria_label": "Zeitachse Dokumentdichte",
"timeline_clear_selection": "Auswahl zurücksetzen",
"timeline_zoom_reset": "Zurück zur Übersicht",
"timeline_bar_aria_singular": "{when}, 1 Dokument",
"timeline_bar_aria_plural": "{when}, {count} Dokumente",
"timeline_dragging_aria_live": "Zeitraum {from} bis {to} ausgewählt"
"timeline_dragging_aria_live": "Zeitraum {from} bis {to} ausgewählt",
"error_page_id_label": "Fehler-ID",
"error_copy_id_label": "ID kopieren",
"error_copied": "Kopiert!"
}


@@ -345,8 +345,11 @@
"admin_system_import_btn_retry": "Start again",
"admin_system_import_status_idle": "No import started.",
"admin_system_import_status_running": "Import running…",
"admin_system_import_status_done": "Import complete {count} documents processed.",
"admin_system_import_status_failed": "Error: {message}",
"admin_system_import_status_done": "Import complete",
"admin_system_import_status_done_label": "Documents processed",
"admin_system_import_status_failed": "Import failed",
"admin_system_import_failed_no_spreadsheet": "No spreadsheet file found.",
"admin_system_import_failed_internal": "Import failed due to an internal error.",
"admin_system_thumbnails_heading": "Generate thumbnails",
"admin_system_thumbnails_description": "Generates preview images for documents without a thumbnail (e.g. after the mass import).",
"admin_system_thumbnails_btn_start": "Generate thumbnails",
@@ -470,7 +473,7 @@
"dashboard_reader_stats_persons_short": "Pers.",
"dashboard_reader_stats_stories_short": "Stor.",
"dashboard_reader_draft_meta": "Draft · last edited {relative}",
"dashboard_resume_label": "Last opened:",
"dashboard_resume_label": "Continue where you left off",
"dashboard_resume_fallback": "Unknown document",
"doc_status_placeholder": "Placeholder",
"doc_status_uploaded": "Uploaded",
@@ -703,6 +706,8 @@
"error_invite_exhausted": "This invite link has already been fully used.",
"error_invite_revoked": "This invite link has been deactivated.",
"error_invite_expired": "This invite link has expired.",
"error_group_has_active_invites": "This group cannot be deleted because it is referenced by one or more active invite links.",
"error_group_not_found": "The specified group does not exist.",
"register_heading": "Create account",
"register_subtext": "You've been invited to join Familienarchiv.",
"register_label_first_name": "First name",
@@ -762,22 +767,21 @@
"admin_new_invite_prefill_last": "Pre-fill last name (optional)",
"admin_new_invite_prefill_email": "Pre-fill email (optional)",
"admin_new_invite_expires": "Expiry date (optional)",
"admin_new_invite_groups": "Groups (optional)",
"admin_new_invite_no_groups": "No groups exist.",
"admin_invite_groups_load_error": "Groups could not be loaded. The invite can still be created without group assignment.",
"admin_invite_created_title": "Invite created",
"admin_invite_created_desc": "Share this link with the person you are inviting:",
"admin_invite_revoke_confirm": "Really revoke this invite?",
"greeting_morning": "Good morning, {name}.",
"greeting_day": "Hello, {name}.",
"greeting_evening": "Good evening, {name}.",
"dashboard_resume_label": "Continue where you left off",
"dashboard_blocks": "{count} sections",
"dashboard_resume_cta": "Continue transcribing",
"dashboard_resume_other": "or choose another document",
"dashboard_empty_title": "No document started yet",
"dashboard_empty_body": "Choose a document from the archive to start transcribing.",
"dashboard_empty_cta": "To the archive",
"dashboard_mission_caption": "Open tasks",
"queue_segment": "Segment",
"queue_segment_blurb": "Split pages",
@@ -787,7 +791,6 @@
"queue_review_blurb": "Check texts",
"queue_n_open": "{n} open",
"queue_show_all": "Show all →",
"pulse_eyebrow": "This week",
"pulse_headline": "You have worked on {pages} pages.",
"pulse_you": "You personally worked on {pages} of them.",
@@ -795,19 +798,15 @@
"pulse_transcribed": "Passages annotated",
"pulse_reviewed": "Passages transcribed",
"pulse_uploaded": "Documents uploaded",
"feed_caption": "Comments & activity",
"feed_show_all": "Show all",
"feed_for_you": "for you",
"audit_action_text_saved": "saved text in",
"audit_action_file_uploaded": "uploaded a file:",
"audit_action_annotation_created": "created an annotation in",
"audit_action_comment_added": "commented:",
"audit_action_mention_created": "mentioned you in",
"dropzone_release": "Release to upload",
"chronik_page_title": "Activity",
"chronik_for_you_caption": "For you",
"chronik_for_you_count": "{count} new",
@@ -851,9 +850,7 @@
"pagination_page_of": "Page {page} of {total}",
"pagination_nav_label": "Pagination",
"pagination_page_button": "Page {page}",
"common_opens_new_tab": "(opens in new tab)",
"transcribe_coach_title": "First transcription?",
"transcribe_coach_preamble": "Our Kurrent recogniser is still learning. Every transcription you release for training teaches it the handwriting — here's how it works:",
"transcribe_coach_step_1_title": "Draw a frame.",
@@ -863,10 +860,8 @@
"transcribe_coach_step_3_title": "Saves automatically.",
"transcribe_coach_footer_kurrent": "Kurrent help ↗",
"transcribe_coach_footer_richtlinien": "Transcription guidelines ↗",
"transcription_mode_help_label": "Read and edit mode",
"transcription_mode_help_body": "Read shows the transcription as flowing text. Edit opens the text fields for each passage.",
"richtlinien_title": "Transcription Guidelines",
"richtlinien_intro": "So every letter is transcribed consistently — no matter who types — here are our rules. The page grows with us: as soon as we agree a new convention, it lands here.",
"richtlinien_wiki_text": "The Kurrent and Sütterlin alphabets are well explained on Wikipedia. Here you'll only find our own conventions for this archive.",
@@ -940,12 +935,9 @@
"bulk_edit_all_x_failed": "Could not load filter results — please retry.",
"bulk_edit_topbar_title": "Bulk edit",
"bulk_edit_count_pill": "{count} will be edited",
"nav_stammbaum": "Family tree",
"nav_geschichten": "Stories",
"error_geschichte_not_found": "The story was not found.",
"geschichten_index_title": "Stories",
"geschichten_new_button": "New story",
"geschichten_filter_all_pill": "All",
@@ -965,7 +957,6 @@
"geschichten_card_attach_action": "+ Attach a story",
"geschichten_card_show_all_for_person": "All stories about {name}",
"geschichten_card_show_all": "Show all",
"geschichte_editor_title_placeholder": "Story title",
"geschichte_editor_body_placeholder": "Write your story here…",
"geschichte_editor_status_draft": "DRAFT",
@@ -992,14 +983,11 @@
"geschichte_editor_toolbar_h3": "Subheading",
"geschichte_editor_toolbar_ul": "Bulleted list",
"geschichte_editor_toolbar_ol": "Numbered list",
"geschichte_delete_confirm_title": "Delete story?",
"geschichte_delete_confirm_body": "This action cannot be undone. The story will be permanently deleted and removed from all linked person and document pages.",
"error_relationship_not_found": "Relationship not found.",
"error_circular_relationship": "This relationship would form a cycle.",
"error_duplicate_relationship": "This relationship already exists.",
"relation_parent_of": "Parent of",
"relation_child_of": "Child of",
"relation_spouse_of": "Spouse",
@@ -1010,7 +998,6 @@
"relation_doctor": "Doctor",
"relation_neighbor": "Neighbour",
"relation_other": "Other",
"relation_inferred_parent": "Parent",
"relation_inferred_child": "Child",
"relation_inferred_spouse": "Spouse",
@@ -1028,9 +1015,7 @@
"relation_inferred_sibling_inlaw": "Sibling-in-law",
"relation_inferred_cousin_1": "Cousin",
"relation_inferred_distant": "Distant relative",
"doc_details_field_relationship": "Relationship",
"stammbaum_empty_heading": "No family members yet",
"stammbaum_empty_body": "Mark a person as a family member on their edit page so they appear here.",
"stammbaum_empty_link": "→ Go to person list",
@@ -1042,7 +1027,6 @@
"stammbaum_zoom_in": "Zoom in",
"stammbaum_zoom_out": "Zoom out",
"stammbaum_generations": "Generations",
"relation_error_duplicate": "This relationship already exists.",
"relation_error_circular": "This relationship would form a cycle.",
"relation_error_self": "A person cannot be related to themselves.",
@@ -1065,14 +1049,15 @@
"relation_form_field_from_year": "From year",
"relation_form_field_to_year": "To year",
"relation_form_year_placeholder": "e.g. 1920",
"person_relationships_heading": "Relationships",
"person_relationships_empty": "No relationships known yet.",
"timeline_aria_label": "Document density timeline",
"timeline_clear_selection": "Clear selection",
"timeline_zoom_reset": "Reset zoom",
"timeline_bar_aria_singular": "{when}, 1 document",
"timeline_bar_aria_plural": "{when}, {count} documents",
"timeline_dragging_aria_live": "Range {from} to {to} selected"
"timeline_dragging_aria_live": "Range {from} to {to} selected",
"error_page_id_label": "Error ID",
"error_copy_id_label": "Copy ID",
"error_copied": "Copied!"
}


@@ -345,8 +345,11 @@
"admin_system_import_btn_retry": "Iniciar de nuevo",
"admin_system_import_status_idle": "No hay importación iniciada.",
"admin_system_import_status_running": "Importación en curso…",
"admin_system_import_status_done": "Importación completada {count} documentos procesados.",
"admin_system_import_status_failed": "Error: {message}",
"admin_system_import_status_done": "Importación completada",
"admin_system_import_status_done_label": "Documentos procesados",
"admin_system_import_status_failed": "Importación fallida",
"admin_system_import_failed_no_spreadsheet": "No se encontró ninguna hoja de cálculo.",
"admin_system_import_failed_internal": "Error interno durante la importación.",
"admin_system_thumbnails_heading": "Generar miniaturas",
"admin_system_thumbnails_description": "Genera imágenes de vista previa para documentos sin miniatura (p. ej. tras la importación masiva).",
"admin_system_thumbnails_btn_start": "Generar miniaturas",
@@ -470,7 +473,7 @@
"dashboard_reader_stats_persons_short": "Pers.",
"dashboard_reader_stats_stories_short": "Hist.",
"dashboard_reader_draft_meta": "Borrador · editado hace {relative}",
"dashboard_resume_label": "Último abierto:",
"dashboard_resume_label": "Continuar donde lo dejaste",
"dashboard_resume_fallback": "Documento desconocido",
"doc_status_placeholder": "Marcador",
"doc_status_uploaded": "Cargado",
@@ -703,6 +706,8 @@
"error_invite_exhausted": "Este enlace de invitación ya ha sido completamente utilizado.",
"error_invite_revoked": "Este enlace de invitación ha sido desactivado.",
"error_invite_expired": "Este enlace de invitación ha expirado.",
"error_group_has_active_invites": "Este grupo no puede eliminarse porque está referenciado por uno o más enlaces de invitación activos.",
"error_group_not_found": "El grupo especificado no existe.",
"register_heading": "Crear cuenta",
"register_subtext": "Has sido invitado a unirte al Familienarchiv.",
"register_label_first_name": "Nombre",
@@ -762,22 +767,21 @@
"admin_new_invite_prefill_last": "Prellenar apellido (opcional)",
"admin_new_invite_prefill_email": "Prellenar correo (opcional)",
"admin_new_invite_expires": "Fecha de vencimiento (opcional)",
"admin_new_invite_groups": "Grupos (opcional)",
"admin_new_invite_no_groups": "No hay grupos disponibles.",
"admin_invite_groups_load_error": "No se pudieron cargar los grupos. La invitación puede crearse sin asignar grupos.",
"admin_invite_created_title": "Invitación creada",
"admin_invite_created_desc": "Comparte este enlace con la persona invitada:",
"admin_invite_revoke_confirm": "¿Realmente revocar esta invitación?",
"greeting_morning": "Buenos días, {name}.",
"greeting_day": "Hola, {name}.",
"greeting_evening": "Buenas noches, {name}.",
"dashboard_resume_label": "Continuar donde lo dejaste",
"dashboard_blocks": "{count} secciones",
"dashboard_resume_cta": "Continuar transcripción",
"dashboard_resume_other": "o elige otro documento",
"dashboard_empty_title": "Aún no has comenzado ningún documento",
"dashboard_empty_body": "Elige un documento del archivo para empezar a transcribir.",
"dashboard_empty_cta": "Al archivo",
"dashboard_mission_caption": "Tareas pendientes",
"queue_segment": "Segmentar",
"queue_segment_blurb": "Dividir páginas",
@@ -787,7 +791,6 @@
"queue_review_blurb": "Controlar textos",
"queue_n_open": "{n} pendiente",
"queue_show_all": "Ver todo →",
"pulse_eyebrow": "Esta semana",
"pulse_headline": "Habéis trabajado {pages} páginas.",
"pulse_you": "Tú mismo has trabajado {pages} de ellas.",
@@ -795,19 +798,15 @@
"pulse_transcribed": "Fragmentos anotados",
"pulse_reviewed": "Fragmentos transcritos",
"pulse_uploaded": "Documentos subidos",
"feed_caption": "Comentarios y actividad",
"feed_show_all": "Ver todo",
"feed_for_you": "para ti",
"audit_action_text_saved": "guardó texto en",
"audit_action_file_uploaded": "subió un archivo:",
"audit_action_annotation_created": "creó una anotación en",
"audit_action_comment_added": "comentó:",
"audit_action_mention_created": "te mencionó en",
"dropzone_release": "Suelta para subir",
"chronik_page_title": "Actividades",
"chronik_for_you_caption": "Para ti",
"chronik_for_you_count": "{count} nuevas",
@@ -851,9 +850,7 @@
"pagination_page_of": "Página {page} de {total}",
"pagination_nav_label": "Paginación",
"pagination_page_button": "Página {page}",
"common_opens_new_tab": "(abre en pestaña nueva)",
"transcribe_coach_title": "¿Primera transcripción?",
"transcribe_coach_preamble": "Nuestro reconocedor de Kurrent aún está aprendiendo. Cada transcripción que libera para el entrenamiento le enseña la escritura — así funciona:",
"transcribe_coach_step_1_title": "Dibujar un marco.",
@@ -863,10 +860,8 @@
"transcribe_coach_step_3_title": "Se guarda automáticamente.",
"transcribe_coach_footer_kurrent": "Ayuda sobre Kurrent ↗",
"transcribe_coach_footer_richtlinien": "Normas de transcripción ↗",
"transcription_mode_help_label": "Modo lectura y edición",
"transcription_mode_help_body": "Lectura muestra la transcripción como texto continuo. Edición abre los campos de texto para cada pasaje.",
"richtlinien_title": "Normas de transcripción",
"richtlinien_intro": "Para que todas las cartas se transcriban de forma uniforme — sin importar quién transcriba — aquí están nuestras reglas. La página crece con nosotros.",
"richtlinien_wiki_text": "Los alfabetos Kurrent y Sütterlin están bien explicados en Wikipedia. Aquí solo se recogen nuestros propios acuerdos para este archivo.",
@@ -940,12 +935,9 @@
"bulk_edit_all_x_failed": "No se pudieron cargar los resultados del filtro; vuelve a intentarlo.",
"bulk_edit_topbar_title": "Edición masiva",
"bulk_edit_count_pill": "Se editarán {count}",
"nav_stammbaum": "Árbol genealógico",
"nav_geschichten": "Historias",
"error_geschichte_not_found": "No se encontró la historia.",
"geschichten_index_title": "Historias",
"geschichten_new_button": "Nueva historia",
"geschichten_filter_all_pill": "Todas",
@@ -965,7 +957,6 @@
"geschichten_card_attach_action": "+ Adjuntar historia",
"geschichten_card_show_all_for_person": "Todas las historias sobre {name}",
"geschichten_card_show_all": "Mostrar todas",
"geschichte_editor_title_placeholder": "Título de la historia",
"geschichte_editor_body_placeholder": "Escribe tu historia aquí…",
"geschichte_editor_status_draft": "BORRADOR",
@@ -992,14 +983,11 @@
"geschichte_editor_toolbar_h3": "Subencabezado",
"geschichte_editor_toolbar_ul": "Lista con viñetas",
"geschichte_editor_toolbar_ol": "Lista numerada",
"geschichte_delete_confirm_title": "¿Eliminar historia?",
"geschichte_delete_confirm_body": "Esta acción no se puede deshacer. La historia se eliminará permanentemente y se quitará de todas las páginas de personas y documentos vinculados.",
"error_relationship_not_found": "La relación no fue encontrada.",
"error_circular_relationship": "Esta relación crearía un ciclo.",
"error_duplicate_relationship": "Esta relación ya existe.",
"relation_parent_of": "Progenitor de",
"relation_child_of": "Hijo/a de",
"relation_spouse_of": "Cónyuge",
@@ -1010,7 +998,6 @@
"relation_doctor": "Médico",
"relation_neighbor": "Vecino/a",
"relation_other": "Otro",
"relation_inferred_parent": "Progenitor",
"relation_inferred_child": "Hijo/a",
"relation_inferred_spouse": "Cónyuge",
@@ -1028,9 +1015,7 @@
"relation_inferred_sibling_inlaw": "Cuñado/a",
"relation_inferred_cousin_1": "Primo/a",
"relation_inferred_distant": "Pariente lejano",
"doc_details_field_relationship": "Parentesco",
"stammbaum_empty_heading": "Aún no hay miembros de la familia",
"stammbaum_empty_body": "Marca a una persona como miembro de la familia en su página de edición para que aparezca aquí.",
"stammbaum_empty_link": "→ Ir a la lista de personas",
@@ -1042,7 +1027,6 @@
"stammbaum_zoom_in": "Acercar",
"stammbaum_zoom_out": "Alejar",
"stammbaum_generations": "Generaciones",
"relation_error_duplicate": "Esta relación ya existe.",
"relation_error_circular": "Esta relación crearía un ciclo.",
"relation_error_self": "Una persona no puede estar relacionada consigo misma.",
@@ -1065,14 +1049,15 @@
"relation_form_field_from_year": "Desde año",
"relation_form_field_to_year": "Hasta año",
"relation_form_year_placeholder": "ej. 1920",
"person_relationships_heading": "Relaciones",
"person_relationships_empty": "Aún no se conocen relaciones.",
"timeline_aria_label": "Cronología de densidad de documentos",
"timeline_clear_selection": "Borrar selección",
"timeline_zoom_reset": "Restablecer zoom",
"timeline_bar_aria_singular": "{when}, 1 documento",
"timeline_bar_aria_plural": "{when}, {count} documentos",
"timeline_dragging_aria_live": "Rango {from} a {to} seleccionado"
"timeline_dragging_aria_live": "Rango {from} a {to} seleccionado",
"error_page_id_label": "ID de error",
"error_copy_id_label": "Copiar ID",
"error_copied": "¡Copiado!"
}


@@ -16,13 +16,14 @@
"lint:boundary-demo": "eslint src/lib/tag/__fixtures__/",
"test:unit": "vitest",
"test": "npm run test:unit -- --run",
"test:coverage": "vitest run --coverage --project=server && vitest run -c vitest.client-coverage.config.ts --coverage",
"test:coverage": "vitest run --coverage --project=server; vitest run -c vitest.client-coverage.config.ts --coverage",
"test:e2e": "playwright test",
"test:e2e:headed": "playwright test --headed",
"test:e2e:ui": "playwright test --ui",
"generate:api": "openapi-typescript http://localhost:8080/v3/api-docs -o ./src/lib/generated/api.ts"
},
"dependencies": {
"@sentry/sveltekit": "^10.53.1",
"@tiptap/core": "3.22.5",
"@tiptap/extension-mention": "3.22.5",
"@tiptap/starter-kit": "3.22.5",


@@ -26,6 +26,11 @@ declare global {
interface PageData {
user?: User; // Available in $page.data.user
}
interface Error {
message: string;
errorId?: string;
}
}
}


@@ -0,0 +1,47 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
vi.mock('@sentry/sveltekit', () => ({
init: vi.fn(),
handleErrorWithSentry: (fn: (args: unknown) => unknown) => fn,
lastEventId: vi.fn(() => 'sentry-event-id-abc123')
}));
describe('hooks.client handleError', () => {
beforeEach(() => {
vi.resetModules();
});
it('returns Sentry lastEventId as errorId', async () => {
const Sentry = await import('@sentry/sveltekit');
vi.mocked(Sentry.lastEventId).mockReturnValue('sentry-event-id-abc123');
const { handleError } = await import('./hooks.client');
const result = (handleError as (args: unknown) => { message: string; errorId: string })({
error: new Error('boom'),
event: {},
status: 500,
message: 'Internal Error'
});
expect(result.errorId).toBe('sentry-event-id-abc123');
expect(result.message).toBe('An unexpected error occurred');
});
it('falls back to crypto.randomUUID when lastEventId returns undefined', async () => {
const Sentry = await import('@sentry/sveltekit');
vi.mocked(Sentry.lastEventId).mockReturnValue(undefined);
const { handleError } = await import('./hooks.client');
const result = (handleError as (args: unknown) => { message: string; errorId: string })({
error: new Error('boom'),
event: {},
status: 500,
message: 'Internal Error'
});
expect(result.errorId).toMatch(
/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/
);
expect(result.message).toBe('An unexpected error occurred');
});
});


@@ -0,0 +1,16 @@
import * as Sentry from '@sentry/sveltekit';
// VITE_SENTRY_DSN is a write-only ingest key — it can POST events to GlitchTip
// but cannot read them. Safe to include in the client bundle per Sentry security model.
Sentry.init({
dsn: import.meta.env.VITE_SENTRY_DSN,
environment: import.meta.env.MODE,
tracesSampleRate: 0.1,
sendDefaultPii: false,
enabled: !!import.meta.env.VITE_SENTRY_DSN
});
export const handleError = Sentry.handleErrorWithSentry(() => {
const errorId = Sentry.lastEventId() ?? crypto.randomUUID();
return { message: 'An unexpected error occurred', errorId };
});


@@ -0,0 +1,58 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
vi.mock('@sentry/sveltekit', () => ({
init: vi.fn(),
handleErrorWithSentry: (fn: (args: unknown) => unknown) => fn,
lastEventId: vi.fn(() => 'sentry-event-id-abc123')
}));
vi.mock('@sveltejs/kit', () => ({ redirect: vi.fn() }));
vi.mock('@sveltejs/kit/hooks', () => ({ sequence: vi.fn((...fns: unknown[]) => fns[0]) }));
vi.mock('$lib/paraglide/server', () => ({ paraglideMiddleware: vi.fn() }));
vi.mock('$lib/paraglide/runtime', () => ({ cookieName: 'locale', cookieMaxAge: 86400 }));
vi.mock('$lib/shared/server/locale', () => ({ detectLocale: vi.fn(() => 'de') }));
const makeEvent = () => ({
url: { pathname: '/documents/123' },
locals: {}
});
describe('hooks.server handleError', () => {
beforeEach(() => {
vi.resetModules();
});
it('returns Sentry lastEventId as errorId', async () => {
const Sentry = await import('@sentry/sveltekit');
vi.mocked(Sentry.lastEventId).mockReturnValue('sentry-event-id-abc123');
const { handleError } = await import('./hooks.server');
const result = (handleError as (args: unknown) => { message: string; errorId: string })({
error: new Error('boom'),
event: makeEvent(),
status: 500,
message: 'Internal Error'
});
expect(result.errorId).toBe('sentry-event-id-abc123');
expect(result.message).toBe('An unexpected error occurred');
});
it('falls back to crypto.randomUUID when lastEventId returns undefined', async () => {
const Sentry = await import('@sentry/sveltekit');
vi.mocked(Sentry.lastEventId).mockReturnValue(undefined);
const { handleError } = await import('./hooks.server');
const result = (handleError as (args: unknown) => { message: string; errorId: string })({
error: new Error('boom'),
event: makeEvent(),
status: 500,
message: 'Internal Error'
});
expect(result.errorId).toMatch(
/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/
);
expect(result.message).toBe('An unexpected error occurred');
});
});


@@ -1,3 +1,4 @@
import * as Sentry from '@sentry/sveltekit';
import { redirect, type Handle, type HandleFetch } from '@sveltejs/kit';
import { paraglideMiddleware } from '$lib/paraglide/server';
import { sequence } from '@sveltejs/kit/hooks';
@@ -5,6 +6,16 @@ import { env } from 'process';
import { cookieName, cookieMaxAge } from '$lib/paraglide/runtime';
import { detectLocale } from '$lib/shared/server/locale';
// VITE_SENTRY_DSN is a write-only ingest key — it can POST events to GlitchTip
// but cannot read them. Safe to include in the client bundle per Sentry security model.
Sentry.init({
dsn: import.meta.env.VITE_SENTRY_DSN,
environment: import.meta.env.MODE,
tracesSampleRate: 0.1,
sendDefaultPii: false,
enabled: !!import.meta.env.VITE_SENTRY_DSN
});
const PUBLIC_PATHS = [
'/login',
'/logout',
@@ -113,3 +124,8 @@ export const handleFetch: HandleFetch = async ({ event, request, fetch }) => {
};
export const handle = sequence(userGroup, handleAuth, handleLocaleDetection, handleParaglide);
export const handleError = Sentry.handleErrorWithSentry(() => {
const errorId = Sentry.lastEventId() ?? crypto.randomUUID();
return { message: 'An unexpected error occurred', errorId };
});


@@ -22,6 +22,8 @@ export type ErrorCode =
| 'INVITE_EXHAUSTED'
| 'INVITE_REVOKED'
| 'INVITE_EXPIRED'
| 'GROUP_HAS_ACTIVE_INVITES'
| 'GROUP_NOT_FOUND'
| 'ANNOTATION_NOT_FOUND'
| 'ANNOTATION_UPDATE_FAILED'
| 'TRANSCRIPTION_BLOCK_NOT_FOUND'
@@ -108,6 +110,10 @@ export function getErrorMessage(code: ErrorCode | string | undefined): string {
return m.error_invite_revoked();
case 'INVITE_EXPIRED':
return m.error_invite_expired();
case 'GROUP_HAS_ACTIVE_INVITES':
return m.error_group_has_active_invites();
case 'GROUP_NOT_FOUND':
return m.error_group_not_found();
case 'ANNOTATION_NOT_FOUND':
return m.error_annotation_not_found();
case 'ANNOTATION_UPDATE_FAILED':


@@ -1,4 +1,7 @@
<script lang="ts">
import { untrack } from 'svelte';
import { m } from '$lib/paraglide/messages.js';
let {
groups,
selectedGroupIds = []
@@ -7,12 +10,13 @@ let {
selectedGroupIds?: string[];
} = $props();
let selected = $derived([...selectedGroupIds]);
let selected = $state<string[]>(untrack(() => [...selectedGroupIds]));
</script>
<div class="flex flex-wrap gap-3">
<fieldset class="flex flex-wrap gap-3 border-none p-0">
<legend class="sr-only">{m.admin_new_invite_groups()}</legend>
{#each groups as group (group.id)}
<label class="inline-flex items-center gap-2 text-sm text-ink-2">
<label class="inline-flex min-h-[44px] items-center gap-2 text-sm text-ink-2">
<input
type="checkbox"
name="groupIds"
@@ -23,4 +27,4 @@ let selected = $derived([...selectedGroupIds]);
{group.name}
</label>
{/each}
</div>
</fieldset>


@@ -1,13 +1,53 @@
<script lang="ts">
import { page } from '$app/state';
import { m } from '$lib/paraglide/messages.js';
let copied = $state(false);
function copyId() {
const id = page.error?.errorId;
if (!id) return;
if (!navigator.clipboard) return;
navigator.clipboard.writeText(id).then(
() => {
copied = true;
setTimeout(() => (copied = false), 2000);
},
() => {
/* clipboard denied or unavailable — select-all on the <code> element remains */
}
);
}
</script>
<svelte:head>
<title>{m.page_title_error()}</title>
</svelte:head>
<div class="px-4 py-12 text-center font-sans">
<p class="font-sans text-6xl font-bold text-ink">{page.status}</p>
<p class="mt-2 font-sans text-sm text-ink-2">{page.error?.message ?? 'Internal Error'}</p>
</div>
<main class="px-4 py-12 text-center font-sans">
<h1 class="mb-2 font-serif text-2xl font-bold text-ink">{m.page_title_error()}</h1>
<p class="mb-8 font-sans text-sm text-ink-2">
{page.error?.message ?? m.error_internal_error()}
</p>
<p class="mb-4 font-mono text-4xl font-bold text-ink">{page.status}</p>
{#if page.error?.errorId}
<div class="mt-6 flex flex-col items-center gap-3">
<p class="font-sans text-xs tracking-widest text-ink-2 uppercase">
{m.error_page_id_label()}
</p>
<code
class="rounded border border-line bg-surface px-3 py-1 font-mono text-sm text-ink select-all"
>
{page.error.errorId}
</code>
<button
class="min-h-[44px] min-w-[44px] rounded-sm bg-brand-navy px-5 py-2 font-sans text-sm text-white transition-colors hover:opacity-90 focus-visible:ring-2 focus-visible:ring-brand-navy focus-visible:ring-offset-2"
onclick={copyId}
aria-label={m.error_copy_id_label()}
>
<span aria-live="polite">{copied ? m.error_copied() : m.error_copy_id_label()}</span>
</button>
</div>
{/if}
</main>


@@ -1,7 +1,8 @@
<script lang="ts">
import { enhance } from '$app/forms';
import { beforeNavigate, goto } from '$app/navigation';
import { m } from '$lib/paraglide/messages.js';
import { createUnsavedWarning } from '$lib/shared/hooks/useUnsavedWarning.svelte';
import UnsavedWarningBanner from '$lib/shared/primitives/UnsavedWarningBanner.svelte';
const availableStandard = $derived([
{ value: 'READ_ALL', label: m.admin_perm_read_all() },
@@ -18,17 +19,7 @@ const availableAdmin = $derived([
let { form } = $props();
let isDirty = $state(false);
let showUnsavedWarning = $state(false);
let discardTarget: string | null = $state(null);
beforeNavigate(({ cancel, to }) => {
if (isDirty) {
cancel();
showUnsavedWarning = true;
discardTarget = to?.url.href ?? null;
}
});
const unsaved = createUnsavedWarning();
</script>
<div class="flex flex-1 flex-col overflow-hidden">
@@ -58,23 +49,8 @@ beforeNavigate(({ cancel, to }) => {
<!-- Scrollable body -->
<div class="flex-1 overflow-y-auto px-5 py-5">
{#if showUnsavedWarning}
<div
class="mb-5 flex items-center justify-between rounded border border-amber-200 bg-amber-50 p-3 text-sm text-amber-800 dark:border-amber-800 dark:bg-amber-950/40 dark:text-amber-300"
>
<span>{m.admin_unsaved_warning()}</span>
<button
type="button"
onclick={() => {
isDirty = false;
showUnsavedWarning = false;
if (discardTarget) goto(discardTarget);
}}
class="ml-4 shrink-0 font-sans text-xs font-bold tracking-widest text-amber-800 uppercase hover:text-amber-900 dark:text-amber-300"
>
{m.person_discard_changes()}
</button>
</div>
{#if unsaved.showUnsavedWarning}
<UnsavedWarningBanner onDiscard={unsaved.discard} />
{/if}
{#if form?.error}
<div class="mb-5 rounded border border-red-200 bg-red-50 p-3 text-sm text-red-700">
@@ -85,11 +61,11 @@ beforeNavigate(({ cancel, to }) => {
<form
id="new-group-form"
method="POST"
use:enhance
oninput={() => {
isDirty = true;
showUnsavedWarning = false;
use:enhance={() => async ({ result, update }) => {
if (result.type === 'redirect') unsaved.clearOnSuccess();
await update();
}}
oninput={unsaved.markDirty}
class="space-y-5"
>
<!-- Name card -->


@@ -0,0 +1,125 @@
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import { cleanup, render } from 'vitest-browser-svelte';
import { page } from 'vitest/browser';
import Page from './+page.svelte';
const enhanceCaptureRef = vi.hoisted(() => ({ submitFn: undefined as unknown }));
vi.mock('$app/forms', () => ({
enhance: (_el: HTMLFormElement, fn?: unknown) => {
enhanceCaptureRef.submitFn = fn;
return { destroy: vi.fn() };
}
}));
vi.mock('$app/navigation', () => ({ beforeNavigate: vi.fn(), goto: vi.fn() }));
import { beforeNavigate, goto } from '$app/navigation';
afterEach(cleanup);
type SubmitFn = () => Promise<
(opts: {
result: { type: string; [key: string]: unknown };
update: () => Promise<void>;
}) => Promise<void>
>;
// ─── Unsaved-changes guard ────────────────────────────────────────────────────
describe('Admin new group page unsaved-changes guard', () => {
beforeEach(() => {
vi.clearAllMocks();
enhanceCaptureRef.submitFn = undefined;
});
it('does not show unsaved warning initially', async () => {
render(Page, { props: { form: null } });
await expect.element(page.getByText(/ungespeicherte Änderungen/i)).not.toBeInTheDocument();
});
it('cancels navigation and shows banner when form is dirty', async () => {
render(Page, { props: { form: null } });
const [callback] = vi.mocked(beforeNavigate).mock.calls[0];
document
.querySelector<HTMLInputElement>('input[name="name"]')!
.dispatchEvent(new InputEvent('input', { bubbles: true }));
const cancel = vi.fn();
callback({ cancel, to: { url: new URL('http://localhost/admin/groups') } });
expect(cancel).toHaveBeenCalled();
await expect.element(page.getByText(/ungespeicherte Änderungen/i)).toBeInTheDocument();
});
it('does not cancel navigation when form is clean', async () => {
render(Page, { props: { form: null } });
const [callback] = vi.mocked(beforeNavigate).mock.calls[0];
const cancel = vi.fn();
callback({ cancel, to: { url: new URL('http://localhost/admin/groups') } });
expect(cancel).not.toHaveBeenCalled();
});
it('discard button calls goto with the target URL', async () => {
render(Page, { props: { form: null } });
const [callback] = vi.mocked(beforeNavigate).mock.calls[0];
document
.querySelector<HTMLInputElement>('input[name="name"]')!
.dispatchEvent(new InputEvent('input', { bubbles: true }));
callback({ cancel: vi.fn(), to: { url: new URL('http://localhost/admin/groups') } });
await page.getByRole('button', { name: /verwerfen/i }).click();
expect(vi.mocked(goto)).toHaveBeenCalledWith('http://localhost/admin/groups');
});
it('clears banner when enhance callback receives a redirect result', async () => {
render(Page, { props: { form: null } });
const [navCallback] = vi.mocked(beforeNavigate).mock.calls[0];
document
.querySelector<HTMLInputElement>('input[name="name"]')!
.dispatchEvent(new InputEvent('input', { bubbles: true }));
navCallback({ cancel: vi.fn(), to: { url: new URL('http://localhost/admin/groups') } });
await expect.element(page.getByText(/ungespeicherte Änderungen/i)).toBeInTheDocument();
const innerFn = await (enhanceCaptureRef.submitFn as SubmitFn)();
await innerFn({
result: { type: 'redirect', location: '/admin/groups', status: 303 },
update: vi.fn().mockResolvedValue(undefined)
});
await expect.element(page.getByText(/ungespeicherte Änderungen/i)).not.toBeInTheDocument();
const cancel = vi.fn();
navCallback({ cancel, to: { url: new URL('http://localhost/admin/groups') } });
expect(cancel).not.toHaveBeenCalled();
});
it('keeps banner when enhance callback receives a failure result', async () => {
render(Page, { props: { form: null } });
const [navCallback] = vi.mocked(beforeNavigate).mock.calls[0];
document
.querySelector<HTMLInputElement>('input[name="name"]')!
.dispatchEvent(new InputEvent('input', { bubbles: true }));
navCallback({ cancel: vi.fn(), to: { url: new URL('http://localhost/admin/groups') } });
await expect.element(page.getByText(/ungespeicherte Änderungen/i)).toBeInTheDocument();
const innerFn = await (enhanceCaptureRef.submitFn as SubmitFn)();
await innerFn({
result: { type: 'failure', status: 400, data: { error: 'Name bereits vergeben' } },
update: vi.fn().mockResolvedValue(undefined)
});
const cancel = vi.fn();
navCallback({ cancel, to: { url: new URL('http://localhost/admin/groups') } });
expect(cancel).toHaveBeenCalled();
});
});


@@ -2,6 +2,7 @@ import { fail } from '@sveltejs/kit';
import { env } from '$env/dynamic/private';
import { parseBackendError } from '$lib/shared/errors';
import type { Actions, PageServerLoad } from './$types';
import type { components } from '$lib/generated/api';
export interface InviteListItem {
id: string;
@@ -17,22 +18,37 @@ export interface InviteListItem {
shareableUrl: string;
}
export type UserGroup = components['schemas']['UserGroup'];
export const load: PageServerLoad = async ({ url, fetch }) => {
const status = url.searchParams.get('status') ?? 'active';
const apiUrl = env.API_INTERNAL_URL || 'http://localhost:8080';
const res = await fetch(`${apiUrl}/api/invites?status=${encodeURIComponent(status)}`);
if (!res.ok) {
const backendError = await parseBackendError(res);
return {
invites: [] as InviteListItem[],
status,
loadError: backendError?.code ?? 'INTERNAL_ERROR'
};
const [invitesRes, groupsRes] = await Promise.all([
fetch(`${apiUrl}/api/invites?status=${encodeURIComponent(status)}`),
fetch(`${apiUrl}/api/groups`)
]);
let invites: InviteListItem[] = [];
let loadError: string | null = null;
if (!invitesRes.ok) {
const backendError = await parseBackendError(invitesRes);
loadError = backendError?.code ?? 'INTERNAL_ERROR';
} else {
invites = await invitesRes.json();
}
const invites: InviteListItem[] = await res.json();
return { invites, status, loadError: null };
let groups: UserGroup[] = [];
let groupsLoadError: string | null = null;
if (!groupsRes.ok) {
const backendError = await parseBackendError(groupsRes);
groupsLoadError = backendError?.code ?? 'INTERNAL_ERROR';
} else {
const raw: UserGroup[] = await groupsRes.json();
groups = [...raw].sort((a, b) => a.name.localeCompare(b.name));
}
return { invites, status, loadError, groups, groupsLoadError };
};
export const actions = {
@@ -45,6 +61,7 @@ export const actions = {
const prefillLastName = (formData.get('prefillLastName') as string) || undefined;
const prefillEmail = (formData.get('prefillEmail') as string) || undefined;
const expiresAt = (formData.get('expiresAt') as string) || undefined;
const groupIds = formData.getAll('groupIds') as string[];
const apiUrl = env.API_INTERNAL_URL || 'http://localhost:8080';
const res = await fetch(`${apiUrl}/api/invites`, {
@@ -56,7 +73,8 @@ export const actions = {
prefillFirstName,
prefillLastName,
prefillEmail,
expiresAt
expiresAt,
groupIds
})
});
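The `load` function above sorts groups alphabetically before returning them. As a standalone sketch (types simplified, helper name hypothetical), the spread-copy before `sort` is the important detail: it keeps the array parsed from the API response untouched.

```typescript
// Simplified sketch of the group sorting in the load function above.
type Group = { id: string; name: string };

function sortGroups(groups: Group[]): Group[] {
  // spread-copy first so the caller's array is not mutated by sort()
  return [...groups].sort((a, b) => a.name.localeCompare(b.name));
}
```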


@@ -2,7 +2,8 @@
import { enhance } from '$app/forms';
import { m } from '$lib/paraglide/messages.js';
import { getErrorMessage } from '$lib/shared/errors';
import type { InviteListItem } from './+page.server.ts';
import UserGroupsSection from '$lib/user/UserGroupsSection.svelte';
import type { InviteListItem, UserGroup } from './+page.server.ts';
let {
data,
@@ -12,6 +13,8 @@ let {
invites: InviteListItem[];
status: string;
loadError: string | null;
groups: UserGroup[];
groupsLoadError: string | null;
};
form?: {
createError?: string;
@@ -253,6 +256,23 @@ function statusIcon(status: string) {
class="block w-full border border-line px-3 py-2 font-serif text-sm text-ink focus:outline-none focus-visible:ring-2 focus-visible:ring-focus-ring"
/>
</div>
<div class="sm:col-span-2">
<p class="mb-2 font-sans text-xs font-bold tracking-widest text-ink-3 uppercase">
{m.admin_new_invite_groups()}
</p>
{#if data.groupsLoadError}
<div
role="alert"
class="rounded-sm border border-amber-200 bg-amber-50 px-3 py-2 font-sans text-xs text-amber-700"
>
{m.admin_invite_groups_load_error()}
</div>
{:else if data.groups.length === 0}
<p class="font-sans text-xs text-ink-3 italic">{m.admin_new_invite_no_groups()}</p>
{:else}
<UserGroupsSection groups={data.groups} />
{/if}
</div>
{#if form?.createError}
<div class="font-sans text-xs font-medium text-red-600 sm:col-span-2">
{getErrorMessage(form.createError)}


@@ -0,0 +1,155 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
vi.mock('$env/dynamic/private', () => ({
env: { API_INTERNAL_URL: 'http://localhost:8080' }
}));
import { load, actions } from './+page.server';
import type { UserGroup } from './+page.server';
// PageServerLoad annotates the return as `void | (...)`. This explicit shape avoids
// the `void` member and the `Record<string, any>` widening from the generic constraint.
type LoadData = {
invites: unknown[];
status: string;
loadError: string | null;
groups: UserGroup[];
groupsLoadError: string | null;
};
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type AnyFetch = (...args: any[]) => any;
function mockResponse(ok: boolean, body: unknown, status = 200) {
return {
ok,
status,
json: async () => body,
text: async () => JSON.stringify(body),
headers: new Headers({ 'content-type': 'application/json' })
} as unknown as Response;
}
describe('admin/invites load()', () => {
const mockFetch = vi.fn<AnyFetch>();
beforeEach(() => mockFetch.mockReset());
function event(status = 'active') {
return {
url: new URL(`http://localhost/admin/invites?status=${status}`),
fetch: mockFetch as unknown as typeof fetch
// eslint-disable-next-line @typescript-eslint/no-explicit-any
} as any;
}
it('returns groups array alongside invites when both succeed', async () => {
mockFetch.mockResolvedValueOnce(mockResponse(true, [])).mockResolvedValueOnce(
mockResponse(true, [
{ id: 'g-1', name: 'Familie', permissions: ['READ_ALL'] },
{ id: 'g-2', name: 'Administratoren', permissions: ['ADMIN'] }
])
);
const result = (await load(event())) as LoadData;
expect(result.groups).toHaveLength(2);
expect(result.groupsLoadError).toBeNull();
});
it('returns groups sorted alphabetically by name', async () => {
mockFetch.mockResolvedValueOnce(mockResponse(true, [])).mockResolvedValueOnce(
mockResponse(true, [
{ id: 'g-1', name: 'Zebra', permissions: [] },
{ id: 'g-2', name: 'Alfa', permissions: [] },
{ id: 'g-3', name: 'Mitte', permissions: [] }
])
);
const result = (await load(event())) as LoadData;
expect(result.groups.map((g) => g.name)).toEqual(['Alfa', 'Mitte', 'Zebra']);
});
it('returns groups: [] and non-null groupsLoadError when groups fetch is non-OK', async () => {
mockFetch
.mockResolvedValueOnce(mockResponse(true, []))
.mockResolvedValueOnce(mockResponse(false, { code: 'FORBIDDEN' }, 403));
const result = (await load(event())) as LoadData;
expect(result.groups).toEqual([]);
expect(result.groupsLoadError).toBe('FORBIDDEN');
});
it('falls back to INTERNAL_ERROR when groups error body has no code', async () => {
mockFetch
.mockResolvedValueOnce(mockResponse(true, []))
.mockResolvedValueOnce(mockResponse(false, null, 500));
const result = (await load(event())) as LoadData;
expect(result.groupsLoadError).toBe('INTERNAL_ERROR');
});
it('fetches invites and groups in parallel (both URLs called)', async () => {
mockFetch
.mockResolvedValueOnce(mockResponse(true, []))
.mockResolvedValueOnce(mockResponse(true, []));
await load(event());
expect(mockFetch).toHaveBeenCalledTimes(2);
expect(mockFetch).toHaveBeenCalledWith(expect.stringContaining('/api/invites'));
expect(mockFetch).toHaveBeenCalledWith(expect.stringContaining('/api/groups'));
});
});
describe('admin/invites create action', () => {
const mockFetch = vi.fn<AnyFetch>();
beforeEach(() => mockFetch.mockReset());
const successBody = {
id: 'inv-1',
code: 'ABCDE12345',
displayCode: 'ABCDE-12345',
status: 'active',
revoked: false,
useCount: 0,
createdAt: '2026-01-01T00:00:00Z',
shareableUrl: 'http://localhost/register?code=ABCDE12345'
};
it('includes groupIds array in POST body when checkboxes are checked', async () => {
const fd = new FormData();
fd.append('groupIds', 'g-1');
fd.append('groupIds', 'g-2');
mockFetch.mockResolvedValueOnce(mockResponse(true, successBody, 201));
await actions.create({
request: new Request('http://localhost', { method: 'POST', body: fd }),
fetch: mockFetch as unknown as typeof fetch
// eslint-disable-next-line @typescript-eslint/no-explicit-any
} as any);
const [, init] = mockFetch.mock.calls[0] as [string, RequestInit];
const sent = JSON.parse(init.body as string);
expect(sent.groupIds).toEqual(['g-1', 'g-2']);
});
it('sends groupIds: [] when no checkboxes are checked', async () => {
const fd = new FormData();
mockFetch.mockResolvedValueOnce(mockResponse(true, successBody, 201));
await actions.create({
request: new Request('http://localhost', { method: 'POST', body: fd }),
fetch: mockFetch as unknown as typeof fetch
// eslint-disable-next-line @typescript-eslint/no-explicit-any
} as any);
const [, init] = mockFetch.mock.calls[0] as [string, RequestInit];
const sent = JSON.parse(init.body as string);
expect(sent.groupIds).toEqual([]);
});
});

@@ -7,12 +7,15 @@ afterEach(cleanup);
const makeInvite = (overrides: Record<string, unknown> = {}) => ({
id: 'i-1',
code: 'XYZ1234567',
displayCode: 'XYZ-1234',
label: 'Familie',
useCount: 0,
maxUses: 5,
expiresAt: '2027-01-01T00:00:00Z',
revoked: false,
status: 'active' as string,
createdAt: '2025-01-01T00:00:00Z',
shareableUrl: 'http://example.com/i/i-1',
...overrides
});
@@ -22,11 +25,15 @@ const baseData = (
invites: ReturnType<typeof makeInvite>[];
status: string;
loadError: string | null;
groups: { id: string; name: string; permissions: string[] }[];
groupsLoadError: string | null;
}> = {}
) => ({
invites: [],
status: 'active',
loadError: null,
groups: [],
groupsLoadError: null,
...overrides
});
@@ -253,4 +260,115 @@ describe('admin/invites page', () => {
const banner = document.querySelector('.bg-red-50');
expect(banner).not.toBeNull();
});
// ─── groups section ───────────────────────────────────────────────────────
it('shows a groups-load warning banner when data.groupsLoadError is set', async () => {
render(AdminInvitesPage, {
props: { data: { ...baseData(), groups: [], groupsLoadError: 'INTERNAL_ERROR' } }
});
await page
.getByRole('button', { name: /neue einladung/i })
.first()
.click();
const banner = document.querySelector('.bg-amber-50');
expect(banner).not.toBeNull();
});
it('renders group checkboxes inside the new-invite form when groups are provided', async () => {
render(AdminInvitesPage, {
props: {
data: {
...baseData(),
groups: [
{ id: 'g-1', name: 'Administratoren', permissions: ['ADMIN'] },
{ id: 'g-2', name: 'Familie', permissions: ['READ_ALL'] }
],
groupsLoadError: null
}
}
});
await page
.getByRole('button', { name: /neue einladung/i })
.first()
.click();
await expect.element(page.getByRole('checkbox', { name: 'Administratoren' })).toBeVisible();
await expect.element(page.getByRole('checkbox', { name: 'Familie' })).toBeVisible();
});
it('group checkbox stays checked after being clicked', async () => {
render(AdminInvitesPage, {
props: {
data: {
...baseData(),
groups: [{ id: 'g-1', name: 'Familie', permissions: ['READ_ALL'] }],
groupsLoadError: null
}
}
});
await page
.getByRole('button', { name: /neue einladung/i })
.first()
.click();
const checkbox = page.getByRole('checkbox', { name: 'Familie' });
await checkbox.click();
await expect.element(checkbox).toBeChecked();
});
it('amber warning banner has role="alert"', async () => {
render(AdminInvitesPage, {
props: { data: { ...baseData(), groups: [], groupsLoadError: 'INTERNAL_ERROR' } }
});
await page
.getByRole('button', { name: /neue einladung/i })
.first()
.click();
const alert = document.querySelector('[role="alert"]');
expect(alert).not.toBeNull();
});
it('checkbox group fieldset has accessible name from i18n key (not hardcoded German)', async () => {
render(AdminInvitesPage, {
props: {
data: {
...baseData(),
groups: [{ id: 'g-1', name: 'Familie', permissions: ['READ_ALL'] }],
groupsLoadError: null
}
}
});
await page
.getByRole('button', { name: /neue einladung/i })
.first()
.click();
// m.admin_new_invite_groups() returns "Gruppen (optional)" in de locale
// The hardcoded legend "Gruppen" would not match this accessible name
await expect.element(page.getByRole('group', { name: 'Gruppen (optional)' })).toBeVisible();
});
it('shows no checkboxes and no warning when groups list is empty and no error', async () => {
render(AdminInvitesPage, {
props: { data: { ...baseData(), groups: [], groupsLoadError: null } }
});
await page
.getByRole('button', { name: /neue einladung/i })
.first()
.click();
expect(document.querySelectorAll('input[name="groupIds"]')).toHaveLength(0);
expect(document.querySelector('.bg-amber-50')).toBeNull();
// empty-state message visible — "Keine Gruppen vorhanden." in de locale
await expect.element(page.getByText(/keine gruppen/i)).toBeVisible();
});
});

@@ -1,19 +1,14 @@
<script lang="ts">
import { onDestroy } from 'svelte';
import { m } from '$lib/paraglide/messages.js';
import ImportStatusCard from './ImportStatusCard.svelte';
import type { ImportStatus } from './types.js';
let backfillResult: number | null = $state(null);
let backfillLoading = $state(false);
let backfillHashesResult: number | null = $state(null);
let backfillHashesLoading = $state(false);
type ImportStatus = {
state: 'IDLE' | 'RUNNING' | 'DONE' | 'FAILED';
message: string;
processed: number;
startedAt: string | null;
};
type ThumbnailStatus = {
state: 'IDLE' | 'RUNNING' | 'DONE' | 'FAILED';
message: string;
@@ -177,47 +172,7 @@ async function backfillFileHashes() {
</div>
<!-- Mass import -->
<div class="rounded-sm border border-line bg-surface p-6 shadow-sm">
<h2 class="mb-1 font-sans text-sm font-bold text-ink">{m.admin_system_import_heading()}</h2>
<p class="mb-4 text-sm text-ink-2">{m.admin_system_import_description()}</p>
{#if importStatus?.state === 'RUNNING'}
<p class="text-sm text-ink-2">{m.admin_system_import_status_running()}</p>
{:else if importStatus?.state === 'DONE'}
<p class="mb-4 rounded-sm border border-green-200 bg-green-50 p-3 text-sm text-green-700">
{m.admin_system_import_status_done({ count: importStatus.processed })}
</p>
<button
data-import-trigger
onclick={triggerImport}
class="rounded-sm bg-primary px-5 py-2 font-sans text-xs font-bold tracking-widest text-primary-fg uppercase transition-opacity hover:opacity-80"
>
{m.admin_system_import_btn_retry()}
</button>
{:else if importStatus?.state === 'FAILED'}
<p class="mb-4 rounded-sm border border-red-200 bg-red-50 p-3 text-sm text-red-700">
{m.admin_system_import_status_failed({ message: importStatus.message })}
</p>
<button
data-import-trigger
onclick={triggerImport}
class="rounded-sm bg-primary px-5 py-2 font-sans text-xs font-bold tracking-widest text-primary-fg uppercase transition-opacity hover:opacity-80"
>
{m.admin_system_import_btn_retry()}
</button>
{:else}
{#if importStatus !== null}
<p class="mb-4 text-sm text-ink-2">{m.admin_system_import_status_idle()}</p>
{/if}
<button
data-import-trigger
onclick={triggerImport}
class="rounded-sm bg-primary px-5 py-2 font-sans text-xs font-bold tracking-widest text-primary-fg uppercase transition-opacity hover:opacity-80"
>
{m.admin_system_import_btn_start()}
</button>
{/if}
</div>
<ImportStatusCard importStatus={importStatus} ontrigger={triggerImport} />
<!-- Thumbnail backfill -->
<div class="rounded-sm border border-line bg-surface p-6 shadow-sm">

@@ -0,0 +1,81 @@
<script lang="ts">
import { m } from '$lib/paraglide/messages.js';
import type { ImportStatus } from './types.js';
let {
importStatus,
ontrigger
}: {
importStatus: ImportStatus | null;
ontrigger: () => void;
} = $props();
const failureMessage = $derived(
importStatus?.statusCode === 'IMPORT_FAILED_NO_SPREADSHEET'
? m.admin_system_import_failed_no_spreadsheet()
: m.admin_system_import_failed_internal()
);
</script>
<div class="rounded-sm border border-line bg-surface p-6 shadow-sm">
<h2 class="mb-5 font-sans text-xs font-bold tracking-widest text-ink-3 uppercase">
{m.admin_system_import_heading()}
</h2>
<p class="mb-4 text-sm text-ink-2">{m.admin_system_import_description()}</p>
{#if importStatus?.state === 'RUNNING'}
<div class="mb-4 flex items-center gap-3">
<span
data-testid="spinner"
role="status"
aria-label={m.admin_system_import_status_running()}
class="inline-block h-5 w-5 animate-spin rounded-full border-2 border-ink-3 border-t-brand-mint motion-reduce:animate-none"
></span>
<div>
<p data-testid="processed-count" class="text-base font-bold text-ink">
{importStatus.processed}
</p>
<p class="font-sans text-xs font-bold tracking-widest text-ink-3 uppercase">
{m.admin_system_import_status_running()}
</p>
</div>
</div>
{:else if importStatus?.state === 'DONE'}
<div class="mb-4 rounded-sm border border-green-200 bg-green-50 p-4 text-green-700">
<p data-testid="processed-count" class="text-base font-bold">{importStatus.processed}</p>
<p class="font-sans text-xs font-bold tracking-widest text-green-800 uppercase">
{m.admin_system_import_status_done_label()}
</p>
<p class="mt-1 text-xs text-green-800">{m.admin_system_import_status_done()}</p>
</div>
<button
data-import-trigger
onclick={ontrigger}
class="min-h-[44px] rounded-sm bg-primary px-5 py-2 font-sans text-xs font-bold tracking-widest text-primary-fg uppercase transition-opacity hover:opacity-80"
>
{m.admin_system_import_btn_retry()}
</button>
{:else if importStatus?.state === 'FAILED'}
<p class="mb-4 rounded-sm border border-red-200 bg-red-50 p-3 text-sm text-red-700">
{failureMessage}
</p>
<button
data-import-trigger
onclick={ontrigger}
class="min-h-[44px] rounded-sm bg-primary px-5 py-2 font-sans text-xs font-bold tracking-widest text-primary-fg uppercase transition-opacity hover:opacity-80"
>
{m.admin_system_import_btn_retry()}
</button>
{:else}
{#if importStatus !== null}
<p class="mb-4 text-sm text-ink-2">{m.admin_system_import_status_idle()}</p>
{/if}
<button
data-import-trigger
onclick={ontrigger}
class="min-h-[44px] rounded-sm bg-primary px-5 py-2 font-sans text-xs font-bold tracking-widest text-primary-fg uppercase transition-opacity hover:opacity-80"
>
{m.admin_system_import_btn_start()}
</button>
{/if}
</div>

@@ -0,0 +1,131 @@
import { describe, expect, it, vi } from 'vitest';
import { render } from 'vitest-browser-svelte';
import { m } from '$lib/paraglide/messages.js';
import ImportStatusCard from './ImportStatusCard.svelte';
import type { ImportStatus } from './types.js';
const makeStatus = (overrides: Partial<ImportStatus> = {}): ImportStatus => ({
state: 'IDLE',
statusCode: 'IMPORT_IDLE',
processed: 0,
startedAt: null,
...overrides
});
describe('ImportStatusCard', () => {
it('shows spinner while state is RUNNING', async () => {
const { getByTestId } = render(ImportStatusCard, {
props: {
importStatus: makeStatus({ state: 'RUNNING', statusCode: 'IMPORT_RUNNING', processed: 3 }),
ontrigger: () => {}
}
});
await expect.element(getByTestId('spinner')).toBeInTheDocument();
});
it('shows processed count at text-base while RUNNING', async () => {
const { getByTestId } = render(ImportStatusCard, {
props: {
importStatus: makeStatus({ state: 'RUNNING', statusCode: 'IMPORT_RUNNING', processed: 7 }),
ontrigger: () => {}
}
});
await expect.element(getByTestId('processed-count')).toHaveTextContent('7');
});
it('shows processed count while DONE', async () => {
const { getByText } = render(ImportStatusCard, {
props: {
importStatus: makeStatus({ state: 'DONE', statusCode: 'IMPORT_DONE', processed: 42 }),
ontrigger: () => {}
}
});
await expect.element(getByText('42')).toBeVisible();
});
it('shows no-spreadsheet message when statusCode is IMPORT_FAILED_NO_SPREADSHEET', async () => {
const { getByText } = render(ImportStatusCard, {
props: {
importStatus: makeStatus({
state: 'FAILED',
statusCode: 'IMPORT_FAILED_NO_SPREADSHEET'
}),
ontrigger: () => {}
}
});
await expect.element(getByText(m.admin_system_import_failed_no_spreadsheet())).toBeVisible();
});
it('shows internal error message when statusCode is IMPORT_FAILED_INTERNAL', async () => {
const { getByText } = render(ImportStatusCard, {
props: {
importStatus: makeStatus({ state: 'FAILED', statusCode: 'IMPORT_FAILED_INTERNAL' }),
ontrigger: () => {}
}
});
await expect.element(getByText(m.admin_system_import_failed_internal())).toBeVisible();
});
it('shows idle text when importStatus is non-null and state is IDLE', async () => {
const { getByText } = render(ImportStatusCard, {
props: {
importStatus: makeStatus({ state: 'IDLE', statusCode: 'IMPORT_IDLE' }),
ontrigger: () => {}
}
});
await expect.element(getByText(m.admin_system_import_status_idle())).toBeVisible();
});
it('shows no spinner when importStatus is null', async () => {
const { getByTestId } = render(ImportStatusCard, {
props: { importStatus: null, ontrigger: () => {} }
});
await expect.element(getByTestId('spinner')).not.toBeInTheDocument();
});
it('calls ontrigger when retry button is clicked in DONE state', async () => {
const ontrigger = vi.fn();
const { getByRole } = render(ImportStatusCard, {
props: {
importStatus: makeStatus({ state: 'DONE', statusCode: 'IMPORT_DONE', processed: 5 }),
ontrigger
}
});
await getByRole('button').click();
expect(ontrigger).toHaveBeenCalledOnce();
});
it('calls ontrigger when retry button is clicked in FAILED state', async () => {
const ontrigger = vi.fn();
const { getByRole } = render(ImportStatusCard, {
props: {
importStatus: makeStatus({ state: 'FAILED', statusCode: 'IMPORT_FAILED_INTERNAL' }),
ontrigger
}
});
await getByRole('button').click();
expect(ontrigger).toHaveBeenCalledOnce();
});
it('calls ontrigger when start button is clicked in IDLE state', async () => {
const ontrigger = vi.fn();
const { getByRole } = render(ImportStatusCard, {
props: {
importStatus: makeStatus({ state: 'IDLE', statusCode: 'IMPORT_IDLE' }),
ontrigger
}
});
await getByRole('button').click();
expect(ontrigger).toHaveBeenCalledOnce();
});
});

@@ -163,7 +163,7 @@ describe('Admin system page — mass import card', () => {
ok: true,
json: async () => ({
state: 'FAILED',
message: 'Datei nicht gefunden.',
statusCode: 'IMPORT_FAILED_NO_SPREADSHEET',
processed: 0,
startedAt: '2026-01-01T10:00:00'
})
@@ -182,7 +182,7 @@ describe('Admin system page — mass import card', () => {
})
);
render(Page, {});
await expect.element(page.getByText(/Datei nicht gefunden/i)).toBeInTheDocument();
await expect.element(page.getByText(/Keine Tabellendatei gefunden/i)).toBeInTheDocument();
await expect.element(page.getByRole('button', { name: /Erneut starten/i })).toBeInTheDocument();
});
});

@@ -246,7 +246,7 @@ describe('admin/system page', () => {
return new Response(
JSON.stringify({
state: 'FAILED',
message: 'database error',
statusCode: 'IMPORT_FAILED_INTERNAL',
processed: 0,
startedAt: null
}),
@@ -262,7 +262,7 @@ describe('admin/system page', () => {
render(AdminSystemPage, { props: {} });
await vi.waitFor(() => {
expect(document.body.textContent).toContain('database error');
expect(document.body.textContent).toContain('Interner Fehler beim Import');
});
});

@@ -0,0 +1,6 @@
export type ImportStatus = {
state: 'IDLE' | 'RUNNING' | 'DONE' | 'FAILED';
statusCode: string;
processed: number;
startedAt: string | null;
};

@@ -1,24 +1,15 @@
<script lang="ts">
import { enhance } from '$app/forms';
import { beforeNavigate, goto } from '$app/navigation';
import { m } from '$lib/paraglide/messages.js';
import UserProfileSection from '$lib/user/UserProfileSection.svelte';
import UserGroupsSection from '$lib/user/UserGroupsSection.svelte';
import AccountSection from './AccountSection.svelte';
import { createUnsavedWarning } from '$lib/shared/hooks/useUnsavedWarning.svelte';
import UnsavedWarningBanner from '$lib/shared/primitives/UnsavedWarningBanner.svelte';
let { data, form } = $props();
let isDirty = $state(false);
let showUnsavedWarning = $state(false);
let discardTarget: string | null = $state(null);
beforeNavigate(({ cancel, to }) => {
if (isDirty) {
cancel();
showUnsavedWarning = true;
discardTarget = to?.url.href ?? null;
}
});
const unsaved = createUnsavedWarning();
</script>
<div class="flex flex-1 flex-col overflow-hidden">
@@ -44,23 +35,8 @@ beforeNavigate(({ cancel, to }) => {
<!-- Scrollable body -->
<div class="flex-1 overflow-y-auto px-5 py-5">
{#if showUnsavedWarning}
<div
class="mb-5 flex items-center justify-between rounded border border-amber-200 bg-amber-50 p-3 text-sm text-amber-800 dark:border-amber-800 dark:bg-amber-950/40 dark:text-amber-300"
>
<span>{m.admin_unsaved_warning()}</span>
<button
type="button"
onclick={() => {
isDirty = false;
showUnsavedWarning = false;
if (discardTarget) goto(discardTarget);
}}
class="ml-4 shrink-0 font-sans text-xs font-bold tracking-widest text-amber-800 uppercase hover:text-amber-900 dark:text-amber-300"
>
{m.person_discard_changes()}
</button>
</div>
{#if unsaved.showUnsavedWarning}
<UnsavedWarningBanner onDiscard={unsaved.discard} />
{/if}
{#if form?.error}
<div class="mb-5 rounded border border-red-200 bg-red-50 p-3 text-sm text-red-700">
@@ -71,11 +47,11 @@ beforeNavigate(({ cancel, to }) => {
<form
id="new-user-form"
method="POST"
use:enhance
oninput={() => {
isDirty = true;
showUnsavedWarning = false;
use:enhance={() => async ({ result, update }) => {
if (result.type === 'redirect') unsaved.clearOnSuccess();
await update();
}}
oninput={unsaved.markDirty}
class="space-y-5"
>
<div class="rounded-sm border border-line bg-surface p-5 shadow-sm">

@@ -1,9 +1,19 @@
import { afterEach, describe, expect, it, vi } from 'vitest';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import { cleanup, render } from 'vitest-browser-svelte';
import { page } from 'vitest/browser';
import Page from './+page.svelte';
vi.mock('$app/forms', () => ({ enhance: () => () => {} }));
const enhanceCaptureRef = vi.hoisted(() => ({ submitFn: undefined as unknown }));
vi.mock('$app/forms', () => ({
enhance: (_el: HTMLFormElement, fn?: unknown) => {
enhanceCaptureRef.submitFn = fn;
return { destroy: vi.fn() };
}
}));
vi.mock('$app/navigation', () => ({ beforeNavigate: vi.fn(), goto: vi.fn() }));
import { beforeNavigate, goto } from '$app/navigation';
const groups = [
{ id: 'g1', name: 'Editoren', permissions: ['WRITE_ALL'] },
@@ -20,6 +30,13 @@ const baseData = {
afterEach(cleanup);
type SubmitFn = () => Promise<
(opts: {
result: { type: string; [key: string]: unknown };
update: () => Promise<void>;
}) => Promise<void>
>;
// ─── Rendering ────────────────────────────────────────────────────────────────
describe('Admin new user page rendering', () => {
@@ -66,3 +83,103 @@ describe('Admin new user page error display', () => {
await expect.element(page.getByText('Ein Fehler ist aufgetreten.')).not.toBeInTheDocument();
});
});
// ─── Unsaved-changes guard ────────────────────────────────────────────────────
describe('Admin new user page unsaved-changes guard', () => {
beforeEach(() => {
vi.clearAllMocks();
enhanceCaptureRef.submitFn = undefined;
});
it('does not show unsaved warning initially', async () => {
render(Page, { data: baseData, form: null });
await expect.element(page.getByText(/ungespeicherte Änderungen/i)).not.toBeInTheDocument();
});
it('cancels navigation and shows banner when form is dirty', async () => {
render(Page, { data: baseData, form: null });
const [callback] = vi.mocked(beforeNavigate).mock.calls[0];
document
.querySelector<HTMLInputElement>('input[name="email"]')!
.dispatchEvent(new InputEvent('input', { bubbles: true }));
const cancel = vi.fn();
callback({ cancel, to: { url: new URL('http://localhost/admin/users') } });
expect(cancel).toHaveBeenCalled();
await expect.element(page.getByText(/ungespeicherte Änderungen/i)).toBeInTheDocument();
});
it('does not cancel navigation when form is clean', async () => {
render(Page, { data: baseData, form: null });
const [callback] = vi.mocked(beforeNavigate).mock.calls[0];
const cancel = vi.fn();
callback({ cancel, to: { url: new URL('http://localhost/admin/users') } });
expect(cancel).not.toHaveBeenCalled();
});
it('discard button calls goto with the target URL', async () => {
render(Page, { data: baseData, form: null });
const [callback] = vi.mocked(beforeNavigate).mock.calls[0];
document
.querySelector<HTMLInputElement>('input[name="email"]')!
.dispatchEvent(new InputEvent('input', { bubbles: true }));
callback({ cancel: vi.fn(), to: { url: new URL('http://localhost/admin/users') } });
await page.getByRole('button', { name: /verwerfen/i }).click();
expect(vi.mocked(goto)).toHaveBeenCalledWith('http://localhost/admin/users');
});
it('clears banner when enhance callback receives a redirect result', async () => {
render(Page, { data: baseData, form: null });
const [navCallback] = vi.mocked(beforeNavigate).mock.calls[0];
document
.querySelector<HTMLInputElement>('input[name="email"]')!
.dispatchEvent(new InputEvent('input', { bubbles: true }));
navCallback({ cancel: vi.fn(), to: { url: new URL('http://localhost/admin/users') } });
await expect.element(page.getByText(/ungespeicherte Änderungen/i)).toBeInTheDocument();
const innerFn = await (enhanceCaptureRef.submitFn as SubmitFn)();
await innerFn({
result: { type: 'redirect', location: '/admin/users', status: 303 },
update: vi.fn().mockResolvedValue(undefined)
});
await expect.element(page.getByText(/ungespeicherte Änderungen/i)).not.toBeInTheDocument();
const cancel = vi.fn();
navCallback({ cancel, to: { url: new URL('http://localhost/admin/users') } });
expect(cancel).not.toHaveBeenCalled();
});
it('keeps banner when enhance callback receives a failure result', async () => {
render(Page, { data: baseData, form: null });
const [navCallback] = vi.mocked(beforeNavigate).mock.calls[0];
document
.querySelector<HTMLInputElement>('input[name="email"]')!
.dispatchEvent(new InputEvent('input', { bubbles: true }));
navCallback({ cancel: vi.fn(), to: { url: new URL('http://localhost/admin/users') } });
await expect.element(page.getByText(/ungespeicherte Änderungen/i)).toBeInTheDocument();
const innerFn = await (enhanceCaptureRef.submitFn as SubmitFn)();
await innerFn({
result: { type: 'failure', status: 400, data: { error: 'E-Mail bereits vergeben' } },
update: vi.fn().mockResolvedValue(undefined)
});
const cancel = vi.fn();
navCallback({ cancel, to: { url: new URL('http://localhost/admin/users') } });
expect(cancel).toHaveBeenCalled();
});
});

@@ -24,7 +24,6 @@ export const GET: RequestHandler = async ({ url, fetch }) => {
}
const data = await response.json();
console.log('Tags Data', data);
// 4. Send the data back to the browser
return json(data);

@@ -4,7 +4,10 @@ import { page as browserPage } from 'vitest/browser';
const mockPage = {
status: 500,
error: { message: 'Internal Error' } as { message: string } | null
error: { message: 'Internal Error', errorId: undefined } as {
message: string;
errorId?: string;
} | null
};
vi.mock('$app/state', () => ({
@@ -13,6 +16,16 @@ vi.mock('$app/state', () => ({
}
}));
vi.mock('$lib/paraglide/messages.js', () => ({
m: {
page_title_error: () => 'Es ist etwas schiefgelaufen.',
error_internal_error: () => 'Ein unerwarteter Fehler ist aufgetreten.',
error_page_id_label: () => 'Fehler-ID',
error_copy_id_label: () => 'ID kopieren',
error_copied: () => 'Kopiert!'
}
}));
afterEach(cleanup);
async function loadComponent() {
@@ -20,7 +33,7 @@ async function loadComponent() {
}
describe('+error.svelte', () => {
it('renders the page status code prominently', async () => {
it('renders the page status code', async () => {
mockPage.status = 404;
mockPage.error = { message: 'Not Found' };
@@ -40,13 +53,79 @@ describe('+error.svelte', () => {
await expect.element(browserPage.getByText('Database unavailable')).toBeVisible();
});
it('falls back to the literal "Internal Error" when page.error is null', async () => {
it('falls back to error_internal_error message when page.error is null', async () => {
mockPage.status = 500;
mockPage.error = null;
const ErrorPage = await loadComponent();
render(ErrorPage);
await expect.element(browserPage.getByText('Internal Error')).toBeVisible();
await expect
.element(browserPage.getByText('Ein unerwarteter Fehler ist aufgetreten.'))
.toBeVisible();
});
it('shows errorId when page.error.errorId is set', async () => {
mockPage.status = 500;
mockPage.error = { message: 'Something broke', errorId: 'abc-123-def' };
const ErrorPage = await loadComponent();
render(ErrorPage);
await expect.element(browserPage.getByText('abc-123-def')).toBeVisible();
});
it('shows copy button when errorId is present', async () => {
mockPage.status = 500;
mockPage.error = { message: 'Something broke', errorId: 'abc-123-def' };
const ErrorPage = await loadComponent();
render(ErrorPage);
await expect.element(browserPage.getByRole('button', { name: 'ID kopieren' })).toBeVisible();
});
it('does not render errorId section when errorId is absent', async () => {
mockPage.status = 500;
mockPage.error = { message: 'Something broke' };
const ErrorPage = await loadComponent();
render(ErrorPage);
await expect.element(browserPage.getByText('Fehler-ID')).not.toBeInTheDocument();
});
it('shows "Kopiert!" after clicking the copy button', async () => {
mockPage.status = 500;
mockPage.error = { message: 'Something broke', errorId: 'abc-123-def' };
Object.defineProperty(navigator, 'clipboard', {
value: { writeText: vi.fn().mockResolvedValue(undefined) },
configurable: true,
writable: true
});
const ErrorPage = await loadComponent();
render(ErrorPage);
await browserPage.getByRole('button', { name: 'ID kopieren' }).click();
await expect.element(browserPage.getByText('Kopiert!')).toBeVisible();
});
it('does not show "Kopiert!" when clipboard write is rejected', async () => {
mockPage.status = 500;
mockPage.error = { message: 'Something broke', errorId: 'abc-123-def' };
Object.defineProperty(navigator, 'clipboard', {
value: { writeText: vi.fn().mockRejectedValue(new Error('denied')) },
configurable: true,
writable: true
});
const ErrorPage = await loadComponent();
render(ErrorPage);
await browserPage.getByRole('button', { name: 'ID kopieren' }).click();
await expect.element(browserPage.getByText('Kopiert!')).not.toBeInTheDocument();
});
});

@@ -1,6 +1,6 @@
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import { cleanup, render } from 'vitest-browser-svelte';
import { page, userEvent } from 'vitest/browser';
import { page } from 'vitest/browser';
import { createRawSnippet } from 'svelte';
vi.mock('$env/static/public', () => ({ PUBLIC_NOTIFICATION_POLL_MS: '60000' }));
@@ -96,13 +96,13 @@ describe('Layout user dropdown', () => {
it('opens dropdown on button click', async () => {
render(Layout, { data: makeData(), children: emptySnippet });
await page.getByRole('button', { name: /MM/ }).click();
((await page.getByRole('button', { name: /MM/ }).element()) as HTMLElement).click();
await expect.element(page.getByRole('link', { name: /Profil/i })).toBeInTheDocument();
});
it('profile link points to /profile', async () => {
render(Layout, { data: makeData(), children: emptySnippet });
await page.getByRole('button', { name: /MM/ }).click();
((await page.getByRole('button', { name: /MM/ }).element()) as HTMLElement).click();
await expect
.element(page.getByRole('link', { name: /Profil/i }))
.toHaveAttribute('href', '/profile');
@@ -110,16 +110,16 @@ describe('Layout user dropdown', () => {
it('logout button is in the dropdown', async () => {
render(Layout, { data: makeData(), children: emptySnippet });
await page.getByRole('button', { name: /MM/ }).click();
((await page.getByRole('button', { name: /MM/ }).element()) as HTMLElement).click();
await expect.element(page.getByRole('button', { name: /Abmelden/i })).toBeInTheDocument();
});
it('closes dropdown when Escape is pressed', async () => {
render(Layout, { data: makeData(), children: emptySnippet });
const btn = page.getByRole('button', { name: /MM/ });
await btn.click();
const btnEl = (await page.getByRole('button', { name: /MM/ }).element()) as HTMLElement;
btnEl.click();
await expect.element(page.getByRole('link', { name: /Profil/i })).toBeInTheDocument();
await userEvent.keyboard('{Escape}');
btnEl.dispatchEvent(new KeyboardEvent('keydown', { key: 'Escape', bubbles: true }));
await tick();
await expect.element(page.getByRole('link', { name: /Profil/i })).not.toBeInTheDocument();
});

@@ -1,3 +1,4 @@
import { sentrySvelteKit } from '@sentry/sveltekit';
import { paraglideVitePlugin } from '@inlang/paraglide-js';
import devtoolsJson from 'vite-plugin-devtools-json';
import tailwindcss from '@tailwindcss/vite';
@@ -33,6 +34,21 @@ export default defineConfig({
}
},
plugins: [
sentrySvelteKit({
org: 'familienarchiv',
project: 'frontend',
authToken: process.env.SENTRY_AUTH_TOKEN,
sentryUrl: (() => {
const dsn = process.env.VITE_SENTRY_DSN;
if (!dsn) return undefined;
try {
return new URL(dsn).origin;
} catch {
return undefined;
}
})(),
autoUploadSourceMaps: !!process.env.SENTRY_AUTH_TOKEN
}),
tailwindcss(),
sveltekit(),
devtoolsJson(),
@@ -55,7 +71,8 @@ export default defineConfig({
'src/lib/shared/utils/**',
'src/lib/shared/server/**',
'src/lib/shared/discussion/**',
'src/lib/document/**'
'src/lib/document/**',
'src/hooks.server.ts'
],
exclude: ['**/*.svelte', '**/*.svelte.ts', '**/__mocks__/**'],
thresholds: {

@@ -24,6 +24,8 @@ export default defineConfig({
})
],
test: {
testTimeout: 30_000,
hookTimeout: 15_000,
expect: { requireAssertions: true },
browser: {
enabled: true,

@@ -88,3 +88,13 @@ git.raddatz.cloud {
import security_headers
reverse_proxy 127.0.0.1:3005
}
grafana.archiv.raddatz.cloud {
import security_headers
reverse_proxy 127.0.0.1:3003
}
glitchtip.archiv.raddatz.cloud {
import security_headers
reverse_proxy 127.0.0.1:3002
}

@@ -0,0 +1,10 @@
apiVersion: 1
providers:
- name: default
type: file
disableDeletion: true
updateIntervalSeconds: 30
options:
path: /etc/grafana/provisioning/dashboards
foldersFromFilesStructure: false

@@ -0,0 +1,284 @@
{
"__inputs": [
{
"name": "DS_LOKI",
"label": "Loki",
"description": "",
"type": "datasource",
"pluginId": "loki",
"pluginName": "Loki"
}
],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "7.1.0"
},
{
"type": "panel",
"id": "graph",
"name": "Graph",
"version": ""
},
{
"type": "panel",
"id": "logs",
"name": "Logs",
"version": ""
},
{
"type": "datasource",
"id": "loki",
"name": "Loki",
"version": "1.0.0"
}
],
"annotations": {
"list": [
{
"$$hashKey": "object:75",
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"description": "Log Viewer Dashboard for Loki",
"editable": false,
"gnetId": 13639,
"graphTooltip": 0,
"id": null,
"iteration": 1608932746420,
"links": [
{
"$$hashKey": "object:59",
"icon": "bolt",
"includeVars": true,
"keepTime": true,
"tags": [],
"targetBlank": true,
"title": "View In Explore",
"type": "link",
"url": "/explore?orgId=1&left=[\"now-1h\",\"now\",\"Loki\",{\"expr\":\"{job=\\\"$app\\\"}\"},{\"ui\":[true,true,true,\"none\"]}]"
},
{
"$$hashKey": "object:61",
"icon": "external link",
"tags": [],
"targetBlank": true,
"title": "Learn LogQL",
"type": "link",
"url": "https://grafana.com/docs/loki/latest/logql/"
}
],
"panels": [
{
"aliasColors": {},
"bars": true,
"dashLength": 10,
"dashes": false,
"datasource": {"type": "loki", "uid": "loki"},
"fieldConfig": {
"defaults": {
"custom": {},
"links": []
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 3,
"w": 24,
"x": 0,
"y": 0
},
"hiddenSeries": false,
"id": 6,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": false,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.0",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(count_over_time({job=\"$app\"} |= \"$search\" [$__interval]))",
"legendFormat": "",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:168",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": false
},
{
"$$hashKey": "object:169",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": false
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"datasource": {"type": "loki", "uid": "loki"},
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"gridPos": {
"h": 25,
"w": 24,
"x": 0,
"y": 3
},
"id": 2,
"maxDataPoints": "",
"options": {
"showLabels": false,
"showTime": true,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"targets": [
{
"expr": "{job=\"$app\"} |= \"$search\" | logfmt",
"hide": false,
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "",
"transparent": true,
"type": "logs"
}
],
"refresh": false,
"schemaVersion": 26,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"allValue": null,
"current": {},
"datasource": {"type": "loki", "uid": "loki"},
"definition": "label_values(job)",
"hide": 0,
"includeAll": false,
"label": "App",
"multi": false,
"name": "app",
"options": [],
"query": "label_values(job)",
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"tagValuesQuery": "",
"tags": [],
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"current": {
"selected": false,
"text": "",
"value": ""
},
"hide": 0,
"label": "String Match",
"name": "search",
"options": [
{
"selected": true,
"text": "",
"value": ""
}
],
"query": "",
"skipUrlSync": false,
"type": "textbox"
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {
"hidden": false,
"refresh_intervals": [
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
]
},
"timezone": "",
"title": "Logs / App",
"uid": "sadlil-loki-apps-dashboard",
"version": 13
}
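The graph panel's LogQL expression can be tried outside Grafana against Loki's HTTP range-query API. A minimal sketch of building that request URL — the substituted `$app`/`$search` values and the timestamps are invented for illustration:

```python
from urllib.parse import urlencode

# LogQL expression from the graph panel, with template variables substituted.
# "backend" and "error" are example values for $app and $search.
expr = 'sum(count_over_time({job="backend"} |= "error" [1m]))'

params = urlencode({
    "query": expr,
    "start": "1700000000000000000",  # Loki expects nanosecond Unix timestamps
    "end":   "1700003600000000000",
    "step":  "60",
})
# Loki's range-query endpoint (port 3100, as configured further down)
url = f"http://obs-loki:3100/loki/api/v1/query_range?{params}"
print(url)
```

The same expression pasted into Grafana Explore is what the dashboard's "View In Explore" link produces.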


@@ -0,0 +1,38 @@
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
uid: prometheus
url: http://obs-prometheus:9090
isDefault: true
editable: false
- name: Loki
type: loki
uid: loki
url: http://obs-loki:3100
editable: false
jsonData:
derivedFields:
- name: TraceID
matcherRegex: '"traceId":"(\w+)"'
url: "${__value.raw}"
datasourceUid: tempo
- name: Tempo
type: tempo
uid: tempo
url: http://obs-tempo:3200
editable: false
jsonData:
tracesToLogsV2:
datasourceUid: loki
spanStartTimeShift: "-1m"
spanEndTimeShift: "1m"
filterByTraceID: true
filterBySpanID: false
serviceMap:
datasourceUid: prometheus
nodeGraph:
enabled: true
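The `TraceID` derived field only produces a Tempo link when `matcherRegex` actually matches a log line. A quick sketch checking the regex against a hypothetical structured log line (the line content is invented):

```python
import re

# matcherRegex from the Loki datasource config above
matcher = re.compile(r'"traceId":"(\w+)"')

# hypothetical JSON log line as the backend might emit it
line = '{"level":"INFO","traceId":"4bf92f3577b34da6a3ce929d0e0e4736","msg":"ok"}'

m = matcher.search(line)
trace_id = m.group(1) if m else None
print(trace_id)  # the captured group becomes ${__value.raw} in the Tempo link
```

If the application logs `trace_id` or a non-JSON format instead, the regex needs adjusting or no link will appear.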


@@ -0,0 +1,40 @@
auth_enabled: false # safe — loki is not exposed beyond obs-net. Add auth before binding port 3100 to host.
server:
http_listen_port: 3100
common:
instance_addr: 127.0.0.1
path_prefix: /loki
storage:
filesystem:
chunks_directory: /loki/chunks
rules_directory: /loki/rules
replication_factor: 1
ring:
kvstore:
store: inmemory # correct for single-node — no etcd/consul needed here
schema_config:
configs:
- from: 2024-01-01
store: tsdb
object_store: filesystem
schema: v13
index:
prefix: index_
period: 24h
limits_config:
retention_period: 720h # 30 days — low-volume family archive; revisit if log volume grows
compactor:
working_directory: /loki/compactor
compaction_interval: 10m
retention_enabled: true
retention_delete_delay: 2h
retention_delete_worker_count: 150
delete_request_store: filesystem
analytics:
reporting_enabled: false # no telemetry sent to Grafana Labs
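Promtail is the only intended client, but since the push API is unauthenticated inside obs-net, any container there could write logs. A hedged sketch of the push payload shape (labels and the log line are invented):

```python
import json
import time

# One stream per unique label set; values are [ns-timestamp, line] pairs.
payload = {
    "streams": [
        {
            "stream": {"job": "backend", "compose_project": "archiv-production"},
            "values": [
                [str(time.time_ns()), "example log line"],
            ],
        }
    ]
}
body = json.dumps(payload)
# POST this body to http://loki:3100/loki/api/v1/push
# with Content-Type: application/json
```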


@@ -0,0 +1,24 @@
# Non-secret observability stack configuration — tracked in git.
# Secret values (passwords, keys) are injected by CI from Gitea secrets
# into /opt/familienarchiv/obs-secrets.env at deploy time.
#
# For local dev the main .env file supplies these values instead;
# this file is only used in the CI/production path.
# Host ports (all bound to 127.0.0.1 — Caddy is the external entry point)
PORT_GRAFANA=3003
PORT_GLITCHTIP=3002
PORT_PROMETHEUS=9090
# Public URLs — used for internal redirects, alert email links, OAuth callbacks
GF_SERVER_ROOT_URL=https://grafana.archiv.raddatz.cloud
GLITCHTIP_DOMAIN=https://glitchtip.archiv.raddatz.cloud
POSTGRES_USER=archiv
# PostgreSQL hostname for GlitchTip db-init and workers.
# The actual value depends on the Compose project name — it is not a fixed string.
# CI sets POSTGRES_HOST in obs-secrets.env per environment:
# staging: archiv-staging-db-1 (project archiv-staging + service db)
# production: archiv-production-db-1 (project archiv-production + service db)
# For local dev, set POSTGRES_HOST in your .env file (defaults to archive-db there).
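The naming scheme behind those CI values is Docker Compose's default container name, `<project>-<service>-<index>`. A small sketch deriving the two POSTGRES_HOST values listed above:

```python
def compose_container_name(project: str, service: str, index: int = 1) -> str:
    # Default Docker Compose v2 container naming: <project>-<service>-<index>
    return f"{project}-{service}-{index}"

staging = compose_container_name("archiv-staging", "db")
production = compose_container_name("archiv-production", "db")
print(staging, production)
```

Anything that sets an explicit `container_name` (as local dev does with `archive-db`) bypasses this scheme, which is exactly why the value cannot be hard-coded here.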


@@ -0,0 +1,26 @@
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: node
static_configs:
- targets: ['node-exporter:9100']
- job_name: cadvisor
static_configs:
- targets: ['cadvisor:8080']
- job_name: spring-boot
metrics_path: /actuator/prometheus
static_configs:
# Uses the Docker service name (not container_name) for reliable DNS resolution.
- targets: ['backend:8081']
- job_name: ocr-service
metrics_path: /metrics
static_configs:
# TODO: remove or add prometheus-client to ocr-service.
# The Python OCR service does not currently expose Prometheus metrics.
# This target will show as DOWN until prometheus-client is added to ocr-service.
- targets: ['ocr:8000']
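Each scrape target resolves to `http://<target><metrics_path>`, with `/metrics` as the Prometheus default when `metrics_path` is omitted. A sketch assembling the URLs this config will scrape:

```python
# Targets and metrics paths taken from the scrape_configs above.
jobs = {
    "node":        (["node-exporter:9100"], "/metrics"),  # default metrics_path
    "cadvisor":    (["cadvisor:8080"], "/metrics"),       # default metrics_path
    "spring-boot": (["backend:8081"], "/actuator/prometheus"),
    "ocr-service": (["ocr:8000"], "/metrics"),            # DOWN until prometheus-client is added
}

urls = [
    f"http://{target}{path}"
    for targets, path in jobs.values()
    for target in targets
]
print(urls)
```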


@@ -0,0 +1,32 @@
server:
http_listen_port: 9080
  grpc_listen_port: 0 # disables the gRPC server; Promtail has no need for it in this single-node deployment
positions:
filename: /tmp/positions.yaml # /tmp is a named volume (promtail_positions) — persists across restarts
clients:
- url: http://loki:3100/loki/api/v1/push
# Loki HTTP API is unauthenticated internally. Any container on obs-net can push logs.
# Acceptable: only trusted application containers join this network.
scrape_configs:
- job_name: docker-containers
docker_sd_configs:
- host: unix:///var/run/docker.sock
refresh_interval: 5s
relabel_configs:
- source_labels: ['__meta_docker_container_name']
regex: '/(.*)'
target_label: 'container_name'
# Note: container_name differs between dev (archive-backend) and prod
# (archiv-production-backend-1). Prefer compose_service for stable LogQL
# queries across environments — it is stable: backend, db, minio, etc.
- source_labels: ['__meta_docker_container_label_com_docker_compose_service']
target_label: 'compose_service'
- source_labels: ['__meta_docker_container_label_com_docker_compose_project']
target_label: 'compose_project'
- source_labels: ['__meta_docker_container_log_stream']
target_label: 'logstream'
- source_labels: ['__meta_docker_container_label_com_docker_compose_service']
target_label: 'job'
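Docker's API reports container names with a leading slash, which is what the first relabel rule strips. Prometheus-style relabel regexes are fully anchored, so `fullmatch` models the rule; the container name is an example from the note above:

```python
import re

# relabel regex from the config: capture everything after the leading slash
relabel = re.compile(r'/(.*)')

# __meta_docker_container_name as reported by the Docker API
meta_name = "/archive-backend"

m = relabel.fullmatch(meta_name)  # relabel regexes are anchored at both ends
container_name = m.group(1) if m else meta_name
print(container_name)
```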


@@ -0,0 +1,48 @@
server:
http_listen_port: 3200
distributor:
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
ingester:
max_block_duration: 5m
compactor:
compaction:
# 30 days — matches Loki retention. Compactor enforces this automatically;
# no manual intervention needed under normal trace volumes.
block_retention: 720h
storage:
trace:
# Local filesystem storage — single-VPS deployment, no S3 backend needed.
# Both paths are on the same named Docker volume (tempo_data) so they
# survive container restarts without split-brain between WAL and blocks.
backend: local
local:
path: /var/tempo/blocks
wal:
path: /var/tempo/wal
metrics_generator:
registry:
external_labels:
source: tempo
storage:
path: /var/tempo/generator/wal
# Tempo HTTP API (port 3200) is unauthenticated. Access is controlled entirely
# by network isolation: only Grafana (on obs-net) should reach this port.
# The OTLP receivers (4317 gRPC, 4318 HTTP) are internal to archiv-net only.
overrides:
defaults:
metrics_generator:
processors:
- service-graphs
- span-metrics
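Applications on archiv-net export spans to the OTLP receivers configured above. A hedged sketch of the endpoint addresses an OTLP exporter would be pointed at; `tempo` as the reachable hostname is an assumption about the Compose service name:

```python
# The /v1/traces path is fixed by the OTLP/HTTP spec; only host:port
# comes from the receiver config above.
TEMPO_HOST = "tempo"  # assumed Compose service name

otlp_http_traces = f"http://{TEMPO_HOST}:4318/v1/traces"
otlp_grpc_target = f"{TEMPO_HOST}:4317"  # gRPC exporters take host:port, no path

print(otlp_http_traces)
```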


@@ -1,16 +1,26 @@
# runner-config.yaml — only the relevant section
container:
# passed as DOCKER_HOST inside the job container
# join the same network Gitea is on, so job containers can resolve 'gitea'
# for actions/checkout and other internal API calls.
network: gitea_gitea
# passed as DOCKER_HOST inside the job container; act_runner auto-mounts
# this socket path into the job, so no explicit -v option is needed.
docker_host: "unix:///var/run/docker.sock"
# whitelists the socket path so workflows can mount it
# Job workspaces are stored here and mounted at the same absolute path
# inside job containers. Identical host <-> container path is the requirement:
# Compose resolves relative bind mounts to $(pwd) inside the job container
# and passes that absolute path to the host daemon, which must find the file
# at that exact host path. Prerequisite: /srv/gitea-workspace exists on the
# host and is bind-mounted in the runner container (see compose.yaml).
workdir_parent: /srv/gitea-workspace
# whitelists volumes that workflow steps may bind-mount
valid_volumes:
- "/var/run/docker.sock"
# appended to `docker run` when the runner spawns a job container
# SECURITY: Mounting the Docker socket grants job containers root-equivalent
# access to the host Docker daemon. Acceptable here because only trusted code
# from this private repo runs on this runner. Do NOT use on a runner that
# accepts untrusted PRs from external contributors.
options: "-v /var/run/docker.sock:/var/run/docker.sock"
# keep network mode default (bridge) — Testcontainers handles its own networking
- "/srv/gitea-workspace"
- "/opt/familienarchiv"
# mount the workspace and the permanent obs/config directory into job containers.
# /opt/familienarchiv is the stable path CI copies configs to (ADR-016); it must
# be mounted here so deploy steps can write through to the host filesystem.
options: "-v /srv/gitea-workspace:/srv/gitea-workspace -v /opt/familienarchiv:/opt/familienarchiv"
# keep behavior default — Testcontainers handles its own networking
force_pull: false
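The identical host/container path requirement described above can be illustrated: Compose resolves a relative bind mount against the working directory inside the job container, then hands that absolute path to the host daemon, which must find the file at the same location. A sketch with hypothetical repo and run names:

```python
import posixpath

# Workspace path, identical inside the job container and on the host
# because workdir_parent is bind-mounted at the same absolute path.
workdir = "/srv/gitea-workspace/repo/run-42"

# A relative bind mount as it might appear in a compose file: "./configs:/etc/app"
relative_source = "./configs"

# Compose resolves the relative part against $(pwd) in the job container...
resolved = posixpath.normpath(posixpath.join(workdir, relative_source))

# ...and the host Docker daemon must find this exact path on the host.
print(resolved)
```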