Compare commits — 2 Commits (`feat/issue` ... `feat/issue`)

| Author | SHA1 | Date |
|---|---|---|
| | f02c59dd98 | |
| | a5d20f264e | |
@@ -410,23 +410,6 @@ Never Kafka for teams under 10 or <100k events/day. Never gRPC inside a monolith

4. Identify missing database-layer enforcement (constraints, RLS)
5. Check transport choices — simpler protocol available?
6. Propose a concrete simpler alternative, not just a critique
7. Verify documentation currency. For each category below, check whether the PR triggered the update. Flag missing updates as blockers.

| PR contains | Required doc update |
|---|---|
| New Flyway migration adding/removing/renaming a table or column | `docs/architecture/db/db-orm.puml` and `docs/architecture/db/db-relationships.puml` |
| New `@ManyToMany` join table or FK | Both DB diagrams |
| New backend package or domain module | `CLAUDE.md` package table + matching `docs/architecture/c4/l3-backend-*.puml` |
| New controller or service in an existing backend domain | Matching `docs/architecture/c4/l3-backend-*.puml` |
| New SvelteKit route | `CLAUDE.md` route table + matching `docs/architecture/c4/l3-frontend-*.puml` |
| New Docker service or infrastructure component | `docs/architecture/c4/l2-containers.puml` + `docs/DEPLOYMENT.md` |
| New external system integrated | `docs/architecture/c4/l1-context.puml` |
| Auth or upload flow change | `docs/architecture/c4/seq-auth-flow.puml` or `docs/architecture/c4/seq-document-upload.puml` |
| New `ErrorCode` or `Permission` value | `CLAUDE.md` + `docs/ARCHITECTURE.md` |
| New domain concept or term | `docs/GLOSSARY.md` |
| Architectural decision with lasting consequences | New ADR in `docs/adr/` |

A doc omission is a blocker, not a concern — the PR does not merge until the diagram or text matches the code.
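This check can be partially mechanized against the PR's changed-file list. A minimal sketch of the first table row only, assuming the list comes from something like `git diff --name-only origin/main...HEAD`; the migration file name below is hypothetical:

```python
# Hedged sketch of a doc-currency gate: if a Flyway migration changed,
# both DB diagrams must appear in the same diff. Paths come from the
# table above; the example changed-file list is hypothetical.
import re

DB_DIAGRAMS = [
    "docs/architecture/db/db-orm.puml",
    "docs/architecture/db/db-relationships.puml",
]

def doc_blockers(changed_files):
    """Return blocker messages for the migration -> DB-diagram rule."""
    blockers = []
    touched_migration = any(
        re.search(r"db/migration/.*\.sql$", f) for f in changed_files
    )
    if touched_migration:
        for doc in DB_DIAGRAMS:
            if doc not in changed_files:
                blockers.append(f"BLOCKER: migration changed but {doc} not updated")
    return blockers

# Hypothetical PR: a migration with no diagram updates -> two blockers.
changed = ["backend/src/main/resources/db/migration/V42__add_tag.sql"]
for blocker in doc_blockers(changed):
    print(blocker)
```

The other table rows follow the same shape: a path pattern on the left, a set of required companion paths on the right.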

### Designing Systems

1. Start with the data model — get the schema right before application code
@@ -980,24 +980,6 @@ Mark with `@pytest.mark.asyncio` so pytest runs the coroutine. Without it, the t

5. Refactor — apply clean code, extract if 3+ duplications, rename for intent
6. Repeat for the next behavior
7. When all behaviors are green, review for SOLID violations across the full stack
8. Update documentation before opening the PR. Use the table below to know which doc to touch.

| What changed in code | Doc(s) to update |
|---|---|
| New Flyway migration adds/removes/renames a table or column | `docs/architecture/db/db-orm.puml` (add/remove entity or attribute) **and** `docs/architecture/db/db-relationships.puml` (add/remove relationship line) |
| New `@ManyToMany` join table or FK relationship | Both DB diagrams above |
| New backend package / domain module | `CLAUDE.md` (package structure table) **and** the matching `docs/architecture/c4/l3-backend-*.puml` diagram for that domain |
| New Spring Boot controller or service in an existing domain | The matching `docs/architecture/c4/l3-backend-*.puml` for that domain |
| New SvelteKit route (`+page.svelte`) | `CLAUDE.md` (route structure section) **and** the matching `docs/architecture/c4/l3-frontend-*.puml` diagram |
| New Docker service / infrastructure component | `docs/architecture/c4/l2-containers.puml` **and** `docs/DEPLOYMENT.md` |
| New external system integrated (new API, new S3 bucket, etc.) | `docs/architecture/c4/l1-context.puml` |
| Auth flow or document-upload flow changes | `docs/architecture/c4/seq-auth-flow.puml` or `docs/architecture/c4/seq-document-upload.puml` |
| New `ErrorCode` enum value | `CLAUDE.md` error handling section **and** `CONTRIBUTING.md` |
| New `Permission` enum value | `CLAUDE.md` security section **and** `docs/ARCHITECTURE.md` |
| New domain term introduced (entity name, status, concept) | `docs/GLOSSARY.md` |
| Architectural decision with lasting consequences (new tech, new transport protocol, new pattern) | New ADR in `docs/adr/` |

Skip a doc only if the change genuinely does not affect what that doc describes.

### Reviewing Code

1. TDD evidence — are there tests? Do they precede the implementation?
@@ -38,10 +38,10 @@ Screen readers and search engines rely on landmarks to navigate. Every page need

2. **Use CSS custom properties for all brand colors**

```css
/* layout.css — semantic tokens backed by CSS variables (see --palette-* for raw values) */
--color-ink: var(--c-ink);
--color-accent: var(--c-accent);
--color-surface: var(--c-surface);
/* layout.css */
--color-ink: #002850;
--color-accent: #A6DAD8;
--color-surface: #E4E2D7;
```

```svelte
<div class="text-ink bg-surface border-line">
```

@@ -103,9 +103,9 @@ unsaved work without warning.

1. **Enforce WCAG AA contrast ratios**

```
brand-navy (--palette-navy) on white: ~14.5:1 -- AAA pass (verify exact value in layout.css)
brand-mint (--palette-mint) on navy: ~7.2:1 -- AAA pass for large text
Gray-500 on white: check >= 4.5:1 -- AA minimum for body text
brand-navy (#002850) on white: 14.5:1 -- AAA pass
brand-mint (#A6DAD8) on navy: 7.2:1 -- AAA pass for large text
Gray-500 on white: check >= 4.5:1 -- AA minimum for body text
```
Always verify contrast with a tool. AA is the floor (4.5:1 normal text, 3:1 large text). Target AAA (7:1) for body copy.
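The ratios come from the WCAG 2.x relative-luminance formula, which is easy to reproduce. A minimal sketch of such a tool, not the project's own tooling; the hex value is the brand color quoted in this section:

```python
# WCAG 2.x contrast ratio between two sRGB hex colors.
def _channel(c8):
    # Linearize one 8-bit sRGB channel per the WCAG definition.
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color):
    """Relative luminance of a #RRGGBB color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast(fg, bg):
    """Contrast ratio, always >= 1; order of arguments does not matter."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# brand-navy on white -- should land in the ~14.5:1 ballpark quoted above.
print(round(contrast("#002850", "#FFFFFF"), 1))
```

Compare the printed value against the table: >= 4.5 passes AA for normal text, >= 7 passes AAA.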

@@ -134,8 +134,8 @@ Color-blind users (8% of men) cannot distinguish status by color alone. Always p

```css
/* Silver #CACAC9 on white = 1.5:1 -- fails all WCAG levels */
.caption { color: #CACAC9; }

/* brand-mint on white = ~2.8:1 -- fails AA for normal text */
.label { color: var(--palette-mint); }
/* brand-mint on white = 2.8:1 -- fails AA for normal text */
.label { color: #A6DAD8; }
```

Test every text color against its background. Decorative palette colors are for borders and backgrounds, not text.

@@ -338,7 +338,7 @@ Test at 320px (small phone), 768px (tablet), and 1440px (desktop). Review diffs

<table>
<tr><td>Section title</td><td><code>text-xs font-bold uppercase tracking-widest</code></td>
<td>12px / 700</td><td>Most commonly undersized</td></tr>
<tr><td>Card container</td><td><code>bg-surface shadow-sm border border-line rounded-sm p-6</code></td>
<tr><td>Card container</td><td><code>bg-white shadow-sm border border-brand-sand rounded-sm p-6</code></td>
<td>padding 24px</td><td>—</td></tr>
</table>
</div>

@@ -376,10 +376,10 @@ await page.setViewportSize({ width: 1440, height: 900 });

## Domain Expertise

### Brand Palette

- **Primary**: `brand-navy` (`--palette-navy`) — text, buttons, headers; `brand-mint` (`--palette-mint`) — accents, hover; sand (`--palette-sand`) — page background (use `bg-canvas` or `bg-surface` as Tailwind utilities, not `bg-brand-sand`)
- **Typography**: `font-serif` (Tinos) for body/titles, `font-sans` (Montserrat) for labels/UI chrome
- **Card pattern**: `bg-surface shadow-sm border border-line rounded-sm p-6`
- **Section title**: `text-xs font-bold uppercase tracking-widest text-ink-3 mb-5`
- **Primary**: brand-navy `#002850` (text, buttons, headers), brand-mint `#A6DAD8` (accents, hover), brand-sand `#E4E2D7` (backgrounds, borders)
- **Typography**: `font-serif` (Merriweather) for body/titles, `font-sans` (Montserrat) for labels/UI chrome
- **Card pattern**: `bg-white shadow-sm border border-brand-sand rounded-sm p-6`
- **Section title**: `text-xs font-bold uppercase tracking-widest text-gray-400 mb-5`

### Dual-Audience Design (25-42 AND 60+)

- Seniors: 16px minimum body text (prefer 18px), 44px touch targets (prefer 48px), redundant cues, calm layouts, persistent navigation, no timed interactions
@@ -1,3 +1,96 @@

# Dev Container
# Dev Container — Familienarchiv

→ See [.devcontainer/README.md](./README.md) for configuration, usage, and known limitations.

## Overview

VS Code Dev Container configuration for a pre-configured development environment. Includes Java 21, Maven, and Node.js 24 — everything needed to work on both backend and frontend.

## Configuration

File: `.devcontainer/devcontainer.json`

### Included Features

| Feature | Version | Purpose |
|---|---|---|
| Java | 21 | Spring Boot backend |
| Maven | bundled with Java feature | Build tool |
| Node.js | 24 | SvelteKit frontend |

### VS Code Extensions (Auto-installed)

| Extension | Purpose |
|---|---|
| `vscjava.vscode-java-pack` | Java language support, debugging, testing |
| `vmware.vscode-spring-boot` | Spring Boot tooling |
| `gabrielbb.vscode-lombok` | Lombok annotation support |
| `humao.rest-client` | HTTP request files (for `backend/api_tests/`) |

### Ports

- `8080` forwarded to host — access backend at `http://localhost:8080`

### User

Runs as `vscode` user (not root) for security.

## How to Use

### Prerequisites

- VS Code with the **Dev Containers** extension installed
- Docker running locally

### Open in Dev Container

1. Open the project in VS Code
2. Press `F1` → type "Dev Containers: Reopen in Container"
3. VS Code will:
   - Build the container using the root `docker-compose.yml`
   - Install Java 21, Maven, and Node 24
   - Install the listed extensions
   - Mount the workspace folder

### Working Inside the Container

Once inside the container, you have access to both stacks:

```bash
# Backend
cd backend
./mvnw spring-boot:run

# Frontend (in a new terminal)
cd frontend
npm install
npm run dev
```

The container reuses the `docker-compose.yml` services, so PostgreSQL and MinIO are available automatically.

### Forwarding Frontend Port

The devcontainer config only forwards port 8080 by default. To access the frontend dev server (port 5173 or 3000), either:

1. Add `5173` to `forwardPorts` in `devcontainer.json`, or
2. Use the VS Code "Ports" panel to forward it dynamically

## Limitations

- The devcontainer attaches to the `backend` service from `docker-compose.yml`, so it inherits those environment variables
- OCR service and other containers should be started separately via `docker-compose up -d`
- GPU passthrough for OCR training is not configured

## Customization

To add more tools or extensions, edit `.devcontainer/devcontainer.json`:

```json
{
  "features": {
    "ghcr.io/devcontainers/features/python:1": {
      "version": "3.11"
    }
  },
  "forwardPorts": [8080, 5173, 3000]
}
```
@@ -1,94 +0,0 @@
.env.example (30 changes)
@@ -26,36 +26,6 @@ PORT_MAILPIT_SMTP=1025

# Generate with: python3 -c "import secrets; print(secrets.token_hex(32))"
OCR_TRAINING_TOKEN=change-me-in-production

# --- Observability ---
# Optional stack — start with: docker compose -f docker-compose.observability.yml up -d
# Requires the main stack to already be running (docker compose up -d creates archiv-net).

# Ports for host access
PORT_GRAFANA=3001
PORT_GLITCHTIP=3002
PORT_PROMETHEUS=9090

# Grafana admin password — change this before exposing Grafana beyond localhost
GRAFANA_ADMIN_PASSWORD=changeme

# GlitchTip domain — production: use https://grafana.raddatz.cloud (must match Caddy vhost)
GLITCHTIP_DOMAIN=http://localhost:3002

# GlitchTip secret key — Django SECRET_KEY equivalent, used to sign sessions and tokens.
# REQUIRED in production — must not be empty or 'changeme'. Fail-closed: GlitchTip will
# refuse to start with an invalid key.
# Generate with: python3 -c "import secrets; print(secrets.token_hex(50))"
GLITCHTIP_SECRET_KEY=changeme-generate-a-real-secret

# Error reporting DSNs — leave empty to disable the SDK (safe default).
# SENTRY_DSN: backend (Spring Boot) — used by the GlitchTip/Sentry Java SDK
SENTRY_DSN=
SENTRY_TRACES_SAMPLE_RATE=
# VITE_SENTRY_DSN: frontend (SvelteKit) — injected at build time via Vite
VITE_SENTRY_DSN=
# Sentry/GlitchTip auth token for source map upload at build time (optional)
SENTRY_AUTH_TOKEN=

# Production SMTP — uncomment and fill in to send real emails instead of catching them
# APP_BASE_URL=https://your-domain.example.com
# MAIL_HOST=smtp.example.com
@@ -2,7 +2,6 @@ name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
@@ -33,116 +32,17 @@ jobs:
        run: npx @inlang/paraglide-js compile --project ./project.inlang --outdir ./src/lib/paraglide
        working-directory: frontend

      - name: Sync SvelteKit
        run: npx svelte-kit sync
        working-directory: frontend

      - name: Lint
        run: npm run lint
        working-directory: frontend

      - name: Assert no banned vi.mock patterns
        shell: bash
        run: |
          # Literal pdfjs-dist (libLoader pattern — ADR 012)
          if grep -rF "vi.mock('pdfjs-dist'" frontend/src/; then
            echo "FAIL: banned vi.mock('pdfjs-dist') pattern found — see ADR 012. Use the libLoader prop injection pattern instead."
            exit 1
          fi
          # Async factory with dynamic import in body (named mechanism — ADR 012 / #553).
          # Multiline PCRE matches `vi.mock(<arg>, async ... { ... await import(...) ... })`
          # across line breaks. __meta__ is excluded because it contains fixture strings
          # demonstrating the very pattern this check is meant to forbid.
          if grep -rPzln 'vi\.mock\([^)]+,\s*async[^{]*\{[\s\S]*?await\s+import\s*\(' \
            --include='*.spec.ts' --include='*.test.ts' \
            --exclude-dir='__meta__' \
            frontend/src/; then
            echo "FAIL: banned async vi.mock factory with dynamic import in body — see ADR 012 / #553. Use a synchronous factory + vi.hoisted instead."
            exit 1
          fi

      - name: Assert no (upload|download)-artifact past v3
        shell: bash
        run: |
          # Self-test: verify the regex catches v4+ and does not catch v3.
          tmp=$(mktemp)
          printf ' uses: actions/upload-artifact@v5\n' > "$tmp"
          grep -qP '^\s+uses:\s+actions/(upload|download)-artifact@v[4-9]' "$tmp" \
            || { echo "FAIL: guard self-test — regex missed upload-artifact@v5"; rm "$tmp"; exit 1; }
          printf ' uses: actions/upload-artifact@v3\n' > "$tmp"
          grep -qvP '^\s+uses:\s+actions/(upload|download)-artifact@v[4-9]' "$tmp" \
            || { echo "FAIL: guard self-test — regex incorrectly flagged upload-artifact@v3"; rm "$tmp"; exit 1; }
          rm "$tmp"
          # Guard: Gitea Actions (act_runner) does not implement the v4 artifact protocol.
          # Both upload-artifact and download-artifact share the same incompatibility.
          # Pin to @v3. See ADR-014 / #557.
          if grep -RPn '^\s+uses:\s+actions/(upload|download)-artifact@v[4-9]' .gitea/workflows/; then
            echo "::error::actions/(upload|download)-artifact@v4+ is unsupported on Gitea Actions (act_runner). Pin to @v3. See ADR-014 / #557."
            exit 1
          fi

      - name: Run unit and component tests with coverage
        shell: bash
        run: |
          set -eo pipefail
          npm run test:coverage 2>&1 | tee /tmp/coverage-test-${{ github.run_id }}.log
        working-directory: frontend
        env:
          TZ: Europe/Berlin

      # Diagnostic guard: covers the coverage run only. If `npm test` (above)
      # exits 1 with a birpc error, the named pattern appears here — not there.
      - name: Assert no birpc teardown race in coverage run
        shell: bash
        if: always()
        run: |
          if grep -qF "[birpc] rpc is closed" /tmp/coverage-test-${{ github.run_id }}.log 2>/dev/null; then
            echo "FAIL: [birpc] rpc is closed teardown race detected in coverage run"
            grep -F "[birpc] rpc is closed" /tmp/coverage-test-${{ github.run_id }}.log
            exit 1
          fi

      # Gitea Actions (act_runner) does not implement upload-artifact v4 protocol — pinned per ADR-014. Do NOT upgrade. See #557.
      - name: Upload coverage reports
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: coverage-reports
          path: |
            frontend/coverage/
            /tmp/coverage-test-${{ github.run_id }}.log

      - name: Build frontend
        run: npm run build
      - name: Run unit and component tests
        run: npm test
        working-directory: frontend

      # ── Prerender output is exactly the public help page ───────────────────
      # SvelteKit prerender + crawl follows nav links and bakes "redirect to
      # /login" HTML for every protected route, served BEFORE runtime hooks
      # (see #514). With `crawl: false` only the explicit entry should land
      # in build/prerendered/. Anything else is a regression — fail the build.
      - name: Assert prerender output is only /hilfe/transkription
        run: |
          cd frontend
          set -e
          extra=$(find build/prerendered -type f \
            -not -path 'build/prerendered/hilfe/*' \
            -not -name '*.br' -not -name '*.gz' \
            || true)
          if [ -n "$extra" ]; then
            echo "FAIL: unexpected prerendered files (would shadow runtime hooks):"
            echo "$extra"
            exit 1
          fi
          # And the help page must still be there.
          test -f build/prerendered/hilfe/transkription.html \
            || { echo "FAIL: /hilfe/transkription.html missing from prerender output"; exit 1; }
          echo "PASS: only /hilfe/transkription.html prerendered."

      # Gitea Actions (act_runner) does not implement upload-artifact v4 protocol — pinned per ADR-014. Do NOT upgrade. See #557.
      - name: Upload screenshots
        if: always()
        uses: actions/upload-artifact@v3
        uses: actions/upload-artifact@v4
        with:
          name: unit-test-screenshots
          path: frontend/test-results/screenshots/
@@ -174,8 +74,6 @@ jobs:
    runs-on: ubuntu-latest
    env:
      DOCKER_API_VERSION: "1.43" # NAS runner runs Docker 24.x (max API 1.43); Testcontainers 2.x defaults to 1.44
      DOCKER_HOST: unix:///var/run/docker.sock
      TESTCONTAINERS_RYUK_DISABLED: "true"
    steps:
      - uses: actions/checkout@v4

@@ -195,133 +93,4 @@ jobs:
        run: |
          chmod +x mvnw
          ./mvnw clean test
        working-directory: backend

      - name: Upload surefire reports
        if: always()
        # Gitea Actions (act_runner) does not implement upload-artifact v4 protocol — pinned per ADR-014. Do NOT upgrade. See #557.
        uses: actions/upload-artifact@v3
        with:
          name: surefire-reports
          path: backend/target/surefire-reports/

  # ─── fail2ban Regex Regression ────────────────────────────────────────────────
  # The filter parses Caddy's JSON access log; a Caddy upgrade that reorders
  # the JSON keys would silently break it (fail2ban-regex would return
  # "0 matches", fail2ban would stop banning, no error surface). This job
  # pins the contract against a deterministic sample line.
  fail2ban-regex:
    name: fail2ban Regex
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install fail2ban
        run: |
          sudo apt-get update
          sudo apt-get install -y fail2ban

      - name: Matches /api/auth/login 401
        run: |
          echo '{"level":"info","ts":1700000000.12,"logger":"http.log.access","msg":"handled request","request":{"remote_ip":"203.0.113.42","method":"POST","host":"archiv.raddatz.cloud","uri":"/api/auth/login"},"status":401}' > /tmp/sample.log
          out=$(fail2ban-regex /tmp/sample.log infra/fail2ban/filter.d/familienarchiv-auth.conf)
          echo "$out"
          echo "$out" | grep -qE '1 matched' \
            || { echo "expected 1 match for /api/auth/login 401"; exit 1; }

      - name: Matches /api/auth/login 429
        run: |
          echo '{"level":"info","ts":1700000000.12,"logger":"http.log.access","msg":"handled request","request":{"remote_ip":"203.0.113.42","method":"POST","host":"archiv.raddatz.cloud","uri":"/api/auth/login"},"status":429}' > /tmp/sample.log
          out=$(fail2ban-regex /tmp/sample.log infra/fail2ban/filter.d/familienarchiv-auth.conf)
          echo "$out"
          echo "$out" | grep -qE '1 matched' \
            || { echo "expected 1 match for /api/auth/login 429"; exit 1; }

      - name: Matches /api/auth/forgot-password 401
        run: |
          echo '{"level":"info","ts":1700000000.12,"logger":"http.log.access","msg":"handled request","request":{"remote_ip":"203.0.113.42","method":"POST","host":"archiv.raddatz.cloud","uri":"/api/auth/forgot-password"},"status":401}' > /tmp/sample.log
          out=$(fail2ban-regex /tmp/sample.log infra/fail2ban/filter.d/familienarchiv-auth.conf)
          echo "$out"
          echo "$out" | grep -qE '1 matched' \
            || { echo "expected 1 match for /api/auth/forgot-password 401"; exit 1; }

      - name: Does not match /api/auth/login 200
        run: |
          echo '{"level":"info","ts":1700000000.12,"logger":"http.log.access","msg":"handled request","request":{"remote_ip":"203.0.113.42","method":"POST","host":"archiv.raddatz.cloud","uri":"/api/auth/login"},"status":200}' > /tmp/sample.log
          out=$(fail2ban-regex /tmp/sample.log infra/fail2ban/filter.d/familienarchiv-auth.conf)
          echo "$out"
          echo "$out" | grep -qE '0 matched' \
            || { echo "expected 0 matches for /api/auth/login 200"; exit 1; }

      - name: Does not match /api/documents (unrelated 401)
        run: |
          echo '{"level":"info","ts":1700000000.12,"logger":"http.log.access","msg":"handled request","request":{"remote_ip":"203.0.113.42","method":"GET","host":"archiv.raddatz.cloud","uri":"/api/documents"},"status":401}' > /tmp/sample.log
          out=$(fail2ban-regex /tmp/sample.log infra/fail2ban/filter.d/familienarchiv-auth.conf)
          echo "$out"
          echo "$out" | grep -qE '0 matched' \
            || { echo "expected 0 matches for /api/documents 401"; exit 1; }

      # ── Backend resolves to file-polling, not systemd ─────────────────────
      # The Debian/Ubuntu fail2ban package ships defaults-debian.conf with
      # `[DEFAULT] backend = systemd`. Without `backend = polling` in our
      # jail, the daemon loads the jail but reads from journald and never
      # touches /var/log/caddy/access.log — i.e. the regex above passes in
      # isolation while the live jail is inert. See issue #503.
      - name: Jail resolves with polling backend (not inherited systemd)
        run: |
          sudo ln -sfn "$PWD/infra/fail2ban/jail.d/familienarchiv.conf" /etc/fail2ban/jail.d/familienarchiv.conf
          sudo ln -sfn "$PWD/infra/fail2ban/filter.d/familienarchiv-auth.conf" /etc/fail2ban/filter.d/familienarchiv-auth.conf
          dump=$(sudo fail2ban-client -d 2>&1)
          echo "$dump" | grep -E "add.*familienarchiv-auth" || true
          echo "$dump" | grep -qE "\['add', 'familienarchiv-auth', 'polling'\]" \
            || { echo "FAIL: familienarchiv-auth jail did not resolve to 'polling' backend"; exit 1; }

  # ─── Compose Bucket-Bootstrap Idempotency ─────────────────────────────────────
  # docker-compose.prod.yml's create-buckets service runs on every
  # `docker compose up` (one-shot, no restart). Must be idempotent — a
  # re-deploy must not fail just because the bucket / user / policy
  # already exists. Validated by running create-buckets twice against a
  # throwaway minio stack and asserting both invocations exit 0.
  compose-idempotency:
    name: Compose Bucket Idempotency
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Write stub env file
        run: |
          cat > .env.test <<'EOF'
          TAG=test
          PORT_BACKEND=18080
          PORT_FRONTEND=13000
          APP_DOMAIN=localhost
          POSTGRES_PASSWORD=stub
          MINIO_PASSWORD=stubrootpassword
          MINIO_APP_PASSWORD=stubapppassword
          OCR_TRAINING_TOKEN=stub
          APP_ADMIN_USERNAME=admin@local
          APP_ADMIN_PASSWORD=stub
          MAIL_HOST=mailpit
          MAIL_PORT=1025
          APP_MAIL_FROM=noreply@local
          IMPORT_HOST_DIR=/tmp/dummy-import
          COMPOSE_NETWORK_NAME=test-idem-archiv-net
          EOF

      - name: Bring up minio
        run: |
          docker compose -f docker-compose.prod.yml -p test-idem --env-file .env.test up -d --wait minio

      - name: First create-buckets run
        run: |
          docker compose -f docker-compose.prod.yml -p test-idem --env-file .env.test run --rm create-buckets

      - name: Second create-buckets run (idempotency check)
        run: |
          docker compose -f docker-compose.prod.yml -p test-idem --env-file .env.test run --rm create-buckets

      - name: Teardown
        if: always()
        run: |
          docker compose -f docker-compose.prod.yml -p test-idem --env-file .env.test down -v
          rm -f .env.test
        working-directory: backend
@@ -1,65 +0,0 @@
name: Coverage Flake Probe

# Manually-triggered probe for the birpc teardown race documented in ADR 012
# / #553. Runs the full coverage suite 20× in parallel against a single SHA
# and asserts zero `[birpc] rpc is closed` lines across every cell. Verifies
# the acceptance criterion that the race no longer surfaces under coverage.

on:
  workflow_dispatch:

jobs:
  coverage-flake-probe:
    name: Coverage flake probe (run ${{ matrix.run }})
    runs-on: ubuntu-latest
    container:
      image: mcr.microsoft.com/playwright:v1.58.2-noble
    strategy:
      fail-fast: false
      matrix:
        run: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
    steps:
      - uses: actions/checkout@v4

      - name: Cache node_modules
        id: node-modules-cache
        uses: actions/cache@v4
        with:
          path: frontend/node_modules
          key: node-modules-${{ hashFiles('frontend/package-lock.json') }}

      - name: Install dependencies
        if: steps.node-modules-cache.outputs.cache-hit != 'true'
        run: npm ci
        working-directory: frontend

      - name: Compile Paraglide i18n
        run: npx @inlang/paraglide-js compile --project ./project.inlang --outdir ./src/lib/paraglide
        working-directory: frontend

      - name: Run unit and component tests with coverage
        shell: bash
        run: |
          set -eo pipefail
          npm run test:coverage 2>&1 | tee /tmp/coverage-test-${{ github.run_id }}-${{ matrix.run }}.log
        working-directory: frontend
        env:
          TZ: Europe/Berlin

      - name: Assert no birpc teardown race
        shell: bash
        if: always()
        run: |
          if grep -qF "[birpc] rpc is closed" /tmp/coverage-test-${{ github.run_id }}-${{ matrix.run }}.log 2>/dev/null; then
            echo "FAIL: [birpc] rpc is closed teardown race detected in run ${{ matrix.run }}"
            grep -F "[birpc] rpc is closed" /tmp/coverage-test-${{ github.run_id }}-${{ matrix.run }}.log
            exit 1
          fi

      # Gitea Actions (act_runner) does not implement upload-artifact v4 protocol — pinned per ADR-014. Do NOT upgrade. See #557.
      - name: Upload coverage log on failure
        if: failure()
        uses: actions/upload-artifact@v3
        with:
          name: coverage-log-run-${{ matrix.run }}
          path: /tmp/coverage-test-${{ github.run_id }}-${{ matrix.run }}.log
@@ -1,222 +0,0 @@
name: nightly

# Builds and deploys the staging environment from main every night.
# Runs on the self-hosted runner using Docker-out-of-Docker (the docker
# socket is mounted in), so `docker compose build` produces images on
# the host daemon and `docker compose up` consumes them directly — no
# registry hop.
#
# Operational assumptions (see docs/DEPLOYMENT.md §3 for the full setup):
#
# 1. Single-tenant self-hosted runner. The "Write staging env file" step
#    writes every secret to .env.staging on the runner filesystem; the
#    `if: always()` cleanup step removes it. A multi-tenant runner
#    would need to switch to `docker compose --env-file <(stdin)` instead.
#
# 2. Host docker layer cache is authoritative. There is no
#    actions/cache; we rely on the host daemon to keep Maven and npm
#    layers warm between runs. A `docker system prune` on the host
#    will cause the next nightly build to be cold (5–10 min slower).
#
# Staging environment isolation:
# - project name: archiv-staging
# - host ports: backend 8081, frontend 3001
# - profile: staging (starts mailpit instead of a real SMTP relay)
#
# Required Gitea secrets:
#   STAGING_POSTGRES_PASSWORD
#   STAGING_MINIO_PASSWORD
#   STAGING_MINIO_APP_PASSWORD
#   STAGING_OCR_TRAINING_TOKEN
#   STAGING_APP_ADMIN_USERNAME
#   STAGING_APP_ADMIN_PASSWORD
#   GRAFANA_ADMIN_PASSWORD
#   GLITCHTIP_SECRET_KEY
#   SENTRY_DSN (set after GlitchTip first-run; empty = Sentry disabled)

on:
  schedule:
    - cron: "0 2 * * *"
  workflow_dispatch:

env:
  # Ensures the backend Dockerfile's `RUN --mount=type=cache` lines are
  # honoured (Maven cache survives between runs).
  DOCKER_BUILDKIT: "1"

jobs:
  deploy-staging:
    # `ubuntu-latest` matches our self-hosted runner's advertised label
    # (the runner has labels: ubuntu-latest / ubuntu-24.04 / ubuntu-22.04).
    # `self-hosted` would never match — no runner advertises it — so the
    # job parks in the queue forever. ADR-011's "single-tenant" promise
    # is at the repo level; sharing this runner between CI and deploys
    # for the same repo is within that boundary.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Write staging env file
        run: |
          cat > .env.staging <<EOF
          TAG=nightly
          PORT_BACKEND=8081
          PORT_FRONTEND=3001
          APP_DOMAIN=staging.raddatz.cloud
          POSTGRES_PASSWORD=${{ secrets.STAGING_POSTGRES_PASSWORD }}
          MINIO_PASSWORD=${{ secrets.STAGING_MINIO_PASSWORD }}
          MINIO_APP_PASSWORD=${{ secrets.STAGING_MINIO_APP_PASSWORD }}
          OCR_TRAINING_TOKEN=${{ secrets.STAGING_OCR_TRAINING_TOKEN }}
          APP_ADMIN_USERNAME=${{ secrets.STAGING_APP_ADMIN_USERNAME }}
          APP_ADMIN_PASSWORD=${{ secrets.STAGING_APP_ADMIN_PASSWORD }}
          MAIL_HOST=mailpit
          MAIL_PORT=1025
          MAIL_USERNAME=
          MAIL_PASSWORD=
          MAIL_SMTP_AUTH=false
          MAIL_STARTTLS_ENABLE=false
          APP_MAIL_FROM=noreply@staging.raddatz.cloud
          IMPORT_HOST_DIR=/srv/familienarchiv-staging/import
          POSTGRES_USER=archiv
          PORT_GRAFANA=3003
          PORT_GLITCHTIP=3002
          PORT_PROMETHEUS=9090
          GRAFANA_ADMIN_PASSWORD=${{ secrets.GRAFANA_ADMIN_PASSWORD }}
          GLITCHTIP_SECRET_KEY=${{ secrets.GLITCHTIP_SECRET_KEY }}
          GLITCHTIP_DOMAIN=https://glitchtip.archiv.raddatz.cloud
          SENTRY_DSN=${{ secrets.SENTRY_DSN }}
          EOF

      - name: Verify backend /import:ro mount is wired
        # Regression guard for #526: the /admin/system mass-import card
        # only works when the backend service mounts the host import
        # payload at /import (read-only). If a future "compose cleanup"
        # PR drops the volumes block, mass import silently breaks again.
        # `compose config` renders both shorthand and longform mounts as
        # `target: /import` + `read_only: true`, so we assert against
        # the rendered form rather than the raw source YAML.
        run: |
          set -e
          docker compose \
            -f docker-compose.prod.yml \
            -p archiv-staging \
            --env-file .env.staging \
            --profile staging \
            config > /tmp/compose-rendered.yml
          grep -q '^[[:space:]]*target: /import$' /tmp/compose-rendered.yml \
            || { echo "::error::backend is missing the /import bind mount (see #526)"; exit 1; }
          grep -A2 '^[[:space:]]*target: /import$' /tmp/compose-rendered.yml \
            | grep -q 'read_only: true' \
            || { echo "::error::backend /import mount is not read-only (see #526)"; exit 1; }
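The mount assertion above can be exercised against a canned fragment of rendered compose output. The fragment below is hypothetical (key order may differ in real `docker compose config` output); it only demonstrates that the two greps accept a read-only `/import` mount:

```shell
#!/bin/sh
# Hypothetical fragment of `docker compose config` output for a service
# with a read-only bind mount at /import.
cat > /tmp/compose-rendered-sample.yml <<'EOF'
    volumes:
      - type: bind
        source: /srv/import
        target: /import
        read_only: true
EOF

# Same two assertions as the workflow step, against the sample file.
grep -q '^[[:space:]]*target: /import$' /tmp/compose-rendered-sample.yml \
  && echo "mount present"
grep -A2 '^[[:space:]]*target: /import$' /tmp/compose-rendered-sample.yml \
  | grep -q 'read_only: true' \
  && echo "mount is read-only"
```

Dropping the `read_only: true` line from the sample makes the second grep (and hence the guard) fail, which is exactly the regression the step is meant to catch.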
      - name: Build images
        # `--pull` forces re-fetching pinned base images so a CVE
        # re-publication of the same tag (e.g. node:20.19.0-alpine3.21,
        # postgres:16-alpine) is picked up instead of being served
        # from the host's stale Docker layer cache.
        run: |
          docker compose \
            -f docker-compose.prod.yml \
            -p archiv-staging \
            --env-file .env.staging \
            --profile staging \
            build --pull

      - name: Deploy staging
        run: |
          docker compose \
            -f docker-compose.prod.yml \
            -p archiv-staging \
            --env-file .env.staging \
            --profile staging \
            up -d --wait --remove-orphans

      - name: Start observability stack
        run: |
          docker compose \
            -f docker-compose.observability.yml \
            --env-file .env.staging \
            up -d --wait --remove-orphans

      - name: Reload Caddy
        # Apply any committed Caddyfile changes before smoke-testing the
        # public surface. Without this step, a Caddyfile edit lands in the
        # repo but Caddy keeps serving the previous config until someone
        # reloads it manually — the smoke test would then catch a stale
        # header or a still-proxied /actuator route rather than confirming
        # the current config is live.
        #
        # The runner executes job steps inside Docker containers (DooD).
        # `systemctl` is not present in container images and cannot reach
        # the host's systemd directly. We use the Docker socket (mounted
        # into every job container via runner-config.yaml) to spin up a
        # privileged sibling container in the host PID namespace; nsenter
        # then enters the host's namespaces so systemctl talks to the real
        # host systemd daemon. No sudoers entry is required — the Docker
        # socket already grants root-equivalent host access.
        #
        # Alpine is used: ~5 MB vs ~70 MB for ubuntu, no unnecessary
        # tooling, and the digest is pinned so any upstream change requires
        # an explicit bump PR. util-linux (which ships nsenter) is installed
        # at run time; apk add takes ~1 s on the warm VPS cache.
        #
        # `reload` not `restart`: reload sends SIGHUP so Caddy re-reads its
        # config in-process without dropping TLS connections. `restart`
        # would briefly stop the service, losing in-flight requests.
        #
        # If Caddy is not running, this step fails fast before the smoke test
        # issues a misleading "port 443 refused" error.
        run: |
          docker run --rm --privileged --pid=host \
            alpine:3.21@sha256:48b0309ca019d89d40f670aa1bc06e426dc0931948452e8491e3d65087abc07d \
            sh -c 'apk add --no-cache util-linux -q && nsenter -t 1 -m -u -n -p -i -- /bin/systemctl reload caddy'

      - name: Smoke test deployed environment
        # Healthchecks confirm containers are healthy; they do NOT confirm the
        # public surface works. This step catches: Caddy not reloaded, HSTS
        # header dropped, /actuator block bypassed.
        #
        # --resolve pins staging.raddatz.cloud to the Docker bridge gateway IP
        # (the host) so we do NOT depend on hairpin NAT on the host router.
        # 127.0.0.1 cannot be used: job containers run in bridge network mode
        # (runner-config.yaml), so 127.0.0.1 is the container's loopback, not
        # the host's. The bridge gateway IS the host; Caddy binds 0.0.0.0:443
        # and is therefore reachable from the container via that IP.
        # SNI still uses the public hostname so the TLS cert validates correctly.
        #
        # Gateway detection reads /proc/net/route (always present, no package
        # required) instead of `ip route` to avoid a dependency on iproute2.
        # Field $2=="00000000" is the default route; field $3 is the gateway as
        # a little-endian 32-bit hex value which awk decodes to dotted-decimal.
        run: |
          set -e
          HOST="staging.raddatz.cloud"
          URL="https://$HOST"
          HOST_IP=$(awk 'NR>1 && $2=="00000000"{h=$3;printf "%d.%d.%d.%d\n",strtonum("0x"substr(h,7,2)),strtonum("0x"substr(h,5,2)),strtonum("0x"substr(h,3,2)),strtonum("0x"substr(h,1,2));exit}' /proc/net/route)
          [ -n "$HOST_IP" ] || { echo "ERROR: could not detect Docker bridge gateway via /proc/net/route"; exit 1; }
          RESOLVE="--resolve $HOST:443:$HOST_IP"
          echo "Smoke test: $URL (pinned to $HOST_IP via bridge gateway)"
          curl -fsS $RESOLVE --max-time 10 "$URL/login" -o /dev/null
          # Pin the preload-list-eligible HSTS value, not just header presence:
          # a degraded `max-age=1` or a dropped `includeSubDomains; preload` must
          # fail this check rather than pass it silently.
          curl -fsS $RESOLVE --max-time 10 -I "$URL/" \
            | grep -Eqi 'strict-transport-security:[[:space:]]*max-age=31536000.*includeSubDomains.*preload'
          # Permissions-Policy denies APIs the app does not use (camera,
          # microphone, geolocation). A regression that loosens or drops the
          # header now fails the smoke step.
          curl -fsS $RESOLVE --max-time 10 -I "$URL/" \
            | grep -Eqi 'permissions-policy:[[:space:]]*camera=\(\),[[:space:]]*microphone=\(\),[[:space:]]*geolocation=\(\)'
          status=$(curl -s $RESOLVE -o /dev/null -w "%{http_code}" --max-time 10 "$URL/actuator/health")
          [ "$status" = "404" ] || { echo "expected 404 from /actuator/health, got $status"; exit 1; }
          echo "All smoke checks passed"

      - name: Cleanup env file
        # LOAD-BEARING: `if: always()` is the linchpin of the ADR-011
        # single-tenant runner trust model. Every secret in .env.staging
        # is plain text on the runner filesystem until this step runs.
        # If a future refactor drops `if: always()`, a failed deploy
        # leaves the env-file behind. Do not remove this conditional
        # without first re-evaluating ADR-011.
        if: always()
        run: rm -f .env.staging
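The gateway detection above decodes field 3 of `/proc/net/route`, which stores the default route's gateway as a little-endian 32-bit hex value. The same decoding can be sketched in portable shell against a hypothetical sample value (not read from a live system):

```shell
#!/bin/sh
# "0101A8C0" is a hypothetical /proc/net/route gateway field: the bytes of
# 192.168.1.1 stored little-endian, so they are read back in reverse order.
h="0101A8C0"
gw=$(printf '%d.%d.%d.%d' \
  "0x$(echo "$h" | cut -c7-8)" \
  "0x$(echo "$h" | cut -c5-6)" \
  "0x$(echo "$h" | cut -c3-4)" \
  "0x$(echo "$h" | cut -c1-2)")
echo "$gw"   # → 192.168.1.1
```

This mirrors the awk one-liner's `substr(h,7,2) … substr(h,1,2)` byte order; the workflow's version additionally selects the row whose destination field is `00000000` (the default route).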
@@ -1,161 +0,0 @@
name: release

# Builds and deploys the production environment on `v*` tag push.
# Runs on the self-hosted runner via Docker-out-of-Docker; images are
# tagged with the actual git tag (e.g. v1.0.0) so rollback is
# `TAG=<previous> docker compose -f docker-compose.prod.yml -p archiv-production up -d --wait`
#
# Operational assumptions (see docs/DEPLOYMENT.md §3 for the full setup):
#
# 1. Single-tenant self-hosted runner. The "Write production env file"
#    step writes every secret to .env.production on the runner
#    filesystem; the `if: always()` cleanup step removes it. A
#    multi-tenant runner would need to switch to
#    `docker compose --env-file <(stdin)` instead.
#
# 2. Host docker layer cache is authoritative. There is no
#    actions/cache; we rely on the host daemon to keep Maven and npm
#    layers warm between runs. A `docker system prune` on the host
#    will cause the next release build to be cold (5–10 min slower).
#
# Production environment:
# - project name: archiv-production
# - host ports: backend 8080, frontend 3000
# - profile: (none) — mailpit is excluded; real SMTP relay is used
#
# Required Gitea secrets:
#   PROD_POSTGRES_PASSWORD
#   PROD_MINIO_PASSWORD
#   PROD_MINIO_APP_PASSWORD
#   PROD_OCR_TRAINING_TOKEN
#   PROD_APP_ADMIN_USERNAME (CRITICAL: see docs/DEPLOYMENT.md)
#   PROD_APP_ADMIN_PASSWORD (CRITICAL: locked in on first deploy)
#   MAIL_HOST
#   MAIL_PORT
#   MAIL_USERNAME
#   MAIL_PASSWORD
#   GRAFANA_ADMIN_PASSWORD
#   GLITCHTIP_SECRET_KEY
#   SENTRY_DSN (set after GlitchTip first-run; empty = Sentry disabled)

on:
  push:
    tags:
      - "v*"

env:
  DOCKER_BUILDKIT: "1"

jobs:
  deploy-production:
    # See nightly.yml — same rationale: `ubuntu-latest` matches the
    # advertised label of our single-tenant self-hosted runner.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Write production env file
        run: |
          cat > .env.production <<EOF
          TAG=${{ gitea.ref_name }}
          PORT_BACKEND=8080
          PORT_FRONTEND=3000
          APP_DOMAIN=archiv.raddatz.cloud
          POSTGRES_PASSWORD=${{ secrets.PROD_POSTGRES_PASSWORD }}
          MINIO_PASSWORD=${{ secrets.PROD_MINIO_PASSWORD }}
          MINIO_APP_PASSWORD=${{ secrets.PROD_MINIO_APP_PASSWORD }}
          OCR_TRAINING_TOKEN=${{ secrets.PROD_OCR_TRAINING_TOKEN }}
          APP_ADMIN_USERNAME=${{ secrets.PROD_APP_ADMIN_USERNAME }}
          APP_ADMIN_PASSWORD=${{ secrets.PROD_APP_ADMIN_PASSWORD }}
          MAIL_HOST=${{ secrets.MAIL_HOST }}
          MAIL_PORT=${{ secrets.MAIL_PORT }}
          MAIL_USERNAME=${{ secrets.MAIL_USERNAME }}
          MAIL_PASSWORD=${{ secrets.MAIL_PASSWORD }}
          MAIL_SMTP_AUTH=true
          MAIL_STARTTLS_ENABLE=true
          APP_MAIL_FROM=noreply@raddatz.cloud
          IMPORT_HOST_DIR=/srv/familienarchiv-production/import
          POSTGRES_USER=archiv
          PORT_GRAFANA=3003
          PORT_GLITCHTIP=3002
          PORT_PROMETHEUS=9090
          GRAFANA_ADMIN_PASSWORD=${{ secrets.GRAFANA_ADMIN_PASSWORD }}
          GLITCHTIP_SECRET_KEY=${{ secrets.GLITCHTIP_SECRET_KEY }}
          GLITCHTIP_DOMAIN=https://glitchtip.archiv.raddatz.cloud
          SENTRY_DSN=${{ secrets.SENTRY_DSN }}
          EOF

      - name: Build images
        # `--pull` forces re-fetching pinned base images so a CVE
        # re-publication of the same tag is picked up rather than served
        # from the host's stale Docker layer cache.
        run: |
          docker compose \
            -f docker-compose.prod.yml \
            -p archiv-production \
            --env-file .env.production \
            build --pull

      - name: Deploy production
        run: |
          docker compose \
            -f docker-compose.prod.yml \
            -p archiv-production \
            --env-file .env.production \
            up -d --wait --remove-orphans

      - name: Start observability stack
        run: |
          docker compose \
            -f docker-compose.observability.yml \
            --env-file .env.production \
            up -d --wait --remove-orphans

      - name: Reload Caddy
        # See nightly.yml — same rationale and mechanism: DooD job containers
        # cannot call systemctl directly; nsenter via a privileged sibling
        # container reaches the host systemd. Must run after deploy (so the
        # latest Caddyfile is on disk) and before the smoke test (so the
        # public surface reflects the current config). Alpine with pinned
        # digest; reload not restart — see nightly.yml for full rationale.
        run: |
          docker run --rm --privileged --pid=host \
            alpine:3.21@sha256:48b0309ca019d89d40f670aa1bc06e426dc0931948452e8491e3d65087abc07d \
            sh -c 'apk add --no-cache util-linux -q && nsenter -t 1 -m -u -n -p -i -- /bin/systemctl reload caddy'

      - name: Smoke test deployed environment
        # See nightly.yml — same three checks, against the prod vhost.
        # --resolve pins to the bridge gateway IP (the host), not 127.0.0.1
        # — see nightly.yml for the full network topology explanation.
        # Gateway detection likewise reads /proc/net/route rather than
        # `ip route`, so the step does not depend on iproute2 being present
        # in the job container.
        run: |
          set -e
          HOST="archiv.raddatz.cloud"
          URL="https://$HOST"
          HOST_IP=$(awk 'NR>1 && $2=="00000000"{h=$3;printf "%d.%d.%d.%d\n",strtonum("0x"substr(h,7,2)),strtonum("0x"substr(h,5,2)),strtonum("0x"substr(h,3,2)),strtonum("0x"substr(h,1,2));exit}' /proc/net/route)
          [ -n "$HOST_IP" ] || { echo "ERROR: could not detect Docker bridge gateway via /proc/net/route"; exit 1; }
          RESOLVE="--resolve $HOST:443:$HOST_IP"
          echo "Smoke test: $URL (pinned to $HOST_IP via bridge gateway)"
          curl -fsS $RESOLVE --max-time 10 "$URL/login" -o /dev/null
          # Pin the preload-list-eligible HSTS value, not just header presence:
          # a degraded `max-age=1` or a dropped `includeSubDomains; preload` must
          # fail this check rather than pass it silently.
          curl -fsS $RESOLVE --max-time 10 -I "$URL/" \
            | grep -Eqi 'strict-transport-security:[[:space:]]*max-age=31536000.*includeSubDomains.*preload'
          # Permissions-Policy denies APIs the app does not use (camera,
          # microphone, geolocation). A regression that loosens or drops the
          # header now fails the smoke step.
          curl -fsS $RESOLVE --max-time 10 -I "$URL/" \
            | grep -Eqi 'permissions-policy:[[:space:]]*camera=\(\),[[:space:]]*microphone=\(\),[[:space:]]*geolocation=\(\)'
          status=$(curl -s $RESOLVE -o /dev/null -w "%{http_code}" --max-time 10 "$URL/actuator/health")
          [ "$status" = "404" ] || { echo "expected 404 from /actuator/health, got $status"; exit 1; }
          echo "All smoke checks passed"

      - name: Cleanup env file
        # LOAD-BEARING: `if: always()` is the linchpin of the ADR-011
        # single-tenant runner trust model. Every secret in
        # .env.production is plain text on the runner filesystem until
        # this step runs. If a future refactor drops `if: always()`, a
        # failed deploy leaves the env-file behind. Do not remove this
        # conditional without first re-evaluating ADR-011.
        if: always()
        run: rm -f .env.production
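Both smoke tests pin the exact preload-eligible HSTS value rather than mere header presence. The check can be reproduced offline by feeding sample (hypothetical) header lines through the same `grep` pattern:

```shell
#!/bin/sh
# A preload-eligible header passes the smoke-test pattern...
good='strict-transport-security: max-age=31536000; includeSubDomains; preload'
echo "$good" \
  | grep -Eqi 'strict-transport-security:[[:space:]]*max-age=31536000.*includeSubDomains.*preload' \
  && echo "good header accepted"

# ...while a silently degraded max-age is rejected, failing the deploy.
bad='strict-transport-security: max-age=1'
echo "$bad" \
  | grep -Eqi 'strict-transport-security:[[:space:]]*max-age=31536000.*includeSubDomains.*preload' \
  || echo "degraded header rejected"
```

This is why the workflows grep for the full value: a config regression that weakens `max-age` or drops `includeSubDomains; preload` still serves *a* HSTS header, so a presence-only check would pass.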
6  .gitignore  (vendored)
@@ -18,11 +18,5 @@ scripts/large-data.sql
.claude/worktrees/
.claude/scheduled_tasks.lock

# Run artifacts from verification tooling
proofshot-artifacts/

# Root-level Node.js tooling artifacts
node_modules/

# Repo uses npm; yarn.lock is ignored to avoid double-lockfile drift.
frontend/yarn.lock
4  .vscode/settings.json  (vendored)
@@ -1,6 +1,4 @@
{
  "java.configuration.updateBuildConfiguration": "interactive",
  "java.compile.nullAnalysis.mode": "automatic"
}
238  CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

> For a human-readable project overview, see [README.md](./README.md).

## Project Overview

**Familienarchiv** is a family document archival system — a full-stack web app for digitizing, organizing, and searching family documents. Key features: file uploads (stored in MinIO/S3), metadata management, Excel/ODS batch import, full-text search, conversation threads between family members, and role-based access control.
@@ -20,8 +18,6 @@ See [CODESTYLE.md](./CODESTYLE.md) for coding standards: Clean Code, DRY/KISS tr
## Stack

- **Backend**: Spring Boot 4.0 (Java 21, Maven, Jetty, JPA/Hibernate, Flyway, Spring Security, Spring Session JDBC)
- **Frontend**: SvelteKit 2 with Svelte 5, TypeScript, Tailwind CSS 4, Paraglide.js (i18n: de/en/es)
- **Database**: PostgreSQL 16

@@ -31,13 +27,12 @@ See [CODESTYLE.md](./CODESTYLE.md) for coding standards: Clean Code, DRY/KISS tr
## Common Commands

### Running the Full Stack

```bash
# From repo root — starts PostgreSQL, MinIO, and Spring Boot backend
docker-compose up -d
```

### Backend (Spring Boot)

```bash
cd backend
@@ -49,12 +44,11 @@
```
### Frontend (SvelteKit)

```bash
cd frontend

npm install
npm run dev          # Dev server (port 3000)
npm run build        # Production build
npm run preview      # Preview production build
```
@@ -72,7 +66,7 @@ npm run generate:api # Regenerate TypeScript API types from OpenAPI spec

### Package Structure

Package-by-domain: each domain owns its controller, service, repository, entities, and DTOs.

```
backend/src/main/java/org/raddatz/familienarchiv/
@@ -96,21 +90,27 @@
└── user/                 User domain — AppUser, UserGroup, UserService, auth controllers
```
### Layering Rules (strictly enforced)

```
Controller → Service → Repository → DB
```

- **Controllers** never inject or call repositories directly.
- **Services** never reach into another domain's repository. Call the other domain's service instead.
  - ✅ `DocumentService` → `PersonService.getById()` → `PersonRepository`
  - ❌ `DocumentService` → `PersonRepository` directly
- This keeps domain boundaries clear and business logic testable in isolation.
### Domain Model

| Entity | Table | Key relationships |
|---|---|---|
| `Document` | `documents` | ManyToOne `sender` (Person), ManyToMany `receivers` (Person), ManyToMany `tags` (Tag) |
| `Person` | `persons` | Referenced by documents as sender/receiver |
| `Tag` | `tag` | ManyToMany with documents via `document_tags` |
| `AppUser` | `app_users` | ManyToMany `groups` (UserGroup) |
| `UserGroup` | `user_groups` | Has a `Set<String> permissions` |

**`DocumentStatus` lifecycle:** `PLACEHOLDER → UPLOADED → TRANSCRIBED → REVIEWED → ARCHIVED`
@@ -120,7 +120,6 @@
### Entity Code Style

All entities use these Lombok annotations:

```java
@Entity
@Table(name = "table_name")
```

@@ -149,29 +148,65 @@ Services are annotated with `@Service`, `@RequiredArgsConstructor`, and optional
- Read methods are not annotated (default non-transactional is fine).
- Each service owns its domain's repository. Cross-domain data access goes through the other domain's service.
**Existing services:**

| Service | Responsibility |
|---|---|
| `DocumentService` | Document CRUD, search, tag cascade delete |
| `PersonService` | Person CRUD, find-or-create by alias |
| `TagService` | Tag find/create/update/delete |
| `UserService` | User and group CRUD |
| `FileService` | S3/MinIO upload and download |
| `MassImportService` | Async ODS/Excel import; delegates to PersonService and TagService |

### DTOs

Input DTOs live in `dto/`. Response types are the model entities themselves (no response DTOs).

- `@Schema(requiredMode = REQUIRED)` on every field the backend always populates — drives TypeScript generation.
- `DocumentUpdateDTO` — used for both create and update (all fields optional)
- `CreateUserRequest` — user creation
- `GroupDTO` — group create/update
### Error Handling

Use `DomainException` for all domain errors. Never throw raw exceptions from service methods.

```java
// Static factories match common HTTP status codes:
DomainException.notFound(ErrorCode.DOCUMENT_NOT_FOUND, "Document not found: " + id)
DomainException.forbidden("Access denied")
DomainException.conflict(ErrorCode.IMPORT_ALREADY_RUNNING, "Already running")
DomainException.internal(ErrorCode.FILE_UPLOAD_FAILED, "Upload failed: " + e.getMessage())
```

`ErrorCode` is an enum in `exception/ErrorCode.java`. When adding a new error case, add the value there **and** mirror it in the frontend's `src/lib/errors.ts` + add a Paraglide translation key.

For simple validation in controllers (not domain logic), `ResponseStatusException` is acceptable:

```java
throw new ResponseStatusException(HttpStatus.BAD_REQUEST, "firstName is required");
```
### Security / Permissions

Use `@RequirePermission` on controller methods (or the whole controller class):

```java
@RequirePermission(Permission.WRITE_ALL)
public Document updateDocument(...) { ... }
```

Available permissions: `READ_ALL`, `WRITE_ALL`, `ADMIN`, `ADMIN_USER`, `ADMIN_TAG`, `ADMIN_PERMISSION`

`PermissionAspect` (AOP) checks the current user's `UserGroup.permissions` at runtime.
### OpenAPI / API Types

SpringDoc generates the spec at `/v3/api-docs` (only accessible when running with `--spring.profiles.active=dev`).

When changing any model field or endpoint:
1. Rebuild the backend JAR with `-DskipTests`
2. Start it with `--spring.profiles.active=dev`
3. Run `npm run generate:api` in `frontend/`

---
@@ -181,98 +216,147 @@ Input DTOs live flat in the domain package. Response types are the model entitie

```
frontend/src/routes/
├── +layout.svelte              Global header (sticky), nav links, logout
├── +layout.server.ts           Loads current user, injects auth cookie
├── +page.svelte                Home / document search
├── +page.server.ts             Load: search documents; no actions
├── documents/
│   ├── [id]/+page.svelte       Document detail (view + file preview)
│   ├── [id]/edit/              Edit form (all metadata + file upload)
│   └── new/                    Create form (same fields, empty)
├── persons/
│   ├── +page.svelte            Person list with search
│   ├── [id]/+page.svelte       Person detail (inline edit + merge)
│   └── new/                    Create person form
├── conversations/              Bilateral conversation timeline
├── admin/                      User + group + tag management
└── login/ logout/              Auth pages
```
### API Client Pattern

All server-side API calls use the typed client from `$lib/api.server.ts`:

```typescript
const api = createApiClient(fetch);
const result = await api.GET('/api/persons/{id}', { params: { path: { id } } });

// Always check via response.ok, NOT result.error
if (!result.response.ok) {
  const code = (result.error as unknown as { code?: string })?.code;
  throw error(result.response.status, getErrorMessage(code));
}
return { person: result.data! };
```

Key rules:
- Use `!result.response.ok` for error checking (not `if (result.error)` — this breaks when the spec has no error responses defined)
- Cast errors as `result.error as unknown as { code?: string }` to extract the backend error code
- Use `result.data!` (non-null assertion) after an ok check — TypeScript knows it's present

For multipart/form-data endpoints (file uploads), bypass the typed client and use raw `fetch`:

```typescript
const res = await fetch(`${baseUrl}/api/documents`, { method: 'POST', body: formData });
```
### Form Actions Pattern

```typescript
// +page.server.ts
export const actions = {
  default: async ({ request, fetch }) => {
    const formData = await request.formData();
    const name = formData.get('name') as string; // cast needed — FormData returns FormDataEntryValue
    // ...
    return fail(400, { error: 'message' }); // on error
    throw redirect(303, '/target'); // on success
  }
};
```
### Date Handling

- **Forms**: German format `dd.mm.yyyy` with auto-dot insertion via `handleDateInput()`. A hidden `<input type="hidden" name="documentDate" value={dateIso}>` sends ISO format to the backend.
- **Display**: Always use `Intl.DateTimeFormat` with a `T12:00:00` suffix to prevent UTC timezone off-by-one errors:
  ```typescript
  new Intl.DateTimeFormat('de-DE', { day: 'numeric', month: 'long', year: 'numeric' })
    .format(new Date(doc.documentDate + 'T12:00:00'))
  ```
|
||||
|
||||
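The two form conventions can be sketched as pure helpers. This is a hypothetical sketch: the real `handleDateInput()` operates on input events, and the function names here are illustrative, not the project's actual API.

```typescript
// Hypothetical sketch of the auto-dot behaviour: digits the user types are
// reflowed into dd.mm.yyyy as they arrive (at most 8 digits are kept).
export function formatGermanDateInput(raw: string): string {
  const digits = raw.replace(/\D/g, '').slice(0, 8);
  const parts = [digits.slice(0, 2), digits.slice(2, 4), digits.slice(4, 8)];
  return parts.filter((p) => p.length > 0).join('.');
}

// Companion conversion to the ISO value carried by the hidden input.
export function germanToIso(german: string): string | null {
  const m = /^(\d{2})\.(\d{2})\.(\d{4})$/.exec(german);
  return m ? `${m[3]}-${m[2]}-${m[1]}` : null; // dd.mm.yyyy → yyyy-mm-dd
}
```

While the user types, `formatGermanDateInput('2412')` yields `24.12`; once complete, `germanToIso('24.12.1998')` produces the `1998-12-24` wire value.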
### UI Component Library

→ See per-domain READMEs: [`frontend/src/lib/person/README.md`](./frontend/src/lib/person/README.md), [`frontend/src/lib/tag/README.md`](./frontend/src/lib/tag/README.md), [`frontend/src/lib/document/README.md`](./frontend/src/lib/document/README.md), [`frontend/src/lib/shared/README.md`](./frontend/src/lib/shared/README.md)

Custom components in `src/lib/components/`:

| Component | Props | Description |
|---|---|---|
| `PersonTypeahead` | `name`, `label`, `value`, `initialName`, `on:change` | Single-person selector with typeahead dropdown |
| `PersonMultiSelect` | `selectedPersons` (bind) | Chip-based multi-person selector |
| `TagInput` | `tags` (bind), `allowCreation?`, `on:change` | Tag chip input with typeahead |
### Styling Conventions (Tailwind CSS 4)

Brand color utilities (defined in `layout.css`):

| Class | Value | Usage |
|---|---|---|
| `brand-navy` | `#002850` | Primary text, buttons, headers |
| `brand-mint` | `#A6DAD8` | Accents, hover underlines, icons |
| `brand-sand` | `#E4E2D7` | Page background, card borders |
Typography:

- `font-serif` (Merriweather) — body text, document titles, names
- `font-sans` (Montserrat) — labels, metadata, UI chrome
Card pattern for content sections:

```svelte
<div class="bg-white shadow-sm border border-brand-sand rounded-sm p-6">
  <h2 class="text-xs font-bold uppercase tracking-widest text-gray-400 mb-5">Section Title</h2>
  <!-- content -->
</div>
```
Save bar pattern — use **sticky full-bleed** for long forms (edit document), **card-style with `mt-4`** for short forms (new person):

```svelte
<!-- Long forms: sticky, full-bleed -->
<div class="sticky bottom-0 z-10 -mx-4 px-6 py-4 bg-white border-t border-brand-sand shadow-[0_-2px_8px_rgba(0,0,0,0.06)] flex items-center justify-between">

<!-- Short forms: card, top margin -->
<div class="mt-4 flex items-center justify-between rounded-sm border border-brand-sand bg-white px-6 py-4 shadow-sm">
```
Back button pattern — use the shared `<BackButton>` component from `$lib/shared/primitives/BackButton.svelte`:

```svelte
<script lang="ts">
  import BackButton from '$lib/shared/primitives/BackButton.svelte';
</script>

<BackButton />
```

The component calls `history.back()` so the user returns to wherever they came from. Label is always "Zurück" (no contextual suffix — destination is unknown). Touch target ≥ 44px and focus ring are built in. Do not use a static `<a href>` for back navigation.
Subtle action link (e.g. "new document/person"):

```svelte
<a href="/documents/new" class="inline-flex items-center gap-1 text-sm font-medium text-brand-navy/60 hover:text-brand-navy transition-colors">
  <svg class="w-4 h-4" ...><!-- plus icon --></svg>
  Neues Dokument
</a>
```
### Error Handling (Frontend)

→ See [CONTRIBUTING.md §Error handling](./CONTRIBUTING.md#error-handling)

**LLM reminder:** when adding a new `ErrorCode`: (1) add to `ErrorCode.java`, (2) add to the `ErrorCode` type in `frontend/src/lib/shared/errors.ts`, (3) add a `case` in `getErrorMessage()`, (4) add i18n keys in `messages/{de,en,es}.json`.

`frontend/src/lib/shared/errors.ts` mirrors the backend `ErrorCode` enum and maps codes to Paraglide translation keys.
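A minimal sketch of the mirroring pattern (the codes, messages, and the `m` message object are illustrative stand-ins; the real `ErrorCode` type lists every backend code and the messages come from generated Paraglide functions):

```typescript
// Illustrative subset of the backend enum mirror.
type ErrorCode = 'PERSON_NOT_FOUND' | 'DOCUMENT_NOT_FOUND' | 'FILE_UPLOAD_FAILED';

// Stand-in for Paraglide message functions (assumption: one function per key).
const m = {
  error_person_not_found: () => 'Person nicht gefunden',
  error_generic: () => 'Ein Fehler ist aufgetreten',
};

export function getErrorMessage(code?: string): string {
  switch (code as ErrorCode | undefined) {
    case 'PERSON_NOT_FOUND':
      return m.error_person_not_found();
    default:
      return m.error_generic(); // unknown or missing codes fall back to a generic message
  }
}
```

The `default` branch matters: the API client casts `result.error` loosely, so `code` can be `undefined` or an unmapped string at runtime.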
---

## Infrastructure

→ See [docs/DEPLOYMENT.md](./docs/DEPLOYMENT.md)

The `docker-compose.yml` at the repo root orchestrates everything. A MinIO MC helper container runs at startup to create the `archive-documents` bucket. The backend container depends on both `db` and `minio` being healthy.

Database migrations live in `backend/src/main/resources/db/migration/` (Flyway, SQL files named `V{n}__{description}.sql`).

## API Testing

HTTP test files are in `backend/api_tests/` for use with the VS Code REST Client extension.

## Dev Container

→ See [.devcontainer/README.md](./.devcontainer/README.md)

A `.devcontainer/` config is available (Java 21 + Node 24, ports 8080 and 3000 forwarded). Use VS Code's "Reopen in Container" for a pre-configured environment.
When in doubt, commit more often rather than less.

See [CODESTYLE.md](./CODESTYLE.md) for the full guide: Clean Code (Uncle Bob), DRY/KISS trade-offs, and SOLID principles applied to this stack.

For domain terminology (Person vs AppUser, DocumentStatus lifecycle, Chronik vs Aktivität, etc.) see [docs/GLOSSARY.md](./docs/GLOSSARY.md).

Quick reminders:
- Pure functions over stateful helpers where possible
- No premature abstractions — KISS beats DRY
---

# Contributing to Familienarchiv

For the full collaboration rules (issue workflow, PR process, Red/Green TDD, commit conventions) see [COLLABORATING.md](./COLLABORATING.md).
For coding style see [CODESTYLE.md](./CODESTYLE.md).
For the system architecture see [docs/ARCHITECTURE.md](./docs/ARCHITECTURE.md) (introduced in DOC-2; until that PR merges, see [docs/architecture/c4-diagrams.md](./docs/architecture/c4-diagrams.md)).
For domain terminology see [docs/GLOSSARY.md](./docs/GLOSSARY.md).

---

## 1. Environment setup

**Prerequisites:** Java 21 (SDKMAN), Node 24 (nvm), Docker

**Activate SDKMAN and nvm before running `java`, `mvn`, `node`, or `npm`:**

```bash
source "$HOME/.sdkman/bin/sdkman-init.sh"
export NVM_DIR="$HOME/.nvm" && [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
```
---

## 2. Daily development workflow

**Startup order — services must start in this sequence:**

```bash
# 1. Start PostgreSQL and MinIO
docker compose up -d db minio

# 2. Start the backend (separate terminal)
cd backend && ./mvnw spring-boot:run

# 3. Start the frontend (separate terminal)
cd frontend && npm install && npm run dev
```

> `npm install` also wires up the Husky pre-commit hook via the `prepare` script.
> Run it before your first commit, or the hook will fail to execute.

> **Do not use `docker-compose.ci.yml` locally** — it disables the bind mounts that the dev workflow depends on.

**Regenerate TypeScript types after any backend API change:**

```bash
# Backend must be running with dev profile
cd frontend && npm run generate:api
```

> ⚠️ Forgetting this step is the most common cause of "where did my TypeScript type go?" — always regenerate after changing models or endpoints.

**Test commands:**

```bash
cd backend && ./mvnw test             # backend unit + slice tests
cd frontend && npm run test           # Vitest unit tests
cd frontend && npm run check          # svelte-check (type errors)
cd frontend && npx playwright test    # Playwright e2e tests
```

**Branch naming:** `<type>/<issue-number>-<short-description>`, e.g. `feat/398-contributing`

**Commits:** one logical change per commit; reference the Gitea issue:

```
feat(person): add aliases endpoint

Closes #42
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
```

### Test-type decision matrix

| What you're testing | Test type | Tool |
|---|---|---|
| Service business logic, calculations | Unit test | JUnit + `@ExtendWith(MockitoExtension.class)` |
| HTTP contract, request validation, error codes | Controller slice test | `@WebMvcTest` |
| Server `load` function | Vitest unit | Import directly, mock `fetch` |
| Shared UI component | Vitest browser-mode | `render()` + `getByRole()` |
| Full user-facing flow, navigation, forms | E2E | Playwright |
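The "Server `load` function" row can be illustrated with plain dependency injection: pass a stubbed `fetch` instead of the real one. A hypothetical sketch (`loadPerson` stands in for a real `+page.server.ts` load; in a Vitest file the stub would typically be a `vi.fn()`):

```typescript
// Minimal shape of the pieces a load function touches.
type Fetch = (url: string) => Promise<{ ok: boolean; status: number; json(): Promise<unknown> }>;

// Simplified load: fetch one person, fail loudly on non-2xx.
export async function loadPerson(id: string, fetch: Fetch) {
  const res = await fetch(`/api/persons/${id}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return { person: await res.json() };
}

// Unit test double: no network, a canned successful response.
export const stubFetch: Fetch = async () => ({
  ok: true,
  status: 200,
  json: async () => ({ id: '1', name: 'Ada' }),
});
```

Because the function receives `fetch` as an argument, no module mocking is needed; the unhappy path is tested the same way with an `ok: false` stub.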
---

## 3. Walkthrough A — Add a new domain

**Example:** adding a `citation` domain (formal references to documents).

Both the backend and frontend are organised **domain-first**. A new domain means adding a package on both sides under the same name.

### Backend

1. Create `backend/src/main/java/org/raddatz/familienarchiv/citation/`

2. Add entity, repository, service, controller, and DTOs flat in the package:
   - **Entity** `Citation.java` — annotate with `@Entity @Data @Builder @NoArgsConstructor @AllArgsConstructor`; use `@GeneratedValue(strategy = GenerationType.UUID)` for the `id` field; add `@Schema(requiredMode = REQUIRED)` on every field the backend always populates
   - **Repository** `CitationRepository.java` — extends `JpaRepository<Citation, UUID>`
   - **Service** `CitationService.java` — `@Service @RequiredArgsConstructor`; write methods `@Transactional`, read methods unannotated; cross-domain data goes through the other domain's service, never its repository
   - **Controller** `CitationController.java` — `@RestController @RequestMapping("/api/citations")`

3. Add `@RequirePermission(Permission.WRITE_ALL)` on every `POST`, `PUT`, `PATCH`, and `DELETE` endpoint — **this is not optional**. Read-only `GET` endpoints stay unannotated.

4. Add a Flyway migration: `backend/src/main/resources/db/migration/V{n}__{description}.sql` (use the next sequential number after the highest existing one).

5. **Write failing tests before any implementation** (Red step):
   - Service unit test for business logic (`@ExtendWith(MockitoExtension.class)`)
   - `@WebMvcTest` slice test for each HTTP endpoint

6. Rebuild with `--spring.profiles.active=dev` and run `npm run generate:api` in `frontend/`.

### Frontend

7. Create `frontend/src/lib/citation/` — domain-specific Svelte components and TypeScript utilities go here.

8. Add routes under `frontend/src/routes/citations/` as needed.

9. Add a per-domain `README.md` in both the backend package folder and `frontend/src/lib/citation/` (per DOC-6).

### Documentation

10. Update `docs/ARCHITECTURE.md` Section 2 to include the new domain.
11. Update `docs/GLOSSARY.md` if new terms are introduced.
12. Update the ESLint boundary allow-list in `frontend/eslint.config.js` if the domain needs to import from another domain.
---

## 4. Walkthrough B — Add a new endpoint

**Example:** `POST /api/persons/{id}/aliases` — attach a name alias to an existing person.

### Red (write failing tests first)

1. Write a failing `@WebMvcTest` controller slice test:
   ```java
   @Test
   void addAlias_returns201_whenAliasCreated() { ... }
   ```

2. Write a failing service unit test:
   ```java
   @Test
   void addAlias_throwsNotFound_whenPersonDoesNotExist() { ... }
   ```

### Green (implement)

3. Add the service method in `PersonService.java`:
   ```java
   @Transactional
   public PersonNameAlias addAlias(UUID personId, PersonNameAliasDTO dto) { ... }
   ```

4. Add the controller method in `PersonController.java`:
   ```java
   @PostMapping("/{id}/aliases")
   @RequirePermission(Permission.WRITE_ALL)
   public ResponseEntity<PersonNameAlias> addAlias(@PathVariable UUID id,
                                                   @RequestBody PersonNameAliasDTO dto) { ... }
   ```
   `@RequirePermission(Permission.WRITE_ALL)` on every state-mutating endpoint — **not optional**.

5. Validate user-supplied inputs at the controller boundary:
   ```java
   if (dto.name() == null || dto.name().isBlank())
       throw new ResponseStatusException(HttpStatus.BAD_REQUEST, "name is required");
   ```
   Validate at system boundaries; trust internal service code.

6. Use `DomainException` for domain errors:
   ```java
   DomainException.notFound(ErrorCode.PERSON_NOT_FOUND, "Person not found: " + id)
   ```
   If you need a new error code, add it to `ErrorCode.java`, mirror it in `frontend/src/lib/shared/errors.ts`, and add translation keys in `messages/{de,en,es}.json`.

7. Mark every field the backend always populates with `@Schema(requiredMode = REQUIRED)` — this drives TypeScript type generation.

### Types and tests

8. Rebuild with `--spring.profiles.active=dev`, then `npm run generate:api` in `frontend/`.

> ⚠️ **Always regenerate types after any API change.** This is the #1 cause of "where did my TypeScript type go?"

9. Run the full test suite — all green before committing.
---

## 5. Walkthrough C — Add a new frontend page

**Example:** `/persons/[id]/timeline` — a chronological event timeline for one person.

### Red (write failing test first)

1. Write a failing Playwright E2E test for the user flow:
   ```typescript
   test('timeline shows events in chronological order', async ({ page }) => {
     await page.goto('/persons/1/timeline');
     // assertions...
   });
   ```

### Green (implement)

2. Create `frontend/src/routes/persons/[id]/timeline/+page.svelte`

3. Add `frontend/src/routes/persons/[id]/timeline/+page.server.ts` for the SSR load:
   ```typescript
   import { createApiClient } from '$lib/shared/api.server';
   export const load: PageServerLoad = async ({ params, fetch }) => {
     const api = createApiClient(fetch);
     const result = await api.GET('/api/persons/{id}', { params: { path: { id: params.id } } });
     if (!result.response.ok) throw error(result.response.status, '...');
     return { person: result.data! };
   };
   ```

4. Domain-specific components (e.g. `TimelineEntry.svelte`) → `frontend/src/lib/person/`

5. Shared primitives (e.g. a generic date-range display) → `frontend/src/lib/shared/primitives/`

6. UI patterns to follow:
   - Back navigation: `import BackButton from '$lib/shared/primitives/BackButton.svelte'`
   - Date display: always append `T12:00:00` — `new Intl.DateTimeFormat('de-DE', …).format(new Date(val + 'T12:00:00'))` — prevents UTC off-by-one errors
   - Brand colors: `brand-navy`, `brand-mint`, `brand-sand` (defined in `src/routes/layout.css`)
   - Accessibility: touch targets ≥ 44 px (`min-h-[44px]`); focus rings (`focus-visible:ring-2 focus-visible:ring-brand-navy`); `aria-label` on icon-only buttons; `aria-live="polite"` on dynamic status messages

7. Add Paraglide i18n keys in `messages/de.json`, `messages/en.json`, `messages/es.json`.

8. If adding a new error code: mirror in `frontend/src/lib/shared/errors.ts` and add translation keys.

9. Make all tests green before committing.
---

## 6. Conventions reference

### Error handling

| Scenario | Pattern |
|---|---|
| Domain entity not found | `DomainException.notFound(ErrorCode.X, "…")` |
| Permission denied | `DomainException.forbidden("…")` |
| Concurrent edit conflict | `DomainException.conflict(ErrorCode.X, "…")` |
| Infrastructure failure | `DomainException.internal(ErrorCode.X, "…")` |
| Simple controller validation | `throw new ResponseStatusException(HttpStatus.BAD_REQUEST, "…")` |

New error code: `ErrorCode.java` → `frontend/src/lib/shared/errors.ts` → `messages/{de,en,es}.json`.

### DTOs

- Input DTOs live flat in the domain package (e.g. `PersonUpdateDTO.java`)
- Responses are the entity itself — no separate response DTOs
- `@Schema(requiredMode = REQUIRED)` on every field the backend always populates

### Frontend API client

```typescript
const api = createApiClient(fetch); // from $lib/shared/api.server
const result = await api.GET('/api/persons/{id}', { params: { path: { id } } });
if (!result.response.ok) {
  const code = (result.error as unknown as { code?: string })?.code;
  throw error(result.response.status, getErrorMessage(code));
}
return { person: result.data! }; // non-null assertion is safe after the ok check
```

For multipart/form-data (file uploads): bypass the typed client and use raw `fetch` — the client cannot handle it.

### Date handling

| Context | Pattern |
|---|---|
| Form display | German `dd.mm.yyyy` with auto-dot insertion via `handleDateInput()` |
| Wire format | ISO 8601 via a hidden `<input type="hidden" name="documentDate" value={dateIso}>` |
| Display | `new Intl.DateTimeFormat('de-DE', …).format(new Date(val + 'T12:00:00'))` |

### Security checklist (new endpoint)

- `@RequirePermission(Permission.WRITE_ALL)` on every `POST`, `PUT`, `PATCH`, `DELETE` — required, not optional
- Validate all user-supplied inputs at the controller boundary before passing to the service
- Parameterised queries only — never interpolate user input into JPQL/SQL strings
- No raw user input in log messages — use `{}` placeholders: `log.warn("Not found: {}", id)`
- Validate content-type and size on upload endpoints before reading the stream

### Accessibility baseline (new frontend page)

- Touch targets ≥ 44 px on all interactive elements (`min-h-[44px]`)
- Focus rings on all focusable elements (`focus-visible:ring-2 focus-visible:ring-brand-navy`)
- `aria-label` on every icon-only button
- `aria-live="polite"` on dynamic status messages
- Color is never the sole status indicator

Full WCAG 2.1 AA reference: [docs/STYLEGUIDE.md](./docs/STYLEGUIDE.md).

### Lint and format

```bash
# Frontend
cd frontend && npm run lint     # Prettier + ESLint check
cd frontend && npm run format   # Auto-fix formatting
cd frontend && npm run check    # svelte-check (type errors)

# Backend — no standalone lint tool; compilation and test runs catch style issues
cd backend && ./mvnw test                        # compile + test
cd backend && ./mvnw clean package -DskipTests   # compile-only check
```
---

Spring Boot 4.0 monolith serving the Familienarchiv REST API.

- **Server**: Jetty (not Tomcat — excluded in pom.xml)
- **Data**: PostgreSQL 16, JPA/Hibernate, Spring Data JPA
- **Migrations**: Flyway (SQL files in `src/main/resources/db/migration/`)
- **Security**: Spring Security, Spring Session JDBC, JWT tokens
- **File Storage**: MinIO via AWS SDK v2 (S3-compatible)
- **Spreadsheet Import**: Apache POI 5.5.0 (Excel/ODS)
- **API Docs**: SpringDoc OpenAPI 3.x (`/v3/api-docs` — dev profile only)
## Package Structure

<!-- TODO: rewrite post-REFACTOR-1 — see Epic 4 -->
Package-by-domain: each domain owns its controller, service, repository, entities, and DTOs.

```
src/main/java/org/raddatz/familienarchiv/
├── ...      # (other domain packages)
└── user/    # User domain — AppUser, UserGroup, UserService, auth controllers
```

For per-domain ownership and public surface, see each domain's `README.md`.

## Layering Rules

```
Controller → Service → Repository → DB
```
→ See [docs/ARCHITECTURE.md §Layering rule](../docs/ARCHITECTURE.md#layering-rule)

**LLM reminder:** controllers never call repositories directly; services never reach into another domain's repository — always call the other domain's service.

- ✅ `DocumentService` → `PersonService.getById()` → `PersonRepository`
- ❌ `DocumentService` → `PersonRepository` directly
## Key Entities

| Entity | Table | Key Relationships |
|---|---|---|
| `Document` | `documents` | ManyToOne sender (Person), ManyToMany receivers (Person), ManyToMany tags (Tag) |
| `Person` | `persons` | Referenced by documents as sender/receiver; name aliases table |
| `Tag` | `tag` | ManyToMany with documents via `document_tags`; self-referencing parent for tree |
| `AppUser` | `app_users` | ManyToMany groups (UserGroup) |
| `UserGroup` | `user_groups` | Has a `Set<String> permissions` |
| `TranscriptionBlock` | `transcription_blocks` | Per-document, per-page text blocks with polygons |
| `DocumentAnnotation` | `document_annotations` | Free-form annotations on document pages |
| `Comment` | `document_comments` | Threaded comments with mentions |
| `Notification` | `notifications` | User notification feed |
| `OcrJob` / `OcrJobDocument` | `ocr_jobs`, `ocr_job_documents` | Batch OCR job tracking |

**`DocumentStatus` lifecycle:** `PLACEHOLDER → UPLOADED → TRANSCRIBED → REVIEWED → ARCHIVED`
## Error Handling

→ See [CONTRIBUTING.md §Error handling](../CONTRIBUTING.md#error-handling)

Use `DomainException` for all domain errors:

```java
DomainException.notFound(ErrorCode.DOCUMENT_NOT_FOUND, "...")
DomainException.forbidden("...")
DomainException.conflict(ErrorCode.IMPORT_ALREADY_RUNNING, "...")
DomainException.internal(ErrorCode.FILE_UPLOAD_FAILED, "...")
```

**LLM reminder:** use `DomainException.notFound/forbidden/conflict/internal()` — never throw raw exceptions from service methods. For simple controller validation (not domain logic), `ResponseStatusException` is acceptable: `throw new ResponseStatusException(HttpStatus.BAD_REQUEST, "…")`. When adding a new `ErrorCode`: add to `ErrorCode.java`, mirror in `frontend/src/lib/shared/errors.ts`, add i18n keys in `messages/{de,en,es}.json`.
## Security / Permissions

→ See [docs/ARCHITECTURE.md §Permission system](../docs/ARCHITECTURE.md#permission-system)

Use `@RequirePermission` on controller methods or classes:

```java
@RequirePermission(Permission.WRITE_ALL)
public Document updateDocument(...) { ... }
```

**LLM reminder:** `@RequirePermission(Permission.WRITE_ALL)` is **required** on every `POST`, `PUT`, `PATCH`, `DELETE` endpoint — not optional. Do not mix with Spring Security's `@PreAuthorize`. Available permissions: `READ_ALL`, `WRITE_ALL`, `ADMIN`, `ADMIN_USER`, `ADMIN_TAG`, `ADMIN_PERMISSION`, `ANNOTATE_ALL`, `BLOG_WRITE`.

`PermissionAspect` checks the current user's `UserGroup.permissions` at runtime.
## OCR Integration

The backend orchestrates OCR by calling the Python `ocr-service` microservice:

- `OcrBatchService` — handles batch/job workflows
- `OcrAsyncRunner` — async execution of OCR jobs

For ocr-service internals, see [`ocr-service/README.md`](../ocr-service/README.md).
## API Testing

HTTP test files in `backend/api_tests/` for the VS Code REST Client extension.
## How to Run

### Local Development

```bash
cd backend

./mvnw spring-boot:run             # Run with dev profile (requires PostgreSQL + MinIO via docker-compose)
./mvnw clean package               # Build JAR (with tests)
./mvnw clean package -DskipTests   # Build JAR skipping tests
./mvnw test                        # Run all tests
./mvnw test -Dtest=ClassName       # Run a single test class
./mvnw clean verify                # Run with JaCoCo coverage report
```
### OpenAPI TypeScript Generation

1. Build and start the backend with `--spring.profiles.active=dev`
2. In `frontend/`, run: `npm run generate:api`

**LLM reminder:** always regenerate types after any model or endpoint change — the most common cause of "where did my TypeScript type go?"
### Profiles

- **dev** (default): Enables OpenAPI, dev configs, e2e seeds
- **prod**: Production profile — no dev endpoints

## Testing

- Unit tests: Mockito + JUnit, pure in-memory
- Slice tests: `@WebMvcTest`, `@DataJpaTest` with Testcontainers PostgreSQL
- Integration tests: Full Spring context with Testcontainers
- Coverage gate: 88% branch coverage overall (JaCoCo)
---

Backend `pom.xml` (excerpts):

```xml
<properties>
    <java.version>21</java.version>
</properties>

<dependencyManagement>
    <dependencies>
        <!-- opentelemetry-spring-boot-starter:2.27.0 was built against opentelemetry-api:1.61.0,
             but Spring Boot 4.0.0 BOM only manages 1.55.0 (missing GlobalOpenTelemetry.getOrNoop()).
             Import the core OTel BOM here to override it before the Spring Boot BOM applies. -->
        <dependency>
            <groupId>io.opentelemetry</groupId>
            <artifactId>opentelemetry-bom</artifactId>
            <version>1.61.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <!-- groupId truncated in the source excerpt -->
        <artifactId>owasp-java-html-sanitizer</artifactId>
        <version>20240325.1</version>
    </dependency>

    <!-- HTML → plain-text extraction for comment previews -->
    <dependency>
        <groupId>org.jsoup</groupId>
        <artifactId>jsoup</artifactId>
        <version>1.18.1</version>
    </dependency>

    <!-- Observability: Prometheus metrics scrape endpoint (version managed by Spring Boot BOM) -->
    <dependency>
        <groupId>io.micrometer</groupId>
        <artifactId>micrometer-registry-prometheus</artifactId>
    </dependency>

    <!-- Observability: Micrometer → OpenTelemetry tracing bridge (version managed by Spring Boot BOM) -->
    <dependency>
        <groupId>io.micrometer</groupId>
        <artifactId>micrometer-tracing-bridge-otel</artifactId>
    </dependency>

    <!-- Observability: OTel Spring Boot auto-instrumentation — NOT in Spring Boot BOM, pinned explicitly -->
    <dependency>
        <groupId>io.opentelemetry.instrumentation</groupId>
        <artifactId>opentelemetry-spring-boot-starter</artifactId>
        <version>2.27.0</version>
        <exclusions>
            <!-- Excludes AzureAppServiceResourceProvider which references ServiceAttributes.SERVICE_INSTANCE_ID
                 that does not exist in the semconv version pulled by this project. -->
            <exclusion>
                <groupId>io.opentelemetry.contrib</groupId>
                <artifactId>opentelemetry-azure-resources</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    <!-- Sentry error reporting (GlitchTip-compatible) — sentry-spring-boot-4 is the
         Spring Boot 4 / Spring Framework 7 compatible module (replaces the jakarta starter
         which crashes with SF7 due to bean-name generation for triply-nested @Import classes) -->
    <dependency>
        <groupId>io.sentry</groupId>
        <artifactId>sentry-spring-boot-4</artifactId>
        <version>8.41.0</version>
    </dependency>
</dependencies>
```

Surefire is configured with generous timeouts for the Testcontainers-backed suites:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
        <systemPropertyVariables>
            <junit.jupiter.execution.timeout.default>90 s</junit.jupiter.execution.timeout.default>
        </systemPropertyVariables>
    </configuration>
</plugin>
```
@@ -1,37 +0,0 @@
# audit

Append-only event store for all domain mutations. Every write across the application produces an `audit_log` row. The activity feed and Family Pulse dashboard aggregate from this table.

## What this domain owns

Table: `audit_log` (append-only by convention — no UPDATE or DELETE in application code).
Features: log mutations, query activity feed, query per-entity history.

**Admission criteria (why this is cross-cutting, not a Tier-1 domain):** consumed by 5+ domains; has no user-facing CRUD of its own; the data model is fixed (event log, not a business entity).

## What this domain does NOT own

Nothing beyond the log table. `audit/` is an infrastructure layer, not a business domain.

## Public surface (called from other domains)

| Method | Consumer | Purpose |
|---|---|---|
| `logAfterCommit(event)` | document, person, user, ocr, geschichte | Record a mutation event after the DB transaction commits |

`logAfterCommit` is the only write path. Query paths (`AuditLogQueryService`) are consumed by `dashboard/` and the activity feed route.

## Internal layout

- `AuditService` — `logAfterCommit()` (write)
- `AuditLogQueryService` — query by entity, by user, for the activity feed
- `AuditLog` (entity) → table `audit_log`
- `AuditLogRepository`

## Cross-domain dependencies

None. `audit/` is consumed by other domains; it does not call out to any of them.

## Frontend counterpart

No direct frontend counterpart. Audit data surfaces in the `activity/` and `conversation/` frontend domains via the dashboard API.
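
The after-commit contract described above can be sketched framework-free. This is an illustrative stand-in for the real Spring transaction synchronization — the `AuditBuffer` name and the plain-`String` events are hypothetical, not the project's types:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the logAfterCommit contract: events queued during a
// transaction reach the append-only log only if the transaction commits.
final class AuditBuffer {
    private final List<String> pending = new ArrayList<>();
    private final List<String> log = new ArrayList<>(); // stands in for audit_log

    void logAfterCommit(String event) { pending.add(event); } // queued, not yet written
    void commit() { log.addAll(pending); pending.clear(); }   // flush on commit
    void rollback() { pending.clear(); }                      // rolled-back writes never appear
    List<String> log() { return List.copyOf(log); }
}
```

In the real `AuditService` the flush is driven by the JPA transaction lifecycle; the invariant to preserve is that a rolled-back mutation never produces an `audit_log` row.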
@@ -29,11 +29,5 @@ public record ActivityFeedItemDTO(
                requiredMode = Schema.RequiredMode.NOT_REQUIRED,
                description = "Annotation associated with the comment; populated only for COMMENT_ADDED and MENTION_CREATED kinds."
        )
        UUID annotationId,
        @Nullable
        @Schema(
                requiredMode = Schema.RequiredMode.NOT_REQUIRED,
                description = "Plain-text preview of the comment body (HTML stripped server-side, truncated to 120 chars); null for non-comment feed items or deleted comments."
        )
        String commentPreview
        UUID annotationId
) {}

@@ -12,7 +12,6 @@ import org.raddatz.familienarchiv.document.Document;
import org.raddatz.familienarchiv.person.Person;
import org.raddatz.familienarchiv.document.transcription.TranscriptionBlock;
import org.raddatz.familienarchiv.document.comment.CommentService;
import org.raddatz.familienarchiv.document.comment.CommentData;
import org.raddatz.familienarchiv.document.DocumentService;
import org.raddatz.familienarchiv.document.transcription.TranscriptionService;
import org.raddatz.familienarchiv.user.UserService;
@@ -134,9 +133,9 @@ public class DashboardService {
                .filter(Objects::nonNull)
                .distinct()
                .toList();
        Map<UUID, CommentData> commentDataByComment = commentIds.isEmpty()
        Map<UUID, UUID> annotationByComment = commentIds.isEmpty()
                ? Map.of()
                : commentService.findDataByIds(commentIds);
                : commentService.findAnnotationIdsByIds(commentIds);

        return rows.stream().map(row -> {
            ActivityActorDTO actor = row.getActorId() != null
@@ -147,10 +146,7 @@ public class DashboardService {
                    ? row.getHappenedAtUntil().atOffset(ZoneOffset.UTC)
                    : null;
            UUID commentId = row.getCommentId();
            CommentData commentData = commentId != null ? commentDataByComment.get(commentId) : null;
            UUID annotationId = commentData != null ? commentData.annotationId() : null;
            String commentPreview = commentData != null && !commentData.preview().isBlank()
                    ? commentData.preview() : null;
            UUID annotationId = commentId != null ? annotationByComment.get(commentId) : null;
            return new ActivityFeedItemDTO(
                    org.raddatz.familienarchiv.audit.AuditKind.valueOf(row.getKind()),
                    actor,
@@ -162,8 +158,7 @@ public class DashboardService {
                    row.getCount(),
                    happenedAtUntil,
                    commentId,
                    annotationId,
                    commentPreview
                    annotationId
            );
        }).toList();
    }

@@ -1,39 +0,0 @@
# dashboard

Stats aggregation for the admin dashboard and the Family Pulse widget. This is a derived domain — it has no tables of its own; all data is computed on the fly from Tier-1 domain data.

## What this domain owns

No entities. Routes: `/api/dashboard/*`, `/api/stats/*`.
Features: document counts, person counts, publication stats, weekly activity data, incomplete-document list, enrichment queue, Family Pulse widget data, admin statistics.

**Admission criteria (cross-cutting):** aggregates from 3+ domains; no owned entities.

## What this domain does NOT own

None of the underlying data — it reads from `document/`, `person/`, `audit/`, `notification/`, `geschichte/`.

## Public surface

`dashboard/` is a leaf domain — no other domain calls its services. It is the aggregator, not the aggregated.

## Internal layout

- `StatsController` — REST under `/api/stats`
- `DashboardController` — REST under `/api/dashboard`
- `StatsService` — aggregated counts (documents, persons, geschichten, incomplete, etc.)
- `DashboardService` — activity feed composition, Family Pulse data

## Cross-domain dependencies

- `DocumentService.count()` — total document count (StatsService)
- `DocumentService.getDocumentById(UUID)` / `getDocumentsByIds(List<UUID>)` — document enrichment for activity feed (DashboardService)
- `PersonService.count()` — total person count (StatsService)
- `TranscriptionService.listBlocks(UUID)` — transcription block lookup for resume widget (DashboardService)
- `UserService.getById(UUID)` — actor name resolution in activity feed (DashboardService)
- `CommentService.findAnnotationIdsByIds(...)` — annotation context lookup for activity feed (DashboardService)
- `AuditLogQueryService.findMostRecentDocumentForUser()` / `getPulseStats()` / `findActivityFeed()` — audit-sourced feed rows (DashboardService)

## Frontend counterpart

Activity feed and Pulse widget are assembled in `frontend/src/lib/shared/dashboard/` and in the `aktivitaeten` route; no dedicated `dashboard/` lib folder.
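
The derived-domain shape described above — no owned tables, every number computed from services other domains expose — reduces to composing count suppliers. A minimal sketch with hypothetical names (`Stats`, `StatsAggregator`); the real `StatsService` injects the domain services instead:

```java
import java.util.function.LongSupplier;

// Hypothetical sketch: the aggregator owns no data, it only composes counts
// exposed by the Tier-1 domains it reads from.
record Stats(long persons, long documents) {}

final class StatsAggregator {
    private final LongSupplier personCount;   // stands in for PersonService.count()
    private final LongSupplier documentCount; // stands in for DocumentService.count()

    StatsAggregator(LongSupplier personCount, LongSupplier documentCount) {
        this.personCount = personCount;
        this.documentCount = documentCount;
    }

    Stats stats() {
        return new Stats(personCount.getAsLong(), documentCount.getAsLong());
    }
}
```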
@@ -1,12 +1,7 @@
package org.raddatz.familienarchiv.dashboard;

import io.swagger.v3.oas.annotations.media.Schema;

/**
 * Aggregate counts for the dashboard/persons stats bar.
 */
public record StatsDTO(
        @Schema(requiredMode = Schema.RequiredMode.REQUIRED) long totalPersons,
        @Schema(requiredMode = Schema.RequiredMode.REQUIRED) long totalDocuments,
        @Schema(requiredMode = Schema.RequiredMode.REQUIRED) long totalStories) {
public record StatsDTO(long totalPersons, long totalDocuments) {
}

@@ -2,7 +2,6 @@ package org.raddatz.familienarchiv.dashboard;

import lombok.RequiredArgsConstructor;
import org.raddatz.familienarchiv.document.DocumentService;
import org.raddatz.familienarchiv.geschichte.GeschichteService;
import org.raddatz.familienarchiv.person.PersonService;
import org.raddatz.familienarchiv.dashboard.StatsDTO;
import org.springframework.stereotype.Service;
@@ -13,9 +12,8 @@ public class StatsService {

    private final PersonService personService;
    private final DocumentService documentService;
    private final GeschichteService geschichteService;

    public StatsDTO getStats() {
        return new StatsDTO(personService.count(), documentService.count(), geschichteService.countPublished());
        return new StatsDTO(personService.count(), documentService.count());
    }
}

@@ -1,23 +0,0 @@
package org.raddatz.familienarchiv.document;

import org.raddatz.familienarchiv.tag.TagOperator;

import java.util.List;
import java.util.UUID;

/**
 * The non-date filters honoured by {@link DocumentService#getDensity(DensityFilters)}.
 * Date bounds (from/to) are deliberately excluded — see the service Javadoc for why.
 *
 * Kept as a record so the seven values are passed as one named bundle instead of a
 * positional argument list where two UUIDs (sender vs. receiver) can be swapped by
 * accident at the call site.
 */
public record DensityFilters(
        String text,
        UUID sender,
        UUID receiver,
        List<String> tags,
        String tagQ,
        DocumentStatus status,
        TagOperator tagOperator) {}
@@ -3,7 +3,6 @@ package org.raddatz.familienarchiv.document;
import java.io.IOException;
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.concurrent.TimeUnit;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
@@ -49,7 +48,6 @@ import org.raddatz.familienarchiv.filestorage.FileService;
import org.raddatz.familienarchiv.user.UserService;
import org.springframework.data.domain.Sort;
import org.springframework.security.core.Authentication;
import org.springframework.http.CacheControl;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
@@ -390,23 +388,6 @@ public class DocumentController {
        return ResponseEntity.ok(documentService.searchDocuments(q, from, to, senderId, receiverId, tags, tagQ, status, sort, dir, operator, pageable));
    }

    @GetMapping(value = "/density", produces = MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity<DocumentDensityResult> density(
            @RequestParam(required = false) String q,
            @RequestParam(required = false) UUID senderId,
            @RequestParam(required = false) UUID receiverId,
            @RequestParam(required = false, name = "tag") List<String> tags,
            @RequestParam(required = false) String tagQ,
            @Parameter(description = "Filter by document status") @RequestParam(required = false) DocumentStatus status,
            @Parameter(description = "Tag operator: AND (default) or OR") @RequestParam(required = false) String tagOp) {
        TagOperator operator = "OR".equalsIgnoreCase(tagOp) ? TagOperator.OR : TagOperator.AND;
        DocumentDensityResult result = documentService.getDensity(
                new DensityFilters(q, senderId, receiverId, tags, tagQ, status, operator));
        return ResponseEntity.ok()
                .cacheControl(CacheControl.maxAge(5, TimeUnit.MINUTES).cachePrivate())
                .body(result);
    }

    // --- TRAINING LABELS ---

    public record TrainingLabelRequest(String label, boolean enrolled) {}

@@ -1,27 +0,0 @@
package org.raddatz.familienarchiv.document;

import io.swagger.v3.oas.annotations.media.Schema;

import java.time.LocalDate;
import java.util.List;

/**
 * Result of the timeline density aggregation.
 *
 * <p>{@code minDate} / {@code maxDate} are intentionally not marked
 * {@code @Schema(requiredMode = REQUIRED)} — the empty-result case (no
 * documents match the filter) returns them as {@code null}, which surfaces in
 * the generated TypeScript as {@code minDate?: string | null}. Frontend code
 * must treat them as optional.
 */
public record DocumentDensityResult(
        @Schema(requiredMode = Schema.RequiredMode.REQUIRED)
        List<MonthBucket> buckets,
        LocalDate minDate,
        LocalDate maxDate
) {
    /** The "no documents match the filter" result, with no buckets and null date bounds. */
    public static DocumentDensityResult empty() {
        return new DocumentDensityResult(List.of(), null, null);
    }
}
@@ -100,45 +100,7 @@ public interface DocumentRepository extends JpaRepository<Document, UUID>, JpaSp
            ORDER BY ts_rank(d.search_vector, q.pq) DESC,
                     d.meta_date DESC NULLS LAST
            """)
    // Unpaged path — for bulk-edit "select all" and density chart
    List<UUID> findAllMatchingIdsByFts(@Param("query") String query);

    /**
     * Returns one page of FTS-ranked document IDs with the total match count.
     *
     * <p>Each row contains (in column order):
     * <ol>
     *   <li>UUID — document id</li>
     *   <li>double — ts_rank score</li>
     *   <li>long — COUNT(*) OVER () — full match count, not page count</li>
     * </ol>
     *
     * <p>Returns an empty list when the query matches no documents (including
     * stopword-only queries where websearch_to_tsquery returns an empty tsquery).
     * Use findAllMatchingIdsByFts for the unpaged bulk-edit path.
     */
    @Query(nativeQuery = true, value = """
            WITH q AS (
                SELECT CASE WHEN websearch_to_tsquery('german', :query)::text <> ''
                            THEN to_tsquery('simple', regexp_replace(
                                     websearch_to_tsquery('german', :query)::text,
                                     '''([^'']+)''',
                                     '''\\1'':*',
                                     'g'))
                       END AS pq
            ), matches AS (
                SELECT d.id, ts_rank(d.search_vector, q.pq) AS rank
                FROM documents d, q
                WHERE d.search_vector @@ q.pq
            )
            SELECT id, rank, COUNT(*) OVER () AS total
            FROM matches
            ORDER BY rank DESC, id
            OFFSET :offset LIMIT :limit
            """)
    List<Object[]> findFtsPageRaw(@Param("query") String query,
                                  @Param("offset") int offset,
                                  @Param("limit") int limit);
    List<UUID> findRankedIdsByFts(@Param("query") String query);
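
The `regexp_replace` in the native query above turns each quoted lexeme of the `websearch_to_tsquery` output into a prefix match (`'foo'` → `'foo':*`). The same rewrite in plain Java, purely for illustration — Postgres applies it server-side, and the class name here is hypothetical:

```java
// Illustrative mirror of the SQL rewrite: every quoted lexeme in the tsquery
// text gets a ':*' prefix-match suffix; operators (&, |, !) pass through.
final class TsQueryPrefix {
    static String toPrefixQuery(String tsqueryText) {
        return tsqueryText.replaceAll("'([^']+)'", "'$1':*");
    }
}
```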

    /**
     * Returns match-enrichment data for a set of documents identified by their IDs.

@@ -48,7 +48,6 @@ import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.time.LocalDate;
import java.time.YearMonth;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
@@ -126,74 +125,6 @@ public class DocumentService {
        return titles;
    }

    /**
     * Per-month document counts for the timeline density widget (issue #385).
     *
     * <p>Filter-reactive: the chart recomputes when other filters (sender,
     * receiver, tag, q, status) change so it always matches the list it sits
     * above. Date bounds (`from`/`to`) are deliberately omitted — the chart is
     * the surface for picking those, so it must always span the broader space
     * the user is selecting within.
     *
     * <p>Implementation note: groups in memory rather than via SQL GROUP BY
     * because the existing {@link Specification} predicates compose easily
     * with {@code findAll(spec)} and the archive size (≈5k docs) keeps this
     * well under the 200ms p95 target. Cache-Control: max-age=300 on the
     * controller layer absorbs repeated browse loads.
     *
     * <p>Tracked in issue #481 for re-evaluation when {@code documents > 50k}
     * — at that scale move the aggregation into SQL (GROUP BY TO_CHAR(meta_date,
     * 'YYYY-MM')) and accept that the criteria/specification surface needs a
     * parallel native-query path.
     */
    public DocumentDensityResult getDensity(DensityFilters filters) {
        List<UUID> ftsIds = resolveFtsIds(filters.text());
        if (ftsIds != null && ftsIds.isEmpty()) {
            return DocumentDensityResult.empty();
        }
        List<LocalDate> dates = loadFilteredDates(filters, ftsIds);
        return aggregateByMonth(dates);
    }

    /**
     * Returns the FTS-ranked document IDs when {@code text} is non-blank, or {@code null}
     * when no full-text query is active. An empty list means the FTS query ran but
     * matched zero documents — the caller short-circuits on that signal.
     */
    private List<UUID> resolveFtsIds(String text) {
        if (!StringUtils.hasText(text)) return null;
        return documentRepository.findAllMatchingIdsByFts(text);
    }

    /** Loads matching documents and projects to non-null {@link LocalDate}s. */
    private List<LocalDate> loadFilteredDates(DensityFilters filters, List<UUID> ftsIds) {
        boolean hasFts = ftsIds != null;
        Specification<Document> spec = buildSearchSpec(
                hasFts, ftsIds, null, null,
                filters.sender(), filters.receiver(),
                filters.tags(), filters.tagQ(),
                filters.status(), filters.tagOperator());
        return documentRepository.findAll(spec).stream()
                .map(Document::getDocumentDate)
                .filter(Objects::nonNull)
                .toList();
    }

    /** Buckets {@code dates} into one {@link MonthBucket} per YYYY-MM and computes min/max. */
    private DocumentDensityResult aggregateByMonth(List<LocalDate> dates) {
        if (dates.isEmpty()) return DocumentDensityResult.empty();
        Map<String, Integer> counts = new java.util.TreeMap<>();
        for (LocalDate d : dates) {
            counts.merge(YearMonth.from(d).toString(), 1, Integer::sum);
        }
        List<MonthBucket> buckets = counts.entrySet().stream()
                .map(e -> new MonthBucket(e.getKey(), e.getValue()))
                .toList();
        LocalDate minDate = dates.stream().min(LocalDate::compareTo).orElse(null);
        LocalDate maxDate = dates.stream().max(LocalDate::compareTo).orElse(null);
        return new DocumentDensityResult(buckets, minDate, maxDate);
    }

    /**
     * Uploads a file.
     * - Checks whether an entry (from Excel) already exists.
@@ -485,7 +416,7 @@ public class DocumentService {
        boolean hasText = StringUtils.hasText(text);
        List<UUID> rankedIds = null;
        if (hasText) {
            rankedIds = documentRepository.findAllMatchingIdsByFts(text);
            rankedIds = documentRepository.findRankedIdsByFts(text);
            if (rankedIds.isEmpty()) return List.of();
        }

@@ -645,43 +576,39 @@ public class DocumentService {
    // 1. General search (for the search field in the frontend)
    public DocumentSearchResult searchDocuments(String text, LocalDate from, LocalDate to, UUID sender, UUID receiver, List<String> tags, String tagQ, DocumentStatus status, DocumentSort sort, String dir, TagOperator tagOperator, Pageable pageable) {
        boolean hasText = StringUtils.hasText(text);

        // Pure-text RELEVANCE: push pagination into SQL — skip findAllMatchingIdsByFts entirely (ADR-008).
        if (isPureTextRelevance(hasText, sort, from, to, sender, receiver, tags, tagQ, status)) {
            return relevanceSortedPageFromSql(text, pageable);
        }

        List<UUID> rankedIds = null;

        if (hasText) {
            rankedIds = documentRepository.findAllMatchingIdsByFts(text);
            rankedIds = documentRepository.findRankedIdsByFts(text);
            if (rankedIds.isEmpty()) return DocumentSearchResult.of(List.of());
        }

        Specification<Document> spec = buildSearchSpec(
                hasText, rankedIds, from, to, sender, receiver, tags, tagQ, status, tagOperator);

        // SENDER and RECEIVER sorts load the full match set and slice in-memory.
        // SENDER, RECEIVER and RELEVANCE sorts load the full match set and slice in memory.
        // JPA's Sort.by("sender.lastName") generates an INNER JOIN that silently drops
        // documents with null sender/receivers. Cost scales with match count —
        // acceptable while documents stays under ~10k rows. (ADR-008)
        // documents with null sender/receivers; RELEVANCE maps a DB order to an external
        // rank list. Cost scales linearly with match count — acceptable while documents
        // stays under ~10k rows. Past that, replace with SQL-level LEFT JOIN sort.
        if (sort == DocumentSort.RECEIVER) {
            // In-memory sort on page slice (≤ page size rows) — acceptable
            List<Document> sorted = sortByFirstReceiver(documentRepository.findAll(spec), dir);
            return buildResultPaged(pageSlice(sorted, pageable), text, pageable, sorted.size());
        }
        if (sort == DocumentSort.SENDER) {
            // In-memory sort on page slice (≤ page size rows) — acceptable
            List<Document> sorted = sortBySender(documentRepository.findAll(spec), dir);
            return buildResultPaged(pageSlice(sorted, pageable), text, pageable, sorted.size());
        }

        // RELEVANCE with active filters: load filtered subset and sort in-memory by rank.
        // RELEVANCE: default when text present and no explicit sort given
        boolean useRankOrder = hasText && (sort == null || sort == DocumentSort.RELEVANCE);
        if (useRankOrder) {
            List<Document> results = documentRepository.findAll(spec);
            Map<UUID, Integer> rankMap = new HashMap<>();
            for (int i = 0; i < rankedIds.size(); i++) rankMap.put(rankedIds.get(i), i);
            List<Document> sorted = documentRepository.findAll(spec).stream()
                    .sorted(Comparator.comparingInt(doc -> rankMap.getOrDefault(doc.getId(), Integer.MAX_VALUE)))
            List<Document> sorted = results.stream()
                    .sorted(Comparator.comparingInt(
                            doc -> rankMap.getOrDefault(doc.getId(), Integer.MAX_VALUE)))
                    .toList();
            return buildResultPaged(pageSlice(sorted, pageable), text, pageable, sorted.size());
        }
@@ -692,39 +619,6 @@ public class DocumentService {
        return buildResultPaged(page.getContent(), text, pageable, page.getTotalElements());
    }

    private static boolean isPureTextRelevance(boolean hasText, DocumentSort sort,
            LocalDate from, LocalDate to, UUID sender, UUID receiver,
            List<String> tags, String tagQ, DocumentStatus status) {
        return hasText && (sort == null || sort == DocumentSort.RELEVANCE)
                && from == null && to == null && sender == null && receiver == null
                && (tags == null || tags.isEmpty()) && (tagQ == null || tagQ.isBlank()) && status == null;
    }

    /**
     * Pure-text RELEVANCE path — pagination and ts_rank ordering pushed into SQL.
     * Called when no non-text filters are active (ADR-008).
     */
    private DocumentSearchResult relevanceSortedPageFromSql(String text, Pageable pageable) {
        long rawOffset = pageable.getOffset();
        if (rawOffset > Integer.MAX_VALUE) return DocumentSearchResult.of(List.of());
        int offset = (int) rawOffset;
        int limit = pageable.getPageSize();
        FtsPage ftsPage = toFtsPage(documentRepository.findFtsPageRaw(text, offset, limit));
        if (ftsPage.hits().isEmpty()) return DocumentSearchResult.of(List.of());

        // Preserve ts_rank order from SQL across the JPA findAllById call.
        Map<UUID, Integer> rankMap = new HashMap<>();
        List<UUID> pageIds = new ArrayList<>();
        for (int i = 0; i < ftsPage.hits().size(); i++) {
            rankMap.put(ftsPage.hits().get(i).id(), i);
            pageIds.add(ftsPage.hits().get(i).id());
        }
        List<Document> docs = documentRepository.findAllById(pageIds).stream()
                .sorted(Comparator.comparingInt(d -> rankMap.getOrDefault(d.getId(), Integer.MAX_VALUE)))
                .toList();
        return buildResultPaged(docs, text, pageable, ftsPage.total());
    }

    private static <T> List<T> pageSlice(List<T> sorted, Pageable pageable) {
        int from = Math.min((int) pageable.getOffset(), sorted.size());
        int to = Math.min(from + pageable.getPageSize(), sorted.size());
@@ -764,7 +658,6 @@ public class DocumentService {
        return switch (sort) {
            case TITLE -> Sort.by(direction, "title");
            case UPLOAD_DATE -> Sort.by(direction, "createdAt");
            case UPDATED_AT -> Sort.by(direction, "updatedAt");
            default -> Sort.by(direction, "documentDate");
        };
    }
@@ -1050,28 +943,6 @@ public class DocumentService {
        return result;
    }

    private static final int COL_ID = 0;
    private static final int COL_RANK = 1;
    private static final int COL_TOTAL = 2;

    /**
     * Maps raw Object[] rows from {@link DocumentRepository#findFtsPageRaw} to an
     * {@link FtsPage}. Uses pattern-matching UUID cast to guard against driver-level
     * type variance (some JDBC drivers return UUID as String).
     */
    private static FtsPage toFtsPage(List<Object[]> rows) {
        if (rows.isEmpty()) return new FtsPage(List.of(), 0);
        long total = ((Number) rows.get(0)[COL_TOTAL]).longValue();
        List<FtsHit> hits = rows.stream()
                .map(r -> {
                    UUID id = r[COL_ID] instanceof UUID u ? u : UUID.fromString(r[COL_ID].toString());
                    double rank = ((Number) r[COL_RANK]).doubleValue();
                    return new FtsHit(id, rank);
                })
                .toList();
        return new FtsPage(hits, total);
    }

    /** Clean text + highlight offsets parsed from a {@code ts_headline} sentinel-delimited string. */
    public record ParsedHighlight(String cleanText, List<MatchOffset> offsets) {}
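
A minimal sketch of the parse `ParsedHighlight` implies — one pass over the marked string, emitting clean text plus the offsets of each highlighted span. The sentinel characters and the nested `Offset`/`Parsed` records here are assumptions for illustration, not the project's actual delimiters or types:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: ts_headline output with single-char sentinels around
// each match is folded into clean text plus (start, end) offsets into it.
final class HighlightParser {
    record Offset(int start, int end) {}
    record Parsed(String cleanText, List<Offset> offsets) {}

    static Parsed parse(String marked, char open, char close) {
        StringBuilder clean = new StringBuilder();
        List<Offset> offsets = new ArrayList<>();
        int spanStart = -1;
        for (int i = 0; i < marked.length(); i++) {
            char c = marked.charAt(i);
            if (c == open) {
                spanStart = clean.length();   // highlight starts here in the clean text
            } else if (c == close) {
                offsets.add(new Offset(spanStart, clean.length()));
            } else {
                clean.append(c);              // sentinels never reach the clean text
            }
        }
        return new Parsed(clean.toString(), List.copyOf(offsets));
    }
}
```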
|
||||
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
package org.raddatz.familienarchiv.document;
|
||||
|
||||
public enum DocumentSort {
|
||||
DATE, TITLE, SENDER, RECEIVER, UPLOAD_DATE, UPDATED_AT, RELEVANCE
|
||||
DATE, TITLE, SENDER, RECEIVER, UPLOAD_DATE, RELEVANCE
|
||||
}
|
||||
|
||||
@@ -1,6 +0,0 @@
|
||||
package org.raddatz.familienarchiv.document;
|
||||
|
||||
import java.util.UUID;
|
||||
|
||||
/** A single document hit from a paginated FTS query — id and its ts_rank score. */
|
||||
record FtsHit(UUID id, double rank) {}
|
||||
@@ -1,6 +0,0 @@
|
||||
package org.raddatz.familienarchiv.document;
|
||||
|
||||
import java.util.List;
|
||||
|
||||
/** One page of FTS results — the ranked hit list for this page and the total match count. */
|
||||
record FtsPage(List<FtsHit> hits, long total) {}
|
||||
@@ -1,10 +0,0 @@
|
||||
package org.raddatz.familienarchiv.document;
|
||||
|
||||
import io.swagger.v3.oas.annotations.media.Schema;
|
||||
|
||||
public record MonthBucket(
|
||||
@Schema(requiredMode = Schema.RequiredMode.REQUIRED, example = "1915-08")
|
||||
String month,
|
||||
@Schema(requiredMode = Schema.RequiredMode.REQUIRED)
|
||||
int count
|
||||
) {}
|
||||
@@ -1,50 +0,0 @@
|
||||
# document
|
||||
|
||||
The archive's core concept. A `Document` represents one physical artefact (a letter, a postcard, a photo) stored in MinIO and described by metadata.
|
||||
|
||||
## What this domain owns
|
||||
|
||||
Entities: `Document`, `DocumentVersion`, `TranscriptionBlock`, `DocumentAnnotation`, `DocumentComment`.
|
||||
Features: document CRUD, file upload/download, full-text search, bulk editing, transcription workflows, annotation canvas, threaded comments, thumbnail generation (PDFBox).
|
||||
|
||||
## What this domain does NOT own
|
||||
|
||||
- `Person` (sender / receivers) — referenced by ID, resolved via `PersonService`
|
||||
- `Tag` — referenced by ID; the join is on the document side but tags are owned by `tag/`
|
||||
- `AppUser` — comments reference `AppUser` IDs, but user management lives in `user/`
|
||||
- OCR processing — `ocr/` orchestrates jobs; `ocr-service/` executes them
|
||||
|
||||
## Public surface (called from other domains)
|
||||
|
||||
| Method | Consumer | Purpose |
|
||||
|---|---|---|
|
||||
| `getDocumentById(UUID)` | ocr, notification | Fetch a single document |
|
||||
| `getDocumentsByIds(List<UUID>)` | ocr | Bulk fetch for OCR job |
|
||||
| `findByOriginalFilename(String)` | importing | Deduplication during mass import |
|
||||
| `deleteTagCascading(UUID tagId)` | tag | Remove a tag from all documents before deleting it |
|
||||
| `findWeeklyStats()` | dashboard | Activity data for Family Pulse widget |
|
||||
| `count()` | dashboard | Total document count for stats |
|
||||
| `addTrainingLabel(...)` | ocr | Attach a confirmed sender label to a document |
|
||||
| `findSegmentationQueue(int limit)` / `findTranscriptionQueue(int limit)` / `findReadyToReadQueue(int limit)` | ocr | OCR pipeline queues |
|
||||
|
||||
## Internal layout
|
||||
|
||||
- `DocumentController` — REST under `/api/documents`
|
||||
- `DocumentService` — CRUD, search (JPA Specifications), bulk edit
|
||||
- `DocumentRepository` — includes bidirectional conversation-thread query
|
||||
- `DocumentSpecifications` — composable `Specification` predicates for search
|
||||
- `DocumentVersionService` / `DocumentVersionRepository` — append-only version history
|
||||
- `ThumbnailService` + `ThumbnailAsyncRunner` — PDFBox thumbnail generation (separate thread pool)
|
||||
- Sub-packages: `annotation/`, `comment/`, `transcription/`
|
||||
|
||||
## Cross-domain dependencies
|
||||
|
||||
- `PersonService.getById()` / `getAllById()` — resolve sender and receivers
|
||||
- `TagService.expandTagNamesToDescendantIdSets()` — tag filter expansion
|
||||
- `FileService.uploadFile()` / `downloadFile()` / `generatePresignedUrl()` — S3 I/O
|
||||
- `NotificationService.notifyMentions()` / `.notifyReply()` — comment mentions
|
||||
- `AuditService.logAfterCommit()` — every mutation is audited
|
||||
|
||||
## Frontend counterpart
|
||||
|
||||
`frontend/src/lib/document/README.md`
|
||||
@@ -27,9 +27,7 @@ public class CommentController {
|
||||
// ─── Block (transcription) comments ────────────────────────────────────────
|
||||
|
||||
@GetMapping("/api/documents/{documentId}/transcription-blocks/{blockId}/comments")
|
||||
public List<DocumentComment> getBlockComments(
|
||||
@PathVariable UUID documentId,
|
||||
@PathVariable UUID blockId) {
|
||||
public List<DocumentComment> getBlockComments(@PathVariable UUID blockId) {
|
||||
return commentService.getCommentsForBlock(blockId);
|
||||
}
|
||||
|
||||
@@ -50,7 +48,6 @@ public class CommentController {
|
||||
@RequirePermission({Permission.ANNOTATE_ALL, Permission.WRITE_ALL})
|
||||
public DocumentComment replyToBlockComment(
|
||||
@PathVariable UUID documentId,
|
||||
@PathVariable UUID blockId,
|
||||
@PathVariable UUID commentId,
|
||||
@RequestBody CreateCommentDTO dto,
|
||||
Authentication authentication) {
|
||||
|
||||
@@ -1,6 +0,0 @@
|
||||
package org.raddatz.familienarchiv.document.comment;
|
||||
|
||||
import jakarta.annotation.Nullable;
|
||||
import java.util.UUID;
|
||||
|
||||
public record CommentData(@Nullable UUID annotationId, String preview) {}
|
||||
@@ -13,7 +13,6 @@ import org.raddatz.familienarchiv.document.comment.DocumentComment;
 import org.raddatz.familienarchiv.document.transcription.TranscriptionBlock;
 import org.raddatz.familienarchiv.document.comment.CommentRepository;
 import org.raddatz.familienarchiv.notification.NotificationService;
-import org.jsoup.Jsoup;
 import org.springframework.stereotype.Service;
 import org.springframework.transaction.annotation.Transactional;

@@ -29,29 +28,21 @@ import java.util.UUID;
 @RequiredArgsConstructor
 public class CommentService {

-    private static final int PREVIEW_MAX_CHARS = 120;
-
     private final CommentRepository commentRepository;
     private final UserService userService;
     private final NotificationService notificationService;
     private final AuditService auditService;
     private final TranscriptionService transcriptionService;

-    public Map<UUID, CommentData> findDataByIds(Collection<UUID> commentIds) {
+    public Map<UUID, UUID> findAnnotationIdsByIds(Collection<UUID> commentIds) {
         if (commentIds == null || commentIds.isEmpty()) return Map.of();
-        Map<UUID, CommentData> result = new HashMap<>();
+        Map<UUID, UUID> result = new HashMap<>();
         for (DocumentComment c : commentRepository.findAllById(commentIds)) {
-            result.put(c.getId(), new CommentData(c.getAnnotationId(), stripAndTruncate(c.getContent())));
+            if (c.getAnnotationId() != null) result.put(c.getId(), c.getAnnotationId());
         }
         return result;
     }
-
-    private String stripAndTruncate(String html) {
-        if (html == null || html.isBlank()) return "";
-        String text = Jsoup.parse(html).text().trim();
-        return text.length() > PREVIEW_MAX_CHARS ? text.substring(0, PREVIEW_MAX_CHARS) : text;
-    }

     public List<DocumentComment> getCommentsForBlock(UUID blockId) {
         List<DocumentComment> roots = commentRepository.findByBlockIdAndParentIdIsNull(blockId);
         return withRepliesAndMentions(roots);
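The hunk above deletes `stripAndTruncate`, which leaned on Jsoup. For readers who want the shape of what was removed, a dependency-free sketch; the regex strip is a stand-in for `Jsoup.parse(html).text()`, not the original call, and `PreviewDemo` is not repo code:

```java
public class PreviewDemo {
    static final int PREVIEW_MAX_CHARS = 120;

    // Rough tag-stripping stand-in for Jsoup's text extraction — good enough
    // to illustrate the truncation contract, NOT a safe HTML sanitizer.
    public static String stripAndTruncate(String html) {
        if (html == null || html.isBlank()) return "";
        String text = html.replaceAll("<[^>]*>", " ")  // drop tags
                          .replaceAll("\\s+", " ")     // collapse whitespace
                          .trim();
        return text.length() > PREVIEW_MAX_CHARS ? text.substring(0, PREVIEW_MAX_CHARS) : text;
    }
}
```

The 120-char cap mirrors the deleted `PREVIEW_MAX_CHARS`; only the HTML-stripping step is approximated.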
@@ -30,8 +30,6 @@ public enum ErrorCode {
     // --- Users ---
     /** A user with the given ID or username does not exist. 404 */
     USER_NOT_FOUND,
-    /** A group with the given ID does not exist. 404 */
-    GROUP_NOT_FOUND,
     /** The supplied email address is already used by another account. 409 */
     EMAIL_ALREADY_IN_USE,
     /** The supplied current password does not match the stored hash. 400 */
@@ -54,8 +52,6 @@ public enum ErrorCode {
     INVITE_REVOKED,
     /** The invite has passed its expiry date. 410 */
     INVITE_EXPIRED,
-    /** A group cannot be deleted because one or more active invites reference it. 409 */
-    GROUP_HAS_ACTIVE_INVITES,

     // --- Auth ---
     /** The request is not authenticated. 401 */

@@ -2,7 +2,6 @@ package org.raddatz.familienarchiv.exception;
 import java.util.stream.Collectors;

-import io.sentry.Sentry;
 import jakarta.validation.ConstraintViolationException;
 import org.raddatz.familienarchiv.exception.DomainException;
 import org.raddatz.familienarchiv.exception.ErrorCode;
@@ -16,7 +15,6 @@ import org.springframework.web.server.ResponseStatusException;

 import lombok.extern.slf4j.Slf4j;

-// "Handler" is Spring's @RestControllerAdvice naming convention — not a generic suffix.
 @RestControllerAdvice
 @Slf4j
 public class GlobalExceptionHandler {
@@ -64,7 +62,6 @@ public class GlobalExceptionHandler {

     @ExceptionHandler(Exception.class)
     public ResponseEntity<ErrorResponse> handleGeneric(Exception ex) {
-        Sentry.captureException(ex);
         log.error("Unhandled exception", ex);
         return ResponseEntity.internalServerError()
                 .body(new ErrorResponse(ErrorCode.INTERNAL_ERROR, "An unexpected error occurred"));

@@ -56,10 +56,6 @@ public class GeschichteService {

     // ─── Read API ────────────────────────────────────────────────────────────

-    public long countPublished() {
-        return geschichteRepository.count(GeschichteSpecifications.hasStatus(GeschichteStatus.PUBLISHED));
-    }
-
     public Geschichte getById(UUID id) {
         Geschichte g = geschichteRepository.findById(id)
                 .orElseThrow(() -> DomainException.notFound(
@@ -81,10 +77,8 @@ public class GeschichteService {
         GeschichteStatus effective = currentUserHasBlogWrite() ? status : GeschichteStatus.PUBLISHED;
         int safeLimit = limit <= 0 ? DEFAULT_LIMIT : Math.min(limit, MAX_LIMIT);

-        UUID authorId = effective == GeschichteStatus.DRAFT ? currentUser().getId() : null;
         Specification<Geschichte> spec = Specification.allOf(
                 GeschichteSpecifications.hasStatus(effective),
-                GeschichteSpecifications.hasAuthor(authorId),
                 GeschichteSpecifications.hasAllPersons(personIds),
                 GeschichteSpecifications.hasDocument(documentId),
                 GeschichteSpecifications.orderByDisplayDateDesc()

@@ -42,12 +42,6 @@ public final class GeschichteSpecifications {
         };
     }

-    // null authorId → no restriction (PUBLISHED path passes null; Spring Data skips null predicates)
-    public static Specification<Geschichte> hasAuthor(UUID authorId) {
-        return (root, query, cb) ->
-                authorId == null ? null : cb.equal(root.get("author").get("id"), authorId);
-    }
-
     public static Specification<Geschichte> hasDocument(UUID documentId) {
         return (root, query, cb) -> {
             if (documentId == null) return null;

@@ -1,38 +0,0 @@
-# geschichte
-
-Family stories — curated narrative pieces that weave together persons, documents, and commentary into a publishable article. German: *Geschichte* (story / history).
-
-## What this domain owns
-
-Entity: `Geschichte`.
-Lifecycle: `DRAFT → PUBLISHED` (only published stories are visible to non-authors).
-Features: story CRUD, rich-text editing with person and document cross-references, publish/unpublish toggle, comment thread (shared component from `shared/discussion/`).
-
-## What this domain does NOT own
-
-- `Person` or `Document` records — stories reference them by ID. Deleting a Person or Document does not cascade to Geschichte.
-- Comment storage — shared comment infrastructure is in `document/comment/` (or `shared/discussion/` on the frontend).
-
-## Public surface (called from other domains)
-
-| Method | Consumer | Purpose |
-|---|---|---|
-| `getById(UUID)` | notification | Resolve story context in mention notifications |
-| `list(...)` | dashboard | Recent stories for the activity feed |
-| `count()` | dashboard | Published story count for stats |
-
-## Internal layout
-
-- `GeschichteController` — REST under `/api/geschichten`
-- `GeschichteService` — CRUD, publish lifecycle
-- `GeschichteRepository` — list by status, author
-
-## Cross-domain dependencies
-
-- `PersonService.getById()` / `getAllById()` — resolve person references in story body
-- `DocumentService.getDocumentsByIds()` — resolve document references in story body
-- `AuditService.logAfterCommit()` — story mutations are audited
-
-## Frontend counterpart
-
-`frontend/src/lib/geschichte/README.md`
@@ -1,6 +1,5 @@
 package org.raddatz.familienarchiv.importing;

-import com.fasterxml.jackson.annotation.JsonIgnore;
 import lombok.RequiredArgsConstructor;
 import lombok.extern.slf4j.Slf4j;
 import org.apache.poi.ss.usermodel.*;
@@ -53,9 +52,9 @@ public class MassImportService {

     public enum State { IDLE, RUNNING, DONE, FAILED }

-    public record ImportStatus(State state, String statusCode, @JsonIgnore String message, int processed, LocalDateTime startedAt) {}
+    public record ImportStatus(State state, String message, int processed, LocalDateTime startedAt) {}

-    private volatile ImportStatus currentStatus = new ImportStatus(State.IDLE, "IMPORT_IDLE", "Kein Import gestartet.", 0, null);
+    private volatile ImportStatus currentStatus = new ImportStatus(State.IDLE, "Kein Import gestartet.", 0, null);

     public ImportStatus getStatus() {
         return currentStatus;
@@ -100,9 +99,7 @@ public class MassImportService {
     @Value("${app.import.col.transcription:13}")
     private int colTranscription;

-    @Value("${app.import.dir:/import}")
-    private String importDir;
-
+    private static final String IMPORT_DIR = "/import";
     private static final DateTimeFormatter GERMAN_DATE = DateTimeFormatter.ofPattern("d. MMMM yyyy", Locale.GERMAN);

     // ODS XML namespaces
@@ -117,39 +114,30 @@ public class MassImportService {
         if (currentStatus.state() == State.RUNNING) {
             throw DomainException.conflict(ErrorCode.IMPORT_ALREADY_RUNNING, "A mass import is already in progress");
         }
-        currentStatus = new ImportStatus(State.RUNNING, "IMPORT_RUNNING", "Import läuft...", 0, LocalDateTime.now());
+        currentStatus = new ImportStatus(State.RUNNING, "Import läuft...", 0, LocalDateTime.now());
         try {
             File spreadsheet = findSpreadsheetFile();
             log.info("Starte Massenimport aus: {}", spreadsheet.getAbsolutePath());
             int processed = processRows(readSpreadsheet(spreadsheet));
-            currentStatus = new ImportStatus(State.DONE, "IMPORT_DONE",
+            currentStatus = new ImportStatus(State.DONE,
                     "Import abgeschlossen. " + processed + " Dokumente verarbeitet.",
                     processed, currentStatus.startedAt());
-        } catch (NoSpreadsheetException e) {
-            log.error("Massenimport fehlgeschlagen: keine Tabellendatei", e);
-            currentStatus = new ImportStatus(State.FAILED, "IMPORT_FAILED_NO_SPREADSHEET",
-                    "Fehler: " + e.getMessage(), 0, currentStatus.startedAt());
         } catch (Exception e) {
             log.error("Massenimport fehlgeschlagen", e);
-            currentStatus = new ImportStatus(State.FAILED, "IMPORT_FAILED_INTERNAL",
-                    "Fehler: " + e.getMessage(), 0, currentStatus.startedAt());
+            currentStatus = new ImportStatus(State.FAILED, "Fehler: " + e.getMessage(), 0, currentStatus.startedAt());
         }
     }

-    private static class NoSpreadsheetException extends RuntimeException {
-        NoSpreadsheetException(String message) { super(message); }
-    }
-
     private File findSpreadsheetFile() throws IOException {
-        try (Stream<Path> files = Files.list(Paths.get(importDir))) {
+        try (Stream<Path> files = Files.list(Paths.get(IMPORT_DIR))) {
             return files
                     .filter(p -> {
                         String name = p.toString().toLowerCase();
                         return name.endsWith(".ods") || name.endsWith(".xlsx") || name.endsWith(".xls");
                     })
                     .findFirst()
-                    .orElseThrow(() -> new NoSpreadsheetException(
-                            "Keine Tabellendatei (.ods/.xlsx/.xls) in " + importDir + " gefunden!"))
+                    .orElseThrow(() -> new RuntimeException(
                            "Keine Tabellendatei (.ods/.xlsx/.xls) in " + IMPORT_DIR + " gefunden!"))
                     .toFile();
         }
     }
@@ -390,7 +378,7 @@ public class MassImportService {
     }

     private Optional<File> findFileRecursive(String filename) {
-        try (Stream<Path> walk = Files.walk(Paths.get(importDir))) {
+        try (Stream<Path> walk = Files.walk(Paths.get(IMPORT_DIR))) {
             return walk.filter(p -> !Files.isDirectory(p))
                     .filter(p -> p.getFileName().toString().equals(filename))
                     .map(Path::toFile)

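The `GERMAN_DATE` formatter kept by the hunks above parses spelled-out German dates like `3. März 1917`. A standalone check of that pattern and locale; `GermanDateDemo` is illustrative, not repo code:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class GermanDateDemo {
    // Same pattern and locale as MassImportService.GERMAN_DATE:
    // "d." = day with literal dot, "MMMM" = full month name ("März"), "yyyy" = year.
    public static final DateTimeFormatter GERMAN_DATE =
            DateTimeFormatter.ofPattern("d. MMMM yyyy", Locale.GERMAN);

    public static LocalDate parseGermanDate(String raw) {
        return LocalDate.parse(raw, GERMAN_DATE);
    }
}
```

Without `Locale.GERMAN` the month-name lookup would use the JVM default locale and parsing "März" could fail, which is why the locale is pinned in the formatter.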
@@ -1,41 +0,0 @@
-# notification
-
-In-app messages delivered in real time via SSE and persisted in the bell-icon dropdown. Notifications are created by other domains in response to events (comment mentions, replies).
-
-## What this domain owns
-
-Entity: `Notification`.
-Features: create and deliver notifications, unread count, mark-read, SSE real-time push, per-user delivery preferences (stored as fields on `AppUser`, managed by `user/`).
-
-## What this domain does NOT own
-
-- `AppUser` (recipient) — owned by `user/`
-- `Document` or `Geschichte` (notification context) — referenced by ID only
-
-## Public surface (called from other domains)
-
-| Method | Consumer | Purpose |
-|---|---|---|
-| `notifyMentions(mentionedUserIds, comment)` | document (comment) | Push mention notifications when a comment contains @mentions |
-| `notifyReply(reply, participantIds)` | document (comment) | Push reply notification to all thread participants |
-| `countUnread(userId)` | user session | Unread badge count in the nav bar |
-| `getNotifications(userId)` | dashboard / activity | Notification list for bell dropdown |
-| `markRead(id)` / `markAllRead(userId)` | notification controller | User-driven read-state updates |
-| `updatePreferences(userId, dto)` | notification controller | Per-user delivery preferences |
-
-## Internal layout
-
-- `NotificationController` — REST under `/api/notifications`
-- `NotificationService` — create, query, mark-read
-- `SseEmitterRegistry` — runtime-stateful component that keeps one `SseEmitter` per connected user. On `notifyMentions()` / `notifyReply()`, the service writes to `SseEmitterRegistry` to push real-time events. SSE connections go **backend → browser directly**, not via the SvelteKit SSR layer.
-- `NotificationRepository` — persisted notification rows
-- `NotificationPreferenceDTO` — read/write DTO for preference endpoints (prefs stored on `AppUser`)
-
-## Cross-domain dependencies
-
-**Outbound (this domain calls):**
-- `DocumentService.findTitlesByIds(List<UUID>)` — enriches notification DTOs with document titles for display in the bell dropdown
-
-## Frontend counterpart
-
-`frontend/src/lib/notification/README.md`
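The deleted README describes `SseEmitterRegistry` as one live emitter per connected user, with pushes dropped for users who have no open connection. A minimal sketch of that shape, using `Consumer<String>` as a stand-in for Spring's `SseEmitter`; every name here is hypothetical, not the repo class:

```java
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// One live sink per connected user. register() on SSE connect,
// unregister() on disconnect, push() from notifyMentions()/notifyReply().
public class EmitterRegistryDemo {
    private final Map<UUID, Consumer<String>> emitters = new ConcurrentHashMap<>();

    public void register(UUID userId, Consumer<String> emitter) {
        emitters.put(userId, emitter); // a reconnect replaces the stale emitter
    }

    public void unregister(UUID userId) {
        emitters.remove(userId);
    }

    public void push(UUID userId, String event) {
        Consumer<String> e = emitters.get(userId);
        if (e != null) e.accept(event); // silently drop if the user is offline
    }
}
```

The real registry would additionally have to evict emitters whose write fails, since an SSE connection can die without a clean disconnect.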
@@ -1,44 +0,0 @@
-# ocr
-
-OCR/HTR pipeline orchestration. This domain manages job lifecycle and result ingestion — it does **not** perform OCR. Actual text recognition runs in the Python `ocr-service/` container (port 8000, internal network only).
-
-## What this domain owns
-
-Entities: `OcrJob`, `OcrJobDocument`, `SenderModel`.
-Features: start OCR jobs, track job lifecycle (`PENDING → RUNNING → DONE / FAILED`), stream transcription blocks back into `document/transcription/`, sender-model training, segmentation training.
-
-## What this domain does NOT own
-
-- Document content — `Document` and `TranscriptionBlock` are owned by `document/`
-- File storage — presigned MinIO URLs are generated by `filestorage/FileService` and passed to the OCR service
-- OCR processing — the Python `ocr-service/` executes Surya (typewritten) and Kraken (Kurrent/Sütterlin HTR) and streams results back
-
-## Public surface (called from other domains)
-
-| Method | Consumer | Purpose |
-|---|---|---|
-| `startOcr(documentId, ...)` | document | Trigger an OCR job for a document |
-| `getJob(UUID)` | document | Fetch job status |
-| `getDocumentOcrStatus(UUID)` | document | Per-document OCR status summary |
-
-## Internal layout
-
-- `OcrController` — REST under `/api/ocr`
-- `OcrService` — job creation, presigned URL generation, result ingestion
-- `OcrBatchService` — batch job workflows
-- `OcrAsyncRunner` — `@Async` execution of OCR jobs
-- `OcrTrainingService` — calls `/train` and `/segtrain` on the Python service (protected by `X-Training-Token` header)
-- `OcrJobRepository` / `OcrJobDocumentRepository`
-- `SenderModelRepository` — trained sender-recognition models
-- `OcrClient` (interface) / `RestClientOcrClient` — HTTP client for the Python OCR service; mockable for tests
-
-## Cross-domain dependencies
-
-- `DocumentService.getDocumentById()` / `getDocumentsByIds()` — resolve target documents
-- `DocumentService.addTrainingLabel()` — attach confirmed sender labels after training
-- `FileService.generatePresignedUrl()` — generate MinIO presigned URLs passed to the OCR service (PDF bytes never flow through the backend)
-- `AuditService.logAfterCommit()` — OCR job events are audited
-
-## Frontend counterpart
-
-`frontend/src/lib/ocr/README.md`
@@ -35,14 +35,7 @@ public class PersonController {

     @GetMapping
     @RequirePermission(Permission.READ_ALL)
-    public ResponseEntity<List<PersonSummaryDTO>> getPersons(
-            @RequestParam(required = false) String q,
-            @RequestParam(required = false, defaultValue = "0") int size,
-            @RequestParam(required = false) String sort) {
-        if ("documentCount".equals(sort) && size > 0 && q == null) {
-            int safeSize = Math.min(size, 50);
-            return ResponseEntity.ok(personService.findTopByDocumentCount(safeSize));
-        }
+    public ResponseEntity<List<PersonSummaryDTO>> getPersons(@RequestParam(required = false) String q) {
         return ResponseEntity.ok(personService.findAll(q));
     }

@@ -69,22 +69,6 @@ public interface PersonRepository extends JpaRepository<Person, UUID> {
            nativeQuery = true)
     List<PersonSummaryDTO> searchWithDocumentCount(@Param("query") String query);

-    // ORDER BY uses the computed alias "documentCount" — valid PostgreSQL (aliases allowed in ORDER BY,
-    // unlike WHERE/HAVING). This is intentional; it would silently fail on MySQL or H2.
-    @Query(value = """
-            SELECT p.id, p.title, p.first_name AS firstName, p.last_name AS lastName,
-                   p.person_type AS personType,
-                   p.alias, p.birth_year AS birthYear, p.death_year AS deathYear, p.notes,
-                   p.family_member AS familyMember,
-                   (SELECT COUNT(*) FROM documents d WHERE d.sender_id = p.id)
-                   + (SELECT COUNT(*) FROM document_receivers dr WHERE dr.person_id = p.id) AS documentCount
-            FROM persons p
-            ORDER BY documentCount DESC
-            LIMIT :limit
-            """,
-            nativeQuery = true)
-    List<PersonSummaryDTO> findTopByDocumentCount(@Param("limit") int limit);
-
     // --- Correspondent queries ---

     @Query(value = """

@@ -41,10 +41,6 @@ public class PersonService {
         return personRepository.searchWithDocumentCount(q.trim());
     }

-    public List<PersonSummaryDTO> findTopByDocumentCount(int limit) {
-        return personRepository.findTopByDocumentCount(limit);
-    }
-
     public Person getById(UUID id) {
         return personRepository.findById(id)
                 .orElseThrow(() -> DomainException.notFound(ErrorCode.PERSON_NOT_FOUND, "Person not found: " + id));

@@ -1,45 +0,0 @@
-# person
-
-Historical individuals referenced by documents. A `Person` is a family member who appears as a sender or receiver in the archive — they are never login accounts.
-
-## What this domain owns
-
-Entities: `Person`, `PersonNameAlias`, `PersonRelationship`.
-Features: person CRUD, name alias management, person merge (deduplication), family-member designation, relationship graph, person type classification (FAMILY, CORRESPONDENT, INSTITUTION).
-
-## What this domain does NOT own
-
-- `AppUser` — login accounts are in `user/`. A `Person` record has no login credentials. The separation is deliberate: a historical family member from 1905 is never a system user.
-- Document content — `Person` records are referenced by documents (as sender/receiver), not the other way around.
-- Relationship rendering — the Stammbaum view is derived by the frontend from `PersonRelationship` data.
-
-## Public surface (called from other domains)
-
-| Method | Consumer | Purpose |
-|---|---|---|
-| `getById(UUID)` | document, geschichte, ocr | Fetch one person by ID |
-| `getAllById(List<UUID>)` | document | Bulk fetch for sender/receiver resolution |
-| `findAll(String q)` | document, dashboard | List all persons |
-| `findByName(String firstName, String lastName)` | document | Typeahead search |
-| `findOrCreateByAlias(String rawName)` | importing | Idempotent create during mass import; type classification happens internally |
-| `findAllFamilyMembers()` | dashboard | Family member list for stats |
-| `findCorrespondents()` | document | Correspondent list for conversation filter |
-| `count()` | dashboard | Total person count for stats |
-
-## Internal layout
-
-- `PersonController` — REST under `/api/persons`
-- `PersonService` — CRUD, merge, alias management, family-member designation
-- `PersonRepository` — sorted list, name search
-- `PersonNameAlias` / `PersonNameAliasRepository` — alternative name spellings
-- `PersonNameParser` / `PersonTypeClassifier` — name parsing utilities
-- `PersonSummaryDTO` — lightweight DTO for typeahead / list views
-- Sub-package: `relationship/` — `PersonRelationship`, `RelationshipService`, `RelationshipController`
-
-## Cross-domain dependencies
-
-- `AuditService.logAfterCommit()` — person mutations are audited
-
-## Frontend counterpart
-
-`frontend/src/lib/person/README.md`
@@ -1,137 +0,0 @@
-package org.raddatz.familienarchiv.security;
-
-import jakarta.servlet.FilterChain;
-import jakarta.servlet.ServletException;
-import jakarta.servlet.http.Cookie;
-import jakarta.servlet.http.HttpServletRequest;
-import jakarta.servlet.http.HttpServletRequestWrapper;
-import jakarta.servlet.http.HttpServletResponse;
-import org.springframework.core.annotation.Order;
-import org.springframework.http.HttpHeaders;
-import org.springframework.stereotype.Component;
-import org.springframework.web.filter.OncePerRequestFilter;
-
-import java.io.IOException;
-import java.net.URLDecoder;
-import java.nio.charset.StandardCharsets;
-import java.util.Collections;
-import java.util.Enumeration;
-
-/**
- * Promotes the {@code auth_token} cookie to an {@code Authorization} header
- * so that browser-side requests to {@code /api/*} authenticate the same way
- * SSR fetches do.
- *
- * <p>The SvelteKit login action stores the full HTTP Basic header value
- * ({@code "Basic <base64>"}) in an HttpOnly cookie. SSR fetches from
- * {@code hooks.server.ts} read the cookie and pass it explicitly as the
- * {@code Authorization} header. In the dev environment, Vite's proxy does
- * the same on every {@code /api/*} request (see {@code vite.config.ts}).
- * In production, Caddy proxies {@code /api/*} straight to the backend and
- * does NOT translate the cookie — so client-side {@code fetch} and
- * {@code EventSource} calls reach the backend without auth, get
- * {@code 401 WWW-Authenticate: Basic}, and the browser pops a native dialog.
- *
- * <p>This filter closes that gap: if a request has an {@code auth_token}
- * cookie but no explicit {@code Authorization} header, promote the cookie
- * value (URL-decoded) into the header before Spring Security inspects it.
- * Explicit {@code Authorization} headers are preserved unchanged.
- *
- * <p>See #520. Filter runs at {@code Ordered.HIGHEST_PRECEDENCE} so it
- * mutates the request before any Spring Security filter sees it.
- *
- * <p><b>Scope:</b> only {@code /api/*} requests are touched. The
- * {@code /actuator/*} block in Caddy plus the open auth/reset paths in
- * {@link SecurityConfig} must NOT receive a promoted Authorization.
- *
- * <p><b>⚠ Log-leakage warning:</b> the wrapped request exposes the
- * Authorization header via {@code getHeaderNames}/{@code getHeaders}. Any
- * filter or interceptor that iterates request headers will see the live
- * Basic credential. Do NOT add a request-header logger downstream of this
- * filter without explicitly scrubbing the {@code Authorization} field.
- */
-@Component
-@Order(org.springframework.core.Ordered.HIGHEST_PRECEDENCE)
-public class AuthTokenCookieFilter extends OncePerRequestFilter {
-
-    static final String COOKIE_NAME = "auth_token";
-    static final String SCOPE_PREFIX = "/api/";
-
-    @Override
-    protected void doFilterInternal(HttpServletRequest request,
-                                    HttpServletResponse response,
-                                    FilterChain chain) throws ServletException, IOException {
-        // Scope: only /api/* needs cookie promotion. /actuator/health (open),
-        // /api/auth/forgot-password (open), /login etc. don't.
-        if (!request.getRequestURI().startsWith(SCOPE_PREFIX)) {
-            chain.doFilter(request, response);
-            return;
-        }
-        // An explicit Authorization header wins — this is the SSR fetch path
-        // (hooks.server.ts builds the header itself).
-        if (request.getHeader(HttpHeaders.AUTHORIZATION) != null) {
-            chain.doFilter(request, response);
-            return;
-        }
-        Cookie[] cookies = request.getCookies();
-        if (cookies == null) {
-            chain.doFilter(request, response);
-            return;
-        }
-        for (Cookie c : cookies) {
-            if (COOKIE_NAME.equals(c.getName()) && c.getValue() != null && !c.getValue().isBlank()) {
-                String decoded;
-                try {
-                    decoded = URLDecoder.decode(c.getValue(), StandardCharsets.UTF_8);
-                } catch (IllegalArgumentException malformed) {
-                    // Malformed percent-encoding — refuse to forward a bogus
-                    // Authorization header. Spring Security will treat the
-                    // request as unauthenticated.
-                    chain.doFilter(request, response);
-                    return;
-                }
-                chain.doFilter(new AuthHeaderRequest(request, decoded), response);
-                return;
-            }
-        }
-        chain.doFilter(request, response);
-    }
-
-    /**
-     * Adds (or overrides) the {@code Authorization} header on a wrapped request.
-     * All other headers pass through unchanged.
-     */
-    static final class AuthHeaderRequest extends HttpServletRequestWrapper {
-        private final String authorization;
-
-        AuthHeaderRequest(HttpServletRequest request, String authorization) {
-            super(request);
-            this.authorization = authorization;
-        }
-
-        @Override
-        public String getHeader(String name) {
-            if (HttpHeaders.AUTHORIZATION.equalsIgnoreCase(name)) {
-                return authorization;
-            }
-            return super.getHeader(name);
-        }
-
-        @Override
-        public Enumeration<String> getHeaders(String name) {
-            if (HttpHeaders.AUTHORIZATION.equalsIgnoreCase(name)) {
-                return Collections.enumeration(Collections.singletonList(authorization));
-            }
-            return super.getHeaders(name);
-        }
-
-        @Override
-        public Enumeration<String> getHeaderNames() {
-            Enumeration<String> base = super.getHeaderNames();
-            java.util.Set<String> names = new java.util.LinkedHashSet<>();
-            while (base.hasMoreElements()) names.add(base.nextElement());
-            names.add(HttpHeaders.AUTHORIZATION);
-            return Collections.enumeration(names);
-        }
-    }
-}
@@ -37,20 +37,12 @@ public class SecurityConfig {
     @Bean
     public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
         http
-            // CSRF is intentionally disabled. With the cookie-promotion model
-            // (auth_token cookie → Authorization header via AuthTokenCookieFilter,
-            // see #520), every authenticated request to /api/* now carries the
-            // credential automatically once the cookie is set. The CSRF defence
-            // for state-changing endpoints is therefore LOAD-BEARING on:
-            //
-            // 1. SameSite=strict on the auth_token cookie (login/+page.server.ts).
-            //    A cross-site POST from evil.com cannot include the cookie.
-            // 2. CORS — Spring's default rejects cross-origin requests with
-            //    credentials unless explicitly allowed (no allowedOrigins config).
-            //
-            // If either of those is ever weakened (e.g. cookie flipped to
-            //    SameSite=lax, CORS allowedOrigins expanded), CSRF protection
-            // MUST be re-enabled here.
+            // CSRF is intentionally disabled: every request from the SvelteKit frontend
+            // carries an explicit Authorization header (Basic Auth token injected by
+            // hooks.server.ts). Browsers block cross-origin requests from setting custom
+            // headers, so cross-site request forgery via a third-party page is not
+            // possible with this auth scheme. If the auth model ever changes to
+            // cookie-based sessions, CSRF protection must be re-enabled.
            .csrf(csrf -> csrf.disable())

            .authorizeHttpRequests(auth -> {

@@ -7,7 +7,6 @@ import org.springframework.security.core.Authentication;

 import java.util.UUID;

-// Cross-cutting auth helper; no domain home — "Utils" is the correct suffix here.
 public final class SecurityUtils {

     private SecurityUtils() {}

@@ -1,35 +0,0 @@
-# tag
-
-Hierarchical document categories. Tags form a tree via a self-referencing `parent_id` column and are applied to documents for filtering and browse navigation.
-
-## What this domain owns
-
-Entity: `Tag` (self-referencing `parent_id` tree).
-Features: tag CRUD, hierarchical deletion (cascade to descendants), tag typeahead, admin tag management (rename, reparent, merge).
-
-## What this domain does NOT own
-
-- Documents — the `document_tags` join table is on the document side. `Tag` does not hold document references.
-- Tag assignment — adding/removing a tag from a document is handled by `DocumentService`.
-
-## Public surface (called from other domains)
-
-| Method | Consumer | Purpose |
-|---|---|---|
-| `delete(UUID)` | document | Remove the tag record; called by `DocumentService.deleteTagCascading()` after all document references are unlinked |
-| `deleteWithDescendants(UUID)` | admin tag UI | Recursive subtree deletion |
-| `expandTagNamesToDescendantIdSets(List<String>)` | document | Expand tag filter to include descendant tags |
-
-## Internal layout
-
-- `TagController` — REST under `/api/tags`
-- `TagService` — CRUD, hierarchy traversal, cascade-delete coordination
-- `TagRepository` — find-or-create by name (case-insensitive), subtree queries
-
-## Cross-domain dependencies
-
-None. Documents reference tags; tags do not reference documents or other domains.
-
-## Frontend counterpart
-
-`frontend/src/lib/tag/README.md`
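The deleted README names `expandTagNamesToDescendantIdSets()` but never shows the traversal. A hedged sketch of the descendant expansion it implies, over a toy in-memory `parent_id` tree; the ids, names, and class are made up, not the repo's `TagService`:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TagTreeDemo {
    // Toy tree: id -> child ids (stand-in for the self-referencing parent_id column).
    static final Map<Integer, List<Integer>> CHILDREN = Map.of(
            1, List.of(2, 3),   // e.g. "Briefe" -> "Feldpost", "Postkarten"
            2, List.of(4),      // "Feldpost" -> a sub-tag
            3, List.of(),
            4, List.of());

    // A tag filter matches the tag itself plus every descendant (iterative DFS,
    // with a visited set so a corrupt cycle cannot loop forever).
    public static Set<Integer> expandToDescendantIds(int rootId) {
        Set<Integer> result = new LinkedHashSet<>();
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(rootId);
        while (!stack.isEmpty()) {
            int id = stack.pop();
            if (result.add(id)) {
                CHILDREN.getOrDefault(id, List.of()).forEach(stack::push);
            }
        }
        return result;
    }
}
```

Filtering documents by tag 1 then means "document has any tag in `expandToDescendantIds(1)`", which is the expansion `DocumentService` consumes.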
@@ -88,8 +88,7 @@ public class AppUser {
     };

     public static String computeColor(UUID id) {
-        // Math.floorMod avoids the Integer.MIN_VALUE overflow trap in Math.abs(hashCode())
-        return PALETTE[Math.floorMod(id.hashCode(), PALETTE.length)];
+        return PALETTE[Math.abs(id.hashCode()) % PALETTE.length];
     }

     @PrePersist
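The hunk above swaps `Math.floorMod` back to `Math.abs(...) % ...` and deletes the comment that justified floorMod. The trap is easy to demonstrate: `Math.abs(Integer.MIN_VALUE)` is still `Integer.MIN_VALUE`, so the index can go negative. A standalone sketch with a hypothetical three-entry palette (not the repo's `PALETTE` values):

```java
public class ColorIndexDemo {
    static final String[] PALETTE = {"#e57373", "#64b5f6", "#81c784"}; // hypothetical

    // Safe: Math.floorMod always lands in [0, PALETTE.length)
    public static int safeIndex(int hash) {
        return Math.floorMod(hash, PALETTE.length);
    }

    // Buggy: Math.abs(Integer.MIN_VALUE) == Integer.MIN_VALUE, and Java's %
    // takes the sign of the dividend, so this can return a negative index
    // (ArrayIndexOutOfBoundsException if used to index PALETTE).
    public static int buggyIndex(int hash) {
        return Math.abs(hash) % PALETTE.length;
    }
}
```

Any `UUID` whose `hashCode()` happens to be `Integer.MIN_VALUE` would crash the `Math.abs` variant; with floorMod it cannot happen.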
@@ -52,11 +52,7 @@ public class InviteService {
    public InviteToken createInvite(CreateInviteRequest dto, AppUser creator) {
        Set<UUID> groupIds = new HashSet<>();
        if (dto.getGroupIds() != null && !dto.getGroupIds().isEmpty()) {
            Set<UUID> uniqueIds = new HashSet<>(dto.getGroupIds());
            List<UserGroup> groups = userService.findGroupsByIds(new ArrayList<>(uniqueIds));
            if (groups.size() != uniqueIds.size()) {
                throw DomainException.notFound(ErrorCode.GROUP_NOT_FOUND, "One or more group IDs do not exist");
            }
            List<UserGroup> groups = userService.findGroupsByIds(dto.getGroupIds());
            groups.forEach(g -> groupIds.add(g.getId()));
        }
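The validation pattern in the new code above — dedupe the requested IDs, fetch, then compare sizes — detects missing IDs without a per-ID lookup: if any requested ID does not exist, the fetched list is necessarily shorter than the distinct request set. A minimal standalone sketch with a hypothetical in-memory lookup standing in for `userService.findGroupsByIds`:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class IdLookupCheck {

    // Stand-in for the repository call: only "a" and "b" exist.
    static List<String> fetchExisting(Collection<String> ids) {
        Set<String> known = Set.of("a", "b");
        List<String> found = new ArrayList<>();
        for (String id : ids) {
            if (known.contains(id)) found.add(id);
        }
        return found;
    }

    // Deduplicating first is what makes the size comparison sound:
    // a duplicated valid ID must not mask a missing one.
    static boolean allExist(List<String> requested) {
        Set<String> unique = new HashSet<>(requested);
        return fetchExisting(unique).size() == unique.size();
    }

    public static void main(String[] args) {
        System.out.println(allExist(List.of("a", "b", "a"))); // duplicates tolerated
        System.out.println(allExist(List.of("a", "zzz")));    // missing ID detected
    }
}
```

Note that the removed variant passed `dto.getGroupIds()` straight through, so a request like `["a", "a"]` with one nonexistent ID elsewhere could produce a misleading size comparison; deduplicating first closes that gap.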
@@ -24,7 +24,4 @@ public interface InviteTokenRepository extends JpaRepository<InviteToken, UUID>

    @Query("SELECT t FROM InviteToken t ORDER BY t.createdAt DESC")
    List<InviteToken> findAllOrderedByCreatedAt();

    @Query("SELECT CASE WHEN COUNT(t) > 0 THEN true ELSE false END FROM InviteToken t JOIN t.groupIds g WHERE g = :groupId AND t.revoked = false AND (t.expiresAt IS NULL OR t.expiresAt > CURRENT_TIMESTAMP) AND (t.maxUses IS NULL OR t.useCount < t.maxUses)")
    boolean existsActiveWithGroupId(@Param("groupId") UUID groupId);
}
@@ -1,35 +0,0 @@
# user

Login accounts and permission groups. An `AppUser` is a system user who can authenticate and act in the application — they are never a historical family member.

## What this domain owns

Entities: `AppUser`, `UserGroup`, password-reset tokens, invite tokens.
Features: user CRUD, group CRUD, password change, password reset flow, invite links.

## What this domain does NOT own

- `Person` records — historical family members. An `AppUser` is never linked to a `Person`. This separation is intentional: a person who digitized letters in 2024 is not the same entity as their great-grandmother who wrote them in 1912. See `docs/GLOSSARY.md`.
- Permission enforcement — `security/` owns `@RequirePermission` and `PermissionAspect`. `user/` only manages which permissions are stored on `UserGroup`.

## Public surface

`UserService` methods are consumed primarily by the security infrastructure and the admin UI. No other business-logic domain calls `UserService` directly.

The Spring Security chain (via `CustomUserDetailsService` in `security/`) calls `AppUserRepository.findByUsername()` on every authenticated request.

## Internal layout

- `UserController` — REST under `/api/users` (current user, CRUD)
- `AuthController` — password reset, invite flow
- `UserService` — BCrypt-encoded passwords, group assignment
- `AppUserRepository` — find by username (used by Spring Security)
- `UserGroupRepository` — group and permission management

## Cross-domain dependencies

- `AuditService.logAfterCommit()` — user-management mutations are audited

## Frontend counterpart

`frontend/src/lib/user/README.md`
@@ -20,7 +20,6 @@ import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.core.env.Environment;
import org.springframework.security.crypto.password.PasswordEncoder;

import java.time.LocalDate;
@@ -32,51 +31,26 @@ import java.util.Set;
@DependsOn("flyway")
public class UserDataInitializer {

    static final String DEFAULT_ADMIN_EMAIL = "admin@familienarchiv.local";
    static final String DEFAULT_ADMIN_PASSWORD = "admin123";

    @Value("${app.admin.email:" + DEFAULT_ADMIN_EMAIL + "}")
    @Value("${app.admin.email:admin@familyarchive.local}")
    private String adminEmail;

    @Value("${app.admin.password:" + DEFAULT_ADMIN_PASSWORD + "}")
    @Value("${app.admin.password:admin123}")
    private String adminPassword;

    private final AppUserRepository userRepository;
    private final UserGroupRepository groupRepository;
    private final Environment environment;

    @Bean
    public CommandLineRunner initAdminUser(PasswordEncoder passwordEncoder) {
        return args -> {
            if (userRepository.findByEmail(adminEmail).isEmpty()) {
                // Fail-closed in production: refuse to seed with the well-known
                // defaults. Otherwise an operator who forgets APP_ADMIN_USERNAME
                // / APP_ADMIN_PASSWORD locks production to admin@…/admin123 PERMANENTLY
                // (UserDataInitializer only seeds when the row is missing — see #513).
                // Allowed in dev/test/e2e because those run without secrets configured.
                boolean isLocalProfile = environment.matchesProfiles("dev", "test", "e2e");
                if (!isLocalProfile
                        && (DEFAULT_ADMIN_EMAIL.equals(adminEmail)
                            || DEFAULT_ADMIN_PASSWORD.equals(adminPassword))) {
                    throw new IllegalStateException(
                        "Refusing to seed admin user with default credentials outside "
                            + "the dev/test/e2e profiles. Set APP_ADMIN_USERNAME and "
                            + "APP_ADMIN_PASSWORD to non-default values before first boot — "
                            + "this lock-in is permanent."
                    );
                }
                log.info("Kein Admin-User '{}' gefunden. Erstelle Default-Admin...", adminEmail);

                // Reuse the Administrators group if it already exists (e.g. a
                // previous boot seeded the group but failed before creating
                // the admin user, or the operator deleted just the user row
                // to retry the seed with a new email). Blind-INSERTing would
                // violate user_groups_name_key and abort the context. See #518.
                UserGroup adminGroup = groupRepository.findByName("Administrators")
                    .orElseGet(() -> groupRepository.save(UserGroup.builder()
                        .name("Administrators")
                        .permissions(Set.of("ADMIN", "READ_ALL", "WRITE_ALL", "ANNOTATE_ALL", "ADMIN_USER", "ADMIN_TAG", "ADMIN_PERMISSION"))
                        .build()));
                UserGroup adminGroup = UserGroup.builder()
                    .name("Administrators")
                    .permissions(Set.of("ADMIN", "READ_ALL", "WRITE_ALL", "ANNOTATE_ALL", "ADMIN_USER", "ADMIN_TAG", "ADMIN_PERMISSION"))
                    .build();
                groupRepository.save(adminGroup);

                AppUser admin = AppUser.builder()
                    .email(adminEmail)
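The fail-closed guard introduced above reduces to one predicate: outside local profiles, refuse whenever either credential still carries its well-known default. A minimal standalone sketch of just that decision — the class name and literals mirror the diff but the method is a hypothetical extraction, not code from the PR:

```java
public class SeedGuard {

    static final String DEFAULT_ADMIN_EMAIL = "admin@familienarchiv.local";
    static final String DEFAULT_ADMIN_PASSWORD = "admin123";

    // True when seeding must be refused: non-local profile AND at least one
    // credential is still the shipped default. Local profiles (dev/test/e2e)
    // are always allowed, since they run without secrets configured.
    static boolean shouldRefuse(boolean localProfile, String email, String password) {
        boolean usesDefaults = DEFAULT_ADMIN_EMAIL.equals(email)
                || DEFAULT_ADMIN_PASSWORD.equals(password);
        return !localProfile && usesDefaults;
    }

    public static void main(String[] args) {
        System.out.println(shouldRefuse(false, DEFAULT_ADMIN_EMAIL, "s3cret"));   // prod + default email: refuse
        System.out.println(shouldRefuse(true, DEFAULT_ADMIN_EMAIL, "admin123"));  // dev: allowed
        System.out.println(shouldRefuse(false, "ops@example.org", "s3cret"));     // prod + real secrets: allowed
    }
}
```

The `||` (rather than `&&`) between the two default checks is the important design choice: a default in *either* field is enough to abort, since the seeder only runs once per missing row.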
@@ -37,9 +37,6 @@ public class UserService {

    private final AppUserRepository userRepository;
    private final UserGroupRepository groupRepository;
    // Injected directly (not via InviteService) to avoid a constructor injection cycle:
    // InviteService → UserService → InviteService. Spring Framework 7 forbids such cycles.
    private final InviteTokenRepository inviteTokenRepository;
    private final PasswordEncoder passwordEncoder;
    private final AuditService auditService;
@@ -274,10 +271,9 @@ public class UserService {

    @Transactional
    public UserGroup createGroup(GroupDTO dto) {
        UserGroup group = UserGroup.builder()
            .name(dto.getName())
            .permissions(dto.getPermissions() != null ? dto.getPermissions() : new HashSet<>())
            .build();
        UserGroup group = new UserGroup();
        group.setName(dto.getName());
        group.setPermissions(dto.getPermissions());
        return groupRepository.save(group);
    }
@@ -291,10 +287,6 @@ public class UserService {

    @Transactional
    public void deleteGroup(UUID id) {
        if (inviteTokenRepository.existsActiveWithGroupId(id)) {
            throw DomainException.conflict(ErrorCode.GROUP_HAS_ACTIVE_INVITES,
                "Cannot delete group " + id + " — referenced by one or more active invites");
        }
        groupRepository.deleteById(id);
    }
}
@@ -38,41 +38,10 @@ spring:
      starttls:
        enable: true

server:
  # Behind Caddy/reverse proxy: trust X-Forwarded-{Proto,For,Host} so that
  # request.getScheme(), redirect URLs, and Spring Session "Secure" cookies
  # reflect the original https client request, not the http hop from Caddy.
  forward-headers-strategy: native

management:
  server:
    # Management port is separate from the app port so that:
    # (a) Caddy never proxies /actuator/* (it only routes :8080 → the app port)
    # (b) Prometheus scrapes backend:8081 directly inside archiv-net, not via Caddy
    # (c) Spring Security's session-authenticated filter chain on :8080 never sees actuator requests
    port: 8081
  endpoints:
    web:
      exposure:
        include: health,info,prometheus,metrics
  endpoint:
    prometheus:
      enabled: true
  health:
    mail:
      enabled: false
  tracing:
    sampling:
      probability: 1.0 # 100% in dev; override via MANAGEMENT_TRACING_SAMPLING_PROBABILITY in prod compose

# OpenTelemetry trace export — failures are non-fatal (app starts cleanly without Tempo running)
# The default http://localhost:4317 ensures CI compatibility when no observability stack is present.
otel:
  service:
    name: familienarchiv-backend
  exporter:
    otlp:
      endpoint: ${OTEL_EXPORTER_OTLP_ENDPOINT:http://localhost:4317}

springdoc:
  api-docs:
@@ -94,11 +63,7 @@ app:
    from: ${APP_MAIL_FROM:noreply@familienarchiv.local}

  admin:
    # Key must be `email`, not `username` — UserDataInitializer reads
    # `${app.admin.email:...}`. The env-var name stays APP_ADMIN_USERNAME
    # to match the existing Gitea secrets and DEPLOYMENT.md §3.3.
    # See #513.
    email: ${APP_ADMIN_USERNAME:admin@familienarchiv.local}
    username: ${APP_ADMIN_USERNAME:admin}
    password: ${APP_ADMIN_PASSWORD:admin123}

  import:
@@ -118,12 +83,3 @@ ocr:
  sender-model:
    activation-threshold: 100
    retrain-delta: 50

sentry:
  dsn: ${SENTRY_DSN:}
  environment: ${SPRING_PROFILES_ACTIVE:dev}
  traces-sample-rate: ${SENTRY_TRACES_SAMPLE_RATE:1.0}
  send-default-pii: false
  enable-tracing: true
  ignored-exceptions-for-type:
    - org.raddatz.familienarchiv.exception.DomainException
@@ -1 +0,0 @@
CREATE INDEX IF NOT EXISTS idx_documents_updated_at ON documents(updated_at DESC);
@@ -1,8 +0,0 @@
-- Speeds up "documents by sender" queries used on /persons/[id] Korrespondenz-Überblick (#306),
-- /briefwechsel, and bulk-edit flows.
CREATE INDEX IF NOT EXISTS idx_documents_sender_id
    ON documents(sender_id);

-- Speeds up "comments by author" queries on admin user detail and (future) contributor profile.
CREATE INDEX IF NOT EXISTS idx_comments_author_id
    ON document_comments(author_id);
@@ -1,7 +0,0 @@
-- Remove duplicate (group_id, permission) rows that accumulated without a UNIQUE constraint.
-- Keeps the row with the smallest ctid (earliest physical insertion order).
DELETE FROM group_permissions a
USING group_permissions b
WHERE a.ctid < b.ctid
  AND a.group_id = b.group_id
  AND a.permission = b.permission;
@@ -1,11 +0,0 @@
-- Add NOT NULL and PRIMARY KEY to group_permissions.
-- Requires V63 to have run first (no duplicates can remain).
--
-- After this migration, future seed migrations can use:
--   INSERT INTO group_permissions ... ON CONFLICT DO NOTHING
-- instead of the INSERT ... WHERE NOT EXISTS pattern used before V64.
ALTER TABLE group_permissions
    ALTER COLUMN permission SET NOT NULL;

ALTER TABLE group_permissions
    ADD CONSTRAINT pk_group_permissions PRIMARY KEY (group_id, permission);
@@ -1,8 +0,0 @@
-- Promote the de-facto unique constraint on transcription_block_mentioned_persons to a named PK.
-- uq_tbmp_block_person (added in V57) is backed by a B-tree index identical to a PK;
-- this rename makes the naming convention explicit (pk_* vs uq_*).
ALTER TABLE transcription_block_mentioned_persons
    DROP CONSTRAINT uq_tbmp_block_person;

ALTER TABLE transcription_block_mentioned_persons
    ADD CONSTRAINT pk_tbmp PRIMARY KEY (block_id, person_id);
@@ -1,3 +0,0 @@
-- The composite PK (invite_token_id, group_id) does not support efficient lookups by group_id alone.
-- Add a dedicated index to support existsActiveWithGroupId queries.
CREATE INDEX idx_itg_group_id ON invite_token_group_ids (group_id);
@@ -1,18 +1,14 @@
package org.raddatz.familienarchiv;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Import;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.testcontainers.containers.PostgreSQLContainer;
import software.amazon.awssdk.services.s3.S3Client;

import static org.assertj.core.api.Assertions.assertThat;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@ActiveProfiles("test")
@Import(PostgresContainerConfig.class)
@@ -21,18 +17,9 @@ class ApplicationContextTest {
    @MockitoBean
    S3Client s3Client;

    @Autowired
    ApplicationContext ctx;

    @Test
    void contextLoads() {
        // verifies that the Spring context starts successfully with all beans wired,
        // Flyway migrations applied, and no configuration errors
    }

    @Test
    void sentry_is_disabled_when_no_dsn_is_configured() {
        // application-test.yaml has no sentry.dsn — SDK must stay inactive so tests are clean
        assertThat(io.sentry.Sentry.isEnabled()).isFalse();
    }
}
@@ -399,86 +399,6 @@ class MigrationIntegrationTest {
        AND dc.annotation_id IS NOT NULL
        """;

    // ─── V62: indexes on FK columns ──────────────────────────────────────────

    @Test
    void v62_idx_documents_sender_id_exists() {
        Integer count = jdbc.queryForObject(
            "SELECT COUNT(*) FROM pg_catalog.pg_indexes WHERE tablename = 'documents' AND indexname = 'idx_documents_sender_id'",
            Integer.class);
        assertThat(count).isEqualTo(1);
    }

    @Test
    void v62_idx_comments_author_id_exists() {
        Integer count = jdbc.queryForObject(
            "SELECT COUNT(*) FROM pg_catalog.pg_indexes WHERE tablename = 'document_comments' AND indexname = 'idx_comments_author_id'",
            Integer.class);
        assertThat(count).isEqualTo(1);
    }

    // ─── V63+V64: group_permissions dedup + primary key ──────────────────────

    @Test
    void v64_pk_group_permissions_exists() {
        Integer count = jdbc.queryForObject(
            """
            SELECT COUNT(*) FROM pg_catalog.pg_constraint c
            JOIN pg_catalog.pg_class t ON c.conrelid = t.oid
            WHERE t.relname = 'group_permissions'
              AND c.conname = 'pk_group_permissions'
              AND c.contype = 'p'
            """,
            Integer.class);
        assertThat(count).isEqualTo(1);
    }

    @Test
    void v64_permission_column_isNotNullable() {
        Integer count = jdbc.queryForObject(
            """
            SELECT COUNT(*) FROM information_schema.columns
            WHERE table_schema = 'public'
              AND table_name = 'group_permissions'
              AND column_name = 'permission'
              AND is_nullable = 'NO'
            """,
            Integer.class);
        assertThat(count).isEqualTo(1);
    }

    @Test
    @Transactional(propagation = Propagation.NOT_SUPPORTED)
    void v64_rejectsDuplicateGroupPermission() {
        UUID groupId = createUserGroup("DuplicateTestGroup-" + UUID.randomUUID());
        try {
            jdbc.update("INSERT INTO group_permissions (group_id, permission) VALUES (?, 'READ_ALL')", groupId);

            assertThatThrownBy(() ->
                jdbc.update("INSERT INTO group_permissions (group_id, permission) VALUES (?, 'READ_ALL')", groupId)
            ).isInstanceOf(DataIntegrityViolationException.class);
        } finally {
            jdbc.update("DELETE FROM group_permissions WHERE group_id = ?", groupId);
            jdbc.update("DELETE FROM user_groups WHERE id = ?", groupId);
        }
    }

    // ─── V65: tbmp UNIQUE promoted to PRIMARY KEY ─────────────────────────────

    @Test
    void v65_pk_tbmp_exists() {
        Integer count = jdbc.queryForObject(
            """
            SELECT COUNT(*) FROM pg_catalog.pg_constraint c
            JOIN pg_catalog.pg_class t ON c.conrelid = t.oid
            WHERE t.relname = 'transcription_block_mentioned_persons'
              AND c.conname = 'pk_tbmp'
              AND c.contype = 'p'
            """,
            Integer.class);
        assertThat(count).isEqualTo(1);
    }

    // ─── helpers ─────────────────────────────────────────────────────────────

    private UUID createPerson(String firstName, String lastName) {
@@ -562,10 +482,4 @@ class MigrationIntegrationTest {
            """, id, recipientId, docId, commentId);
        return id;
    }

    private UUID createUserGroup(String name) {
        UUID id = UUID.randomUUID();
        jdbc.update("INSERT INTO user_groups (id, name) VALUES (?, ?)", id, name);
        return id;
    }
}
@@ -1,11 +1,11 @@
package org.raddatz.familienarchiv.audit;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.raddatz.familienarchiv.PostgresContainerConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.annotation.Import;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.springframework.transaction.support.TransactionTemplate;
@@ -18,6 +18,7 @@ import static org.awaitility.Awaitility.await;
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@ActiveProfiles("test")
@Import(PostgresContainerConfig.class)
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
class AuditServiceIntegrationTest {

    @MockitoBean S3Client s3Client;
@@ -25,11 +26,6 @@ class AuditServiceIntegrationTest {
    @Autowired AuditLogRepository auditLogRepository;
    @Autowired TransactionTemplate transactionTemplate;

    @BeforeEach
    void resetAuditLog() {
        auditLogRepository.deleteAll();
    }

    @Test
    void logAfterCommit_writes_ANNOTATION_CREATED_row_after_transaction_commits() {
        transactionTemplate.execute(status -> {
@@ -1,48 +0,0 @@
package org.raddatz.familienarchiv.config;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.config.YamlPropertiesFactoryBean;
import org.springframework.boot.web.server.autoconfigure.ServerProperties.ForwardHeadersStrategy;
import org.springframework.boot.context.properties.bind.Binder;
import org.springframework.boot.context.properties.source.ConfigurationPropertySources;
import org.springframework.core.env.PropertiesPropertySource;
import org.springframework.core.io.ClassPathResource;

import java.util.Properties;

import static org.assertj.core.api.Assertions.assertThat;

/**
 * Binds {@code server.forward-headers-strategy} from {@code application.yaml} into
 * Spring Boot's typed {@link ForwardHeadersStrategy} enum. The binder rejects any
 * value that is not a valid enum constant ({@code BindException}), so a typo
 * ({@code "nativ"}, {@code "Native"}, {@code "framework "}) or a future Spring
 * rename of the property fails the test, not silently degrades to {@code NONE}.
 *
 * <p>No Spring context, no embedded server, no Testcontainers — this is the
 * cheapest test that pins the contract "Caddy's X-Forwarded-Proto is trusted".
 */
class ForwardHeadersConfigurationTest {

    @Test
    void forward_headers_strategy_binds_to_NATIVE() {
        YamlPropertiesFactoryBean yaml = new YamlPropertiesFactoryBean();
        yaml.setResources(new ClassPathResource("application.yaml"));
        Properties props = yaml.getObject();
        assertThat(props).as("application.yaml must be on the classpath").isNotNull();

        Binder binder = new Binder(ConfigurationPropertySources.from(
            new PropertiesPropertySource("application", props)));

        ForwardHeadersStrategy strategy = binder
            .bind("server.forward-headers-strategy", ForwardHeadersStrategy.class)
            .orElseThrow(() -> new AssertionError(
                "server.forward-headers-strategy is missing from application.yaml"));

        assertThat(strategy)
            .as("Spring must trust X-Forwarded-Proto from Caddy so that "
                + "request.getScheme(), redirect URLs, and the Spring Session "
                + "'Secure' cookie reflect the original https client request.")
            .isEqualTo(ForwardHeadersStrategy.NATIVE);
    }
}
@@ -13,7 +13,6 @@ import org.raddatz.familienarchiv.user.AppUser;
import org.raddatz.familienarchiv.document.Document;
import org.raddatz.familienarchiv.document.transcription.TranscriptionBlock;
import org.raddatz.familienarchiv.document.comment.CommentService;
import org.raddatz.familienarchiv.document.comment.CommentData;
import org.raddatz.familienarchiv.document.DocumentService;
import org.raddatz.familienarchiv.document.transcription.TranscriptionService;
import org.raddatz.familienarchiv.user.UserService;
@@ -143,8 +142,7 @@ class DashboardServiceTest {
        when(documentService.getDocumentsByIds(List.of(docId))).thenReturn(List.of(
            Document.builder().id(docId).title("B").originalFilename("b.pdf").receivers(new HashSet<>()).build()
        ));
        when(commentService.findDataByIds(List.of(commentId)))
            .thenReturn(Map.of(commentId, new CommentData(null, "preview text")));
        when(commentService.findAnnotationIdsByIds(List.of(commentId))).thenReturn(Map.of());

        List<ActivityFeedItemDTO> items = dashboardService.getActivity(userId, 5, AuditKind.ROLLUP_ELIGIBLE);
@@ -164,8 +162,8 @@ class DashboardServiceTest {
        when(documentService.getDocumentsByIds(List.of(docId))).thenReturn(List.of(
            Document.builder().id(docId).title("B").originalFilename("b.pdf").receivers(new HashSet<>()).build()
        ));
        when(commentService.findDataByIds(List.of(commentId)))
            .thenReturn(Map.of(commentId, new CommentData(annotationId, "preview text")));
        when(commentService.findAnnotationIdsByIds(List.of(commentId)))
            .thenReturn(Map.of(commentId, annotationId));

        List<ActivityFeedItemDTO> items = dashboardService.getActivity(userId, 5, AuditKind.ROLLUP_ELIGIBLE);
@@ -189,62 +187,7 @@ class DashboardServiceTest {
        assertThat(items).hasSize(1);
        assertThat(items.get(0).commentId()).isNull();
        assertThat(items.get(0).annotationId()).isNull();
        verify(commentService, never()).findDataByIds(anyList());
    }

    // ─── getActivity commentPreview ───────────────────────────────────────────

    @Test
    void getActivity_populates_commentPreview_for_COMMENT_ADDED_rows() {
        UUID userId = UUID.randomUUID();
        UUID docId = UUID.randomUUID();
        UUID commentId = UUID.randomUUID();

        ActivityFeedRow row = mockFeedRow(docId, "COMMENT_ADDED", commentId);
        when(auditLogQueryService.findActivityFeed(userId, 5, AuditKind.ROLLUP_ELIGIBLE)).thenReturn(List.of(row));
        when(documentService.getDocumentsByIds(List.of(docId))).thenReturn(List.of(
            Document.builder().id(docId).title("B").originalFilename("b.pdf").receivers(new HashSet<>()).build()
        ));
        when(commentService.findDataByIds(List.of(commentId)))
            .thenReturn(Map.of(commentId, new CommentData(null, "Hello family!")));

        List<ActivityFeedItemDTO> items = dashboardService.getActivity(userId, 5, AuditKind.ROLLUP_ELIGIBLE);

        assertThat(items.get(0).commentPreview()).isEqualTo("Hello family!");
    }

    @Test
    void getActivity_leaves_commentPreview_null_for_TEXT_SAVED_rows() {
        UUID userId = UUID.randomUUID();
        UUID docId = UUID.randomUUID();

        ActivityFeedRow row = mockFeedRow(docId, "TEXT_SAVED", null);
        when(auditLogQueryService.findActivityFeed(userId, 5, AuditKind.ROLLUP_ELIGIBLE)).thenReturn(List.of(row));
        when(documentService.getDocumentsByIds(List.of(docId))).thenReturn(List.of(
            Document.builder().id(docId).title("B").originalFilename("b.pdf").receivers(new HashSet<>()).build()
        ));

        List<ActivityFeedItemDTO> items = dashboardService.getActivity(userId, 5, AuditKind.ROLLUP_ELIGIBLE);

        assertThat(items.get(0).commentPreview()).isNull();
    }

    @Test
    void getActivity_leaves_commentPreview_null_when_comment_is_deleted() {
        UUID userId = UUID.randomUUID();
        UUID docId = UUID.randomUUID();
        UUID deletedCommentId = UUID.randomUUID();

        ActivityFeedRow row = mockFeedRow(docId, "COMMENT_ADDED", deletedCommentId);
        when(auditLogQueryService.findActivityFeed(userId, 5, AuditKind.ROLLUP_ELIGIBLE)).thenReturn(List.of(row));
        when(documentService.getDocumentsByIds(List.of(docId))).thenReturn(List.of(
            Document.builder().id(docId).title("B").originalFilename("b.pdf").receivers(new HashSet<>()).build()
        ));
        when(commentService.findDataByIds(List.of(deletedCommentId))).thenReturn(Map.of());

        List<ActivityFeedItemDTO> items = dashboardService.getActivity(userId, 5, AuditKind.ROLLUP_ELIGIBLE);

        assertThat(items.get(0).commentPreview()).isNull();
        verify(commentService, never()).findAnnotationIdsByIds(anyList());
    }

    // ─── getPulse — always uses full ROLLUP_ELIGIBLE set ─────────────────────
@@ -44,7 +44,7 @@ class StatsControllerTest {
    @Test
    @WithMockUser(authorities = "READ_ALL")
    void getStats_returns200_withCorrectCounts() throws Exception {
        when(statsService.getStats()).thenReturn(new StatsDTO(4L, 12L, 2L));
        when(statsService.getStats()).thenReturn(new StatsDTO(4L, 12L));

        mockMvc.perform(get("/api/stats"))
            .andExpect(status().isOk())
@@ -55,7 +55,7 @@ class StatsControllerTest {
    @Test
    @WithMockUser(authorities = "READ_ALL")
    void getStats_returns200_withZeroCounts() throws Exception {
        when(statsService.getStats()).thenReturn(new StatsDTO(0L, 0L, 0L));
        when(statsService.getStats()).thenReturn(new StatsDTO(0L, 0L));

        mockMvc.perform(get("/api/stats"))
            .andExpect(status().isOk())
@@ -7,7 +7,6 @@ import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import org.raddatz.familienarchiv.document.DocumentService;
import org.raddatz.familienarchiv.dashboard.StatsDTO;
import org.raddatz.familienarchiv.geschichte.GeschichteService;
import org.raddatz.familienarchiv.person.PersonService;

import static org.assertj.core.api.Assertions.assertThat;
@@ -18,7 +17,6 @@ class StatsServiceTest {

    @Mock PersonService personService;
    @Mock DocumentService documentService;
    @Mock GeschichteService geschichteService;
    @InjectMocks StatsService statsService;

    @Test
@@ -32,17 +30,6 @@ class StatsServiceTest {
        assertThat(stats.totalDocuments()).isEqualTo(12L);
    }

    @Test
    void getStats_includes_totalStories() {
        when(personService.count()).thenReturn(3L);
        when(documentService.count()).thenReturn(7L);
        when(geschichteService.countPublished()).thenReturn(5L);

        StatsDTO stats = statsService.getStats();

        assertThat(stats.totalStories()).isEqualTo(5L);
    }

    @Test
    void getStats_returnsZero_whenNoEntities() {
        when(personService.count()).thenReturn(0L);
@@ -44,7 +44,6 @@ import static org.mockito.Mockito.when;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.multipart;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.patch;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.header;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
@@ -1241,100 +1240,4 @@ class DocumentControllerTest {
|
||||
.andExpect(jsonPath("$.errors[0].message").value(
|
||||
org.hamcrest.Matchers.containsString("not found")));
|
||||
}
|
||||
|
||||
// ─── GET /api/documents/density ───────────────────────────────────────────

@Test
void density_returns401_whenUnauthenticated() throws Exception {
    mockMvc.perform(get("/api/documents/density"))
            .andExpect(status().isUnauthorized());
}

@Test
@WithMockUser
void density_returns200_withResultBody_whenAuthenticated() throws Exception {
    when(documentService.getDensity(any())).thenReturn(
            new DocumentDensityResult(
                    List.of(new MonthBucket("1915-08", 2), new MonthBucket("1915-09", 1)),
                    java.time.LocalDate.of(1915, 8, 3),
                    java.time.LocalDate.of(1915, 9, 1)));

    mockMvc.perform(get("/api/documents/density"))
            .andExpect(status().isOk())
            .andExpect(jsonPath("$.buckets").isArray())
            .andExpect(jsonPath("$.buckets[0].month").value("1915-08"))
            .andExpect(jsonPath("$.buckets[0].count").value(2))
            .andExpect(jsonPath("$.minDate").value("1915-08-03"))
            .andExpect(jsonPath("$.maxDate").value("1915-09-01"));
}

// Pins produces=APPLICATION_JSON_VALUE on the density mapping so the OpenAPI/TypeScript
// codegen records application/json instead of the wildcard. Without produces= the
// request mapping accepts any Accept header and the emitted OpenAPI spec falls back to
// the wildcard. Sending an Accept header that JSON cannot satisfy must NOT return 200 —
// Spring rejects it with 406 (HttpMediaTypeNotAcceptableException), which our
// GlobalExceptionHandler may surface as 400. Either way it proves the route is
// locked to JSON.
@Test
@WithMockUser
void density_declaresApplicationJsonContentType() throws Exception {
    when(documentService.getDensity(any())).thenReturn(
            new DocumentDensityResult(List.of(), null, null));

    mockMvc.perform(get("/api/documents/density")
                    .accept(MediaType.APPLICATION_XML))
            .andExpect(status().is4xxClientError());
}

@Test
@WithMockUser
void density_emitsPrivateCacheControlHeader() throws Exception {
    when(documentService.getDensity(any())).thenReturn(
            new DocumentDensityResult(List.of(), null, null));

    mockMvc.perform(get("/api/documents/density"))
            .andExpect(status().isOk())
            .andExpect(header().string("Cache-Control",
                    org.hamcrest.Matchers.containsString("max-age=300")))
            .andExpect(header().string("Cache-Control",
                    org.hamcrest.Matchers.containsString("private")));
}

@Test
@WithMockUser
void density_forwardsSenderAndTagFilters() throws Exception {
    when(documentService.getDensity(any())).thenReturn(
            new DocumentDensityResult(List.of(), null, null));
    UUID senderId = UUID.randomUUID();

    mockMvc.perform(get("/api/documents/density")
                    .param("senderId", senderId.toString())
                    .param("tag", "Familie")
                    .param("tag", "Urlaub")
                    .param("tagOp", "OR"))
            .andExpect(status().isOk());

    verify(documentService).getDensity(eq(new DensityFilters(
            null, senderId, null,
            List.of("Familie", "Urlaub"),
            null, null,
            org.raddatz.familienarchiv.tag.TagOperator.OR)));
}

@Test
@WithMockUser
void density_forwardsStatusAndQueryText() throws Exception {
    when(documentService.getDensity(any())).thenReturn(
            new DocumentDensityResult(List.of(), null, null));

    mockMvc.perform(get("/api/documents/density")
                    .param("q", "Brief")
                    .param("status", "REVIEWED"))
            .andExpect(status().isOk());

    verify(documentService).getDensity(eq(new DensityFilters(
            "Brief", null, null, null, null,
            DocumentStatus.REVIEWED,
            org.raddatz.familienarchiv.tag.TagOperator.AND)));
}
}
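The content-type and Cache-Control tests above imply a controller mapping shaped roughly like the following. This is a hedged sketch, not the project's actual controller: the method name and the way `DensityFilters` is bound are assumptions read off the tests; only the path, the `produces=` pin, and the Cache-Control values are fixed by the assertions.

```java
// Hypothetical sketch — method name and parameter binding are assumptions
// inferred from the tests above, not the repository's actual code.
@GetMapping(value = "/api/documents/density", produces = MediaType.APPLICATION_JSON_VALUE)
ResponseEntity<DocumentDensityResult> density(DensityFilters filters) {
    return ResponseEntity.ok()
            // Matches the header assertions: Cache-Control containing "private" and "max-age=300".
            .cacheControl(CacheControl.maxAge(5, TimeUnit.MINUTES).cachePrivate())
            .body(documentService.getDensity(filters));
}
```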
@@ -1,162 +0,0 @@
package org.raddatz.familienarchiv.document;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.raddatz.familienarchiv.PostgresContainerConfig;
import org.raddatz.familienarchiv.person.Person;
import org.raddatz.familienarchiv.person.PersonRepository;
import org.raddatz.familienarchiv.tag.Tag;
import org.raddatz.familienarchiv.tag.TagRepository;
import org.raddatz.familienarchiv.tag.TagOperator;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.annotation.Import;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.springframework.transaction.annotation.Transactional;
import software.amazon.awssdk.services.s3.S3Client;

import java.time.LocalDate;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;

import static org.assertj.core.api.Assertions.assertThat;

/**
 * End-to-end test for the filter-reactive density aggregation.
 * Density bars must recompute as the user changes other filters (sender, tag,
 * status, …). The endpoint deliberately does NOT honour `from`/`to` — the chart
 * is the surface for picking those, so it must always span the broader space
 * the user is selecting within.
 */
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@ActiveProfiles("test")
@Import(PostgresContainerConfig.class)
@Transactional
class DocumentDensityIntegrationTest {

    @MockitoBean S3Client s3Client;
    @Autowired DocumentService documentService;
    @Autowired DocumentRepository documentRepository;
    @Autowired PersonRepository personRepository;
    @Autowired TagRepository tagRepository;

    private Person hans;
    private Person anna;
    private Tag familieTag;
    private Tag urlaubTag;

    @BeforeEach
    void seed() {
        hans = personRepository.save(Person.builder().firstName("Hans").lastName("Müller").build());
        anna = personRepository.save(Person.builder().firstName("Anna").lastName("Weber").build());
        familieTag = tagRepository.save(Tag.builder().name("Familie").build());
        urlaubTag = tagRepository.save(Tag.builder().name("Urlaub").build());
    }

    private static DensityFilters noFilters() {
        return new DensityFilters(null, null, null, null, null, null, null);
    }

    @Test
    void getDensity_returnsAllMonths_whenNoFiltersApplied() {
        save("a", LocalDate.of(1915, 8, 3), null, Set.of());
        save("b", LocalDate.of(1915, 8, 17), null, Set.of());
        save("c", LocalDate.of(1915, 9, 1), null, Set.of());

        DocumentDensityResult result = documentService.getDensity(noFilters());

        assertThat(result.buckets()).extracting(MonthBucket::month)
                .containsExactly("1915-08", "1915-09");
        assertThat(result.buckets()).extracting(MonthBucket::count).containsExactly(2, 1);
        assertThat(result.minDate()).isEqualTo(LocalDate.of(1915, 8, 3));
        assertThat(result.maxDate()).isEqualTo(LocalDate.of(1915, 9, 1));
    }

    @Test
    void getDensity_filtersBySender() {
        save("a", LocalDate.of(1915, 8, 3), hans, Set.of());
        save("b", LocalDate.of(1916, 1, 4), hans, Set.of());
        save("c", LocalDate.of(1920, 5, 1), anna, Set.of());

        DocumentDensityResult result = documentService.getDensity(
                new DensityFilters(null, hans.getId(), null, null, null, null, null));

        assertThat(result.buckets()).extracting(MonthBucket::month)
                .containsExactly("1915-08", "1916-01");
        assertThat(result.maxDate()).isEqualTo(LocalDate.of(1916, 1, 4));
    }

    @Test
    void getDensity_filtersByTag() {
        save("a", LocalDate.of(1915, 8, 3), null, Set.of(familieTag));
        save("b", LocalDate.of(1920, 5, 1), null, Set.of(urlaubTag));

        DocumentDensityResult result = documentService.getDensity(
                new DensityFilters(null, null, null, List.of("Familie"), null, null, TagOperator.AND));

        assertThat(result.buckets()).extracting(MonthBucket::month).containsExactly("1915-08");
    }

    @Test
    void getDensity_combinesSenderAndTag() {
        save("a", LocalDate.of(1915, 8, 3), hans, Set.of(familieTag));
        save("b", LocalDate.of(1916, 1, 4), hans, Set.of(urlaubTag));
        save("c", LocalDate.of(1920, 5, 1), anna, Set.of(familieTag));

        DocumentDensityResult result = documentService.getDensity(
                new DensityFilters(null, hans.getId(), null, List.of("Familie"), null, null, TagOperator.AND));

        assertThat(result.buckets()).extracting(MonthBucket::month).containsExactly("1915-08");
    }

    @Test
    void getDensity_filtersByStatus() {
        save("a", LocalDate.of(1915, 8, 3), null, Set.of(), DocumentStatus.UPLOADED);
        save("b", LocalDate.of(1916, 1, 4), null, Set.of(), DocumentStatus.PLACEHOLDER);

        DocumentDensityResult result = documentService.getDensity(
                new DensityFilters(null, null, null, null, null, DocumentStatus.UPLOADED, null));

        assertThat(result.buckets()).extracting(MonthBucket::month).containsExactly("1915-08");
    }

    @Test
    void getDensity_returnsEmpty_whenNoDocumentsMatch() {
        save("a", LocalDate.of(1915, 8, 3), hans, Set.of());

        DocumentDensityResult result = documentService.getDensity(
                new DensityFilters(null, anna.getId(), null, null, null, null, null));

        assertThat(result.buckets()).isEmpty();
        assertThat(result.minDate()).isNull();
        assertThat(result.maxDate()).isNull();
    }

    @Test
    void getDensity_excludesDocumentsWithNullDate() {
        save("dated", LocalDate.of(1915, 8, 3), null, Set.of());
        save("undated", null, null, Set.of());

        DocumentDensityResult result = documentService.getDensity(noFilters());

        assertThat(result.buckets()).extracting(MonthBucket::count).containsExactly(1);
    }

    private void save(String suffix, LocalDate date, Person sender, Set<Tag> tags) {
        save(suffix, date, sender, tags, DocumentStatus.UPLOADED);
    }

    private void save(String suffix, LocalDate date, Person sender, Set<Tag> tags, DocumentStatus status) {
        documentRepository.save(Document.builder()
                .title("Doc " + suffix)
                .originalFilename("doc-" + suffix + "-" + UUID.randomUUID() + ".pdf")
                .status(status)
                .documentDate(date)
                .sender(sender)
                .tags(new HashSet<>(tags))
                .build());
    }
}
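The bucketing behaviour these tests pin down can be illustrated in plain Java. This is an in-memory sketch of the semantics only (group dated documents by `yyyy-MM`, skip null dates), not the service's actual query, which this diff does not show:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.TreeMap;
import java.util.stream.Collectors;

class DensitySketch {
    // Bucket dates by "yyyy-MM", skipping null (undated) documents, as the
    // getDensity_* tests above expect; TreeMap keeps the months in order.
    static Map<String, Long> buckets(List<LocalDate> dates) {
        DateTimeFormatter ym = DateTimeFormatter.ofPattern("yyyy-MM");
        return dates.stream()
                .filter(Objects::nonNull)
                .collect(Collectors.groupingBy(d -> d.format(ym), TreeMap::new, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<LocalDate> dates = Arrays.asList(
                LocalDate.of(1915, 8, 3), LocalDate.of(1915, 8, 17),
                LocalDate.of(1915, 9, 1), null); // the undated document is excluded
        System.out.println(buckets(dates)); // {1915-08=2, 1915-09=1}
    }
}
```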
@@ -1,109 +0,0 @@
package org.raddatz.familienarchiv.document;

import jakarta.persistence.EntityManager;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.raddatz.familienarchiv.PostgresContainerConfig;
import org.raddatz.familienarchiv.config.FlywayConfig;
import org.raddatz.familienarchiv.document.DocumentRepository;
import org.raddatz.familienarchiv.document.Document;
import org.raddatz.familienarchiv.document.DocumentStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
import org.springframework.context.annotation.Import;
import org.springframework.test.annotation.DirtiesContext;

import java.util.List;
import java.util.UUID;

import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatNoException;

/**
 * Repository-level integration tests for {@code findFtsPageRaw}: verifies that the
 * paginated FTS query returns exactly page-size rows and that the window-function
 * total reflects the full match count, not just the page count.
 *
 * <p>Uses real Postgres via Testcontainers so the GIN index, tsvector trigger, and
 * {@code websearch_to_tsquery} semantics are identical to production.
 *
 * <p>{@code AFTER_CLASS} class mode keeps the Spring context alive for all tests
 * in this class and rebuilds it once at the end, rather than after every test.
 */
@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@Import({PostgresContainerConfig.class, FlywayConfig.class})
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
class DocumentFtsPagedIntegrationTest {

    @Autowired DocumentRepository documentRepository;
    @Autowired EntityManager em;

    // 60 docs match "Walter"; 10 docs with "Hans" do not.
    private static final int WALTER_COUNT = 60;
    private static final int PAGE_SIZE = 50;

    @BeforeEach
    void seed() {
        documentRepository.deleteAll();
        em.flush();
        for (int i = 0; i < WALTER_COUNT; i++) {
            documentRepository.saveAndFlush(doc("Brief von Walter Nr. " + i));
        }
        for (int i = 0; i < 10; i++) {
            documentRepository.saveAndFlush(doc("Brief von Hans Nr. " + i));
        }
        em.clear();
    }

    @Test
    void findFtsPageRaw_firstPage_returnsPageSizeRows() {
        List<Object[]> rows = documentRepository.findFtsPageRaw("Walter", 0, PAGE_SIZE);

        assertThat(rows).hasSize(PAGE_SIZE);
    }

    @Test
    void findFtsPageRaw_windowTotal_equalsFullMatchCount_notPageSize() {
        List<Object[]> rows = documentRepository.findFtsPageRaw("Walter", 0, PAGE_SIZE);

        long total = ((Number) rows.get(0)[2]).longValue();
        assertThat(total).isEqualTo(WALTER_COUNT);
    }

    @Test
    void findFtsPageRaw_lastPage_returnsRemainder() {
        int remainder = WALTER_COUNT % PAGE_SIZE; // 60 % 50 = 10
        List<Object[]> rows = documentRepository.findFtsPageRaw("Walter", PAGE_SIZE, PAGE_SIZE);

        assertThat(rows).hasSize(remainder);
        long total = ((Number) rows.get(0)[2]).longValue();
        assertThat(total).isEqualTo(WALTER_COUNT);
    }

    @Test
    void findFtsPageRaw_noMatches_returnsEmptyList() {
        List<Object[]> rows = documentRepository.findFtsPageRaw("XYZ_KEIN_TREFFER", 0, PAGE_SIZE);

        assertThat(rows).isEmpty();
    }

    @Test
    void findFtsPageRaw_stopwordOnlyQuery_returnsEmptyList_noException() {
        assertThatNoException().isThrownBy(() -> {
            List<Object[]> rows = documentRepository.findFtsPageRaw("der die das und", 0, PAGE_SIZE);
            assertThat(rows).isEmpty();
        });
    }

    // ─── Helper ───────────────────────────────────────────────────────────────

    private Document doc(String title) {
        return Document.builder()
                .title(title)
                .originalFilename(title.replace(" ", "_") + ".pdf")
                .status(DocumentStatus.UPLOADED)
                .build();
    }
}
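The windowed-total contract verified above can be sketched in plain Java. This is an illustration of the semantics only — the real repository method is assumed to use something like `count(*) OVER ()` in a native Postgres query, which is not shown in this diff, so that every returned row also carries the full match count:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

class WindowTotalSketch {
    // Simulates a page query where every row carries the full match count
    // alongside the row's own data, mirroring "count(*) OVER ()" semantics.
    static List<long[]> page(List<Long> matchIds, int offset, int limit) {
        long total = matchIds.size(); // the window total is the full match count
        return matchIds.stream().skip(offset).limit(limit)
                .map(id -> new long[]{id, total})
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Long> matches = new ArrayList<>();
        for (long i = 0; i < 60; i++) matches.add(i); // 60 matches, page size 50
        System.out.println(page(matches, 0, 50).size());     // 50  (full first page)
        System.out.println(page(matches, 50, 50).size());    // 10  (remainder page)
        System.out.println(page(matches, 50, 50).get(0)[1]); // 60  (total, not page size)
    }
}
```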
@@ -69,7 +69,7 @@ class DocumentFtsTest
documentRepository.saveAndFlush(document("Alter Brief"));
em.clear();

List<UUID> ids = documentRepository.findAllMatchingIdsByFts("Brief");
List<UUID> ids = documentRepository.findRankedIdsByFts("Brief");

assertThat(ids).hasSize(1);
}
@@ -79,7 +79,7 @@ class DocumentFtsTest
documentRepository.saveAndFlush(document("Alter Brief"));
em.clear();

List<UUID> ids = documentRepository.findAllMatchingIdsByFts("Briefe");
List<UUID> ids = documentRepository.findRankedIdsByFts("Briefe");

assertThat(ids).hasSize(1);
}
@@ -89,7 +89,7 @@ class DocumentFtsTest
documentRepository.saveAndFlush(document("Ein furchtbarer Brief"));
em.clear();

List<UUID> ids = documentRepository.findAllMatchingIdsByFts("furchtb");
List<UUID> ids = documentRepository.findRankedIdsByFts("furchtb");

assertThat(ids).hasSize(1);
}
@@ -99,7 +99,7 @@ class DocumentFtsTest
documentRepository.saveAndFlush(document("Familienfoto"));
em.clear();

List<UUID> ids = documentRepository.findAllMatchingIdsByFts("Brief");
List<UUID> ids = documentRepository.findRankedIdsByFts("Brief");

assertThat(ids).isEmpty();
}
@@ -115,7 +115,7 @@ class DocumentFtsTest
em.flush();
em.clear();

List<UUID> ids = documentRepository.findAllMatchingIdsByFts("schreiben");
List<UUID> ids = documentRepository.findRankedIdsByFts("schreiben");

assertThat(ids).contains(doc.getId());
}
@@ -125,14 +125,14 @@ class DocumentFtsTest
Document doc = documentRepository.saveAndFlush(document("Leeres Dokument"));
em.clear();

assertThat(documentRepository.findAllMatchingIdsByFts("Grundbuch")).isEmpty();
assertThat(documentRepository.findRankedIdsByFts("Grundbuch")).isEmpty();

UUID annotationId = annotation(doc.getId());
blockRepository.saveAndFlush(block(doc.getId(), annotationId, "Grundbuch Eintrag 1923", 0));
em.flush();
em.clear();

assertThat(documentRepository.findAllMatchingIdsByFts("Grundbuch")).contains(doc.getId());
assertThat(documentRepository.findRankedIdsByFts("Grundbuch")).contains(doc.getId());
}

@Test
@@ -144,13 +144,13 @@ class DocumentFtsTest
em.flush();
em.clear();

assertThat(documentRepository.findAllMatchingIdsByFts("Grundbuch")).contains(doc.getId());
assertThat(documentRepository.findRankedIdsByFts("Grundbuch")).contains(doc.getId());

blockRepository.deleteById(block.getId());
em.flush();
em.clear();

assertThat(documentRepository.findAllMatchingIdsByFts("Grundbuch")).doesNotContain(doc.getId());
assertThat(documentRepository.findRankedIdsByFts("Grundbuch")).doesNotContain(doc.getId());
}

// ─── Ranking ───────────────────────────────────────────────────────────────
@@ -166,7 +166,7 @@ class DocumentFtsTest
em.flush();
em.clear();

List<UUID> ids = documentRepository.findAllMatchingIdsByFts("Grundbuch");
List<UUID> ids = documentRepository.findRankedIdsByFts("Grundbuch");

assertThat(ids).hasSize(2);
assertThat(ids.get(0)).isEqualTo(docA.getId());
@@ -179,7 +179,7 @@ class DocumentFtsTest
documentRepository.saveAndFlush(document("Ein Brief von der Oma"));
em.clear();

List<UUID> ids = documentRepository.findAllMatchingIdsByFts("der die das und");
List<UUID> ids = documentRepository.findRankedIdsByFts("der die das und");

assertThat(ids).isEmpty();
}
@@ -195,7 +195,7 @@ class DocumentFtsTest
em.flush();
em.clear();

List<UUID> ids = documentRepository.findAllMatchingIdsByFts("Wille");
List<UUID> ids = documentRepository.findRankedIdsByFts("Wille");

assertThat(ids).contains(doc.getId());
}
@@ -205,7 +205,7 @@ class DocumentFtsTest
documentRepository.saveAndFlush(document("Brief"));
em.clear();

assertThatNoException().isThrownBy(() -> documentRepository.findAllMatchingIdsByFts("((("));
assertThatNoException().isThrownBy(() -> documentRepository.findRankedIdsByFts("((("));
}

// ─── Weight C: sender/receiver names ───────────────────────────────────────
@@ -223,7 +223,7 @@ class DocumentFtsTest
em.flush();
em.clear();

List<UUID> ids = documentRepository.findAllMatchingIdsByFts("Schmidt");
List<UUID> ids = documentRepository.findRankedIdsByFts("Schmidt");

assertThat(ids).contains(doc.getId());
}
@@ -241,7 +241,7 @@ class DocumentFtsTest
em.flush();
em.clear();

List<UUID> ids = documentRepository.findAllMatchingIdsByFts("Raddatz");
List<UUID> ids = documentRepository.findRankedIdsByFts("Raddatz");

assertThat(ids).contains(doc.getId());
}
@@ -260,7 +260,7 @@ class DocumentFtsTest
em.flush();
em.clear();

List<UUID> ids = documentRepository.findAllMatchingIdsByFts("Familiengeschichte");
List<UUID> ids = documentRepository.findRankedIdsByFts("Familiengeschichte");

assertThat(ids).hasSize(1);
}
@@ -278,7 +278,7 @@ class DocumentFtsTest
em.flush();
em.clear();

List<UUID> rankedIds = documentRepository.findAllMatchingIdsByFts("Grundbuch");
List<UUID> rankedIds = documentRepository.findRankedIdsByFts("Grundbuch");
Specification<Document> spec = Specification.where(hasIds(rankedIds))
        .and(hasStatus(DocumentStatus.UPLOADED));
@@ -12,9 +12,9 @@ import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.annotation.Import;
import org.springframework.data.domain.PageRequest;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.springframework.transaction.annotation.Transactional;
import software.amazon.awssdk.services.s3.S3Client;

import java.time.LocalDate;
@@ -33,7 +33,7 @@ import static org.assertj.core.api.Assertions.assertThat;
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@ActiveProfiles("test")
@Import(PostgresContainerConfig.class)
@Transactional
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
class DocumentSearchPagedIntegrationTest {

private static final int FIXTURE_SIZE = 120;

@@ -21,22 +21,17 @@ import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.domain.Specification;

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyInt;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

@ExtendWith(MockitoExtension.class)
class DocumentServiceSortTest {

private static final Pageable PAGE = org.springframework.data.domain.PageRequest.of(0, 10_000);
private static final Pageable UNPAGED = org.springframework.data.domain.PageRequest.of(0, 10_000);

@Mock DocumentRepository documentRepository;
@Mock PersonService personService;
@@ -48,12 +43,12 @@ class DocumentServiceSortTest {
@Mock TranscriptionBlockQueryService transcriptionBlockQueryService;
@InjectMocks DocumentService documentService;

// ─── DATE sort ────────────────────────────────────────────────────────────
// ─── searchDocuments — DATE sort ──────────────────────────────────────────

@Test
void searchDocuments_with_DATE_sort_and_text_sorts_chronologically_not_by_relevance() {
UUID id1 = UUID.randomUUID(); // higher relevance, older doc
UUID id2 = UUID.randomUUID(); // lower relevance, newer doc
UUID id1 = UUID.randomUUID(); // rank position 0 (higher relevance, older doc)
UUID id2 = UUID.randomUUID(); // rank position 1 (lower relevance, newer doc)

Document older = Document.builder().id(id1)
.title("Brief").status(DocumentStatus.UPLOADED)
@@ -62,48 +57,38 @@ class DocumentServiceSortTest {
.title("Brief").status(DocumentStatus.UPLOADED)
.documentDate(LocalDate.of(1960, 1, 1)).build();

when(documentRepository.findAllMatchingIdsByFts("Brief")).thenReturn(List.of(id1, id2));
// FTS returns id1 first (higher rank), id2 second
when(documentRepository.findRankedIdsByFts("Brief")).thenReturn(List.of(id1, id2));
// findAll(spec, pageable) — the correct date path — returns date-DESC order
when(documentRepository.findAll(any(Specification.class), any(Pageable.class)))
.thenReturn(new PageImpl<>(List.of(newer, older)));

DocumentSearchResult result = documentService.searchDocuments(
"Brief", null, null, null, null, null, null, null, DocumentSort.DATE, "DESC", null, PAGE);
"Brief", null, null, null, null, null, null, null, DocumentSort.DATE, "DESC", null, UNPAGED);

// Expect: date order (newer 1960 first), NOT rank order (older 1940 first)
assertThat(result.items()).hasSize(2);
assertThat(result.items().get(0).document().getId()).isEqualTo(id2); // newer first
assertThat(result.items().get(0).document().getId()).isEqualTo(id2); // newer doc first
}

// ─── RELEVANCE sort — pure text (no filters) ──────────────────────────────

@Test
void searchDocuments_relevance_pureText_calls_findFtsPageRaw_not_findAllMatchingIds() {
UUID id1 = UUID.randomUUID();
List<Object[]> ftsRows = ftsRows(id1, 0.5d, 1L);
when(documentRepository.findFtsPageRaw(anyString(), anyInt(), anyInt())).thenReturn(ftsRows);
when(documentRepository.findAllById(any()))
.thenReturn(List.of(doc(id1)));

documentService.searchDocuments(
"Brief", null, null, null, null, null, null, null, DocumentSort.RELEVANCE, null, null, PAGE);

verify(documentRepository).findFtsPageRaw(anyString(), anyInt(), anyInt());
verify(documentRepository, never()).findAllMatchingIdsByFts(anyString());
}
// ─── searchDocuments — RELEVANCE sort ─────────────────────────────────────

@Test
void searchDocuments_with_RELEVANCE_sort_and_text_preserves_fts_rank_order() {
UUID id1 = UUID.randomUUID(); // higher rank — must appear first
UUID id2 = UUID.randomUUID(); // lower rank
UUID id1 = UUID.randomUUID(); // rank position 0
UUID id2 = UUID.randomUUID(); // rank position 1

List<Object[]> ftsRows = new ArrayList<>();
ftsRows.add(new Object[]{id1, 0.8d, 2L});
ftsRows.add(new Object[]{id2, 0.3d, 2L});
when(documentRepository.findFtsPageRaw(anyString(), anyInt(), anyInt())).thenReturn(ftsRows);
when(documentRepository.findAllById(any())).thenReturn(List.of(doc(id2), doc(id1))); // unordered from JPA
Document doc1 = Document.builder().id(id1).title("Brief").status(DocumentStatus.UPLOADED).build();
Document doc2 = Document.builder().id(id2).title("Brief").status(DocumentStatus.UPLOADED).build();

when(documentRepository.findRankedIdsByFts("Brief")).thenReturn(List.of(id1, id2));
when(documentRepository.findAll(any(Specification.class)))
.thenReturn(List.of(doc2, doc1)); // unordered from DB

DocumentSearchResult result = documentService.searchDocuments(
"Brief", null, null, null, null, null, null, null, DocumentSort.RELEVANCE, null, null, PAGE);
"Brief", null, null, null, null, null, null, null, DocumentSort.RELEVANCE, null, null, UNPAGED);

// Expect: rank order restored (id1 first)
assertThat(result.items().get(0).document().getId()).isEqualTo(id1);
}
@@ -112,82 +97,16 @@ class DocumentServiceSortTest {
UUID id1 = UUID.randomUUID();
UUID id2 = UUID.randomUUID();

List<Object[]> ftsRows = new ArrayList<>();
ftsRows.add(new Object[]{id1, 0.8d, 2L});
ftsRows.add(new Object[]{id2, 0.3d, 2L});
when(documentRepository.findFtsPageRaw(anyString(), anyInt(), anyInt())).thenReturn(ftsRows);
when(documentRepository.findAllById(any())).thenReturn(List.of(doc(id2), doc(id1)));
Document doc1 = Document.builder().id(id1).title("Brief").status(DocumentStatus.UPLOADED).build();
Document doc2 = Document.builder().id(id2).title("Brief").status(DocumentStatus.UPLOADED).build();

when(documentRepository.findRankedIdsByFts("Brief")).thenReturn(List.of(id1, id2));
when(documentRepository.findAll(any(Specification.class)))
.thenReturn(List.of(doc2, doc1));

DocumentSearchResult result = documentService.searchDocuments(
"Brief", null, null, null, null, null, null, null, null, null, null, PAGE);
"Brief", null, null, null, null, null, null, null, null, null, null, UNPAGED);

assertThat(result.items().get(0).document().getId()).isEqualTo(id1);
}

// ─── RELEVANCE sort — overflow guard ─────────────────────────────────────

@Test
void searchDocuments_relevance_returns_empty_when_offset_exceeds_maxInt() {
// offset = pageNumber * pageSize; choose values so offset > Integer.MAX_VALUE
Pageable hugePage = org.springframework.data.domain.PageRequest.of(Integer.MAX_VALUE / 10 + 1, 10);

DocumentSearchResult result = documentService.searchDocuments(
"Brief", null, null, null, null, null, null, null,
DocumentSort.RELEVANCE, null, null, hugePage);

assertThat(result.items()).isEmpty();
verify(documentRepository, never()).findFtsPageRaw(anyString(), anyInt(), anyInt());
}

// ─── toFtsPage — UUID-as-String JDBC driver variance ────────────────────

@Test
void searchDocuments_relevance_handles_string_uuid_from_jdbc_driver() {
String stringId = "11111111-1111-1111-1111-111111111111";
UUID uuidId = UUID.fromString(stringId);
// Simulate a JDBC driver that returns the id column as String instead of UUID
List<Object[]> ftsRows = new ArrayList<>();
ftsRows.add(new Object[]{stringId, 0.5d, 1L});
when(documentRepository.findFtsPageRaw(anyString(), anyInt(), anyInt())).thenReturn(ftsRows);
when(documentRepository.findAllById(any())).thenReturn(List.of(doc(uuidId)));

DocumentSearchResult result = documentService.searchDocuments(
"Brief", null, null, null, null, null, null, null,
DocumentSort.RELEVANCE, null, null, PAGE);

assertThat(result.items()).hasSize(1);
assertThat(result.items().get(0).document().getId()).isEqualTo(uuidId);
}

// ─── RELEVANCE sort — text + active filter ────────────────────────────────

@Test
void searchDocuments_relevance_with_active_filter_uses_inMemory_path() {
UUID id1 = UUID.randomUUID();
UUID id2 = UUID.randomUUID();

when(documentRepository.findAllMatchingIdsByFts("Brief")).thenReturn(List.of(id1, id2));
when(documentRepository.findAll(any(Specification.class)))
.thenReturn(List.of(doc(id2), doc(id1)));

// a date filter is active → triggers in-memory path, not findFtsPageRaw
LocalDate from = LocalDate.of(1900, 1, 1);
documentService.searchDocuments(
"Brief", from, null, null, null, null, null, null, DocumentSort.RELEVANCE, null, null, PAGE);

verify(documentRepository, never()).findFtsPageRaw(anyString(), anyInt(), anyInt());
verify(documentRepository).findAllMatchingIdsByFts("Brief");
}

// ─── Helpers ──────────────────────────────────────────────────────────────

private static Document doc(UUID id) {
return Document.builder().id(id).title("Brief").status(DocumentStatus.UPLOADED).build();
}

private static List<Object[]> ftsRows(UUID id, double rank, long total) {
List<Object[]> rows = new ArrayList<>();
rows.add(new Object[]{id, rank, total});
return rows;
}
}
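The RELEVANCE-sort tests above pin down a re-ordering step: the ranked id list comes from FTS, the matching documents come back from the database in arbitrary order, and the service must restore rank order. A minimal sketch of that step (a hypothetical helper, not the service's actual code):

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.stream.Collectors;

class RankOrderSketch {
    // Re-sort fetched entities by their position in the ranked id list,
    // pushing ids the FTS query never ranked to the end.
    static List<UUID> restoreRankOrder(List<UUID> rankedIds, List<UUID> fetched) {
        Map<UUID, Integer> rank = new HashMap<>();
        for (int i = 0; i < rankedIds.size(); i++) rank.put(rankedIds.get(i), i);
        return fetched.stream()
                .sorted(Comparator.comparingInt((UUID id) -> rank.getOrDefault(id, Integer.MAX_VALUE)))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        UUID id1 = UUID.randomUUID(), id2 = UUID.randomUUID();
        // DB returned [id2, id1] unordered; rank list says id1 comes first.
        List<UUID> ordered = restoreRankOrder(List.of(id1, id2), List.of(id2, id1));
        System.out.println(ordered.get(0).equals(id1)); // true — rank order restored
    }
}
```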
@@ -33,7 +33,6 @@ import org.springframework.data.domain.PageImpl;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;
import org.springframework.data.jpa.domain.Specification;
import org.springframework.mock.web.MockMultipartFile;

import java.time.LocalDate;
@@ -1403,21 +1402,6 @@ class DocumentServiceTest {
assertThat(result.items()).hasSize(1); // only the slice is enriched
}

@Test
void searchDocuments_UPDATED_AT_sort_resolves_to_updatedAt_field() {
ArgumentCaptor<Pageable> captor = ArgumentCaptor.forClass(Pageable.class);
when(documentRepository.findAll(any(org.springframework.data.jpa.domain.Specification.class), any(Pageable.class)))
.thenReturn(new PageImpl<>(List.of()));

documentService.searchDocuments(null, null, null, null, null, null, null, null,
DocumentSort.UPDATED_AT, "DESC", null,
org.springframework.data.domain.PageRequest.of(0, 5));

verify(documentRepository).findAll(any(org.springframework.data.jpa.domain.Specification.class), captor.capture());
assertThat(captor.getValue().getSort())
.isEqualTo(Sort.by(Sort.Direction.DESC, "updatedAt"));
}

@Test
void searchDocuments_senderSort_slicesInMemoryAndReportsFullTotal() {
// Fixture: 120 docs with senders; request page 1, size 50 → expect 50 items
@@ -1620,10 +1604,9 @@ class DocumentServiceTest {
// chr(1)=\u0001 marks start, chr(2)=\u0002 marks end of highlighted term
List<Object[]> rows = Collections.singletonList(new Object[]{docId, "\u0001Brief\u0002 an Anna", null, false, null, null, null});

List<Object[]> ftsRows = new java.util.ArrayList<>();
ftsRows.add(new Object[]{docId, 0.5d, 1L});
when(documentRepository.findFtsPageRaw(anyString(), anyInt(), anyInt())).thenReturn(ftsRows);
when(documentRepository.findAllById(any())).thenReturn(List.of(doc));
when(documentRepository.findRankedIdsByFts("Brief")).thenReturn(List.of(docId));
when(documentRepository.findAll(any(org.springframework.data.jpa.domain.Specification.class)))
.thenReturn(List.of(doc));
when(documentRepository.findEnrichmentData(any(), eq("Brief"))).thenReturn(rows);

DocumentSearchResult result = documentService.searchDocuments(
@@ -1655,10 +1638,9 @@ class DocumentServiceTest {
String snippetHeadline = "Hier ist der \u0001Brief\u0002 aus Berlin";
List<Object[]> rows = Collections.singletonList(new Object[]{docId, "Dok", snippetHeadline, false, null, null, null});

List<Object[]> snippetFtsRows = new java.util.ArrayList<>();
snippetFtsRows.add(new Object[]{docId, 0.5d, 1L});
when(documentRepository.findFtsPageRaw(anyString(), anyInt(), anyInt())).thenReturn(snippetFtsRows);
when(documentRepository.findAllById(any())).thenReturn(List.of(doc));
when(documentRepository.findRankedIdsByFts("Brief")).thenReturn(List.of(docId));
when(documentRepository.findAll(any(org.springframework.data.jpa.domain.Specification.class)))
.thenReturn(List.of(doc));
when(documentRepository.findEnrichmentData(any(), eq("Brief"))).thenReturn(rows);
|
||||
|
||||
DocumentSearchResult result = documentService.searchDocuments(
|
||||
@@ -2204,7 +2186,7 @@ class DocumentServiceTest {
|
||||
|
||||
@Test
|
||||
void findIdsForFilter_returnsEmpty_whenFtsHasNoMatches() {
|
||||
when(documentRepository.findAllMatchingIdsByFts("xyz")).thenReturn(List.of());
|
||||
when(documentRepository.findRankedIdsByFts("xyz")).thenReturn(List.of());
|
||||
|
||||
List<UUID> result = documentService.findIdsForFilter(
|
||||
"xyz", null, null, null, null, null, null, null, null);
|
||||
@@ -2339,61 +2321,4 @@ class DocumentServiceTest {
|
||||
assertThat(documentService.save(doc)).isEqualTo(doc);
|
||||
verify(documentRepository).save(doc);
|
||||
}
|
||||
|
||||
// ─── getDensity ────────────────────────────────────────────────────────────
|
||||
|
||||
private static DensityFilters anyFilters() {
|
||||
return new DensityFilters(null, null, null, null, null, null, null);
|
||||
}
|
||||
|
||||
@Test
|
||||
void getDensity_returnsEmptyResult_whenNoDocumentsMatch() {
|
||||
when(documentRepository.findAll(any(Specification.class))).thenReturn(List.of());
|
||||
|
||||
DocumentDensityResult result = documentService.getDensity(anyFilters());
|
||||
|
||||
assertThat(result.buckets()).isEmpty();
|
||||
assertThat(result.minDate()).isNull();
|
||||
assertThat(result.maxDate()).isNull();
|
||||
}
|
||||
|
||||
@Test
|
||||
void getDensity_groupsMatchingDocumentsByMonth() {
|
||||
Document a = Document.builder().documentDate(LocalDate.of(1915, 8, 3)).build();
|
||||
Document b = Document.builder().documentDate(LocalDate.of(1915, 8, 17)).build();
|
||||
Document c = Document.builder().documentDate(LocalDate.of(1915, 9, 1)).build();
|
||||
when(documentRepository.findAll(any(Specification.class))).thenReturn(List.of(a, b, c));
|
||||
when(tagService.expandTagNamesToDescendantIdSets(any())).thenReturn(List.of());
|
||||
|
||||
DocumentDensityResult result = documentService.getDensity(anyFilters());
|
||||
|
||||
assertThat(result.buckets()).extracting(MonthBucket::month)
|
||||
.containsExactly("1915-08", "1915-09");
|
||||
assertThat(result.buckets()).extracting(MonthBucket::count).containsExactly(2, 1);
|
||||
assertThat(result.minDate()).isEqualTo(LocalDate.of(1915, 8, 3));
|
||||
assertThat(result.maxDate()).isEqualTo(LocalDate.of(1915, 9, 1));
|
||||
}
|
||||
|
||||
@Test
|
||||
void getDensity_excludesDocumentsWithNullDate() {
|
||||
Document dated = Document.builder().documentDate(LocalDate.of(1915, 8, 3)).build();
|
||||
Document undated = Document.builder().documentDate(null).build();
|
||||
when(documentRepository.findAll(any(Specification.class))).thenReturn(List.of(dated, undated));
|
||||
when(tagService.expandTagNamesToDescendantIdSets(any())).thenReturn(List.of());
|
||||
|
||||
DocumentDensityResult result = documentService.getDensity(anyFilters());
|
||||
|
||||
assertThat(result.buckets()).extracting(MonthBucket::count).containsExactly(1);
|
||||
}
|
||||
|
||||
@Test
|
||||
void getDensity_shortCircuits_whenFtsReturnsNoMatches() {
|
||||
when(documentRepository.findAllMatchingIdsByFts("xyz")).thenReturn(List.of());
|
||||
|
||||
DocumentDensityResult result = documentService.getDensity(
|
||||
new DensityFilters("xyz", null, null, null, null, null, null));
|
||||
|
||||
assertThat(result.buckets()).isEmpty();
|
||||
verify(documentRepository, org.mockito.Mockito.never()).findAll(any(Specification.class));
|
||||
}
|
||||
}
|
||||
|
||||
@@ -10,7 +10,6 @@ import org.raddatz.familienarchiv.document.DocumentStatus;
import org.raddatz.familienarchiv.document.DocumentRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.context.annotation.Import;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
@@ -42,7 +41,6 @@ import static org.assertj.core.api.Assertions.assertThat;
 * test pyramid mocks at the FileService boundary.
 */
@SpringBootTest
@ActiveProfiles("test")
@Import(PostgresContainerConfig.class)
class ThumbnailServiceIntegrationTest {

@@ -44,14 +44,6 @@ class CommentControllerTest {

    // ─── Block comment endpoints ─────────────────────────────────────────────

    @Test
    @WithMockUser
    void getBlockComments_returns400_when_documentId_is_not_a_UUID() throws Exception {
        UUID blockId = UUID.randomUUID();
        mockMvc.perform(get("/api/documents/NOT-A-UUID/transcription-blocks/" + blockId + "/comments"))
                .andExpect(status().isBadRequest());
    }

    @Test
    @WithMockUser
    void getBlockComments_returns200() throws Exception {
@@ -123,15 +115,6 @@ class CommentControllerTest {

    // ─── Block reply endpoints ───────────────────────────────────────────────

    @Test
    @WithMockUser(authorities = "ANNOTATE_ALL")
    void replyToBlockComment_returns400_when_blockId_is_not_a_UUID() throws Exception {
        mockMvc.perform(post("/api/documents/" + DOC_ID + "/transcription-blocks/NOT-A-UUID"
                        + "/comments/" + COMMENT_ID + "/replies")
                .contentType(MediaType.APPLICATION_JSON).content(COMMENT_JSON))
                .andExpect(status().isBadRequest());
    }

    @Test
    void replyToBlockComment_returns401_whenUnauthenticated() throws Exception {
        UUID blockId = UUID.randomUUID();

@@ -19,7 +19,6 @@ import org.raddatz.familienarchiv.notification.NotificationService;

import java.time.LocalDateTime;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.UUID;
@@ -645,99 +644,62 @@ class CommentServiceTest {
        verify(auditService, never()).logAfterCommit(eq(AuditKind.MENTION_CREATED), any(), any(), any());
    }

    // ─── findDataByIds ────────────────────────────────────────────────────────
    // ─── findAnnotationIdsByIds ───────────────────────────────────────────────

    @Test
    void findDataByIds_returns_empty_map_when_input_is_empty() {
        assertThat(commentService.findDataByIds(List.of())).isEmpty();
    void findAnnotationIdsByIds_returnsMap_forKnownIds() {
        UUID commentA = UUID.randomUUID();
        UUID annotationA = UUID.randomUUID();
        UUID commentB = UUID.randomUUID();
        UUID annotationB = UUID.randomUUID();
        when(commentRepository.findAllById(List.of(commentA, commentB)))
                .thenReturn(List.of(
                        DocumentComment.builder().id(commentA).annotationId(annotationA).build(),
                        DocumentComment.builder().id(commentB).annotationId(annotationB).build()
                ));

        assertThat(commentService.findAnnotationIdsByIds(List.of(commentA, commentB)))
                .containsOnly(
                        java.util.Map.entry(commentA, annotationA),
                        java.util.Map.entry(commentB, annotationB)
                );
    }

    @Test
    void findAnnotationIdsByIds_returnsEmptyMap_forEmptyInput() {
        assertThat(commentService.findAnnotationIdsByIds(List.of())).isEmpty();
        verify(commentRepository, never()).findAllById(anyList());
    }

    @Test
    void findDataByIds_strips_html_and_extracts_plain_text() {
        UUID id = UUID.randomUUID();
        when(commentRepository.findAllById(List.of(id)))
                .thenReturn(List.of(DocumentComment.builder().id(id)
                        .content("<p><strong>Hello</strong> world</p>").build()));
    void findAnnotationIdsByIds_omitsUnknownIds() {
        UUID known = UUID.randomUUID();
        UUID knownAnnotation = UUID.randomUUID();
        UUID missing = UUID.randomUUID();
        when(commentRepository.findAllById(List.of(known, missing)))
                .thenReturn(List.of(
                        DocumentComment.builder().id(known).annotationId(knownAnnotation).build()
                ));

        Map<UUID, CommentData> result = commentService.findDataByIds(List.of(id));

        assertThat(result.get(id).preview()).isEqualTo("Hello world");
        assertThat(commentService.findAnnotationIdsByIds(List.of(known, missing)))
                .containsOnly(java.util.Map.entry(known, knownAnnotation))
                .doesNotContainKey(missing);
    }

    @Test
    void findDataByIds_truncates_at_exactly_120_chars() {
        UUID id = UUID.randomUUID();
        String text121 = "a".repeat(121);
        when(commentRepository.findAllById(List.of(id)))
                .thenReturn(List.of(DocumentComment.builder().id(id).content(text121).build()));
    void findAnnotationIdsByIds_omitsCommentsWithNullAnnotationId() {
        UUID legacy = UUID.randomUUID();
        UUID block = UUID.randomUUID();
        UUID annotation = UUID.randomUUID();
        when(commentRepository.findAllById(List.of(legacy, block)))
                .thenReturn(List.of(
                        DocumentComment.builder().id(legacy).annotationId(null).build(),
                        DocumentComment.builder().id(block).annotationId(annotation).build()
                ));

        assertThat(commentService.findDataByIds(List.of(id)).get(id).preview()).hasSize(120);
    }

    @Test
    void findDataByIds_preserves_content_at_exactly_120_chars() {
        UUID id = UUID.randomUUID();
        String text120 = "a".repeat(120);
        when(commentRepository.findAllById(List.of(id)))
                .thenReturn(List.of(DocumentComment.builder().id(id).content(text120).build()));

        assertThat(commentService.findDataByIds(List.of(id)).get(id).preview()).hasSize(120);
    }

    @Test
    void findDataByIds_returns_empty_string_for_blank_content() {
        UUID id = UUID.randomUUID();
        when(commentRepository.findAllById(List.of(id)))
                .thenReturn(List.of(DocumentComment.builder().id(id).content(" ").build()));

        assertThat(commentService.findDataByIds(List.of(id)).get(id).preview()).isEmpty();
    }

    @Test
    void findDataByIds_returns_empty_string_for_null_content() {
        UUID id = UUID.randomUUID();
        when(commentRepository.findAllById(List.of(id)))
                .thenReturn(List.of(DocumentComment.builder().id(id).content(null).build()));

        assertThat(commentService.findDataByIds(List.of(id)).get(id).preview()).isEmpty();
    }

    @Test
    void findDataByIds_omits_deleted_comments_from_result_map() {
        UUID present = UUID.randomUUID();
        UUID deleted = UUID.randomUUID();
        when(commentRepository.findAllById(List.of(present, deleted)))
                .thenReturn(List.of(DocumentComment.builder().id(present).content("Hi").build()));

        Map<UUID, CommentData> result = commentService.findDataByIds(List.of(present, deleted));

        assertThat(result).containsKey(present);
        assertThat(result).doesNotContainKey(deleted);
    }

    @Test
    void findDataByIds_preserves_annotationId_alongside_preview() {
        UUID id = UUID.randomUUID();
        UUID annotationId = UUID.randomUUID();
        when(commentRepository.findAllById(List.of(id)))
                .thenReturn(List.of(DocumentComment.builder().id(id)
                        .annotationId(annotationId).content("Text").build()));

        CommentData data = commentService.findDataByIds(List.of(id)).get(id);

        assertThat(data.annotationId()).isEqualTo(annotationId);
        assertThat(data.preview()).isEqualTo("Text");
    }

    @Test
    void findDataByIds_sets_null_annotationId_when_comment_has_no_annotation() {
        UUID id = UUID.randomUUID();
        when(commentRepository.findAllById(List.of(id)))
                .thenReturn(List.of(DocumentComment.builder().id(id)
                        .annotationId(null).content("Text").build()));

        assertThat(commentService.findDataByIds(List.of(id)).get(id).annotationId()).isNull();
        assertThat(commentService.findAnnotationIdsByIds(List.of(legacy, block)))
                .containsOnly(java.util.Map.entry(block, annotation))
                .doesNotContainKey(legacy);
    }

    private void stubBlock(UUID docId, UUID blockId) {
@@ -1,33 +0,0 @@
package org.raddatz.familienarchiv.exception;

import io.sentry.Sentry;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.MockedStatic;
import org.mockito.junit.jupiter.MockitoExtension;
import org.springframework.http.ResponseEntity;

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mockStatic;

@ExtendWith(MockitoExtension.class)
class GlobalExceptionHandlerTest {

    @InjectMocks
    private GlobalExceptionHandler handler;

    @Test
    void handleGeneric_captures_exception_in_sentry_and_returns_500() {
        RuntimeException ex = new RuntimeException("unexpected failure");

        try (MockedStatic<Sentry> sentryMock = mockStatic(Sentry.class)) {
            ResponseEntity<GlobalExceptionHandler.ErrorResponse> response = handler.handleGeneric(ex);

            sentryMock.verify(() -> Sentry.captureException(ex));
            assertThat(response.getStatusCode().value()).isEqualTo(500);
            assertThat(response.getBody()).isNotNull();
            assertThat(response.getBody().code()).isEqualTo(ErrorCode.INTERNAL_ERROR);
        }
    }
}
@@ -19,9 +19,9 @@ import org.springframework.context.annotation.Import;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.springframework.transaction.annotation.Transactional;
import software.amazon.awssdk.services.s3.S3Client;

import java.util.List;
@@ -32,7 +32,7 @@ import static org.assertj.core.api.Assertions.assertThat;
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@ActiveProfiles("test")
@Import(PostgresContainerConfig.class)
@Transactional
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
class GeschichteServiceIntegrationTest {

    @MockitoBean
@@ -159,26 +159,6 @@ class GeschichteServiceIntegrationTest {
                .isEmpty();
    }

    @Test
    void list_DRAFT_does_not_return_other_users_drafts() {
        // writer creates a draft; writer2 (also BLOG_WRITE) should not see it
        AppUser writer2 = appUserRepository.save(AppUser.builder()
                .email("writer2-int@test")
                .password("hash")
                .build());

        authenticateAs(writer, Permission.BLOG_WRITE);
        GeschichteUpdateDTO dto = new GeschichteUpdateDTO();
        dto.setTitle("Writer 1 draft");
        dto.setBody("<p>private</p>");
        geschichteService.create(dto);

        authenticateAs(writer2, Permission.BLOG_WRITE);
        List<Geschichte> result = geschichteService.list(GeschichteStatus.DRAFT, List.of(), null, 50);

        assertThat(result).isEmpty();
    }

    private UUID publishedStoryWithPersons(String title, List<UUID> personIds) {
        GeschichteUpdateDTO dto = new GeschichteUpdateDTO();
        dto.setTitle(title);
@@ -20,10 +20,7 @@ import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

import org.apache.poi.xssf.usermodel.XSSFWorkbook;

import java.io.File;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.LocalDate;
@@ -53,7 +50,6 @@ class MassImportServiceTest {
    void setUp() {
        service = new MassImportService(documentService, personService, tagService, s3Client, thumbnailAsyncRunner);
        ReflectionTestUtils.setField(service, "bucketName", "test-bucket");
        ReflectionTestUtils.setField(service, "importDir", "/import");
        ReflectionTestUtils.setField(service, "colIndex", 0);
        ReflectionTestUtils.setField(service, "colBox", 1);
        ReflectionTestUtils.setField(service, "colFolder", 2);
@@ -73,64 +69,20 @@ class MassImportServiceTest {
        assertThat(service.getStatus().state()).isEqualTo(MassImportService.State.IDLE);
    }

    @Test
    void getStatus_hasStatusCode_IMPORT_IDLE_byDefault() {
        assertThat(service.getStatus().statusCode()).isEqualTo("IMPORT_IDLE");
    }

    // ─── runImportAsync ───────────────────────────────────────────────────────

    @Test
    void runImportAsync_setsFailedStatus_whenImportDirectoryDoesNotExist() {
        // /import directory doesn't exist in test environment → IOException → IMPORT_FAILED_INTERNAL
        // /import directory doesn't exist in test environment → findSpreadsheetFile throws
        service.runImportAsync();

        assertThat(service.getStatus().state()).isEqualTo(MassImportService.State.FAILED);
        assertThat(service.getStatus().statusCode()).isEqualTo("IMPORT_FAILED_INTERNAL");
    }

    @Test
    void runImportAsync_readsFromConfiguredImportDir(@TempDir Path tempDir) {
        // Empty temp dir → findSpreadsheetFile throws "no spreadsheet" with the
        // configured path in the message. Proves the field, not a constant,
        // drives the lookup.
        ReflectionTestUtils.setField(service, "importDir", tempDir.toString());

        service.runImportAsync();

        assertThat(service.getStatus().state()).isEqualTo(MassImportService.State.FAILED);
        assertThat(service.getStatus().message()).contains(tempDir.toString());
    }

    @Test
    void runImportAsync_setsStatusCode_IMPORT_FAILED_NO_SPREADSHEET_whenDirIsEmpty(@TempDir Path tempDir) {
        ReflectionTestUtils.setField(service, "importDir", tempDir.toString());

        service.runImportAsync();

        assertThat(service.getStatus().statusCode()).isEqualTo("IMPORT_FAILED_NO_SPREADSHEET");
    }

    @Test
    void runImportAsync_setsStatusCode_IMPORT_DONE_whenSpreadsheetHasNoDataRows(@TempDir Path tempDir) throws Exception {
        Path xlsx = tempDir.resolve("import.xlsx");
        try (XSSFWorkbook wb = new XSSFWorkbook()) {
            wb.createSheet("Sheet1");
            try (OutputStream out = Files.newOutputStream(xlsx)) {
                wb.write(out);
            }
        }
        ReflectionTestUtils.setField(service, "importDir", tempDir.toString());

        service.runImportAsync();

        assertThat(service.getStatus().statusCode()).isEqualTo("IMPORT_DONE");
    }

    @Test
    void runImportAsync_throwsConflict_whenAlreadyRunning() {
        MassImportService.ImportStatus running = new MassImportService.ImportStatus(
                MassImportService.State.RUNNING, "IMPORT_RUNNING", "Running...", 0, LocalDateTime.now());
                MassImportService.State.RUNNING, "Running...", 0, LocalDateTime.now());
        ReflectionTestUtils.setField(service, "currentStatus", running);

        assertThatThrownBy(() -> service.runImportAsync())
@@ -81,29 +81,6 @@ class PersonControllerTest {
                .andExpect(jsonPath("$[0].firstName").value("Hans"));
    }

    @Test
    @WithMockUser(authorities = "READ_ALL")
    void getPersons_delegatesTopByDocumentCount_whenSortAndSizeGiven() throws Exception {
        PersonSummaryDTO top = mockPersonSummary("Käthe", "Raddatz");
        when(personService.findTopByDocumentCount(4)).thenReturn(List.of(top));

        mockMvc.perform(get("/api/persons").param("sort", "documentCount").param("size", "4"))
                .andExpect(status().isOk())
                .andExpect(jsonPath("$[0].firstName").value("Käthe"));
    }

    @Test
    @WithMockUser(authorities = "READ_ALL")
    void getPersons_capsTopByDocumentCount_atFifty() throws Exception {
        ArgumentCaptor<Integer> sizeCaptor = ArgumentCaptor.forClass(Integer.class);
        when(personService.findTopByDocumentCount(sizeCaptor.capture())).thenReturn(Collections.emptyList());

        mockMvc.perform(get("/api/persons").param("sort", "documentCount").param("size", "999"))
                .andExpect(status().isOk());

        assertThat(sizeCaptor.getValue()).isEqualTo(50);
    }

    private PersonSummaryDTO mockPersonSummary(String firstName, String lastName) {
        return new PersonSummaryDTO() {
            public java.util.UUID getId() { return UUID.randomUUID(); }
@@ -8,9 +8,9 @@ import org.raddatz.familienarchiv.person.PersonRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.annotation.Import;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.springframework.transaction.annotation.Transactional;
import software.amazon.awssdk.services.s3.S3Client;

import static org.assertj.core.api.Assertions.assertThat;
@@ -18,7 +18,7 @@ import static org.assertj.core.api.Assertions.assertThat;
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@ActiveProfiles("test")
@Import(PostgresContainerConfig.class)
@Transactional
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
class PersonServiceIntegrationTest {

    @MockitoBean S3Client s3Client;
@@ -1,134 +0,0 @@
package org.raddatz.familienarchiv.security;

import jakarta.servlet.FilterChain;
import jakarta.servlet.http.Cookie;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.junit.jupiter.api.Test;
import org.mockito.ArgumentCaptor;
import org.springframework.mock.web.MockHttpServletRequest;
import org.springframework.mock.web.MockHttpServletResponse;

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

/**
 * The filter must turn a browser-side {@code Cookie: auth_token=Basic%20<base64>}
 * into {@code Authorization: Basic <base64>} (URL-decoded) so that Spring's
 * Basic-auth filter accepts it. Skips when the request already has an explicit
 * {@code Authorization} header, or when no {@code auth_token} cookie is present.
 *
 * <p>See #520.
 */
class AuthTokenCookieFilterTest {

    private final AuthTokenCookieFilter filter = new AuthTokenCookieFilter();

    @Test
    void promotes_url_encoded_auth_token_cookie_to_decoded_Authorization_header() throws Exception {
        MockHttpServletRequest req = new MockHttpServletRequest();
        req.setRequestURI("/api/users/me");
        req.setCookies(new Cookie("auth_token", "Basic%20YWRtaW5AZmFtaWx5YXJjaGl2ZS5sb2NhbDpzZWNyZXQ%3D"));
        MockHttpServletResponse res = new MockHttpServletResponse();
        FilterChain chain = mock(FilterChain.class);

        filter.doFilter(req, res, chain);

        ArgumentCaptor<HttpServletRequest> captor = ArgumentCaptor.forClass(HttpServletRequest.class);
        verify(chain, times(1)).doFilter(captor.capture(), org.mockito.ArgumentMatchers.any(HttpServletResponse.class));

        HttpServletRequest forwarded = captor.getValue();
        assertThat(forwarded.getHeader("Authorization"))
                .as("Authorization must be URL-decoded so Spring's Basic parser sees a literal space")
                .isEqualTo("Basic YWRtaW5AZmFtaWx5YXJjaGl2ZS5sb2NhbDpzZWNyZXQ=");
    }

    @Test
    void preserves_explicit_Authorization_header_and_ignores_cookie() throws Exception {
        MockHttpServletRequest req = new MockHttpServletRequest();
        req.setRequestURI("/api/users/me");
        req.addHeader("Authorization", "Basic explicit-header-wins");
        req.setCookies(new Cookie("auth_token", "Basic%20cookie-would-have-promoted"));
        MockHttpServletResponse res = new MockHttpServletResponse();
        FilterChain chain = mock(FilterChain.class);

        filter.doFilter(req, res, chain);

        // Forwards the original request unchanged — same instance, no wrapping.
        verify(chain).doFilter(req, res);
    }

    @Test
    void passes_through_when_no_cookies_at_all() throws Exception {
        MockHttpServletRequest req = new MockHttpServletRequest();
        req.setRequestURI("/api/users/me");
        MockHttpServletResponse res = new MockHttpServletResponse();
        FilterChain chain = mock(FilterChain.class);

        filter.doFilter(req, res, chain);

        verify(chain).doFilter(req, res);
    }

    @Test
    void passes_through_when_auth_token_cookie_is_absent() throws Exception {
        MockHttpServletRequest req = new MockHttpServletRequest();
        req.setRequestURI("/api/users/me");
        req.setCookies(new Cookie("some_other_cookie", "value"));
        MockHttpServletResponse res = new MockHttpServletResponse();
        FilterChain chain = mock(FilterChain.class);

        filter.doFilter(req, res, chain);

        verify(chain).doFilter(req, res);
    }

    @Test
    void passes_through_when_auth_token_cookie_is_empty() throws Exception {
        MockHttpServletRequest req = new MockHttpServletRequest();
        req.setRequestURI("/api/users/me");
        req.setCookies(new Cookie("auth_token", ""));
        MockHttpServletResponse res = new MockHttpServletResponse();
        FilterChain chain = mock(FilterChain.class);

        filter.doFilter(req, res, chain);

        verify(chain).doFilter(req, res);
    }

    @Test
    void passes_through_unchanged_when_request_is_outside_api_scope() throws Exception {
        MockHttpServletRequest req = new MockHttpServletRequest();
        // /actuator/health and similar must NOT receive a promoted Authorization
        // header — they have their own access rules and should never be authed
        // via the cookie.
        req.setRequestURI("/actuator/health");
        req.setCookies(new Cookie("auth_token", "Basic%20YWR=="));
        MockHttpServletResponse res = new MockHttpServletResponse();
        FilterChain chain = mock(FilterChain.class);

        filter.doFilter(req, res, chain);

        // Forwards the original request unchanged — same instance, no wrapping.
        verify(chain).doFilter(req, res);
    }

    @Test
    void passes_through_unchanged_when_cookie_value_is_malformed_percent_encoding() throws Exception {
        MockHttpServletRequest req = new MockHttpServletRequest();
        req.setRequestURI("/api/users/me");
        // Lone "%" without two hex digits → URLDecoder throws → filter must
        // refuse to forward a bogus Authorization header.
        req.setCookies(new Cookie("auth_token", "Basic%2"));
        MockHttpServletResponse res = new MockHttpServletResponse();
        FilterChain chain = mock(FilterChain.class);

        filter.doFilter(req, res, chain);

        // Forwards the original request unchanged — Spring Security treats it
        // as unauthenticated rather than crashing on bad input.
        verify(chain).doFilter(req, res);
    }
}
@@ -40,47 +40,6 @@ class AdminControllerTest {
    @MockitoBean ThumbnailBackfillService thumbnailBackfillService;
    @MockitoBean CustomUserDetailsService customUserDetailsService;

    // ─── GET /api/admin/import-status ─────────────────────────────────────────

    @Test
    @WithMockUser(authorities = "ADMIN")
    void importStatus_returns200_withStatusCode_whenAdmin() throws Exception {
        MassImportService.ImportStatus status = new MassImportService.ImportStatus(
                MassImportService.State.IDLE, "IMPORT_IDLE", "Kein Import gestartet.", 0, null);
        when(massImportService.getStatus()).thenReturn(status);

        mockMvc.perform(get("/api/admin/import-status"))
                .andExpect(status().isOk())
                .andExpect(jsonPath("$.state").value("IDLE"))
                .andExpect(jsonPath("$.statusCode").value("IMPORT_IDLE"))
                .andExpect(jsonPath("$.processed").value(0));
    }

    @Test
    @WithMockUser(authorities = "ADMIN")
    void importStatus_messageField_notPresentInApiResponse() throws Exception {
        MassImportService.ImportStatus status = new MassImportService.ImportStatus(
                MassImportService.State.IDLE, "IMPORT_IDLE", "Kein Import gestartet.", 0, null);
        when(massImportService.getStatus()).thenReturn(status);

        mockMvc.perform(get("/api/admin/import-status"))
                .andExpect(status().isOk())
                .andExpect(jsonPath("$.message").doesNotExist());
    }

    @Test
    void importStatus_returns401_whenUnauthenticated() throws Exception {
        mockMvc.perform(get("/api/admin/import-status"))
                .andExpect(status().isUnauthorized());
    }

    @Test
    @WithMockUser(authorities = "READ_ALL")
    void importStatus_returns403_whenUserLacksAdminPermission() throws Exception {
        mockMvc.perform(get("/api/admin/import-status"))
                .andExpect(status().isForbidden());
    }

    @Test
    void backfillVersions_returns401_whenUnauthenticated() throws Exception {
        mockMvc.perform(post("/api/admin/backfill-versions"))
@@ -1,174 +0,0 @@
package org.raddatz.familienarchiv.user;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import org.springframework.boot.CommandLineRunner;
import org.springframework.core.env.Environment;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.test.util.ReflectionTestUtils;

import java.util.Optional;

import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatThrownBy;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

/**
 * UserDataInitializer must refuse to seed the admin user with the hardcoded
 * dev defaults when running outside the {@code dev} profile.
 *
 * <p>Why this matters: per DEPLOYMENT.md §3.5 and ADR-011, the admin password
 * is permanently locked on first deploy (UserDataInitializer only seeds when
 * the row is missing). If an operator forgets to set {@code APP_ADMIN_USERNAME}
 * / {@code APP_ADMIN_PASSWORD}, prod silently boots with the well-known dev
 * defaults — a credential-disclosure foot-gun, not a config typo. See #513.
 */
@ExtendWith(MockitoExtension.class)
class AdminSeedFailClosedTest {

    @Mock AppUserRepository userRepository;
    @Mock UserGroupRepository groupRepository;
    @Mock Environment environment;
    @Mock PasswordEncoder passwordEncoder;

    UserDataInitializer initializer;

    @BeforeEach
    void setUp() {
        initializer = new UserDataInitializer(userRepository, groupRepository, environment);
    }

    @Test
    void refuses_to_seed_when_email_is_default_and_profile_is_not_dev() throws Exception {
        when(userRepository.findByEmail(anyString())).thenReturn(Optional.empty());
        when(environment.matchesProfiles("dev", "test", "e2e")).thenReturn(false);
        ReflectionTestUtils.setField(initializer, "adminEmail", UserDataInitializer.DEFAULT_ADMIN_EMAIL);
        ReflectionTestUtils.setField(initializer, "adminPassword", "operator-set-this-one");

        CommandLineRunner runner = initializer.initAdminUser(passwordEncoder);

        assertThatThrownBy(() -> runner.run())
                .isInstanceOf(IllegalStateException.class)
                .hasMessageContaining("default credentials")
                .hasMessageContaining("permanent");

        verify(userRepository, never()).save(org.mockito.ArgumentMatchers.any());
    }

    @Test
    void refuses_to_seed_when_password_is_default_and_profile_is_not_dev() throws Exception {
        when(userRepository.findByEmail(anyString())).thenReturn(Optional.empty());
        when(environment.matchesProfiles("dev", "test", "e2e")).thenReturn(false);
        ReflectionTestUtils.setField(initializer, "adminEmail", "admin@archiv.raddatz.cloud");
        ReflectionTestUtils.setField(initializer, "adminPassword", UserDataInitializer.DEFAULT_ADMIN_PASSWORD);

        CommandLineRunner runner = initializer.initAdminUser(passwordEncoder);

        assertThatThrownBy(() -> runner.run())
                .isInstanceOf(IllegalStateException.class)
                .hasMessageContaining("default credentials");
    }

    @Test
    void allows_seed_when_both_values_are_set_and_profile_is_not_dev() throws Exception {
        when(userRepository.findByEmail(anyString())).thenReturn(Optional.empty());
        when(groupRepository.findByName("Administrators")).thenReturn(Optional.empty());
        when(groupRepository.save(any(UserGroup.class))).thenAnswer(inv -> inv.getArgument(0));
        when(environment.matchesProfiles("dev", "test", "e2e")).thenReturn(false);
        when(passwordEncoder.encode(anyString())).thenReturn("$2a$10$stub");
        ReflectionTestUtils.setField(initializer, "adminEmail", "admin@archiv.raddatz.cloud");
        ReflectionTestUtils.setField(initializer, "adminPassword", "a-real-strong-password");

        CommandLineRunner runner = initializer.initAdminUser(passwordEncoder);
        runner.run();

        verify(userRepository).save(any(AppUser.class));
    }

    @Test
    void allows_seed_with_defaults_when_profile_is_dev() throws Exception {
        when(userRepository.findByEmail(anyString())).thenReturn(Optional.empty());
        when(groupRepository.findByName("Administrators")).thenReturn(Optional.empty());
        when(groupRepository.save(any(UserGroup.class))).thenAnswer(inv -> inv.getArgument(0));
        when(environment.matchesProfiles("dev", "test", "e2e")).thenReturn(true);
        when(passwordEncoder.encode(anyString())).thenReturn("$2a$10$stub");
        ReflectionTestUtils.setField(initializer, "adminEmail", UserDataInitializer.DEFAULT_ADMIN_EMAIL);
        ReflectionTestUtils.setField(initializer, "adminPassword", UserDataInitializer.DEFAULT_ADMIN_PASSWORD);

        CommandLineRunner runner = initializer.initAdminUser(passwordEncoder);
        runner.run();

        verify(userRepository).save(any(AppUser.class));
    }

    @Test
    void does_not_check_defaults_when_admin_already_exists() throws Exception {
        AppUser existing = AppUser.builder()
                .email("someone@example.com")
                .password("$2a$10$stub")
                .build();
        when(userRepository.findByEmail(anyString())).thenReturn(Optional.of(existing));
        ReflectionTestUtils.setField(initializer, "adminEmail", UserDataInitializer.DEFAULT_ADMIN_EMAIL);
        ReflectionTestUtils.setField(initializer, "adminPassword", UserDataInitializer.DEFAULT_ADMIN_PASSWORD);

        CommandLineRunner runner = initializer.initAdminUser(passwordEncoder);
        runner.run();

        verify(userRepository, never()).save(org.mockito.ArgumentMatchers.any());
        // Importantly, no IllegalStateException — re-deploys must not panic over
        // historical default-seeded data they cannot retroactively fix.
    }

    @Test
    void reuses_existing_Administrators_group_when_seeding_a_new_admin() throws Exception {
        // Setup: admin user does not exist, but the Administrators group does
        // (e.g. previous boot seeded the group then failed; operator deleted
        // the bad user row to retry with a corrected APP_ADMIN_USERNAME). The
        // re-seed must reuse the group, not blind-INSERT a duplicate. See #518.
        UserGroup existingGroup = UserGroup.builder()
                .name("Administrators")
                .build();
        when(userRepository.findByEmail(anyString())).thenReturn(Optional.empty());
        when(groupRepository.findByName("Administrators")).thenReturn(Optional.of(existingGroup));
        when(environment.matchesProfiles("dev", "test", "e2e")).thenReturn(false);
        when(passwordEncoder.encode(anyString())).thenReturn("$2a$10$stub");
        ReflectionTestUtils.setField(initializer, "adminEmail", "admin@archiv.raddatz.cloud");
        ReflectionTestUtils.setField(initializer, "adminPassword", "a-real-strong-password");

        CommandLineRunner runner = initializer.initAdminUser(passwordEncoder);
        runner.run();

        // Group must not be re-inserted — that would violate user_groups_name_key.
        verify(groupRepository, never()).save(any(UserGroup.class));
        // But the admin user IS created, with the existing group attached.
        org.mockito.ArgumentCaptor<AppUser> captor = org.mockito.ArgumentCaptor.forClass(AppUser.class);
        verify(userRepository).save(captor.capture());
        assertThat(captor.getValue().getGroups()).containsExactly(existingGroup);
    }

    @Test
    void creates_Administrators_group_when_seeding_admin_on_a_fresh_database() throws Exception {
        when(userRepository.findByEmail(anyString())).thenReturn(Optional.empty());
        when(groupRepository.findByName("Administrators")).thenReturn(Optional.empty());
        when(groupRepository.save(any(UserGroup.class))).thenAnswer(inv -> inv.getArgument(0));
        when(environment.matchesProfiles("dev", "test", "e2e")).thenReturn(false);
        when(passwordEncoder.encode(anyString())).thenReturn("$2a$10$stub");
        ReflectionTestUtils.setField(initializer, "adminEmail", "admin@archiv.raddatz.cloud");
        ReflectionTestUtils.setField(initializer, "adminPassword", "a-real-strong-password");

        CommandLineRunner runner = initializer.initAdminUser(passwordEncoder);
        runner.run();

        // Group should be inserted exactly once.
        verify(groupRepository).save(any(UserGroup.class));
        verify(userRepository).save(any(AppUser.class));
    }
}
@@ -1,95 +0,0 @@
package org.raddatz.familienarchiv.user;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.beans.factory.config.YamlPropertiesFactoryBean;
import org.springframework.boot.context.properties.bind.Binder;
import org.springframework.boot.context.properties.source.ConfigurationPropertySources;
import org.springframework.core.env.PropertiesPropertySource;
import org.springframework.core.io.ClassPathResource;

import java.lang.reflect.Field;
import java.util.Properties;

import static org.assertj.core.api.Assertions.assertThat;

/**
 * Pins the admin-seed property key contract. {@code UserDataInitializer} reads
 * {@code @Value("${app.admin.email:...}")} and {@code @Value("${app.admin.password:...}")}.
 * The yaml MUST expose those exact keys, not e.g. {@code app.admin.username}, or
 * the env vars {@code APP_ADMIN_USERNAME} / {@code APP_ADMIN_PASSWORD} are
 * silently ignored and the admin user gets seeded with the hardcoded defaults.
 *
 * <p>Discovered as a HIGH bug during the production-deploy bootstrap (#513): on
 * first deploy the prod admin password is permanently locked to whatever ends
 * up in the database, so a key-name mismatch would lock prod to the dev defaults
 * {@code admin@familyarchive.local} / {@code admin123}.
 *
 * <p>No Spring context — Binder reads application.yaml directly.
 */
class AdminSeedPropertyKeyTest {

    @Test
    void admin_email_key_binds_from_yaml() {
        Binder binder = binderFromApplicationYaml();

        String email = binder.bind("app.admin.email", String.class)
                .orElseThrow(() -> new AssertionError(
                        "app.admin.email is missing from application.yaml. "
                        + "UserDataInitializer reads this exact key; if the yaml uses "
                        + "a different name (e.g. 'username'), the env var "
                        + "APP_ADMIN_USERNAME is silently ignored."));

        assertThat(email)
                .as("app.admin.email must resolve from APP_ADMIN_USERNAME or its default")
                .isNotBlank();
    }

    @Test
    void admin_password_key_binds_from_yaml() {
        Binder binder = binderFromApplicationYaml();

        String password = binder.bind("app.admin.password", String.class)
                .orElseThrow(() -> new AssertionError(
                        "app.admin.password is missing from application.yaml. "
                        + "UserDataInitializer reads this exact key."));

        assertThat(password)
                .as("app.admin.password must resolve from APP_ADMIN_PASSWORD or its default")
                .isNotBlank();
    }

    @Test
    void userDataInitializer_reads_app_admin_email_not_username() throws NoSuchFieldException {
        // Pin the Java side too: a future rename of the @Value placeholder
        // (e.g. back to `${app.admin.username:...}`) would silently break the
        // binding while the yaml-side assertions above still pass. See #513.
        Field field = UserDataInitializer.class.getDeclaredField("adminEmail");
        Value annotation = field.getAnnotation(Value.class);
        assertThat(annotation)
                .as("UserDataInitializer.adminEmail must be @Value-annotated")
                .isNotNull();
        assertThat(annotation.value())
                .as("UserDataInitializer must read app.admin.email — not username or any other key")
                .startsWith("${app.admin.email:");
    }

    @Test
    void userDataInitializer_reads_app_admin_password() throws NoSuchFieldException {
        Field field = UserDataInitializer.class.getDeclaredField("adminPassword");
        Value annotation = field.getAnnotation(Value.class);
        assertThat(annotation).isNotNull();
        assertThat(annotation.value())
                .as("UserDataInitializer must read app.admin.password")
                .startsWith("${app.admin.password:");
    }

    private Binder binderFromApplicationYaml() {
        YamlPropertiesFactoryBean yaml = new YamlPropertiesFactoryBean();
        yaml.setResources(new ClassPathResource("application.yaml"));
        Properties props = yaml.getObject();
        assertThat(props).as("application.yaml must be on the classpath").isNotNull();
        return new Binder(ConfigurationPropertySources.from(
                new PropertiesPropertySource("application", props)));
    }
}
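The placeholders this test pins use Spring's `${key:default}` syntax: the property value wins when the key is set, otherwise the literal after the colon is used. As a rough illustration of that resolution rule (a hypothetical helper, not Spring's implementation; the dev defaults are the ones named in the Javadoc above):

```java
import java.util.Map;

public class PlaceholderDemo {

    // Hypothetical sketch of ${key:default} resolution, for illustration only.
    static String resolve(String placeholder, Map<String, String> props) {
        if (!placeholder.startsWith("${") || !placeholder.endsWith("}")) {
            return placeholder; // not a placeholder, return as-is
        }
        String body = placeholder.substring(2, placeholder.length() - 1);
        int colon = body.indexOf(':');
        String key = colon >= 0 ? body.substring(0, colon) : body;
        String def = colon >= 0 ? body.substring(colon + 1) : null;
        return props.getOrDefault(key, def);
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of("app.admin.email", "admin@example.org");
        // Key present: the operator-supplied value wins.
        System.out.println(resolve("${app.admin.email:admin@familyarchive.local}", props));
        // Key absent: the hardcoded dev default wins, which is exactly the
        // fail-open behaviour the fail-closed test guards against in prod.
        System.out.println(resolve("${app.admin.password:admin123}", props));
    }
}
```

This is why a key-name mismatch is silent: binding never fails, it just falls through to the default.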
@@ -35,15 +35,4 @@ class AppUserTest {
                .count();
        assertThat(distinct).isGreaterThan(1);
    }

    @Test
    void computeColor_returnsValidPaletteColorForIntegerMinValueHash() {
        // UUID "80000000-0000-0000-0000-000000000000" has hashCode() == Integer.MIN_VALUE.
        // Math.abs(Integer.MIN_VALUE) overflows back to Integer.MIN_VALUE (negative), making
        // Math.abs(hashCode()) % n unsafe for palette sizes that don't evenly divide MIN_VALUE.
        // Math.floorMod eliminates this edge case entirely.
        UUID minHashId = UUID.fromString("80000000-0000-0000-0000-000000000000");
        assertThat(minHashId.hashCode()).isEqualTo(Integer.MIN_VALUE);
        assertThat(EXPECTED_PALETTE).contains(AppUser.computeColor(minHashId));
    }
}
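The overflow described in that comment is easy to reproduce standalone; a minimal sketch (the palette size 7 is an arbitrary illustration, not the real palette):

```java
public class FloorModDemo {
    public static void main(String[] args) {
        int h = Integer.MIN_VALUE; // the hashCode() of the UUID above
        // Math.abs cannot represent +2^31 in an int, so it returns MIN_VALUE unchanged.
        System.out.println(Math.abs(h));         // -2147483648
        // abs-then-mod can therefore yield a negative value, an invalid array index.
        System.out.println(Math.abs(h) % 7);     // -2
        // floorMod always lands in [0, n) for positive n, so it is a safe index.
        System.out.println(Math.floorMod(h, 7)); // 5
    }
}
```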
@@ -20,13 +20,10 @@ import org.springframework.security.test.context.support.WithMockUser;
import org.springframework.test.context.bean.override.mockito.MockitoBean;
import org.springframework.test.web.servlet.MockMvc;

import org.mockito.ArgumentCaptor;

import java.time.LocalDateTime;
import java.util.List;
import java.util.UUID;

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.*;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
@@ -150,30 +147,6 @@ class InviteControllerTest {
                .andExpect(jsonPath("$.label").value("Für Familie"));
    }

    @Test
    @WithMockUser(username = "admin@test.com", authorities = {"ADMIN_USER"})
    void createInvite_forwardsGroupIdsToService() throws Exception {
        UUID groupId = UUID.randomUUID();
        AppUser admin = AppUser.builder().id(UUID.randomUUID()).email("admin@test.com").build();
        when(userService.findByEmail("admin@test.com")).thenReturn(admin);

        InviteToken savedToken = InviteToken.builder()
                .id(UUID.randomUUID()).code("ABCDE12345").useCount(0).build();
        when(inviteService.createInvite(any(), eq(admin))).thenReturn(savedToken);
        when(inviteService.toListItemDTO(any(), anyString()))
                .thenReturn(makeInviteDTO(savedToken.getId(), "ABCDE12345"));

        String body = "{\"groupIds\":[\"" + groupId + "\"]}";
        mockMvc.perform(post("/api/invites")
                        .contentType(MediaType.APPLICATION_JSON)
                        .content(body))
                .andExpect(status().isCreated());

        ArgumentCaptor<CreateInviteRequest> captor = ArgumentCaptor.forClass(CreateInviteRequest.class);
        verify(inviteService).createInvite(captor.capture(), eq(admin));
        assertThat(captor.getValue().getGroupIds()).containsExactly(groupId);
    }

    // ─── DELETE /api/invites/{id} ─────────────────────────────────────────────

    @Test
@@ -156,35 +156,6 @@ class InviteServiceTest {
        assertThat(result.getGroupIds()).contains(g.getId());
    }

    @Test
    void createInvite_throwsGroupNotFound_whenSubmittedGroupIdDoesNotExist() {
        UUID unknownGroupId = UUID.randomUUID();
        when(userService.findGroupsByIds(anyList())).thenReturn(List.of());

        CreateInviteRequest req = new CreateInviteRequest();
        req.setGroupIds(List.of(unknownGroupId));

        assertThatThrownBy(() -> inviteService.createInvite(req, admin))
                .isInstanceOf(DomainException.class)
                .extracting(e -> ((DomainException) e).getCode())
                .isEqualTo(ErrorCode.GROUP_NOT_FOUND);
    }

    @Test
    void createInvite_doesNotThrowGroupNotFound_whenDuplicateGroupIdsSubmitted() {
        UUID groupId = UUID.randomUUID();
        UserGroup group = UserGroup.builder().id(groupId).name("Familie").build();
        when(inviteTokenRepository.findByCode(anyString())).thenReturn(Optional.empty());
        when(userService.findGroupsByIds(anyList())).thenReturn(List.of(group));
        when(inviteTokenRepository.save(any())).thenAnswer(inv -> inv.getArgument(0));

        CreateInviteRequest req = new CreateInviteRequest();
        req.setGroupIds(List.of(groupId, groupId)); // same UUID submitted twice

        // before deduplication: size(groups)==1 != size(submitted)==2 → false GROUP_NOT_FOUND
        assertThatCode(() -> inviteService.createInvite(req, admin)).doesNotThrowAnyException();
    }

    // ─── redeemInvite ─────────────────────────────────────────────────────────

    @Test
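The failure mode the duplicate-IDs test pins can be sketched in isolation (hypothetical helper names; the real service compares the submitted IDs against the groups actually found in the database):

```java
import java.util.HashSet;
import java.util.List;
import java.util.UUID;

public class GroupCountCheckDemo {

    // Buggy form: compares the raw submitted count to the found count, so a
    // duplicated ID makes a fully valid request look like a missing group.
    static boolean allFoundBuggy(List<UUID> submitted, List<UUID> found) {
        return found.size() == submitted.size();
    }

    // Fixed form: deduplicate the submitted IDs before comparing counts.
    static boolean allFoundFixed(List<UUID> submitted, List<UUID> found) {
        return found.size() == new HashSet<>(submitted).size();
    }

    public static void main(String[] args) {
        UUID g = UUID.randomUUID();
        List<UUID> submitted = List.of(g, g); // same UUID submitted twice
        List<UUID> found = List.of(g);        // the DB returns each group once
        System.out.println(allFoundBuggy(submitted, found)); // false → spurious GROUP_NOT_FOUND
        System.out.println(allFoundFixed(submitted, found)); // true
    }
}
```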
@@ -1,78 +0,0 @@
package org.raddatz.familienarchiv.user;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.raddatz.familienarchiv.PostgresContainerConfig;
import org.raddatz.familienarchiv.config.FlywayConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.jdbc.test.autoconfigure.AutoConfigureTestDatabase;
import org.springframework.boot.data.jpa.test.autoconfigure.DataJpaTest;
import org.springframework.context.annotation.Import;

import java.time.LocalDateTime;
import java.util.Set;
import java.util.UUID;

import static org.assertj.core.api.Assertions.assertThat;

@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@Import({PostgresContainerConfig.class, FlywayConfig.class})
class InviteTokenRepositoryIntegrationTest {

    @Autowired InviteTokenRepository inviteTokenRepository;
    @Autowired UserGroupRepository userGroupRepository;
    @Autowired AppUserRepository appUserRepository;

    private UserGroup group;
    private AppUser admin;

    @BeforeEach
    void setUp() {
        inviteTokenRepository.deleteAll();
        userGroupRepository.deleteAll();
        appUserRepository.deleteAll();
        admin = appUserRepository.save(AppUser.builder().email("admin@test.com").password("pw").build());
        group = userGroupRepository.save(UserGroup.builder().name("Familie").build());
    }

    // ─── existsActiveWithGroupId ──────────────────────────────────────────────

    @Test
    void existsActiveWithGroupId_returnsTrueForActiveInviteLinkedToGroup() {
        inviteTokenRepository.save(token(t -> t));

        assertThat(inviteTokenRepository.existsActiveWithGroupId(group.getId())).isTrue();
    }

    @Test
    void existsActiveWithGroupId_returnsFalseWhenInviteIsRevoked() {
        inviteTokenRepository.save(token(t -> t.revoked(true)));

        assertThat(inviteTokenRepository.existsActiveWithGroupId(group.getId())).isFalse();
    }

    @Test
    void existsActiveWithGroupId_returnsFalseWhenInviteIsExpired() {
        inviteTokenRepository.save(token(t -> t.expiresAt(LocalDateTime.now().minusDays(1))));

        assertThat(inviteTokenRepository.existsActiveWithGroupId(group.getId())).isFalse();
    }

    @Test
    void existsActiveWithGroupId_returnsFalseWhenInviteIsExhausted() {
        inviteTokenRepository.save(token(t -> t.maxUses(1).useCount(1)));

        assertThat(inviteTokenRepository.existsActiveWithGroupId(group.getId())).isFalse();
    }

    // ─── helpers ─────────────────────────────────────────────────────────────

    private InviteToken token(java.util.function.UnaryOperator<InviteToken.InviteTokenBuilder> customizer) {
        InviteToken.InviteTokenBuilder builder = InviteToken.builder()
                .code(UUID.randomUUID().toString().replace("-", "").substring(0, 10))
                .groupIds(new java.util.HashSet<>(Set.of(group.getId())))
                .createdBy(admin);
        return customizer.apply(builder).build();
    }
}
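Taken together, the four repository tests above pin what "active" means. Restated as plain Java (an assumption about the semantics of the underlying query, for illustration only):

```java
import java.time.LocalDateTime;

public class ActiveInviteDemo {

    // Assumed semantics of existsActiveWithGroupId, restated outside SQL:
    // an invite is active unless it is revoked, expired, or exhausted.
    static boolean isActive(boolean revoked, LocalDateTime expiresAt,
                            Integer maxUses, int useCount) {
        if (revoked) return false;                                            // revoked
        if (expiresAt != null && expiresAt.isBefore(LocalDateTime.now())) {
            return false;                                                     // expired
        }
        if (maxUses != null && useCount >= maxUses) return false;             // exhausted
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isActive(false, null, null, 0));                             // true
        System.out.println(isActive(true, null, null, 0));                              // false
        System.out.println(isActive(false, LocalDateTime.now().minusDays(1), null, 0)); // false
        System.out.println(isActive(false, null, 1, 1));                                // false
    }
}
```

Null `expiresAt`/`maxUses` meaning "no limit" is an assumption here, inferred from the builder defaults in the tests.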
@@ -36,7 +36,6 @@ class UserServiceTest {

    @Mock AppUserRepository userRepository;
    @Mock UserGroupRepository groupRepository;
    @Mock InviteTokenRepository inviteTokenRepository;
    @Mock PasswordEncoder passwordEncoder;
    @Mock AuditService auditService;
    @InjectMocks UserService userService;
@@ -903,41 +902,4 @@ class UserServiceTest {
        assertThat(result.getName()).isEqualTo("Familie");
        assertThat(result.getPermissions()).containsExactlyInAnyOrder("READ_ALL", "WRITE_ALL");
    }

    // ─── deleteGroup ──────────────────────────────────────────────────────────

    @Test
    void deleteGroup_throwsConflict_whenActiveInviteReferencesGroup() {
        UUID groupId = UUID.randomUUID();
        when(inviteTokenRepository.existsActiveWithGroupId(groupId)).thenReturn(true);

        assertThatThrownBy(() -> userService.deleteGroup(groupId))
                .isInstanceOf(DomainException.class)
                .extracting(e -> ((DomainException) e).getCode())
                .isEqualTo(ErrorCode.GROUP_HAS_ACTIVE_INVITES);
    }

    @Test
    void deleteGroup_deletesGroup_whenNoActiveInviteReferencesGroup() {
        UUID groupId = UUID.randomUUID();
        when(inviteTokenRepository.existsActiveWithGroupId(groupId)).thenReturn(false);

        userService.deleteGroup(groupId);

        verify(groupRepository).deleteById(groupId);
    }

    @Test
    void createGroup_withNullPermissions_savesGroupWithEmptyPermissionSet() {
        org.raddatz.familienarchiv.user.GroupDTO dto = new org.raddatz.familienarchiv.user.GroupDTO();
        dto.setName("Leser");
        dto.setPermissions(null);

        UserGroup saved = UserGroup.builder().id(UUID.randomUUID()).name("Leser").build();
        when(groupRepository.save(any())).thenReturn(saved);

        userService.createGroup(dto);

        verify(groupRepository).save(argThat(g -> g.getPermissions() != null && g.getPermissions().isEmpty()));
    }
}
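The last test pins a null-to-empty normalization on the service side. A sketch of the guard it implies (hypothetical method shape; the real `UserService.createGroup` presumably does this before building the entity, which is what the `argThat` verifies):

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PermissionNormalizeDemo {

    // Hypothetical: map a null permission collection from the DTO to an empty
    // set so the entity never carries a null, avoiding NPEs downstream.
    static Set<String> normalize(Collection<String> permissions) {
        return permissions == null ? new HashSet<>() : new HashSet<>(permissions);
    }

    public static void main(String[] args) {
        System.out.println(normalize(null).isEmpty());            // true
        System.out.println(normalize(List.of("READ_ALL")).size()); // 1
    }
}
```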
@@ -13,18 +13,3 @@ spring:
    password: test
  mail:
    host: localhost

# Disable OTel SDK entirely in tests — prevents auto-configuration from loading resource providers
# (e.g. AzureAppServiceResourceProvider) that fail against the semconv version used here.
otel:
  sdk:
    disabled: true

# Disable trace export in tests — prevents OTLP connection attempts when no Tempo is running.
# Sampling probability 0.0 means no spans are created, so no export is attempted.
management:
  server:
    port: 0 # random port per context — prevents TIME_WAIT conflicts when @DirtiesContext restarts the context
  tracing:
    sampling:
      probability: 0.0
@@ -1,2 +0,0 @@
logging.level.root=WARN
logging.level.org.raddatz=INFO
@@ -1,259 +0,0 @@
|
||||
# Observability stack — Grafana LGTM + GlitchTip
|
||||
#
|
||||
# Requires the main stack to be running first:
|
||||
# docker compose up -d # creates archiv-net
|
||||
# docker compose -f docker-compose.observability.yml up -d
|
||||
#
|
||||
# To validate without starting:
|
||||
# docker compose -f docker-compose.observability.yml config
|
||||
|
||||
services:
|
||||
|
||||
# --- Metrics: Prometheus ---
|
||||
|
||||
prometheus:
|
||||
image: prom/prometheus:v3.4.0
|
||||
container_name: obs-prometheus
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- ./infra/observability/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
|
||||
- prometheus_data:/prometheus
|
||||
command:
|
||||
- '--config.file=/etc/prometheus/prometheus.yml'
|
||||
- '--storage.tsdb.path=/prometheus'
|
||||
- '--storage.tsdb.retention.time=30d'
|
||||
- '--web.enable-lifecycle'
|
||||
ports:
|
||||
- "127.0.0.1:${PORT_PROMETHEUS:-9090}:9090"
|
||||
healthcheck:
|
||||
test: ["CMD", "wget", "-qO-", "http://localhost:9090/-/healthy"]
|
||||
interval: 30s
|
||||
timeout: 5s
|
||||
retries: 3
|
||||
networks:
|
||||
- archiv-net
|
||||
- obs-net
|
||||
|
||||
node-exporter:
|
||||
image: prom/node-exporter:v1.9.0
|
||||
container_name: obs-node-exporter
|
||||
restart: unless-stopped
|
||||
# pid: host — required for process-level CPU/memory metrics; cgroup isolation applies
|
||||
pid: host
|
||||
volumes:
|
||||
- /proc:/host/proc:ro
|
||||
- /sys:/host/sys:ro
|
||||
- /:/rootfs:ro
|
||||
command:
|
||||
- '--path.procfs=/host/proc'
|
||||
- '--path.sysfs=/host/sys'
|
||||
# $$ is YAML Compose escaping for a literal $ in the regex alternation
|
||||
- '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
|
||||
expose:
|
||||
- "9100"
|
||||
networks:
|
||||
- obs-net
|
||||
|
||||
cadvisor:
|
||||
image: gcr.io/cadvisor/cadvisor:v0.52.1
|
||||
container_name: obs-cadvisor
|
||||
restart: unless-stopped
|
||||
# privileged: true — required for cgroup and namespace metrics, see cAdvisor docs.
|
||||
# Accepted risk: cAdvisor is pinned, on Renovate, and not exposed outside obs-net.
|
||||
privileged: true
|
||||
volumes:
|
||||
- /:/rootfs:ro
|
||||
# /var/run/docker.sock mounted read-only — sufficient for container metadata discovery
|
||||
- /var/run/docker.sock:/var/run/docker.sock:ro
|
||||
- /sys:/sys:ro
|
||||
- /var/lib/docker:/var/lib/docker:ro
|
||||
expose:
|
||||
- "8080"
|
||||
networks:
|
||||
- obs-net
|
||||
|
||||
# --- Logs: Loki + Promtail ---
|
||||
|
||||
loki:
|
||||
image: grafana/loki:3.4.2
|
||||
container_name: obs-loki
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- ./infra/observability/loki/loki-config.yml:/etc/loki/loki-config.yml:ro
|
||||
- loki_data:/loki
|
||||
command: -config.file=/etc/loki/loki-config.yml
|
||||
expose:
|
||||
- "3100"
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "wget -qO- http://localhost:3100/ready | grep -q ready || exit 1"]
|
||||
interval: 10s
|
||||
timeout: 5s
|
||||
retries: 5
|
||||
start_period: 30s
|
||||
networks:
|
||||
- obs-net
|
||||
|
||||
promtail:
|
||||
image: grafana/promtail:3.4.2
|
||||
container_name: obs-promtail
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- ./infra/observability/promtail/promtail-config.yml:/etc/promtail/promtail-config.yml:ro
|
||||
- /var/lib/docker/containers:/var/lib/docker/containers:ro
|
||||
# :ro restricts file-system access but NOT Docker API permissions — a compromised Promtail has full daemon access. Accepted risk on single-operator self-hosted archive.
|
||||
- /var/run/docker.sock:/var/run/docker.sock:ro
|
||||
- promtail_positions:/tmp # persists positions.yaml across restarts — avoids duplicate log ingestion
|
||||
command: -config.file=/etc/promtail/promtail-config.yml
|
||||
networks:
|
||||
- archiv-net # label discovery from application containers via Docker socket
|
||||
- obs-net # log shipping to Loki
|
||||
depends_on:
|
||||
loki:
|
||||
condition: service_healthy
|
||||
|
||||
# --- Traces: Tempo ---
|
||||
|
||||
tempo:
|
||||
image: grafana/tempo:2.7.2
|
||||
container_name: obs-tempo
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- ./infra/observability/tempo/tempo.yml:/etc/tempo.yml:ro
|
||||
- tempo_data:/var/tempo
|
||||
command: -config.file=/etc/tempo.yml
|
||||
expose:
|
||||
- "3200" # Grafana queries Tempo on this port (obs-net only)
|
||||
- "4317" # OTLP gRPC — backend sends traces here (archiv-net)
|
||||
- "4318" # OTLP HTTP — alternative transport (archiv-net)
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "wget -qO- http://localhost:3200/ready | grep -q ready || exit 1"]
|
||||
interval: 10s
|
||||
timeout: 5s
|
||||
retries: 5
|
||||
start_period: 15s
|
||||
networks:
|
||||
- archiv-net # backend (archive-backend) reaches tempo:4317 over this network
|
||||
- obs-net # Grafana reaches tempo:3200 over this network
|
||||
|
||||
# --- Dashboards: Grafana ---
|
||||
|
||||
obs-grafana:
|
||||
image: grafana/grafana-oss:11.6.1
|
||||
container_name: obs-grafana
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "127.0.0.1:${PORT_GRAFANA:-3001}:3000"
|
||||
environment:
|
||||
GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_ADMIN_PASSWORD:-changeme}
|
||||
GF_USERS_ALLOW_SIGN_UP: "false"
|
||||
volumes:
|
||||
      - grafana_data:/var/lib/grafana
      - ./infra/observability/grafana/provisioning:/etc/grafana/provisioning:ro
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:3000/api/health | grep -q ok || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 30s
    depends_on:
      prometheus:
        condition: service_healthy
      loki:
        condition: service_healthy
      tempo:
        condition: service_healthy
    networks:
      - obs-net

  # --- Error Tracking: GlitchTip ---

  obs-redis:
    image: redis:7-alpine
    container_name: obs-redis
    restart: unless-stopped
    volumes:
      - glitchtip_data:/data
    expose:
      - "6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - obs-net

  obs-glitchtip:
    image: glitchtip/glitchtip:6.1.6
    container_name: obs-glitchtip
    restart: unless-stopped
    depends_on:
      obs-redis:
        condition: service_healthy
      obs-glitchtip-db-init:
        condition: service_completed_successfully
    environment:
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@archive-db:5432/glitchtip
      REDIS_URL: redis://obs-redis:6379/0
      SECRET_KEY: ${GLITCHTIP_SECRET_KEY}
      GLITCHTIP_DOMAIN: ${GLITCHTIP_DOMAIN:-http://localhost:3002}
      DEFAULT_FROM_EMAIL: ${APP_MAIL_FROM:-noreply@familienarchiv.local}
      EMAIL_URL: smtp://mailpit:1025
      GLITCHTIP_MAX_EVENT_LIFE_DAYS: 90
    ports:
      - "127.0.0.1:${PORT_GLITCHTIP:-3002}:8080"
    networks:
      - archiv-net
      - obs-net

  obs-glitchtip-worker:
    image: glitchtip/glitchtip:6.1.6
    container_name: obs-glitchtip-worker
    restart: unless-stopped
    command: ./bin/run-celery-with-beat.sh
    depends_on:
      obs-redis:
        condition: service_healthy
    environment:
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@archive-db:5432/glitchtip
      REDIS_URL: redis://obs-redis:6379/0
      SECRET_KEY: ${GLITCHTIP_SECRET_KEY}
    networks:
      - archiv-net
      - obs-net

  obs-glitchtip-db-init:
    image: postgres:16-alpine
    container_name: obs-glitchtip-db-init
    restart: "no"
    environment:
      PGPASSWORD: ${POSTGRES_PASSWORD}
    command: >
      sh -c "psql -h archive-db -U ${POSTGRES_USER} -tc
      \"SELECT 1 FROM pg_database WHERE datname = 'glitchtip'\" |
      grep -q 1 ||
      psql -h archive-db -U ${POSTGRES_USER} -c \"CREATE DATABASE glitchtip;\""
    networks:
      - archiv-net

networks:
  # Shared network created by the main docker-compose.yml.
  # The observability stack joins as a peer so Prometheus can scrape
  # archive-backend by container name. The observability stack must NOT
  # attempt to create this network — it will fail with a clear error if
  # the main stack is not running yet.
  archiv-net:
    external: true

  # Internal network for observability service-to-service traffic
  # (e.g. Grafana → Prometheus, Grafana → Loki, Grafana → Tempo).
  obs-net:
    driver: bridge

volumes:
  prometheus_data:
  loki_data:
  promtail_positions:
  tempo_data:
  grafana_data:
  glitchtip_data:
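The `obs-glitchtip-db-init` one-shot above is safe to re-run because its `command:` probes for the database before creating it. A minimal sketch of that check-then-create pattern in plain shell, with hypothetical `query`/`create` stubs standing in for the real `psql` calls (the stubs and the initial state are illustrative only, not part of the stack):

```shell
# Stand-ins for the real psql calls; assumed initial state: only "archiv" exists.
EXISTING_DBS="archiv"
CREATED=0

query() {
  # Mimics: psql -tc "SELECT 1 FROM pg_database WHERE datname = '$1'"
  case " $EXISTING_DBS " in
    *" $1 "*) echo 1 ;;
  esac
}

create() {
  # Mimics: psql -c "CREATE DATABASE $1;"
  EXISTING_DBS="$EXISTING_DBS $1"
  CREATED=$((CREATED + 1))
}

# The pattern from the compose `command:`: probe, create only on a miss.
query glitchtip | grep -q 1 || create glitchtip
# A second `docker compose up` re-runs the one-shot; now the probe hits
# and the create branch is skipped — the whole thing is a no-op.
query glitchtip | grep -q 1 || create glitchtip
```

The same shape (existence probe piped to `grep -q`, creation only on failure) is what makes the init container idempotent across deploys.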
@@ -1,248 +0,0 @@
# Production / staging Docker Compose for Familienarchiv.
#
# This is a self-contained file (not an overlay over docker-compose.yml).
# All services for the prod stack live here. Environment isolation is
# achieved via the docker compose project name:
#
#   production: docker compose -f docker-compose.prod.yml -p archiv-production ...
#   staging:    docker compose -f docker-compose.prod.yml -p archiv-staging --profile staging ...
#
# Volumes, networks and containers are namespaced by the project name,
# so the two environments cohabit cleanly on the same host.
#
# Required env vars (provided by .env.production / .env.staging in CI):
#   TAG                   image tag (release tag or "nightly")
#   PORT_BACKEND, PORT_FRONTEND   host-side ports (bound to 127.0.0.1 only)
#   APP_DOMAIN            e.g. archiv.raddatz.cloud / staging.raddatz.cloud
#   POSTGRES_PASSWORD     Postgres password
#   MINIO_PASSWORD        MinIO root password (admin operations only)
#   MINIO_APP_PASSWORD    MinIO application service-account password
#                         (least-privilege scope: archive bucket only)
#   OCR_TRAINING_TOKEN    token guarding ocr-service /train endpoint
#   APP_ADMIN_USERNAME    seeded admin email (e.g. admin@archiv.raddatz.cloud)
#   APP_ADMIN_PASSWORD    seeded admin password — CRITICAL: locked in on
#                         first deploy because UserDataInitializer only
#                         creates the account if the email does not exist
#   MAIL_HOST, MAIL_PORT, SMTP relay (production only; staging uses mailpit)
#   MAIL_USERNAME, MAIL_PASSWORD
#   APP_MAIL_FROM         sender address (e.g. noreply@raddatz.cloud)
#   IMPORT_HOST_DIR       absolute host path holding ONLY the ODS
#                         spreadsheet and PDFs for /admin/system mass
#                         import — mounted read-only at /import inside
#                         the backend. Compose refuses to start when
#                         this var is unset, so staging and prod cannot
#                         accidentally share an import source. Must be
#                         readable by the backend container's UID
#                         (currently root via the OpenJDK image — any
#                         world-readable directory works).

networks:
  archiv-net:
    driver: bridge
    name: ${COMPOSE_NETWORK_NAME:-archiv-net}

volumes:
  postgres-data:
  minio-data:
  ocr-models:
  ocr-cache:

services:
  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: archiv
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: archiv
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - archiv-net
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U archiv -d archiv"]
      interval: 10s
      timeout: 5s
      retries: 5

  minio:
    # Pinned MinIO release for reproducible deploys. Bumped manually until
    # Renovate is bootstrapped for these production images (see follow-up issue).
    image: minio/minio:RELEASE.2025-02-28T09-55-16Z
    restart: unless-stopped
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: archiv
      MINIO_ROOT_PASSWORD: ${MINIO_PASSWORD}
    volumes:
      - minio-data:/data
    networks:
      - archiv-net
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  # Idempotent bucket bootstrap + service-account creation.
  # Runs once per `docker compose up` and exits 0. The entrypoint is
  # extracted to infra/minio/bootstrap.sh so the (non-trivial) idempotent
  # logic is readable, reviewable, and unit-testable as a script rather
  # than YAML-escaped shell.
  create-buckets:
    # Custom image bakes bootstrap.sh in at build time. A bind-mount fails on
    # the Docker-out-of-Docker production runner because the host daemon
    # resolves the relative path against the host filesystem, not the
    # runner container's CWD. See #506 + infra/minio/Dockerfile.
    build:
      context: ./infra/minio
    # Declare one-shot intent so `docker compose up -d --wait` treats
    # exited(0) as success rather than "not running, fail". Pair with
    # backend's `service_completed_successfully` dependency below. See #510.
    restart: "no"
    depends_on:
      minio:
        condition: service_healthy
    networks:
      - archiv-net
    environment:
      MINIO_PASSWORD: ${MINIO_PASSWORD}
      MINIO_APP_PASSWORD: ${MINIO_APP_PASSWORD}

  # Dev-only mail catcher; gated behind the staging profile so production
  # never starts it. Staging workflow runs with `--profile staging`.
  mailpit:
    # Pinned for reproducibility; bumped manually until Renovate is bootstrapped.
    image: axllent/mailpit:v1.29.7
    restart: unless-stopped
    profiles: ["staging"]
    networks:
      - archiv-net
    healthcheck:
      # TCP-port open check via BusyBox `nc`. The previous wget-based probe
      # introduced a non-obvious binary dependency on the mailpit image; a
      # future tag that ships without wget would silently disable the
      # healthcheck. `nc` is part of BusyBox in the upstream image.
      test: ["CMD-SHELL", "nc -z localhost 8025 || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 5

  ocr-service:
    build:
      context: ./ocr-service
    restart: unless-stopped
    expose:
      - "8000"
    # Surya OCR loads ~5GB of transformer models at startup; first request
    # triggers a further ~1GB Kraken model download into ocr-cache.
    # CX42+ (16 GB RAM) honours the default. On a CX32 (8 GB) override with
    # OCR_MEM_LIMIT=6g (slower first-request, fits the host).
    mem_limit: ${OCR_MEM_LIMIT:-12g}
    memswap_limit: ${OCR_MEM_LIMIT:-12g}
    volumes:
      - ocr-models:/app/models
      - ocr-cache:/root/.cache
    environment:
      KRAKEN_MODEL_PATH: /app/models/german_kurrent.mlmodel
      TRAINING_TOKEN: ${OCR_TRAINING_TOKEN}
      OCR_CONFIDENCE_THRESHOLD: "0.3"
      OCR_CONFIDENCE_THRESHOLD_KURRENT: "0.5"
      # SSRF allowlist pinned explicitly to the internal MinIO hostname.
      # In prod the OCR service only fetches PDFs from MinIO over the
      # docker network; localhost/127.0.0.1 are dev-only sources and
      # must NOT be reachable here. Do not widen to `*`.
      ALLOWED_PDF_HOSTS: "minio"
    networks:
      - archiv-net
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 10s
      timeout: 5s
      retries: 12
      start_period: 120s

  backend:
    image: familienarchiv/backend:${TAG:-nightly}
    build:
      context: ./backend
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
      minio:
        condition: service_healthy
      ocr-service:
        condition: service_healthy
      # Gate startup on the bucket bootstrap. Without this, backend
      # starts in parallel with create-buckets and may race the policy
      # bind. Also tells compose's `up -d --wait` that create-buckets
      # is a one-shot that must complete successfully. See #510.
      create-buckets:
        condition: service_completed_successfully
    # Bound to localhost only — Caddy fronts external traffic.
    ports:
      - "127.0.0.1:${PORT_BACKEND}:8080"
    # Host path holding the ODS spreadsheet + PDFs for the mass-import endpoint.
    # Read-only; MassImportService only reads (Files.list / Files.walk on /import).
    # Required — no default — so staging and prod cannot accidentally share an
    # import source. CI workflows pin this per-env (see .gitea/workflows/).
    volumes:
      - ${IMPORT_HOST_DIR:?Set IMPORT_HOST_DIR to a host path holding the mass-import payload (ODS + PDFs). See docs/DEPLOYMENT.md.}:/import:ro
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/archiv
      SPRING_DATASOURCE_USERNAME: archiv
      SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD}
      # Application uses the bucket-scoped service account, not MinIO root.
      S3_ENDPOINT: http://minio:9000
      S3_ACCESS_KEY: archiv-app
      S3_SECRET_KEY: ${MINIO_APP_PASSWORD}
      S3_BUCKET_NAME: familienarchiv
      S3_REGION: us-east-1
      # No SPRING_PROFILES_ACTIVE — base application.yaml is production-ready
      # (Swagger disabled, show-sql off, open-in-view false).
      APP_BASE_URL: https://${APP_DOMAIN}
      APP_ADMIN_USERNAME: ${APP_ADMIN_USERNAME}
      APP_ADMIN_PASSWORD: ${APP_ADMIN_PASSWORD}
      APP_OCR_BASE_URL: http://ocr-service:8000
      APP_OCR_TRAINING_TOKEN: ${OCR_TRAINING_TOKEN}
      MAIL_HOST: ${MAIL_HOST}
      MAIL_PORT: ${MAIL_PORT:-587}
      MAIL_USERNAME: ${MAIL_USERNAME:-}
      MAIL_PASSWORD: ${MAIL_PASSWORD:-}
      APP_MAIL_FROM: ${APP_MAIL_FROM:-noreply@raddatz.cloud}
      SPRING_MAIL_PROPERTIES_MAIL_SMTP_AUTH: ${MAIL_SMTP_AUTH:-true}
      SPRING_MAIL_PROPERTIES_MAIL_SMTP_STARTTLS_ENABLE: ${MAIL_STARTTLS_ENABLE:-true}
      OTEL_EXPORTER_OTLP_ENDPOINT: http://tempo:4317
    networks:
      - archiv-net
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:8081/actuator/health | grep -q UP || exit 1"]
      interval: 15s
      timeout: 5s
      retries: 10
      start_period: 30s

  frontend:
    image: familienarchiv/frontend:${TAG:-nightly}
    build:
      context: ./frontend
      target: production
    restart: unless-stopped
    depends_on:
      backend:
        condition: service_healthy
    ports:
      - "127.0.0.1:${PORT_FRONTEND}:3000"
    environment:
      # SSR fetches go inside the docker network; clients hit https://${APP_DOMAIN}
      API_INTERNAL_URL: http://backend:8080
      ORIGIN: https://${APP_DOMAIN}
    networks:
      - archiv-net
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:3000/login >/dev/null 2>&1 || exit 1"]
      interval: 15s
      timeout: 5s
      retries: 10
      start_period: 20s
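The header comment describes running production and staging from this one file, isolated only by the compose project name. That can be sketched as a tiny wrapper; `compose_cmd` is a hypothetical helper for illustration, not something shipped in the repo:

```shell
# Sketch: build the compose invocation for one environment, per the header.
# Project name (-p) namespaces containers, networks, and volumes, so both
# environments can run side by side on the same host.
compose_cmd() {
  env="$1"                       # "production" or "staging"
  cmd="docker compose -f docker-compose.prod.yml -p archiv-$env"
  if [ "$env" = "staging" ]; then
    # Staging additionally enables the mailpit profile (dev-only mail catcher).
    cmd="$cmd --profile staging"
  fi
  printf '%s\n' "$cmd"
}
```

Usage would then be e.g. `$(compose_cmd staging) up -d --wait`, with the matching `.env.staging` supplying TAG, ports, and secrets.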