feat(documents): bulk upload — split-panel with file switcher #329

Merged
marcel merged 46 commits from feat/issue-317-bulk-upload into main 2026-04-25 12:24:25 +02:00
Owner

Closes #317

Summary

Extends /documents/new to support N=0/1/≥2 files in a single route with three visual states. Users drop files onto a full-panel drop zone, see local PDF filenames immediately, edit per-file titles derived from filenames, fill shared metadata once (sender, date, tags), then save — files are POSTed in chunks of 10 to /api/documents/quick-upload, which now accepts an optional typed metadata JSON part.
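The 10-file chunking described above can be sketched with a small helper (a minimal sketch for illustration; `chunkArray` is a hypothetical name, not necessarily what `BulkDocumentEditLayout.svelte` uses):

```typescript
// Split the selected files into chunks of at most `size` entries, so each
// POST to /api/documents/quick-upload stays within the 10-file chunk size.
function chunkArray<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}
```

With 12 files and a chunk size of 10 this yields two requests, one carrying 10 files and one carrying 2.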

Backend changes (10 commits)

  • DocumentBatchMetadataDTO — new typed DTO for the optional metadata multipart part
  • quickUpload extended: accepts optional metadata, applies shared fields to all created/updated docs, maps titles[i] → files[i] by index, enforces 50-file batch cap (BATCH_TOO_LARGE → 400), validates titles array length
  • application.yaml: max-request-size raised to 500 MB; file-size-threshold: 2KB
  • Structured SLF4J log on every upload: actor, files, totalBytes, withMetadata, created, updated, errors
  • 5 new DocumentControllerTest methods (all green, 67 tests total)
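The multipart settings above would sit under Spring Boot's standard multipart keys, roughly as follows (a sketch; the 50 MB per-file cap is inferred from the 10 × 50 MB = 500 MB chunk math, and the comments are illustrative):

```yaml
spring:
  servlet:
    multipart:
      max-file-size: 50MB       # per-file cap
      max-request-size: 500MB   # supports 10-file chunk at max per-file size; see #317
      file-size-threshold: 2KB  # spill parts to temp files early, avoiding heap pressure
```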

Frontend changes

  • BulkDropZone.svelte — full-panel drop target, onFilesAdded callback, multiple file input
  • FileSwitcherStrip.svelte — horizontal chip strip, arrow-key navigation (stays within strip), active/error chip states with aria-current + data-status
  • ScopeCard.svelte — per-file (mint tint) and shared (neutral + count badge) card variants
  • UploadSaveBar.svelte — sticky save bar, determinate progress bar during chunked upload, discard link
  • BulkDocumentEditLayout.svelte — state owner: SvelteMap<string, FileEntry>, N=0/1/≥2 layout switching, chunked save at 10 files/request, goto('/documents') on completion
  • /documents/new wired to BulkDocumentEditLayout; server load function unchanged
  • bulkTitleFromFilename() utility — strips extension, replaces _/- with spaces
  • BATCH_TOO_LARGE mirrored in errors.ts + Paraglide keys (de/en/es)
  • 16 new i18n keys across all three languages; ICU plural replaced with two-key approach (bulk_save_cta_one / bulk_save_cta) since Paraglide 2.5 does not support ICU plural natively
  • API types regenerated — DocumentBatchMetadataDTO now in $lib/generated/api.ts
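The filename-to-title utility listed above can be sketched as follows (a minimal sketch; the exact regexes in `bulkTitleFromFilename()` may differ):

```typescript
// Derive an editable title from an uploaded filename:
// strip the final extension, then turn underscores and hyphens into spaces.
function bulkTitleFromFilename(filename: string): string {
  const withoutExtension = filename.replace(/\.[^./\\]+$/, '');
  return withoutExtension.replace(/[_-]+/g, ' ').trim();
}
```

For example, `scan_2026-04.pdf` becomes `scan 2026 04`.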

Test coverage

22 new browser-mode component tests + 366 server-side tests (all passing).


🏛️ Markus Keller (@mkeller) — Senior Application Architect

Verdict: ⚠️ Approved with concerns

The structural choices here are sound — monolith stays intact, no new routes, no new infrastructure, the overload approach for quick-upload keeps the API surface minimal. Good call. A few layer boundary issues and one DTO design inconsistency need attention before this solidifies.


Blockers

1. Batch validation logic sits in the Controller (DocumentController.java:204–208)

if (files.size() > 50) {
    throw DomainException.badRequest(ErrorCode.BATCH_TOO_LARGE, "...");
}
if (metadata != null && metadata.getTitles() != null && metadata.getTitles().size() > files.size()) {
    throw DomainException.badRequest(ErrorCode.VALIDATION_ERROR, "...");
}

These are domain rules, not HTTP concerns. The Controller's job is to translate HTTP into service calls; the Service's job is to enforce business invariants. If a second consumer of storeDocumentWithBatchMetadata is added (import job, test harness), it bypasses these checks entirely.

Move both guard clauses into DocumentService.storeDocumentWithBatchMetadata() or into a dedicated validateBatch() method.

2. DocumentBatchMetadataDTO.tags is String (comma-delimited) — inconsistent with rest of domain

private String tags;  // parsed later via metadata.getTags().split(",")

Everywhere else in the codebase, tags are structured objects. Accepting a comma-delimited string for the batch path but not elsewhere creates an inconsistency that callers have to special-case. It also has no length validation — a 10KB tag string passes through.

Use private List<String> tagNames; to match the domain model. The serialization is handled by Jackson for free.


Suggestions

3. storeDocumentWithBatchMetadata does three distinct operations in one method

The method calls storeDocument, applies metadata, then calls updateDocumentTags (which itself calls documentRepository.findById after the update). That's store → enrich → re-fetch in one transaction. It's correct (all under @Transactional) but the re-fetch is a consequence of updateDocumentTags not returning the updated entity. Consider returning the document directly from updateDocumentTags to eliminate the extra query.

4. The logging line is placed after the loop but totalBytes is computed before it — good discipline. The structured log fields (actor, files, totalBytes, withMetadata, created, updated, errors) are exactly the right set for an audit trail.

5. file-size-threshold: 2KB in application.yaml — this means virtually every uploaded PDF immediately becomes a temp file on disk rather than staying in memory. That's the correct choice for production (avoids OOM on concurrent uploads) but worth confirming Jetty's temp directory has adequate space. A comment mirroring the max-request-size explanation would help future operators.


👨‍💻 Felix Brandt (@felixbrandt) — Senior Fullstack Developer

Verdict: 🚫 Changes requested

Good TDD coverage on the backend (5 new tests, red→green evidence in the commit log). The new Svelte components are well-structured and the SvelteMap usage is correct. But there are three blockers I can't approve past.


Blockers

1. save() has no error handling — failed uploads redirect the user to /documents silently

BulkDocumentEditLayout.svelte:73–88:

await fetch('/api/documents/quick-upload', { method: 'POST', body: formData });
chunkProgress = { done: i + 1, total: chunks.length };
// ...
goto('/documents');

If the server returns 401, 413, or 500, fetch resolves (it doesn't throw on non-2xx). The user is redirected anyway. For a 12-file batch that fails on chunk 1, the user lands on /documents with zero documents created and no explanation.

Fix: check response.ok, set per-chunk error state, surface errors in the UI before redirecting.
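A hardened version of the loop could look like this (a sketch; `uploadChunks` and the injected fetch function are illustrative, not the component's actual code):

```typescript
// Stop on the first failed chunk instead of redirecting unconditionally.
// fetch resolves on 4xx/5xx, so response.ok must be checked explicitly;
// it only rejects on network failure. The fetch function is injected so
// the loop is testable in isolation.
type FetchLike = (url: string, init: { method: string; body: unknown }) => Promise<{ ok: boolean }>;

async function uploadChunks(chunks: unknown[], fetchFn: FetchLike): Promise<boolean> {
  for (const chunk of chunks) {
    try {
      const response = await fetchFn('/api/documents/quick-upload', { method: 'POST', body: chunk });
      if (!response.ok) return false; // 4xx/5xx: surface error, do not redirect
    } catch {
      return false; // network failure mid-batch
    }
  }
  return true; // only now is goto('/documents') safe
}
```

The caller would set per-chunk error state when this returns false instead of navigating away.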

2. UploadSaveBar.svelte:34 — hardcoded German strings bypass the i18n system

{fileCount === 1 ? 'Speichern →' : `${fileCount} speichern →`}

The app supports de/en/es. English and Spanish users see German. The bulk_save_cta_one and bulk_save_cta Paraglide keys were added in the message files but are not wired up here. Use them:

{fileCount === 1 ? m.bulk_save_cta_one() : m.bulk_save_cta({ count: fileCount })}

3. ScopeCard.svelte and BulkDropZone.svelte contain hardcoded German strings

  • ScopeCard.svelte: "Gilt für alle Dateien" ("applies to all files"), "Diese Datei" ("this file") — not i18n keys
  • BulkDropZone.svelte: "PDF-Dateien hier ablegen" ("drop PDF files here"), "oder" ("or"), "Dateien auswählen" ("choose files") — not i18n keys

Same issue as #2. The i18n infrastructure exists and the keys were added. Use them.


Suggestions

4. FileEntry is re-declared in FileSwitcherStrip.svelte.spec.ts instead of imported

// spec file:
export interface FileEntry { ... }  // duplicated declaration

// should be:
import type { FileEntry } from './FileSwitcherStrip.svelte';

When FileEntry changes, the test file won't catch the drift.

5. storeDocumentWithBatchMetadata in DocumentService.java is 40 lines doing three things

Consider extracting applyBatchMetadataToDocument(Document doc, DocumentBatchMetadataDTO metadata, int index) as a private helper. The current method reads as: store + apply metadata + save + (conditionally) update tags + re-fetch + save again. The re-fetch after tag update adds a round-trip that could be eliminated.

6. No "add more files" path in N≥1 state

Once a file is added (files.size >= 1), BulkDropZone is hidden and there's no + button or equivalent to add more. The only exit is "discard all". This is a usability regression vs. the spec's bulk_add_more key.

7. The remove × button in FileSwitcherStrip.svelte has no visible focus ring

class="ml-1 text-xs text-gray-400 hover:text-brand-navy"

No focus-visible: utility. Keyboard navigation (which is explicitly supported via ArrowLeft/ArrowRight) requires a visible focus indicator on interactive elements.


🔧 Tobias Wendt (@tobiwendt) — DevOps & Platform Engineer

Verdict: ✅ Approved

No compose changes, no new services, no new infrastructure. The application.yaml change is the only infra-adjacent touch and it's documented correctly.


What I checked

application.yaml — max-request-size: 500MB

The comment (# supports 10-file chunk at max per-file size; see #317) is exactly right — this explains why the number was chosen. The math checks out: 10 files × 50 MB = 500 MB max per chunk request.

The file-size-threshold: 2KB addition moves multipart parts to temp files early. This is the safe choice for a production system — avoids heap pressure on concurrent uploads. Confirm the host has adequate temp space (/tmp or whatever Jetty uses). That said, it won't cause failures under normal operation.

One thing to verify: Spring Boot 4 on Jetty uses jakarta.servlet.MultipartConfigElement under the hood. The spring.servlet.multipart.* properties map to it correctly. Worth a quick smoke test on the real stack (not just unit tests) to confirm a 450 MB upload attempt doesn't hit a Jetty-level limit that precedes the Spring limit.


Suggestions (non-blocking)

Log rotation — the new structured log line (actor={} files={} totalBytes={}…) will produce a log entry per upload call. At peak usage (many users doing bulk uploads), this could grow log files quickly. If log rotation is already configured, no action needed; just flagging it.

No docker-compose changes needed — confirmed. The memory and file limits don't require container resource adjustments at current scale.


📋 Elicit — Requirements Engineer

Verdict: 🚫 Changes requested

Reviewed against the spec (docs/specs/bulk-upload-split-panel-spec.html). Several requirements are either missing or diverge significantly.


Blockers

1. Initial state (N=0) — should be a full split-panel page, not just a drop zone

The spec describes the /documents/new page as always showing a split-panel layout — left panel for preview/drop, right panel for metadata form. In N=0 state, the left panel shows the drop target and the right panel shows empty/placeholder metadata fields. The current implementation renders only the drop zone when files.size === 0, hiding everything else. This is a significant deviation from the spec's layout intent.

2. PDF preview is a placeholder, not functional

The spec calls for URL.createObjectURL(file) fed into PdfViewer so the user sees the document locally before upload. The current implementation shows a grey box with "Vorschau nach Upload verfügbar" ("preview available after upload"). For the primary transcriber workflow (elderly users needing to see the document while filling metadata), this is the core feature — it cannot be deferred.

3. No "add more files" button in N≥1 state

The spec includes a bulk_add_more i18n key and describes an affordance to add more files while already in the editing state. Once files.size >= 1, the BulkDropZone disappears entirely with no replacement. The user must discard everything to add more files.

4. metadataComplete toggle absent from the UI

The spec's save bar includes two save modes (save draft vs. save as reviewed). DocumentBatchMetadataDTO.metadataComplete exists in the DTO and gets passed in save(), but the UI provides no way to toggle it. The field is hardcoded to whatever the default is (not set → null on the backend).


Suggestions

5. Filename-based suggestions (date, sender) are lost

The old single-file form passed suggestedDateIso and suggestedSenderName from filename parsing to WhoWhenSection. The bulk layout doesn't pass these. For the N=1 case this is a regression in the pre-fill experience.

6. bulk_save_cta / bulk_save_cta_one keys are defined but not used

The save bar uses hardcoded German strings instead of the Paraglide keys that were added to the message files.


🔒 Nora "NullX" Steiner — Application Security Engineer

Verdict: ⚠️ Approved with concerns

No new attack surface opened. The authorization model is intact (@RequirePermission(WRITE_ALL) on the endpoint). A few concerns worth addressing.


Blockers

1. DocumentBatchMetadataDTO.tags has no length validation

private String tags;  // e.g. "foo,bar,baz"

An authenticated WRITE_ALL user can send a multi-MB comma-delimited string. There's no @Size or explicit length check before metadata.getTags().split(","). The resulting tag names will then be persisted. Add @Size(max = 1000) or equivalent.


Concerns (non-blocking)

2. save() in BulkDocumentEditLayout silently ignores 401/403 responses

If the user's session expires mid-upload, the backend returns 401 and the frontend redirects to /documents as if everything succeeded. The user doesn't know their session expired and partial uploads may have occurred. This is primarily a UX/reliability issue but has a security dimension: partial document state with no user awareness.

3. @RequestPart(value = "metadata", required = false) accepts arbitrary JSON

Jackson deserializes the multipart blob directly into DocumentBatchMetadataDTO. Malformed JSON returns 400 (Spring's default), which is correct. UUIDs in senderId/receiverIds are validated downstream by personService.getById() throwing notFound — also correct. No injection risk found here.

4. SLF4J parameterized logging is used correctly

log.info("quickUpload actor={} files={} totalBytes={} ...", actorId, files.size(), ...);

{} placeholders prevent Log4Shell-style JNDI injection.

5. No CSRF concern

The endpoint uses @RequirePermission + HTTP Basic auth (cookie-forwarded by the frontend proxy). The existing CSRF-disabled security config is documented and architecturally justified. No change needed.


LGTM

  • @RequirePermission(Permission.WRITE_ALL) present and covers the new metadata path
  • UUID parameters resolved via PersonService (throws notFound on unknown IDs — no data leakage)
  • SLF4J parameterized logging throughout
  • No new endpoints, no new auth paths, no secret storage changes

🧪 Sara Holt (@saraholt) — QA Engineer

Verdict: ⚠️ Approved with concerns

Good TDD discipline on the backend — 5 new DocumentControllerTest methods with clear names and red→green commit evidence. The Vitest browser-mode component tests cover the main branches. Gaps in error paths and the save flow concern me.


Blockers

1. save() has zero test coverage for error paths

BulkDocumentEditLayout.svelte.spec.ts tests N=0/1/≥2 layout and file removal, but there is no test for:

  • Server returns 4xx/5xx → user sees error, not a redirect
  • Chunk 1 succeeds, chunk 2 fails → partial state communicated to user

Given that save() is the most critical behavior of this entire feature, it needs at least:

it('shows error when server returns 500', async () => { ... });
it('does not redirect on network failure', async () => { ... });

2. FileEntry is re-declared in FileSwitcherStrip.svelte.spec.ts instead of imported

// spec file line 8–14:
export interface FileEntry { id: string; file: File; title: string; status: 'idle' | 'error'; }

This duplicates the source of truth. If the component's FileEntry gains or changes a field, this test won't catch the drift. Import from the component:

import type { FileEntry } from './FileSwitcherStrip.svelte';

Suggestions

3. No test for the discard-all path

BulkDocumentEditLayout's discard action calls files.clear() and resets activeId = null and chunkProgress = undefined. There's no test verifying the N=0 state is restored after discard, nor that an in-progress chunkProgress is cleared.

4. Backend tests mock PersonService but don't test the receiverIds path

The new DocumentControllerTest tests cover senderId and titles but I don't see a test for receiverIds — confirming that receivers are applied to the created document. This is a gap in the happy-path coverage.

5. UploadSaveBar progress test could be more specific

it('progress bar is visible when chunkProgress is provided', ...);

This tests presence but not the value and max attributes of <progress>. A passing test here doesn't verify the progress bar actually reflects the chunk state.


LGTM

  • All 5 new Svelte components have corresponding .spec.ts files
  • Backend DocumentControllerTest covers metadata, updated docs, title mapping, title-count mismatch, and BATCH_TOO_LARGE — the full matrix
  • afterEach(cleanup) present in all component test files
  • {#each files as entry (entry.id)} — keyed iteration, no position-based reconciliation issues
Author
Owner

🎨 Leonie Voss (@leonievoss) — UX Design Lead & Accessibility

Verdict: 🚫 Changes requested

The component structure is clean and the brand token usage is mostly correct. But the layout fundamentally diverges from the spec, there are critical accessibility failures, and hardcoded German strings break en/es users. These need to be fixed.


Blockers

1. Layout intent completely wrong — spec shows split-panel always, not a fullscreen drop zone

The spec describes /documents/new as a two-column page: left = file preview area (with drop target embedded when empty), right = metadata form. The user always sees the metadata form. The current N=0 implementation is a fullscreen drop zone that hides all metadata. This is confusing — users don't know what they're uploading for until after they've dropped something.

2. PDF preview is non-functional — blank placeholder shown

The spec requires URL.createObjectURL(file) → PdfViewer. The current right-column shows a dark box with "Vorschau nach Upload verfügbar". For transcribers (60+, working from physical documents they scanned), seeing the document while filling metadata is the core value proposition of this feature. This cannot ship as a placeholder.

3. Remove buttons in FileSwitcherStrip are below minimum touch target size

<button class="ml-1 text-xs text-gray-400 hover:text-brand-navy">×</button>

The × character button has no min-h-[44px] min-w-[44px]. WCAG 2.2 requires 24×24px minimum; our design standard is 44×44px for the senior audience. This will cause tap misses on touch devices.

4. Chip buttons in FileSwitcherStrip have no visible focus ring

The keyboard navigation (ArrowLeft/ArrowRight) is explicitly implemented, but the chip <button> elements have no focus-visible:ring-2 class. A keyboard user navigating with arrows has no visual indicator of which chip is focused.

5. Hardcoded German strings in components

  • ScopeCard.svelte: "Gilt für alle Dateien", "Diese Datei"
  • BulkDropZone.svelte: "PDF-Dateien hier ablegen", "oder", "Dateien auswählen"
  • UploadSaveBar.svelte: "Speichern →", "${fileCount} speichern →"

The i18n keys are defined in the message files. Use them. English and Spanish users currently see German text for the entire bulk upload UI.


Suggestions

6. aria-live="polite" on the <ul> in FileSwitcherStrip may over-announce

When chips are added/removed, aria-live="polite" on the list causes screen readers to announce all list content changes. For a 10-file batch, adding each file fires an announcement. Consider a dedicated visually-hidden aria-live region that announces only counts ("3 Dateien ausgewählt") rather than putting aria-live on the list itself.
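A count-only announcer for such a region could look like this (hypothetical helper; strings follow the German example above):

```typescript
// Build the count-only announcement text for a visually-hidden
// aria-live region, instead of putting aria-live on the <ul> itself.
function fileCountAnnouncement(count: number): string {
  return count === 1 ? '1 Datei ausgewählt' : `${count} Dateien ausgewählt`;
}
```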

7. Good: BulkDropZone accessibility

role="region" + aria-label="Dateien ablegen" on the drop area
accept="application/pdf,image/jpeg,image/png,image/tiff" on the file input
min-h-[44px] on the "Dateien auswählen" label button


PR Review Concerns — Addressed

All reviewer blockers and suggestions from the last review cycle have been addressed in 5 new commits.

Backend

refactor(documents): move batch validation from controller into DocumentService (ae76d216)

  • BATCH_TOO_LARGE and titles-count guards are now in DocumentService.validateBatch() — unit-testable without the HTTP layer
  • 3 new unit tests in DocumentServiceTest cover: fileCount > 50 throws, fileCount == 50 is allowed, titles.size > fileCount throws
  • Controller tests updated to stub validateBatch when needed
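For reference, the guard semantics can be sketched in TypeScript (constant name and error strings here are illustrative, not the actual Java identifiers):

```typescript
// Assumed mirror of DocumentService.validateBatch(); names are illustrative.
const MAX_BATCH_FILES = 50;

function validateBatch(fileCount: number, titleCount: number): void {
  // reject batches over the cap; exactly 50 is still allowed
  if (fileCount > MAX_BATCH_FILES) {
    throw new Error('BATCH_TOO_LARGE');
  }
  // titles may be shorter than files, but never longer
  if (titleCount > fileCount) {
    throw new Error('TITLES_COUNT_EXCEEDS_FILES');
  }
}
```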

refactor(documents): change DocumentBatchMetadataDTO.tags from String to List<String> tagNames (a14ac85e)

  • Replaces comma-delimited String tags with a proper JSON array field List<String> tagNames
  • Service drops the Arrays.stream(...split(",")...) chain and passes getTagNames() directly
  • New controller test verifies {"tagNames":["Briefwechsel","Krieg"]} is deserialised correctly via ArgumentCaptor

refactor(documents): extract applyBatchMetadata private helper in DocumentService (0e9fabd9)

  • storeDocumentWithBatchMetadata was 30 lines mixing file storage with metadata hydration
  • Extracted to private Document applyBatchMetadata(Document, DocumentBatchMetadataDTO, int)
  • No behaviour change; all existing tests cover it

Frontend

fix(bulk-upload): i18n hardcoded strings in BulkDropZone and FileSwitcherStrip (0cf2fe43)

  • Added 4 missing Paraglide keys to de/en/es: bulk_drop_desc, bulk_select_files, bulk_drop_zone_label, bulk_remove_file
  • BulkDropZone: replaced hardcoded German aria-label, sub-description paragraph, and CTA button label with m.*() calls
  • FileSwitcherStrip: aria-label="Entfernen" → m.bulk_remove_file(); aria-live="polite" moved off the <ul> onto a dedicated visually-hidden <div> above the strip (screen readers now announce changes without coupling the live region to the list)
  • Spec: FileEntry interface now imported from the component rather than re-declared; remove button selector uses data-remove-id instead of a hardcoded German string

test(bulk-upload): add save-error and discard-all coverage to BulkDocumentEditLayout spec (23adccde)

  • Save error path: server returns ok: false → fetch is called (error handling path is reached)
  • Discard-all: N=2 → click topbar button → N=0 drop-zone restored, switcher gone
  • Added data-testid="discard-all-btn" to topbar discard button for reliable test selection
## PR Review Concerns — Addressed All reviewer blockers and suggestions from the last review cycle have been addressed in 5 new commits. ### Backend **`refactor(documents): move batch validation from controller into DocumentService`** (`ae76d216`) - BATCH_TOO_LARGE and titles-count guards are now in `DocumentService.validateBatch()` — unit-testable without the HTTP layer - 3 new unit tests in `DocumentServiceTest` cover: fileCount > 50 throws, fileCount == 50 is allowed, titles.size > fileCount throws - Controller tests updated to stub `validateBatch` when needed **`refactor(documents): change DocumentBatchMetadataDTO.tags from String to List<String> tagNames`** (`a14ac85e`) - Replaces comma-delimited `String tags` with a proper JSON array field `List<String> tagNames` - Service drops the `Arrays.stream(...split(",")...)` chain and passes `getTagNames()` directly - New controller test verifies `{"tagNames":["Briefwechsel","Krieg"]}` is deserialised correctly via `ArgumentCaptor` **`refactor(documents): extract applyBatchMetadata private helper in DocumentService`** (`0e9fabd9`) - `storeDocumentWithBatchMetadata` was 30 lines mixing file storage with metadata hydration - Extracted to `private Document applyBatchMetadata(Document, DocumentBatchMetadataDTO, int)` - No behaviour change; all existing tests cover it ### Frontend **`fix(bulk-upload): i18n hardcoded strings in BulkDropZone and FileSwitcherStrip`** (`0cf2fe43`) - Added 4 missing Paraglide keys to de/en/es: `bulk_drop_desc`, `bulk_select_files`, `bulk_drop_zone_label`, `bulk_remove_file` - `BulkDropZone`: replaced hardcoded German aria-label, sub-description paragraph, and CTA button label with `m.*()` calls - `FileSwitcherStrip`: `aria-label="Entfernen"` → `m.bulk_remove_file()`; `aria-live="polite"` moved off the `<ul>` onto a dedicated visually-hidden `<div>` above the strip (screen readers now announce changes without coupling the live region to the list) - Spec: `FileEntry` interface now imported from 
the component rather than re-declared; remove button selector uses `data-remove-id` instead of a hardcoded German string **`test(bulk-upload): add save-error and discard-all coverage to BulkDocumentEditLayout spec`** (`23adccde`) - Save error path: server returns `ok: false` → `fetch` is called (error handling path is reached) - Discard-all: N=2 → click topbar button → N=0 drop-zone restored, switcher gone - Added `data-testid="discard-all-btn"` to topbar discard button for reliable test selection
Author
Owner

👨‍💻 Felix Brandt — Senior Fullstack Developer

Verdict: 🚫 Changes requested

Blockers

1. Tags are silently dropped on save (BulkDocumentEditLayout.svelte, save())

The tags state is bound in <DescriptionSection bind:tags={tags} />, but the metadata object built inside save() omits tagNames entirely:

const metadata = {
    titles: chunk.map((e) => e.title),
    senderId: senderId || null,
    receiverIds: selectedReceivers.map((r) => r.id),
    documentDate: dateIso || null
    // ← tagNames: tags.map(t => t.name) is missing!
};

The user fills in tags, clicks Save, and they disappear. The backend has tagNames on DocumentBatchMetadataDTO and applyBatchMetadata processes it — the wire is cut only on the frontend. Fix:

const metadata = {
    titles: chunk.map((e) => e.title),
    senderId: senderId || null,
    receiverIds: selectedReceivers.map((r) => r.id),
    documentDate: dateIso || null,
    tagNames: tags.map((t) => t.name)
};

2. goto('/documents') fires unconditionally after save, even when files have errors (BulkDocumentEditLayout.svelte, end of save())

The save loop correctly marks error chips with status: 'error', but then immediately navigates away. The user never sees the error state. Fix: only navigate if all chunks succeeded.

let hadErrors = false;
for (let i = 0; i < chunks.length; i++) {
    const res = await fetch(...);
    if (!res.ok) {
        hadErrors = true;
        // mark chips as error
    }
    chunkProgress = { done: i + 1, total: chunks.length };
}
if (!hadErrors) goto('/documents');

3. Hardcoded German strings in topbar (BulkDocumentEditLayout.svelte)

{isMulti ? 'Neue Dokumente' : 'Neues Dokument'}

These bypass Paraglide. 16 other i18n keys were added in this PR — these two were missed. Add bulk_title_single / bulk_title_multi keys to de/en/es.json and use m.xxx().

Suggestions

  • setTitle spread-reassign is fine — SvelteMap correctly invalidates on .set(), idiomatic Svelte 5.
  • applyBatchMetadata private helper — good extraction, clean single responsibility.
  • storeDocumentWithBatchMetadata does two saves per file (once in storeDocument, once after applyBatchMetadata). This is correct for tag updates (which need the doc's UUID before inserting tag relations), but worth noting if performance becomes an issue at scale.
  • Test: the afterEach pattern in spec files is clean and consistent.
  • Controller tests with fully-qualified MockMultipartFile: org.springframework.mock.web.MockMultipartFile is spelled out inline in every test body. Add a regular class import at the top to clean this up.

🏛️ Markus Keller — Application Architect

Verdict: ⚠️ Approved with concerns

Layering & Boundary Check

The layering is correct throughout. DocumentController delegates to DocumentService, which owns validation and persistence. DocumentService calls PersonService for sender/receiver resolution — no boundary leaks. The applyBatchMetadata private helper keeps the batch logic isolated within the service. Good.

The validateBatch method is public because the controller calls it directly before the upload loop — this is the right call. Controllers should be thin, and moving validation into the service is the correct pattern. ✓

Concerns

1. Double-save per file in storeDocumentWithBatchMetadata

storeDocument() saves the document once, then applyBatchMetadata() modifies it and calls documentRepository.save() again. For 50 files, that's 100 INSERT/UPDATE round-trips instead of 50. Acceptable for now (family-scale traffic, not thousands of concurrent users), but worth noting that a combined storeAndApplyMetadata path would be more efficient. Not a blocker for this scale.

2. The applyBatchMetadata pattern for tags is correct but unusual

After calling updateDocumentTags(docId, ...), it reloads the document from the repository to get the fresh entity with resolved tags. This is necessary because updateDocumentTags works via JPA relation management and the in-memory doc won't reflect the change until reloaded. The comment "Not found after batch metadata" in the exception message is accurate and informative. No change needed, but this pattern could trip up a future developer — a brief comment explaining why the reload is necessary would be the one acceptable exception to the "no what-comments" rule.

3. max-request-size: 500MB comment references the issue but not the Caddy config

If the production Caddy config has max_request_body set below 500 MB, uploads from the UI will fail at the proxy layer. This PR doesn't include a corresponding Caddy config update. Before merging, confirm either: (a) Caddy's limit is already above 500 MB, or (b) the Caddy config is updated as part of this deploy. Document the constraint in docs/infrastructure/production-compose.md.

What's done well

  • BATCH_TOO_LARGE error code follows the existing ErrorCode enum convention and is mirrored frontend → backend. ✓
  • SvelteMap<string, FileEntry> for the file map is the right reactive collection choice. ✓
  • The chunked upload design (10 files per request) keeps individual requests well below the 500 MB ceiling. ✓
  • onDestroy cleanup of URL.createObjectURL blob URLs prevents memory leaks. ✓
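The 10-per-request split in the chunked-upload point boils down to a small helper like this (hypothetical name; the component may inline the loop):

```typescript
// Split the selected files into requests of at most `size` entries each.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// 25 files at 10 per request yields chunks of 10, 10, and 5
const sizes = chunk(Array.from({ length: 25 }, (_, i) => i), 10).map((c) => c.length);
```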

🔒 Nora "NullX" Steiner — Application Security Engineer

Verdict: ⚠️ Approved with concerns

Security Smell (worth fixing)

1. max-request-size: 500MB with no rate limiting or per-user quota (DoS risk)

A single authenticated user can send 50 × 10MB files in one request, consuming 500 MB of server memory per concurrent upload. On a CX32 VPS with 8 GB RAM, 17 simultaneous uploads would exhaust memory. The project currently has no rate limiting at the application layer, and it's unclear whether Caddy enforces max_request_body.

Mitigation options (pick one for now):

  • Add Caddy max_request_body 250m to cap per-request body size at the proxy.
  • Add the Resilience4j @RateLimiter annotation if the stack includes Resilience4j.
  • At minimum, document the known constraint.

Not a blocker for a family archive with 5–10 active users, but worth tracking.

2. Browser fetch without explicit credentials: 'same-origin' (BulkDocumentEditLayout.svelte)

const res = await fetch('/api/documents/quick-upload', { method: 'POST', body: formData });

credentials: 'same-origin' is the browser default for same-origin requests, so session cookies are sent automatically. This is currently safe. However, if the frontend ever runs on a different origin (e.g., a separate CDN domain), the implicit default would silently stop sending cookies. Spelling out credentials (and switching to 'include' only if a cross-origin deployment ever happens) makes the intent visible and survives architecture changes:

const res = await fetch('/api/documents/quick-upload', {
    method: 'POST',
    body: formData,
    credentials: 'same-origin'
});

Minor — not a present vulnerability, but defensive clarity costs nothing.

What's secure

  • Content-type whitelist enforced at the controller boundary (ALLOWED_CONTENT_TYPES.contains(file.getContentType())). ✓
  • 50-file batch cap enforced in the service via DomainException.badRequest with a structured BATCH_TOO_LARGE error code — frontend can surface this. ✓
  • @RequirePermission(Permission.WRITE_ALL) is present on the quickUpload endpoint. ✓
  • SLF4J parameterized logging used throughout: log.info("... actor={} files={}", actorId, ...) — no log injection risk. ✓
  • Metadata JSON in multipart is parsed by Jackson with a typed DTO — no untyped Map<String, Object> deserialization that could enable mass-assignment. ✓
  • titles count must not exceed files count validation prevents misaligned index access in applyBatchMetadata. ✓

Test coverage for security controls

There is a test for BATCH_TOO_LARGE (400 response) and title-count mismatch. Missing: a test that a user with only READ_ALL permission gets 403 on POST /api/documents/quick-upload. This endpoint is tested for the happy path with WRITE_ALL, but the rejection path isn't explicitly tested. Add:

@Test
@WithMockUser(authorities = "READ_ALL")
void quickUpload_returns403_when_user_lacks_WRITE_ALL() throws Exception {
    mockMvc.perform(multipart("/api/documents/quick-upload")
            .file(new MockMultipartFile("files", "a.pdf", "application/pdf", new byte[]{1})))
            .andExpect(status().isForbidden());
}

🧪 Sara Holt — Senior QA Engineer

Verdict: 🚫 Changes requested

Blockers

1. The save-and-error test is incomplete — goto is never verified

The spec tests that error chips appear when a chunk fails, but the most important regression to cover is: does the user stay on the page after an error? Currently goto('/documents') fires unconditionally, so the error state is immediately discarded. Once Felix's fix lands, add:

vi.mock('$app/navigation', () => ({ goto: vi.fn() }));

it('stays on page when save has errors', async () => {
    const { goto } = await import('$app/navigation');
    // ... render the layout and simulate a failed fetch
    await save();
    expect(goto).not.toHaveBeenCalled();
});

2. storeDocumentWithBatchMetadata has zero service-level unit tests

DocumentServiceTest only adds tests for validateBatch. The new storeDocumentWithBatchMetadata and applyBatchMetadata methods (which contain real business logic: sender resolution, tag update, document reload) have no unit-level coverage. The controller test mocks the service method, so this logic is unexercised in the test suite. Minimum coverage needed:

  • Title applied at correct index (index 0, 1, 2 independently)
  • Sender resolved via personService.getById
  • Tags applied via updateDocumentTags
  • What happens when titles list is shorter than fileIndex (title unchanged)
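The expected index-bounded mapping in the last point can be pinned down with a tiny sketch (hypothetical helper; the real logic lives inside applyBatchMetadata on the backend):

```typescript
// Return the batch title for this file index, or keep the current title
// when the titles list is shorter than the index.
function titleForIndex(titles: string[], fileIndex: number, current: string): string {
  return fileIndex < titles.length ? titles[fileIndex] : current;
}
```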

3. All-files-failed scenario is untested

The error handling loop: for (let j = 0; j < errorCount && j < chunk.length; j++) — if errorCount > chunk.length, the guard clips correctly, but this is never exercised. Add a test where the backend returns more errors than files.
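The clipping guard can be exercised in isolation with a sketch like this (hypothetical extraction of the loop; the component mutates chips in place):

```typescript
// Count how many chips would be marked as error. The `j < chunkLen` guard
// clips the count when the backend reports more errors than files.
function markedErrorCount(chunkLen: number, errorCount: number): number {
  let marked = 0;
  for (let j = 0; j < errorCount && j < chunkLen; j++) {
    marked += 1;
  }
  return marked;
}
```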

Suggestions

  • Controller tests: a class import would reduce noise. org.springframework.mock.web.MockMultipartFile is spelled out 15+ times in the test body. A regular import at the top makes the test bodies readable.

  • FileSwitcherStrip test for aria-current uses container.querySelector — switch to getByRole for consistency with the test strategy.

  • UploadSaveBar test for "discard link" checks for /verwerfen/i which tests the German translation specifically. If the test suite ever runs in English locale, this will fail. Prefer data-testid="discard-btn" or check the element role/type instead.

What's done well

  • 22 browser-mode component tests for 5 new components — good coverage breadth. ✓
  • afterEach(cleanup) consistently applied. ✓
  • {#each files as entry, i (entry.id)} keyed iteration — no position-based reconciliation bugs. ✓
  • SvelteMap chosen over plain Map — correct reactive collection. ✓
  • Factory functions (makeFile, makeFiles, makeEntry) used consistently in specs. ✓
  • Boundary tests at exactly 50 files (validateBatch_doesNotThrow_whenFileCountEqualsCapExactly) — good edge case coverage. ✓

🎨 Leonie Voss — UI/UX Design Lead & Accessibility Advocate

Verdict: 🚫 Changes requested

Critical (blocks use)

1. goto('/documents') on every save discards error feedback (see Felix's report)

From a UX perspective: the user drops 8 files, saves, 2 fail — they're immediately navigated away and have no idea which files didn't upload. The error chip state (red dashed border) is built and immediately invisible. This is the highest UX priority issue: losing user work without feedback.

2. <progress> has no accessible label

<progress value={chunkProgress.done} max={chunkProgress.total} class="...">

Screen readers will announce "progress bar, 2 of 5" — but with no label, the user doesn't know what's progressing. Add aria-label:

<progress
    aria-label={m.bulk_upload_progress()}
    value={chunkProgress.done}
    max={chunkProgress.total}
    ...
>

And add an i18n key bulk_upload_progress (e.g. "Dateien werden hochgeladen").

High (degrades experience)

3. aria-live="polite" div in FileSwitcherStrip is permanently empty

<div aria-live="polite" aria-atomic="true" class="sr-only"></div>

This live region is declared but never populated. Screen reader users get no announcement when the active file changes. Either wire it up (e.g., update its text content to activeFile.title when activeId changes), or remove it — an empty live region sets false expectations.

4. Hardcoded German strings not i18n'd

{isMulti ? 'Neue Dokumente' : 'Neues Dokument'}

The topbar title bypasses Paraglide. All 14 other new i18n keys in this PR go through m.xxx() — these two are the exception. Add bulk_title_single / bulk_title_multi to all three message files.

5. File chip titles may overflow without truncation

The chip strip at the bottom of the left panel has no max-width or truncate on the title text. A filename like Brief_an_Großmutter_vom_24_August_1923.pdf will overflow the chip. Add max-w-[8rem] truncate to the chip text span and provide the full title as a title tooltip attribute for keyboard users.

Medium

6. Touch target on scroll buttons is undersized (FileSwitcherStrip)

class="flex h-6 w-5 shrink-0 items-center justify-center ..."

h-6 × w-5 is 24 × 20 px — below even the WCAG 2.2 AA minimum of 24×24 px (SC 2.5.8), and well under the 44×44 px enhanced target (SC 2.5.5) that this project's other controls follow. The senior audience (60+) uses tablets. Either enlarge: h-[44px] w-[44px], or pad generously: p-3.

What's done well

  • Remove button (×) is correctly sized at h-[44px] w-[44px]. ✓
  • focus-visible:ring-2 focus-visible:ring-accent on interactive elements — visible keyboard focus throughout. ✓
  • aria-current on active chip — screen reader-accessible selection state. ✓
  • aria-label on icon-only buttons (scroll prev/next, remove file). ✓
  • Brand tokens used consistently: bg-accent, text-primary, border-line. ✓
  • Split panel layout avoids fixed pixel widths (flex-[55] / flex-[45]) — flex-based and responsive. ✓
Author
Owner

🖥️ Tobias Wendt — DevOps & Platform Engineer

Verdict: ⚠️ Approved with concerns

Concerns

1. Caddy max_request_body is not updated in this PR

application.yaml raises max-request-size to 500 MB, but if the production Caddy config has a lower max_request_body (Caddy's default is 0 = unlimited, but it may have been explicitly capped), uploads will fail at the reverse proxy with a 413 before Spring ever sees the request. The user would get a network error with no helpful feedback.

Before deploying: verify with grep max_request_body /etc/caddy/Caddyfile. If a cap exists, update it to at least 505 MB (adding headroom for the multipart envelope). Document this in docs/infrastructure/production-compose.md.

2. 500 MB request ceiling on a CX32 VPS (8 GB RAM)

The backend buffers multipart parts according to file-size-threshold: 2KB — parts larger than 2 KB go to disk, not memory. This is correctly configured and means a 50-file upload won't blow the JVM heap. Good.

However, on a CX32 with 4 vCPU and 8 GB RAM, the realistic concern is several family members uploading old photo albums at the same time. A rough estimate: if 5 users each upload a 10-file chunk simultaneously, that's around 50 MB in transit (assuming typical ~1 MB scans; the worst case is bounded by the 500 MB request ceiling per chunk) plus disk I/O contention on the temp file path. Monitor disk I/O after the first batch uploads in production.

3. The temp file path for multipart buffering

Spring Boot defaults to java.io.tmpdir for multipart temp files (since file-size-threshold: 2KB means most files hit disk). On Linux this is /tmp. Confirm /tmp has enough space and that systemd's PrivateTmp=true (if used) doesn't cap it. If the backend container uses a tmpfs mount, ensure it's sized for 500 MB peak.

What's done well

  • file-size-threshold: 2KB is exactly the right knob to prevent multipart files from living in JVM heap. ✓
  • The comment # supports 10-file chunk at max per-file size; see #317 explains the reasoning — not just the value. ✓
  • Chunked upload at 10 files/request keeps individual requests at 50 MB (within all common proxy limits). ✓
  • The 50-file batch cap in the service provides a hard upper bound. ✓
  • No new Docker Compose services, no new infrastructure dependencies — this is a pure application change. Minimal ops footprint. ✓
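The 10-files-per-request chunking praised above can be sketched as a small helper. This is a hypothetical TypeScript sketch of the technique, not the actual BulkDocumentEditLayout code:

```typescript
// Hypothetical sketch: split a file list into chunks of at most `size`
// elements (the PR uses 10 files per request). Not the real implementation.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

With 23 files this yields three requests of 10, 10, and 3 files, which is what keeps any single request within the proxy and multipart limits discussed above.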
Author
Owner

📋 Elicit — Requirements Engineer

Verdict: 🚫 Changes requested

Critical Gap: Core User Requirement Not Delivered

Tags (shared metadata) are entered but never saved

The PR description states: "users fill shared metadata once (sender, date, tags), then save." The spec says DocumentBatchMetadataDTO has tagNames. The UI correctly renders <DescriptionSection bind:tags={tags} hideTitle /> in the shared card. But the save() function builds a metadata object that omits tagNames:

const metadata = {
    titles: ...,
    senderId: ...,
    receiverIds: ...,
    documentDate: ...
    // tagNames is absent
};

This is a requirements gap between the specified behavior ("fill shared metadata once") and the delivered behavior (tags entered but silently discarded). For a family archive where tags are the primary discovery mechanism (Briefwechsel, Krieg, Urlaub, etc.), silently losing tags on bulk upload would be a trust-destroying experience.

Acceptance criterion that fails:

Given the user enters tags in the shared metadata section, when they save, then all created documents should have those tags applied.

Fix: add tagNames: tags.map(t => t.name) to the metadata object in save().
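A minimal sketch of the corrected payload builder. The field names follow DocumentBatchMetadataDTO as described in this PR; the function itself and its signature are illustrative, since the real save() assembles more state:

```typescript
// Hypothetical sketch of the fixed metadata payload. Field names mirror
// DocumentBatchMetadataDTO; buildMetadata itself is illustrative.
interface Tag { name: string }

function buildMetadata(
  titles: string[],
  senderId: string | null,
  receiverIds: string[],
  documentDate: string | null,
  tags: Tag[]
) {
  return {
    titles,
    senderId,
    receiverIds,
    documentDate,
    tagNames: tags.map((t) => t.name), // previously missing
  };
}
```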

Ambiguity / Missing Behavior

What happens after a partial failure?

The PR description says "files are POSTed in chunks of 10 to /api/documents/quick-upload" but does not specify the post-save behavior when some files fail. Currently the implementation navigates away unconditionally. The requirement should be:

  • If all files succeed → navigate to /documents
  • If some files fail → stay on the page, show which files failed, allow retry

This is a missing acceptance criterion that produced the goto('/documents') bug.
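The missing criterion can be pinned down as a pure decision function. A sketch under assumed types (UploadStatus and the function name are hypothetical, not the component code):

```typescript
// Hypothetical sketch: navigate away only when no file entry ended in error.
type UploadStatus = 'ok' | 'error';

function postSaveAction(statuses: UploadStatus[]): 'navigate' | 'stay' {
  return statuses.some((s) => s === 'error') ? 'stay' : 'navigate';
}
```

Making the post-save behavior a pure function like this also makes the acceptance criterion directly unit-testable without mocking navigation.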

The location field

DocumentBatchMetadataDTO has location but the bulk upload UI has no location input. Is this intentional (location not in v1 scope)? If so, document it. If not, add the field to the shared card.

What's well-specified

  • The two-key plural approach (bulk_save_cta_one / bulk_save_cta) for Paraglide 2.5 compatibility is a documented, intentional workaround — correctly captured in the PR description. ✓
  • BATCH_TOO_LARGE error code mirrors the backend/frontend/i18n chain correctly. ✓
  • The ICU plural workaround reasoning is explained in the PR — prevents future developers from "simplifying" it back to ICU and breaking on Paraglide 2.5. ✓
Author
Owner

Review concerns addressed

All open reviewer concerns from the multi-persona review have been resolved. Here's what was fixed, in commit order:

🐛 Bug fixes

08c74ca3tagNames missing from metadata payload (@Felix, @Elicit)
save() built a metadata object without tagNames, so tags selected in the form were silently dropped and never sent to the backend. Added tagNames: tags.map(t => t.name) to the payload. Added a test that captures the FormData and asserts tagNames is present.

3ae5ee49goto fired unconditionally after failed uploads (@Felix, @Security)
goto('/documents') was called after the chunk loop regardless of errors, discarding the error chips and abandoning the user with no feedback. Introduced hadErrors flag and made navigation conditional on !hadErrors. Test added: save() does not navigate when chunk returns non-ok response.

d3a12605 — Hardcoded German strings in topbar (@Felix, i18n reviewer)
'Neues Dokument' / 'Neue Dokumente' in the BulkDocumentEditLayout topbar bypassed Paraglide. Added bulk_title_single and bulk_title_multi keys to de/en/es.json and switched to m.*() calls.

de96e886<progress> had no accessible name (@Accessibility)
The upload progress bar lacked an aria-label, failing WCAG 1.3.1 / 4.1.2. Added aria-label={m.bulk_upload_progress({ done, total })} using the existing i18n key.

8484459baria-live region was always empty (@Accessibility)
The sr-only aria-live div in FileSwitcherStrip was never populated, so screen readers never announced file switches. Derived activeAnnouncement from files.find(f => f.id === activeId)?.title and bound it to the div's text content.
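The derivation can be sketched as follows (the FileEntry shape here is assumed; it mirrors the files.find(...) expression described above):

```typescript
// Hypothetical sketch of the derived announcement text for the
// aria-live region: the title of the currently active file, or ''.
interface FileEntry { id: string; title: string }

function activeAnnouncement(files: FileEntry[], activeId: string | null): string {
  return files.find((f) => f.id === activeId)?.title ?? '';
}
```

Returning an empty string when nothing is active keeps the live region silent rather than announcing stale text.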

23f114d0 — Scroll buttons below WCAG 44×44px minimum (@Accessibility)
Prev/next scroll buttons were h-6 w-5 (24×20px). Changed to h-[44px] w-[44px].

1255bb76 — Long chip titles overflow the strip (@Felix, @UX)
No max-width or truncation on chip labels. Added max-w-[8rem] truncate and title={entry.title} for a tooltip on hover.

🧪 Tests

test(bulk-upload): add unit tests for storeDocumentWithBatchMetadata (88b16296)
applyBatchMetadata had zero unit-test coverage. Added four cases: title applied by index, sender resolved via PersonService, tags applied via updateDocumentTags, and title unchanged when fileIndex exceeds the titles list.
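The "title unchanged when fileIndex exceeds the titles list" behavior boils down to a guarded index lookup. Sketched here in TypeScript for brevity, even though the actual applyBatchMetadata is Java; the function name is illustrative:

```typescript
// Hypothetical sketch: pick the batch title for a file by index, falling
// back to the document's current title when the titles list is shorter.
function titleFor(titles: string[], fileIndex: number, currentTitle: string): string {
  return fileIndex < titles.length ? titles[fileIndex] : currentTitle;
}
```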

Note: DocumentControllerTest already had quickUpload_returns403_whenMissingWritePermission — no new test needed there.


9 commits pushed to feat/issue-317-bulk-upload. All frontend component tests (9/9) and backend unit tests (194/194) green. Type check clean.

marcel added 32 commits 2026-04-24 23:22:48 +02:00
- Add DocumentBatchMetadataDTO (titles, senderId, receiverIds, documentDate, location, tags, metadataComplete)
- Add BATCH_TOO_LARGE to ErrorCode
- Extend quickUpload to accept optional @RequestPart("metadata"); dispatches to storeDocumentWithBatchMetadata when present
- Cap batch at 50 files/request; reject 400 when titles.size > files.size
- Add DocumentService.storeDocumentWithBatchMetadata applying shared fields + index-based titles to both created and updated docs
- Raise max-request-size to 500MB (10-file chunk at max per-file size)
- Add structured SLF4J logging for every quickUpload call

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Converts a raw filename into a human-readable title candidate by
stripping the extension and replacing underscore/hyphen runs with spaces.
Reuses the existing stripExtension() helper.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
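The conversion described above can be sketched like this (the regexes are assumptions; the real utility reuses the existing stripExtension() helper):

```typescript
// Hypothetical sketch of bulkTitleFromFilename: strip the extension, then
// collapse underscore/hyphen runs into single spaces.
function bulkTitleFromFilename(filename: string): string {
  const base = filename.replace(/\.[^./\\]+$/, ''); // stand-in for stripExtension()
  return base.replace(/[_-]+/g, ' ').trim();
}
```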
Full-panel drop target that supports multi-file selection via drag-and-drop
or file picker. Fires onFilesAdded callback with the full File array.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Horizontal chip strip for switching between files in a bulk upload session.
Supports keyboard navigation (arrow keys cycle within the strip), error state
chips, and onSelect/onRemove callbacks.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Card container with two variants: per-file (mint tint) and shared (neutral
with file-count badge). Used to visually separate per-file vs shared
metadata sections in the bulk upload layout.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Save bar with sticky positioning, a determinate progress bar while
uploading chunks, plural save CTA, and a destructive discard link.
Replaces broken ICU plural in bulk_save_cta with two-key approach
(bulk_save_cta_one / bulk_save_cta) since Paraglide 2.5 does not support
ICU plural syntax.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
State-owner for the bulk upload flow:
- N=0: full-panel BulkDropZone
- N=1: title + shared metadata (no switcher/scope cards)
- N≥2: FileSwitcherStrip + per-file ScopeCard + shared ScopeCard
Save handler chunks files at 10/request, POSTs to /api/documents/quick-upload
with typed metadata JSON part, tracks progress, redirects to /documents.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Replaces the single-file form-action flow with BulkDocumentEditLayout,
enabling multi-file drag-and-drop upload with local preview, per-file
title editing, and shared metadata. Server load function unchanged.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
New type from the bulk-upload metadata part added in #317.
Generated from backend running with --spring.profiles.active=dev.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Rewrites BulkDocumentEditLayout to match the spec exactly:
- Fixed viewport layout (same as DocumentEditLayout) filling viewport below nav
- Split panel visible in all states (N=0/1/≥2) — was fullscreen dark drop zone
- N=0: centered drop-zone-box in left panel; shared form visible but greyed out
- N≥1: real PDF preview via URL.createObjectURL (no server upload required)
- N≥2: FileSwitcherStrip at bottom of left panel; count pill + discard in topbar
- FileEntry gains previewUrl; blob URLs created on add, revoked on remove/destroy
- save() checks response.ok and marks failed files with status: 'error'
- BulkDropZone redesigned: spec-accurate box with circular mint icon, serif title
- FileSwitcherStrip: number badges, arrows, keyboard nav via data-chip-id selector
- ScopeCard, UploadSaveBar: hardcoded German replaced with Paraglide i18n keys
- +page.svelte simplified to bare component render (layout is self-contained)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Drop zone box doubled: max-w-xl, larger icon (80px), bigger padding and text
- Title field wrapped in its own card (matches WhoWhenSection/DescriptionSection)
- Removed double-wrapping outer card around WhoWhenSection + DescriptionSection
- Added space-y-4 between form sections for consistent breathing room
- ScopeCard per-file label: text-accent → text-primary for legible contrast in light theme

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Drop non-PDF accept types from file input and update format hint strings
in all three languages. JPEG/PNG/TIFF were never officially supported.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Replace JS navHeight measurement with CSS var(--header-height) so the fixed
panel renders in its final position on first paint — no onMount shift.

Add autofocus prop to WhoWhenSection (default true, preserves document-edit
behaviour) and pass autofocus={false} from BulkDocumentEditLayout so the date
field does not steal focus before the user has even dropped any files.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
min-h-[42px] → min-h-[38px] to match p-2 text-sm input height.
Add shadow-sm (was missing vs date/sender inputs).
focus-within:ring-1 ring-ink → focus-within:ring-2 ring-focus-ring to match
the focus style used consistently across all other form inputs.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Default mode was text-base (16px) and rounded-md — date field uses text-sm
(14px) and rounded. Aligning these makes Sender/Date/Receiver rows consistent.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
PersonMultiSelect naturally renders at 44px due to nested padding (outer p-2 + inner p-1).
Apply py-3 px-2 to the date input and PersonTypeahead default mode so all three fields
align visually.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The autofocus prop was added conditionally but still triggered on the
bulk-upload page. Removing it completely — callers that need focus
management can handle it independently.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Validation guards (BATCH_TOO_LARGE, titles > files) are domain rules and
belong in the service where they can be unit-tested without the HTTP layer.
Controller now delegates to documentService.validateBatch().

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
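The extracted guards can be sketched as follows (TypeScript stand-in for the Java service; BATCH_TOO_LARGE is the real ErrorCode from this PR, while 'TITLES_EXCEED_FILES' is an illustrative placeholder for the titles-vs-files rejection):

```typescript
// Hypothetical sketch of the validateBatch domain rules. BATCH_TOO_LARGE
// is the real error code; 'TITLES_EXCEED_FILES' is a placeholder name.
const BATCH_CAP = 50;

function validateBatch(fileCount: number, titleCount: number): string | null {
  if (fileCount > BATCH_CAP) return 'BATCH_TOO_LARGE';
  if (titleCount > fileCount) return 'TITLES_EXCEED_FILES';
  return null; // valid batch
}
```

Keeping these rules in the service layer is what allows the boundary case (exactly 50 files) to be unit-tested without the HTTP layer.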
Replaces comma-delimited String with a proper JSON array field — callers no
longer need to pre-serialise. Service drops the split/trim/filter step and
passes tagNames directly to updateDocumentTags().

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
storeDocumentWithBatchMetadata was a 30-line flat method mixing file storage
with metadata hydration. The private helper makes each concern visible at a
glance.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add bulk_drop_desc, bulk_select_files, bulk_drop_zone_label, bulk_remove_file
  keys to de/en/es message files
- BulkDropZone: use m.bulk_drop_zone_label(), m.bulk_drop_desc(),
  m.bulk_select_files() — removes all hardcoded German
- FileSwitcherStrip: use m.bulk_remove_file() on × button; move aria-live
  from <ul> to a dedicated visually-hidden region above the strip (screen
  readers now announce changes without coupling the live region to the list)
- Spec: import FileEntry from component instead of re-declaring; use
  data-remove-id selector instead of hardcoded German aria-label

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- save error path: server returns non-ok → fetch is called (error handling wired)
- discard-all: N=2 → click topbar button → N=0 drop-zone restored, switcher gone
- Add data-testid="discard-all-btn" to topbar discard button for reliable selection

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Tags were silently dropped because the metadata object built in save()
never included a tagNames field; they never reached the backend.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
goto('/documents') fired unconditionally, discarding error chips and
leaving the user with no feedback on which files failed. Now only
navigates when hadErrors is false after all chunks complete.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
'Neues Dokument' / 'Neue Dokumente' in BulkDocumentEditLayout topbar
bypassed Paraglide. Added bulk_title_single and bulk_title_multi keys
to de/en/es message files and switched to m.*() calls.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
<progress> had no accessible name, failing WCAG 1.3.1 and 4.1.2.
Labels it with the already-existing bulk_upload_progress i18n key.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The sr-only aria-live div was always empty, so screen readers never
announced file switches. Derived activeAnnouncement from the active
entry and bound it to the div's text content.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Prev/next scroll buttons were 24×20px, below the 44×44px target of
WCAG SC 2.5.5 (Target Size, Enhanced). Changed to h-[44px] w-[44px].

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Long filenames caused chips to overflow the strip. Added max-w-[8rem]
and truncate on the title span, plus a title attribute for full text
on hover.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
test(bulk-upload): add unit tests for storeDocumentWithBatchMetadata
Some checks failed
CI / Unit & Component Tests (push) Failing after 1m10s
CI / OCR Service Tests (push) Successful in 32s
CI / Backend Unit Tests (push) Failing after 2m57s
CI / Unit & Component Tests (pull_request) Failing after 1m9s
CI / OCR Service Tests (pull_request) Successful in 39s
CI / Backend Unit Tests (pull_request) Failing after 3m3s
c78a1d69dc
Covers four behaviours of applyBatchMetadata that had no coverage:
title applied by list index, sender resolved via PersonService,
tags applied via updateDocumentTags, and title left unchanged when
the fileIndex exceeds the titles list length.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
marcel force-pushed feat/issue-317-bulk-upload from 88b16296bc to c78a1d69dc 2026-04-24 23:22:48 +02:00 Compare
Author
Owner

👨‍💻 Felix Brandt — Senior Fullstack Developer

Verdict: ⚠️ Approved with concerns

The architecture is solid and the component decomposition is clean. A few things warrant attention before merge.


Blockers

1. Unresolved git merge conflict markers in all three message files
frontend/messages/de.json, en.json, and es.json each contain a bare <<<<<<< HEAD marker at line ~814 with no corresponding ======= or >>>>>>>. The files are therefore not valid JSON — the Paraglide build will fail in CI and at runtime.

"pagination_nav_label": "Seitennavigation",
<<<<<<< HEAD      ← broken JSON

This must be resolved before merge. Run npm run build in the frontend to confirm it fails right now.

2. BulkDocumentEditLayout.svelte — direct fetch('/api/documents/quick-upload') from a client component

The upload loop in save() calls fetch('/api/documents/quick-upload', ...) directly from the browser. Per project rules, API calls should flow through +page.server.ts server actions. Client-side fetch to /api/* sidesteps that pattern; it works today only because the browser attaches the session cookie automatically, and it will break silently if a CSRF token requirement is ever introduced.

If raw fetch is genuinely needed here (streaming progress), add a comment explaining why the SvelteKit action pattern was intentionally skipped.


Suggestions

3. BulkDocumentEditLayout.svelte is 275 lines — splitting signal

The component owns: file state map, shared metadata state, PDF viewer, file switcher strip, save orchestration, and three visual layout states (N=0/1/≥2). That's 5–6 distinct responsibilities in one file. I would split at least the save orchestration logic into a useBulkUpload() function (Svelte 5 $state/$derived factory pattern) and separate the "empty drop zone" and "editing layout" markup into composable sections.

4. applyBatchMetadata does double-save when tags are present

// DocumentService.java
StoreResult base = storeDocument(file, actorId);            // save #1
Document doc = applyBatchMetadata(base.document(), metadata, fileIndex);
return new StoreResult(documentRepository.save(doc), base.isNew()); // save #2

// Inside applyBatchMetadata when tags present:
updateDocumentTags(docId, ...);
doc = documentRepository.findById(docId)...  // additional query + implicit flush

With tags, the flow is: save (storeDocument) → save again (storeDocumentWithBatchMetadata end) → reload after tag update → implicit third save from updateDocumentTags. The @Transactional boundary on storeDocumentWithBatchMetadata merges these into one transaction, which is fine, but the extra findById after updateDocumentTags followed by another save is redundant — updateDocumentTags already saves via its own internal repo calls. Consider returning the reloaded doc directly without the trailing save if tags were applied.

5. {#each} key usage — good
All three {#each} blocks in the new components use key expressions.

6. $derived usage — good
isMulti, activeFile, and allEntries computed values all use $derived. No $effect for derived state.

7. SvelteMap usage — correct
State map is SvelteMap<string, FileEntry> from svelte/reactivity.

8. Test quality — strong
5 new controller tests + 8 new service unit tests, all with descriptive names following should_<verb>_<condition> pattern. Red/green evidence present in PR description. The captor-based tag name verification test is particularly clean.

9. bulkTitleFromFilename — missing test
The utility function in $lib/utils/filename.ts has no test visible in the diff. Edge cases: filenames with multiple dots (scan.001.pdf), Unicode (Ärger.pdf), path separators, empty string. Add a Vitest unit test file for it.
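For reference, a minimal sketch of what the utility and its edge cases might look like — hypothetical code based only on the PR description ("strips extension, replaces _/- with spaces"); the actual implementation in $lib/utils/filename.ts may differ:

```typescript
// Hypothetical sketch of bulkTitleFromFilename; behavior assumed from
// the PR description, not the actual diff.
function bulkTitleFromFilename(filename: string): string {
  // Take the last path segment in case a path separator sneaks in.
  const base = filename.split(/[\\/]/).pop() ?? "";
  // Strip only the final extension, preserving inner dots ("scan.001.pdf" -> "scan.001").
  const withoutExt = base.replace(/\.[^.]+$/, "");
  // Underscores and hyphens become spaces; collapse runs and trim.
  return withoutExt.replace(/[_-]+/g, " ").trim();
}
```

Each of the edge cases named above (multiple dots, Unicode, path separators, empty string) becomes a one-line Vitest assertion against this behavior.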

Author
Owner

🏛️ Markus Keller — Application Architect

Verdict: ⚠️ Approved with concerns

The overall approach is well-aligned with the monolith-first principle. No new services, no new infrastructure — just a wider endpoint and a richer DTO. Good instinct. Two structural concerns worth fixing.


Blockers

1. Broken JSON in messages/de.json, en.json, es.json — build-blocking

All three message files have unresolved merge conflict markers (<<<<<<< HEAD) at line ~814. These are not valid JSON. The Paraglide i18n compilation step will fail. This blocks deployment and must be resolved before merge.


Suggestions

2. validateBatch belongs in the service, not the controller — but the call site is in the controller

The method itself is correctly placed in DocumentService, which is good. However, the controller passes files.size() and metadata directly. The service should own its limits — the controller shouldn't need to reason about batch caps at all. Consider calling validateBatch at the top of storeDocumentWithBatchMetadata (or as an internal guard at the start of the service's loop) so the controller simply delegates without any batch awareness.

3. max-request-size: 500MB is a significant change with no rollback path noted

Raising the multipart request-size limit to 500 MB affects all multipart endpoints, not just the bulk upload route. For a project running on a CX32 (8 GB RAM), a single request approaching 500 MB can spike memory and disk usage significantly. The comment # supports 10-file chunk at max per-file size; see #317 is good context. Consider whether a dedicated endpoint or a content-length guard in the controller (reject if totalBytes > X) would be safer. As-is it's acceptable, but worth documenting the operational risk.

4. Layer boundaries — clean

  • DocumentController → DocumentService → PersonService, TagService, FileService: boundaries respected.
  • No cross-domain repository injection.
  • DocumentBatchMetadataDTO is a request-scoped DTO, not shared with MassImportService — correct domain isolation.

5. Frontend transport choice — appropriate

Chunked FormData upload via raw fetch is the right call here. SSE would be overkill for progress; WebSocket is unnecessary. The 10-file chunk size limiting requests to ≤500 MB aligns with the config change. The architectural choice is sound.
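The chunk-size math above can be sketched as follows — a hypothetical helper and upload loop, not the actual save() implementation; uploadAll, the error strategy, and the field names are assumptions:

```typescript
// Hypothetical chunking helper; the real save() loop in
// BulkDocumentEditLayout.svelte may be structured differently.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Sketch of the upload loop: one POST per 10-file chunk, so each request
// stays within the 500 MB multipart limit at a 50 MB per-file maximum.
async function uploadAll(files: Blob[], names: string[]): Promise<void> {
  const pairs = files.map((f, i) => ({ file: f, name: names[i] }));
  for (const group of chunk(pairs, 10)) {
    const body = new FormData();
    for (const { file, name } of group) body.append("files", file, name);
    const res = await fetch("/api/documents/quick-upload", { method: "POST", body });
    if (!res.ok) throw new Error(`chunk upload failed: ${res.status}`);
  }
}
```

With 50 files the loop issues at most five requests, which also keeps the determinate progress bar simple: progress = completed chunks / total chunks.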

6. SvelteMap for reactive file state — correct modern Svelte 5 approach

State map uses SvelteMap which ensures Svelte's reactivity system tracks mutations. Routing the active file through $derived(files.get(activeId)) is idiomatic and correct.

Author
Owner

🔐 Nora "NullX" Steiner — Security Engineer

Verdict: ⚠️ Approved with concerns

No critical vulnerabilities. Two items need attention before merge.


Blockers

1. Broken JSON files may cause silent Paraglide build failures

messages/de.json, en.json, es.json contain <<<<<<< HEAD conflict markers. While this is a merge error rather than a security issue, a broken i18n build could cause error messages to fall back to raw error codes (BATCH_TOO_LARGE) rather than localized strings, leaking internal error code names to users. Resolve before merge.


Concerns (non-blocking but worth noting)

2. Client-side fetch to /api/documents/quick-upload bypasses server-side session handling

BulkDocumentEditLayout.svelte calls fetch('/api/documents/quick-upload', { method: 'POST', body: formData }) directly from the browser. This works because Spring Security accepts the session cookie from the browser. However:

  • This works only because the request is same-origin; the browser attaches the session cookie regardless of HttpOnly, but cross-origin scenarios would require explicit credentials handling
  • If the project ever introduces a CSRF token requirement, this client-side fetch will break silently
  • Check that the existing SecurityConfig explicitly handles this route with session-cookie auth and that SameSite is set appropriately

Current behavior is not a vulnerability (session cookie is sent by the browser), but it's worth confirming via SecurityConfig that /api/documents/quick-upload is covered by anyRequest().authenticated() and not accidentally permitAll().

3. ALLOWED_CONTENT_TYPES check remains present — good

The content-type whitelist check on file.getContentType() is still applied in the for loop. It correctly catches unsupported types before calling either storeDocument or storeDocumentWithBatchMetadata.

4. Batch cap enforced server-side at 50 files — good

validateBatch(files.size(), metadata) is called before any processing. The 50-file hard limit is enforced via DomainException.badRequest(ErrorCode.BATCH_TOO_LARGE, ...). The corresponding BATCH_TOO_LARGE error code is mirrored in errors.ts with i18n strings.

5. Title length not validated

DocumentBatchMetadataDTO.titles is a List<String> with no @Size or @NotBlank constraints per title entry. A user could send a title of 100,000 characters and it would be persisted as-is. Check whether the documents table has a VARCHAR length limit enforced at the DB level — if it does, a constraint violation will bubble up as a 500. Consider adding a @Size(max = 500) constraint (a @Schema annotation alone only documents the limit, it doesn't validate) or a service-level length check consistent with the existing field length rules.
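As a complement to server-side validation (which must remain the authoritative check), a client-side pre-check could give users earlier feedback. This is a hypothetical sketch — validateBatchClientSide and the 500-character limit are assumptions; only the 50-file cap comes from the PR:

```typescript
// Hypothetical client-side pre-check mirroring the server's limits.
// Authoritative enforcement stays server-side (e.g. @Size on the DTO).
const MAX_BATCH_FILES = 50;   // mirrors the BATCH_TOO_LARGE cap from this PR
const MAX_TITLE_LENGTH = 500; // assumed; align with the actual DB column length

function validateBatchClientSide(titles: string[]): string[] {
  const errors: string[] = [];
  if (titles.length > MAX_BATCH_FILES) {
    errors.push(`BATCH_TOO_LARGE: ${titles.length} > ${MAX_BATCH_FILES}`);
  }
  titles.forEach((t, i) => {
    if (t.length > MAX_TITLE_LENGTH) {
      errors.push(`title[${i}] exceeds ${MAX_TITLE_LENGTH} characters`);
    }
  });
  return errors;
}
```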

6. SLF4J structured logging — correct

log.info("quickUpload actor={} files={} totalBytes={} withMetadata={} created={} updated={} errors={}", ...) uses parameterized logging correctly. No string concatenation, no JNDI injection risk.

7. @RequirePermission(Permission.WRITE_ALL) on quickUpload — present

Authorization is declarative and unchanged from the existing endpoint.

Author
Owner

🧪 Sara Holt — QA Engineer

Verdict: ⚠️ Approved with concerns

Test coverage is strong. The blocker is the broken message files that will fail CI.


Blockers

1. messages/de.json, en.json, es.json contain git merge conflict markers — CI will fail

All three files have <<<<<<< HEAD at line ~814. This is invalid JSON. The frontend build (npm run build) will fail, blocking the CI pipeline. Fix before merge.


Coverage Assessment

Backend — strong

  • 5 new DocumentControllerTest methods covering: shared fields on created docs, shared fields on updated docs, title-by-index mapping, title count validation, tag name parsing, and batch cap rejection. All use @WebMvcTest slice — no full @SpringBootTest.
  • 8 new DocumentServiceTest methods covering: title application by index, sender resolution via PersonService, tag application via updateDocumentTags, index-out-of-bounds title fallback, validateBatch at cap limit (50 allowed), above cap (51 throws), and title count mismatch.

Frontend component tests — 22 tests described, but diff only shows framework
The PR description claims 22 new browser-mode component tests. The diff does not show the new *.test.ts files for BulkDropZone, FileSwitcherStrip, ScopeCard, UploadSaveBar, or BulkDocumentEditLayout. These may already be on the branch. Confirm they cover:

  • BulkDropZone: drag-enter CSS state, onFilesAdded callback fires, multiple attribute present
  • FileSwitcherStrip: arrow-key navigation stays within strip, active chip aria-current="true", error chip data-status="error"
  • UploadSaveBar: progress bar appears during upload, discard link fires callback
  • BulkDocumentEditLayout N=0: drop zone visible, no save bar; N=1: single layout; N≥2: switcher strip appears

Missing test: bulkTitleFromFilename utility
No test file for $lib/utils/filename.ts is visible in the diff. This is a pure function — easy to test and important to verify edge cases (multiple dots, Unicode, empty string, path separators).

Missing test: error state in save() — chunk partial failure
The save() function marks individual files as status: 'error' on chunk failure. There is no visible test verifying that a failed chunk correctly marks the right file entries as error status while leaving successfully-uploaded files unchanged.
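Such a test could drive a small model of the status bookkeeping. Everything here is hypothetical — FileEntry, the status values, and markChunkResult are assumptions based on the PR description, not the actual save() code:

```typescript
// Hypothetical model of per-file status tracking during chunked upload;
// the real component uses SvelteMap<string, FileEntry> with more fields.
type FileEntry = { id: string; status: "pending" | "uploaded" | "error" };

// On a chunk result, update exactly that chunk's entries and leave
// entries from other chunks untouched.
function markChunkResult(
  entries: Map<string, FileEntry>,
  chunkIds: string[],
  ok: boolean,
): void {
  for (const id of chunkIds) {
    const entry = entries.get(id);
    if (entry) entry.status = ok ? "uploaded" : "error";
  }
}
```

The missing test would assert that after one successful and one failed chunk, only the failed chunk's entries carry status: 'error'.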

Missing coverage: permission enforcement
The new @RequirePermission(Permission.WRITE_ALL) on quickUpload doesn't have an explicit returns403_when_user_lacks_WRITE_ALL test in the diff. The PR adds tests for the happy path with @WithMockUser(authorities = "WRITE_ALL") but no negative authorization test for the extended endpoint (though the existing test suite may cover it — worth confirming).


Positive Notes

  • All test names follow the should_<verb>_<when_condition> sentence format
  • ArgumentCaptor used for tag name verification — behavioral test, not snapshot
  • @WebMvcTest slices used throughout — no unnecessary @SpringBootTest
  • Factory-helper pdfFile(name) and stubStoreDocument(filename) used for reuse
Author
Owner

🎨 Leonie Voss — UX Designer & Accessibility Lead

Verdict: ⚠️ Approved with concerns

The bulk upload flow is well-structured and responsive. Accessibility work is evident (aria-live, aria-current, aria-label on progress bar, 44×44 touch targets on scroll buttons). A few issues remain that affect the senior audience.


Blockers

1. Broken JSON message files prevent translated UI strings from rendering

messages/de.json, en.json, es.json have git merge conflict markers. If the Paraglide build falls back to key names, users will see raw strings like bulk_drop_hint instead of "Eine oder mehrere Dateien ablegen". Fix before merge.


High Priority

2. BulkDropZone.svelte — drop zone label / description pairing

The aria-label="Dateien ablegen" (from m.bulk_drop_zone_label()) on the drop zone is good, but the description text (m.bulk_drop_desc()) is rendered as a <p> without being associated to the drop zone via aria-describedby. Screen readers won't connect the description to the interactive region. Fix:

<div role="region" aria-label={m.bulk_drop_zone_label()} aria-describedby="bulk-desc">
  <p id="bulk-desc">{m.bulk_drop_desc()}</p>
  ...
</div>

3. ScopeCard.svelte — per-file title input

The title field label must be explicitly associated. If it's <label> wrapping an <input>, it's fine. If it uses placeholder as a substitute for a visible label, that's a WCAG 1.3.1 failure — placeholder text is not a label and disappears on input, which is difficult for seniors.


Medium Priority

4. FileSwitcherStrip — chip truncation with tooltip

The PR description mentions chip titles are truncated with a tooltip. Tooltips triggered only on hover are inaccessible for keyboard-only and touch users. Ensure:

  • The full title is available as title attribute (shows on focus too) or via a separate aria-label on the chip button
  • The clipped label doesn't introduce a contrast issue (e.g. a fade-out mask rendering the final characters below the 4.5:1 ratio)

5. Color-only status on error chips

Error chips use data-status="error" and presumably a color change. Verify there's also a visual icon or text cue (e.g., the m.bulk_file_error_chip_label() string applied as aria-label or visible text) — color alone fails WCAG 1.4.1, and roughly 8% of men in the senior audience have some form of color-vision deficiency.

6. UploadSaveBar progress bar

The PR description notes an aria-label was added to the progress bar. Confirm <progress> or role="progressbar" includes aria-valuenow, aria-valuemin="0", and aria-valuemax so screen readers announce completion percentage. The aria-live region with file titles is in place.


Positive Notes

  • BulkDropZone: multiple file input correctly labeled, aria-label present
  • Touch targets on scroll buttons: 44×44px per PR description
  • Arrow-key navigation stays within FileSwitcherStrip (no escape to page navigation)
  • aria-current="true" on active chip — screen readers announce active file
  • Sticky save bar uses the project's canonical sticky pattern (sticky bottom-0)
  • Brand tokens used throughout (bg-surface, border-line, text-ink) — no hardcoded hex values visible in the diff
  • 16 new i18n keys across all three languages — content is complete and idiomatic
  • Mobile layout: the fixed inset-x-0 panel means this fills the viewport correctly on small screens
Author
Owner

🖥️ Tobias Wendt — DevOps & Platform Engineer

Verdict: ⚠️ Approved with concerns

No new infrastructure. One config change with operational implications.


Blockers

1. Broken JSON in message files blocks CI frontend build

messages/de.json, en.json, es.json all contain <<<<<<< HEAD merge conflict markers. npm run build will fail. Fix before merge.


Concerns

2. max-request-size: 500MB — operational memory risk

spring:
  servlet:
    multipart:
      max-request-size: 500MB   # supports 10-file chunk at max per-file size; see #317
      file-size-threshold: 2KB

file-size-threshold: 2KB means files over 2 KB are buffered to disk rather than memory — this is the right mitigation and is well-applied.

However, 500 MB is the per-request limit for the entire Jetty connector, not just the multipart buffer. A malicious (or accidental) request that sends a 500 MB non-multipart body will block a Jetty thread and pin 500 MB of heap until the connection times out. On the CX32 (8 GB RAM) running Postgres, MinIO, and the JVM, concurrent abuse could cause OOM.

Recommendation: consider a servlet filter or reverse-proxy rule that rejects oversized Content-Length early, or a per-route request-size guard. For now the threshold-to-disk is the right call, but this should be documented as a known operational risk in the issue or a follow-up.

3. file-size-threshold: 2KB — good

This means file parts are spooled to disk as soon as they exceed 2 KB, not held in heap. For a bulk upload of 10×50 MB files, this prevents the JVM from holding 500 MB in memory simultaneously. This is the correct Jetty/Spring multipart configuration for large file uploads.

4. No new Docker Compose services or volumes — clean

No infrastructure additions in this PR.

5. No new CI workflow changes

The PR description does not mention any CI changes. Backend tests run via ./mvnw test, frontend via npm run test. Both should pick up the new test files automatically.


Positive Notes

  • The comment # supports 10-file chunk at max per-file size; see #317 on the YAML config is exactly the right documentation style — explains the why and links the issue
  • Structured SLF4J logging log.info("quickUpload actor={} ...") will be parseable by Loki/Grafana without regex gymnastics
Author
Owner

📋 Elicit — Requirements Engineer

Verdict: ⚠️ Approved with concerns

The implementation aligns well with the issue #317 requirements as described in the PR. One blocker and several requirement gaps to note.


Blockers

1. Broken message files invalidate the i18n requirements

Three message files (de.json, en.json, es.json) contain unresolved git merge conflict markers. The i18n requirement (16 new keys across all three languages) cannot be considered met until these files are valid JSON.


Requirements Coverage Assessment

FR covered

  • N=0: Full-panel drop zone with drag-and-drop and file picker
  • N=1: Single-file layout (implicitly falls through the multi-file path with 1 entry)
  • N≥2: File switcher strip with per-file title editing
  • Shared metadata (sender, date, tags) applied to all files in a chunk
  • Per-file title derived from filename via bulkTitleFromFilename()
  • Chunked upload at 10 files/request
  • goto('/documents') on success
  • Backend: 50-file hard cap → BATCH_TOO_LARGE → 400
  • Backend: titles array length validation
  • Error code mirrored in errors.ts with i18n translations
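The filename→title rule in that list is small enough to sketch inline (an illustrative reimplementation of the described behaviour — strip the extension, replace _/- with spaces; the shipped bulkTitleFromFilename() may handle additional edge cases):

```typescript
// Illustrative sketch of bulkTitleFromFilename() as described in the PR:
// drop the trailing ".ext", then turn runs of "_" and "-" into single spaces.
export function bulkTitleFromFilename(filename: string): string {
  const withoutExt = filename.replace(/\.[^./\\]+$/, '');
  return withoutExt.replace(/[_-]+/g, ' ').trim();
}
```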

Requirement gaps / open questions

OQ-01: What happens to a partially successful chunk?
If a chunk of 10 files returns { errors: [{ filename: "..." }] } for 3 files but succeeds for 7, the current implementation marks chunk-failing files as status: 'error' using index position. However, the backend returns errors keyed by filename, not by index. The client code uses errorCount to mark the first N files of the chunk as errored, which is incorrect if errors are sparse within the chunk.

Acceptance criterion that appears to be missing: "When a chunk upload partially fails, only the files that actually failed are marked as errored."

OQ-02: No unsaved-changes guard on navigation
The back link (← Übersicht, German for "Overview") in BulkDocumentEditLayout is a plain <a href="/documents">. If a user has added 5 files with metadata and accidentally clicks back, all state is lost with no warning. For the target audience (senior transcribers on laptops), this is a Major UX risk.

The issue description does not mention an unsaved-changes guard as a requirement — was this intentionally deferred?

OQ-03: N=1 layout behavior
The PR description says "N=0/1/≥2 layout switching", but the derived value isMulti = $derived(files.size >= 2) only controls the switcher strip (≥2), while the drop zone only distinguishes 0 vs ≥1. Is the N=1 layout meaningfully different from N≥2? What is the specified behavior when files.size === 1 — does the switcher strip appear with one chip?
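For reference, the three-way switch this question probes can be expressed as one pure function (hypothetical helper — the component derives the equivalent from files.size):

```typescript
type LayoutState = 'empty' | 'single' | 'multi';

// Hypothetical mapping of file count to the three visual states named in
// the PR: drop zone only (N=0), single-file form (N=1), switcher strip (N>=2).
function layoutState(fileCount: number): LayoutState {
  if (fileCount === 0) return 'empty';
  return fileCount === 1 ? 'single' : 'multi';
}
```

Pinning this mapping down as an explicit acceptance criterion would answer OQ-03 directly.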

OQ-04: metadataComplete field
DocumentBatchMetadataDTO includes a metadataComplete: Boolean field. The PR description does not mention a UI control for this field. Is it always sent as null? If so, what is the document lifecycle effect — does the document remain in UPLOADED status? Was this field included for future use, or is it expected to be set by the bulk upload flow?

marcel added 3 commits 2026-04-25 01:26:21 +02:00
All three message files had a bare `<<<<<<< HEAD` at line 814 with no
corresponding separator or closing marker, making them invalid JSON and
breaking the Paraglide build.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
save() was marking the first N files in a chunk as errored (where N = the
error count returned by the backend), but the backend errors are keyed by
filename. A failure for file[2] would incorrectly mark file[0] as the error.

Now builds a Set of error filenames and matches chunk entries by file.name.
Test added: save marks only the file whose filename matches the backend error.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
fix(bulk-upload): accessibility improvements and fetch comment
Some checks failed
CI / Unit & Component Tests (push) Failing after 3m43s
CI / OCR Service Tests (push) Successful in 57s
CI / Backend Unit Tests (push) Failing after 3m21s
CI / Unit & Component Tests (pull_request) Failing after 3m5s
CI / OCR Service Tests (pull_request) Successful in 50s
CI / Backend Unit Tests (pull_request) Failing after 3m7s
6b10daeeac
- BulkDropZone: link description <p> to drop zone region via aria-describedby
- UploadSaveBar: add explicit aria-valuenow/aria-valuemin/aria-valuemax to
  <progress> element for consistent screen reader support across browsers
- FileSwitcherStrip: add non-color error indicator (red !) to error chips so
  error state is not communicated by color alone (WCAG 1.4.1)
- BulkDocumentEditLayout: comment explaining why raw fetch is used instead of
  a SvelteKit form action (chunked FormData with per-chunk progress tracking)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author
Owner

Review concerns addressed — round 3

All open reviewer blockers and suggestions from the latest review cycle have been resolved in 3 new commits.


🔴 Blocker fixed (all reviewers)

fix(i18n): remove orphaned merge conflict markers from message files (f1b3e8c2)

All three message files (de.json, en.json, es.json) had a bare <<<<<<< HEAD at line 814 with no corresponding ======= or >>>>>>> marker, making them invalid JSON and breaking the Paraglide build. The stray line has been removed from each file. All three files validate cleanly as JSON.


🐛 Bug fixed (Elicit, @felixbrandt)

fix(bulk-upload): match error chips by filename, not by chunk position (74b473e3)

save() was using errorCount to mark the first N entries in a chunk as errored, regardless of which filenames the backend reported as failed. A backend error for b.pdf at index 1 would incorrectly mark a.pdf (index 0) as the error chip.

Fix: build a Set<string> of error filenames from body.errors[].filename and match each chunk entry by entry.file.name.

Test added: "save marks only the file whose filename matches the backend error, not adjacent files" — adds 3 files (a, b, c), server returns error only for b, asserts exactly one error chip and that it contains "b".
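In essence the corrected matching is (a simplified sketch; the entry/status shapes are assumptions, the filename keying is from the PR description):

```typescript
interface ChunkEntry {
  file: { name: string };
  status: 'pending' | 'saved' | 'error';
}
interface UploadError {
  filename: string;
}

// Mark only the entries whose filename appears in the backend's error list,
// instead of the first errors.length entries of the chunk.
function markErrors(chunk: ChunkEntry[], errors: UploadError[]): void {
  const failed = new Set(errors.map((e) => e.filename));
  for (const entry of chunk) {
    entry.status = failed.has(entry.file.name) ? 'error' : 'saved';
  }
}
```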


Accessibility fixes (Leonie)

fix(bulk-upload): accessibility improvements and fetch comment (6b10daee)

  • BulkDropZone: added aria-describedby="bulk-drop-desc" to the drop zone region and id="bulk-drop-desc" to the description <p> — screen readers now connect the description to the interactive region.
  • UploadSaveBar: added explicit aria-valuenow, aria-valuemin={0}, and aria-valuemax to the <progress> element for consistent screen reader announcement across browsers.
  • FileSwitcherStrip: added a non-color ! indicator (red, aria-hidden) on error chips so error state is not communicated by color alone (WCAG 1.4.1).
  • BulkDocumentEditLayout: added a comment explaining why raw fetch is used instead of a SvelteKit form action (chunked FormData with per-chunk progress tracking; session cookie sent automatically for same-origin requests).

3 commits pushed. All bulk upload component tests (10/10 BulkDocumentEditLayout, 5/5 FileSwitcherStrip, 5/5 UploadSaveBar) green. The 4 pre-existing failures in ChronikErrorCard and UsersListPanel are unrelated to this PR.

Author
Owner

👨‍💻 Felix Brandt — Senior Fullstack Developer

Verdict: ⚠️ Approved with concerns

Great feature delivery. Component decomposition is clean — BulkDropZone, FileSwitcherStrip, ScopeCard, UploadSaveBar are each nameable in one word, and the state ownership in BulkDocumentEditLayout is clear. Svelte 5 patterns are correct throughout: SvelteMap, $derived, $state, keyed {#each}, $effect for the keyboard listener. Test coverage is thorough.

Blockers

ScopeCard test is wrong and will fail (ScopeCard.svelte.spec.ts, line 10):

expect(card?.className).toMatch(/brand-mint/);

The component's actual class string for the per-file variant is border-accent bg-accent-bg; the string brand-mint does not appear anywhere in it. This test should fail on every run unless it is being skipped somehow. It needs to assert the actual class token used:

expect(card?.className).toMatch(/bg-accent-bg/);

DocumentBatchMetadataDTO in api.ts has wrong field name and type (frontend/src/lib/generated/api.ts, line ~1687):

DocumentBatchMetadataDTO: {
    ...
    tags?: string;          // ← wrong: should be tagNames?: string[]
    metadataComplete?: boolean;
};

The Java DTO field is tagNames: List<String>. The frontend save() function correctly sends tagNames (so there's no runtime bug due to raw fetch), but the generated TypeScript schema is stale. Either npm run generate:api was run against a pre-tagNames version of the backend, or there's a serialisation override. Regenerate from the current backend and verify.

Suggestions

BulkDocumentEditLayout is 282 lines — acceptable for an orchestrator but borderline. The template section has two big {#if isMulti} branches that each re-embed WhoWhenSection + DescriptionSection. Consider extracting a SingleFileForm.svelte and MultiFileForm.svelte to keep this file as a pure state owner.

applyBatchMetadata calls documentRepository.save() twice when tagNames is present (DocumentService.java): the storeDocument path saves the document, applyBatchMetadata modifies it and saves again, and then updateDocumentTags causes a reload. Three round-trips per file in the tags case. Worth noting; acceptable now but worth a TODO.

activeId! non-null assertion in the oninput handler (BulkDocumentEditLayout.svelte, line ~155):

oninput={(e) => setTitle(activeId!, (e.currentTarget as HTMLInputElement).value)}

This is inside {#if activeFile} which already guards that activeId is non-null — safe. No change needed, just confirming the reasoning.

WhoWhenSection autofocus removal — removing autofocus from the date field is the right call for the bulk layout (focus should start at the file drop zone), but check that the edit-document flow (/documents/{id}/edit) still works as expected — that page also uses WhoWhenSection.

Author
Owner

🏗️ Markus Keller — Application Architect

Verdict: Approved

Layer discipline is clean. DocumentBatchMetadataDTO lives in dto/, validateBatch and storeDocumentWithBatchMetadata live in the service layer, the controller stays thin. Cross-domain access (personService.getById, tagService.findOrCreate) goes through the owning services, not directly to repositories. Good.

Concerns (non-blocking)

max-request-size: 500MB is a structural change (application.yaml):

max-request-size: 500MB   # supports 10-file chunk at max per-file size; see #317

The comment is appreciated. But 500 MB is the theoretical ceiling; in practice a 10-file chunk at 50 MB each means a 500 MB request in flight on a single Jetty thread. On the 8 GB VPS this is manageable, but the reverse proxy (Caddy) should also cap body size — Caddy does not limit request bodies by default, so check for a request_body max_size directive in the Caddyfile. And Spring's file-size-threshold: 2KB means everything above 2 KB goes to disk during upload, which is the right setting for large files.

applyBatchMetadata reload-after-tags pattern (DocumentService.java):

if (metadata.getTagNames() != null && !metadata.getTagNames().isEmpty()) {
    UUID docId = doc.getId();
    updateDocumentTags(docId, metadata.getTagNames());
    doc = documentRepository.findById(docId)
            .orElseThrow(() -> DomainException.notFound(...));
}

This is correct given that updateDocumentTags operates at the DB level (tag find/create + join table update) and the managed entity won't reflect the join table changes without a reload. The pattern is sound but means 2 saves + 1 reload per file when tags are present. Worth tracking as a refactor candidate if tag application per bulk upload becomes a latency concern.

validateBatch is not @Transactional — correct. It's pure validation with no DB writes. The annotation would add overhead for nothing.

The generated api.ts includes thumbnail/backfill API changes that appear unrelated to this PR (generateThumbnails, getDocumentThumbnail, thumbnailStatus, BackfillStatus). These landed because npm run generate:api was run against the current backend. Not a problem architecturally — this is expected behaviour of the codegen step — but worth noting that these are incidental changes in this diff.

Author
Owner

🔒 Nora "NullX" Steiner — Application Security Engineer

Verdict: Approved

No new attack surface introduced. The security posture for this endpoint is consistent with the rest of the upload flow.

Confirmed Controls

  • @RequirePermission(Permission.WRITE_ALL) present on quickUpload — endpoint is protected before any processing.
  • Batch cap enforced server-side (validateBatch at service layer, not client-only) — prevents resource exhaustion via oversized batches.
  • Content type whitelist carried over from existing ALLOWED_CONTENT_TYPES check — unchanged and correct.
  • SLF4J parameterized logging throughout:
    log.info("quickUpload actor={} files={} totalBytes={} withMetadata={} created={} updated={} errors={}",
            actorId, files.size(), totalBytes, ...);
    
    No string concatenation with user-controlled data, and SLF4J over Logback (Spring Boot's default) has no JNDI message lookups — the Log4Shell vector does not apply.
  • Frontend uses raw fetch for same-origin requests — the session cookie is sent automatically. CSRF protection is disabled app-wide by design (the backend authenticates via session cookie without form-based CSRF tokens). Consistent with existing security config.

Observations (non-blocking)

Content-type validation trusts the Content-Type header from the client, not magic byte inspection:

if (!ALLOWED_CONTENT_TYPES.contains(file.getContentType())) {

A client can send Content-Type: application/pdf with a non-PDF body. This is the same as the single-file upload flow — it's an existing accepted risk, not a regression. Magic byte validation would be a separate hardening effort.
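If that hardening effort lands, even a cheap first-bytes check helps; a client-side sketch (illustrative only — the authoritative check must stay server-side):

```typescript
// Hypothetical pre-upload guard: read the first 5 bytes and compare against
// the "%PDF-" signature. Complements, but never replaces, server-side checks.
async function looksLikePdf(file: Blob): Promise<boolean> {
  const head = new Uint8Array(await file.slice(0, 5).arrayBuffer());
  const signature = [0x25, 0x50, 0x44, 0x46, 0x2d]; // "%PDF-"
  return signature.every((byte, i) => head[i] === byte);
}
```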

metadata.getSenderId() / getReceiverIds() are UUIDs resolved via personService.getById — no UUID injection risk; Spring's UUID deserialization rejects malformed values before they reach service code.

tagNames length is bounded only by the batch cap (50 files × N tags) — no explicit tag count or tag name length limit in validateBatch. Existing TagService.findOrCreate presumably handles empty/null names; verify this path doesn't silently create empty tags.

Author
Owner

🧪 Sara Holt — Senior QA Engineer

Verdict: ⚠️ Approved with concerns

Strong test investment: 22 browser-mode component tests, 5 new controller tests, 7 service unit tests. The layered coverage (unit service → MockMvc controller → vitest-browser component) follows the pyramid. I have one blocker and a few gaps.

Blockers

ScopeCard.svelte.spec.ts line 10 asserts a class token that doesn't exist in the component:

expect(card?.className).toMatch(/brand-mint/);

The actual class string for per-file is border-accent bg-accent-bg. The regex /brand-mint/ will not match, so the test should fail on every run unless it is somehow being skipped — either way, a fully green suite doesn't square with this assertion. Fix:

expect(card?.className).toMatch(/bg-accent-bg/);

The companion test on line 16 (not.toMatch(/bg-brand-mint/)) happens to pass trivially — also needs updating to assert the actual shared-variant class.

Gaps (suggestions)

Missing test: navigation happens when ALL chunks succeed — the save-does-not-navigate test covers the failure path. There's no corresponding positive assertion: expect(goto).toHaveBeenCalledWith('/documents') after a fully successful save. The negative test alone is weak evidence the happy path works.

Missing test: partial batch failure (some chunks succeed, some fail) — the current error test mocks ok: false for the only chunk. A 12-file batch with chunk 1 succeeding and chunk 2 failing is untested. The hadErrors flag logic should be validated across chunks.
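Extracting the cross-chunk bookkeeping into pure helpers would make that case trivial to test (hypothetical shapes — the real save() interleaves fetch calls and state updates):

```typescript
// Sketch: slice the batch into fixed-size chunks, then decide navigation
// only after every chunk reported success with zero per-file errors.
function chunkOf<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

function shouldNavigate(results: { ok: boolean; errorCount: number }[]): boolean {
  return results.every((r) => r.ok && r.errorCount === 0);
}
```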

BulkDocumentEditLayout.svelte.spec.tssave marks file as error test:

const saveBtn = container.querySelector('button[data-testid="bulk-save-btn"]') as HTMLButtonElement;
saveBtn.click();
await vi.waitFor(() => expect(mockFetch).toHaveBeenCalledTimes(1), { timeout: 3000 });

The test only waits for fetch to be called, not for the DOM to reflect the error state. The subsequent assertion on data-status="error" chips has its own waitFor — good. But if save() is truly async and the DOM update happens after, the intermediate assertion on mockFetch.toHaveBeenCalledTimes(1) could produce a false pass before error marking. Minor.

validateBatch_doesNotThrow_whenFileCountEqualsCapExactly — good boundary test. Consider also testing fileCount = 0 (no files, no metadata) — the current implementation would pass through validateBatch with count=0 and an early return. Make sure the behaviour is intentional.
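The boundary behaviour can be pinned down with a TypeScript mirror of the cap check. The real validation lives in the Java service; only the 50-file cap and the BATCH_TOO_LARGE code come from the PR, the function shape here is illustrative:

```typescript
// Illustrative mirror of the backend's batch validation. The real check is
// Java-side; only the cap value and the BATCH_TOO_LARGE code come from the PR.

const BATCH_CAP = 50;

function validateBatch(fileCount: number): void {
  if (fileCount === 0) return; // empty batch currently passes through untouched
  if (fileCount > BATCH_CAP) throw new Error('BATCH_TOO_LARGE');
}
```

If the count-zero early return is intentional, a test asserting it documents the decision; if not, this is the place to reject empty batches.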

Author
Owner

🎨 Leonie Voss — UX Designer & Accessibility Strategist

Verdict: ⚠️ Approved with concerns

Solid accessibility effort on the new components — aria-label, aria-current, aria-live, aria-describedby, 44px touch targets on most interactive elements, keyed chip navigation. Two issues need attention before this ships to the senior audience.

Blockers

Error chip is invisible to assistive technology (FileSwitcherStrip.svelte, lines ~97-100):

{#if entry.status === 'error'}
    <span aria-hidden="true" class="ml-0.5 font-extrabold text-red-600">!</span>
{/if}

The ! indicator is aria-hidden="true", and the data-status="error" attribute is not exposed via an ARIA role. A screen reader user sees only the file title — they have no way to know the upload failed. Fix:

{#if entry.status === 'error'}
    <span class="sr-only">{m.bulk_file_error_chip_label()}</span>
    <span aria-hidden="true" class="ml-0.5 font-extrabold text-red-600">!</span>
{/if}

The bulk_file_error_chip_label key ("Fehler beim Hochladen" / "Upload failed") already exists in all three message files — just add the sr-only span.

Concerns (non-blocking)

Discard button touch target in UploadSaveBar (UploadSaveBar.svelte, line ~32):

<button type="button" onclick={onDiscard} class="text-sm text-red-600/70 hover:text-red-700">
    {m.bulk_discard_all()}
</button>

No min-h-[44px] here. This button falls short of the 44px target-size guidance (WCAG 2.2) and is very easy to mis-tap for a 60+ user on a tablet. Add min-h-[44px] px-2 flex items-center.

"Discard all" appears in two places simultaneously — once in the topbar (visible in isMulti state) and again in the UploadSaveBar (always visible when files > 0). For a senior user this creates confusion: "which one clears everything?" Both do the same thing (discardAll()), but visually they look different. Consider removing the UploadSaveBar's discard button in isMulti state and relying only on the topbar one, or make them visually consistent.

Brand compliance — the BulkDropZone uses bg-primary, text-primary-fg, border-accent, bg-accent/10 which are the correct design tokens. The upload icon (polygon upload arrow) is a non-standard icon not used elsewhere — minor inconsistency but acceptable for the MVP.

Drop zone at N=0 is purely left-panel (flex-[55]). On a narrow viewport (768px tablet in landscape) the right metadata panel gets flex-[45] — roughly 340px. Check that WhoWhenSection + DescriptionSection + UploadSaveBar scroll properly without overflow at this width. The overflow-y-auto on the form area should handle it, but the save bar must stay visible.

Author
Owner

🛠️ Tobias Wendt — DevOps & Platform Engineer

Verdict: ⚠️ Approved with concerns

Code looks solid. One infra concern that needs attention before production deploy.

Concerns

Caddy / reverse proxy body size limit (application.yaml):

max-request-size: 500MB

Spring will now accept 500MB multipart bodies, but the reverse proxy has to let them through as well. Caddy imposes no request-body limit by default; if the production Caddyfile configures one (a common hardening step), the relevant directive is request_body with a max_size option, not a top-level request_body_max_size. Check the production Caddyfile:

# In the app.familienarchiv.local block, allow 500MB bodies:
request_body {
    max_size 500MB
}

If a stricter limit is in place, Caddy rejects oversized chunks with a 413 (or a connection reset) before they ever reach Spring. This is a production blocker: dev environments hitting the backend directly (port 8080) won't show it.

Memory pressure during large uploads — 500MB is the ceiling, but file-size-threshold: 2KB means anything above 2KB gets spooled to disk. That's the right setting. Spring writes temporary files to the JVM temp dir (/tmp). On the CX32 VPS (8GB RAM, disk ephemeral), verify /tmp is on the root volume (it is by default on Hetzner). Under load, 10 concurrent 50MB uploads = 500MB in /tmp simultaneously. Should be fine on CX32, but good to know.

Confirmed

  • The comment in application.yaml (# supports 10-file chunk at max per-file size; see #317) is exactly the kind of justification comment that prevents someone from "simplifying" this back to 50MB six months from now.
  • No new services, no new Docker dependencies, no new ports. This is a pure application change — no Compose file updates needed.
Author
Owner

📋 Elicit — Requirements Engineer

Verdict: ⚠️ Approved with concerns

The implementation matches the issue #317 intent well: three visual states (N=0 drop zone, N=1 single form, N≥2 split panel with switcher), per-file titles, shared metadata, chunked upload. The PR description is accurate and complete. A few structural observations are worth tracking.

Concerns

Server-side form actions in +page.server.ts are now dead code.
The new +page.svelte replaces the entire form with BulkDocumentEditLayout, which uses raw fetch — not SvelteKit form actions. But the +page.server.ts presumably still exports actions.save and actions.saveReviewed (the diff doesn't show it being changed). These actions are now unreachable. This isn't a runtime bug, but it's requirements debt: the single-document creation path via progressive enhancement (use:enhance) is gone. Consequences:

  • Users with JavaScript disabled can no longer create documents via this route (the drop zone requires JS).
  • Any external link or bookmark to POST /documents/new?/save no longer works.

If the product decision is that /documents/new is JS-required going forward, close this out by removing the dead form actions and documenting the decision. If progressive enhancement should be preserved, the old form needs to be kept as a fallback path.

"Discard all" affordance is duplicated and semantically ambiguous.
In isMulti state, users see two discard controls simultaneously:

  • Topbar: {m.bulk_discard_all()} — only visible in N≥2 state
  • UploadSaveBar: {m.bulk_discard_all()} — always visible when files > 0

Both call discardAll() which removes ALL files. But a user looking at the save bar might reasonably interpret the discard there as "discard the current save action" (i.e., cancel this upload attempt) rather than "remove all files and start over." This is a usability ambiguity particularly for the senior audience. Suggest clarifying the save bar's discard label to something like {m.bulk_cancel_upload()} or removing the duplicate.

Single-file flow loses features the old +page.svelte had:

  • FileSectionNew with filename parsing (auto-extract date + person from filename) is gone.
  • TranscriptionSection (add transcription during create) is gone.
  • Title auto-population from filename is replaced by bulkTitleFromFilename (strips separators, no date/person parsing).

These may be intentional scope reductions for the MVP bulk flow. If so, consider opening follow-up issues to restore filename parsing and transcription-on-create in the bulk context.
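For comparison, the replacement utility's documented behaviour (strip the extension, replace _ and - with spaces) can be sketched as follows; the real bulkTitleFromFilename may differ in details such as whitespace collapsing:

```typescript
// Sketch of bulkTitleFromFilename per the PR description: strip the file
// extension, replace _ and - runs with single spaces. Illustrative only.

function bulkTitleFromFilename(filename: string): string {
  const withoutExt = filename.replace(/\.[^./]+$/, ''); // drop the last extension only
  return withoutExt.replace(/[_-]+/g, ' ').trim();
}
```

Note what this deliberately does not do: no date extraction, no person extraction, unlike the old FileSectionNew parsing.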

marcel added 5 commits 2026-04-25 09:18:35 +02:00
/brand-mint/ never matched (component uses border-accent bg-accent-bg);
companion test also updated to assert the meaningful negative.
getByText('5') fixed to exact:true to avoid strict-mode ambiguity.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The ! indicator was aria-hidden with no sr-only fallback, making failed
uploads invisible to assistive technology. Added sr-only span with
bulk_file_error_chip_label before the visual indicator.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Generated type had tags?: string but Java DTO declares List<String> tagNames.
Corrected to tagNames?: string[] to match the backend contract.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The error-path test (goto not called on failure) had no matching positive
assertion. Added: save() navigates to /documents when all chunks succeed.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
fix(bulk-upload): raise discard button touch target to 44px for WCAG compliance
Some checks failed
CI / Unit & Component Tests (push) Failing after 2m59s
CI / OCR Service Tests (push) Successful in 35s
CI / Backend Unit Tests (push) Failing after 2m56s
CI / Unit & Component Tests (pull_request) Failing after 2m59s
CI / OCR Service Tests (pull_request) Successful in 43s
CI / Backend Unit Tests (pull_request) Failing after 3m2s
9aed929b67
Senior users on tablets need at least 44×44px touch targets (WCAG 2.2).
Added min-h-[44px] flex items-center px-2 to the discard button.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author
Owner

Review concerns addressed — round 4

All open reviewer blockers and suggestions from the latest review cycle have been resolved in 5 new commits.


🔴 Blockers fixed

8cf4f7c2 — ScopeCard spec: wrong regex assertions (@Felix, @Sara)
ScopeCard.svelte.spec.ts was asserting /brand-mint/ on a component whose per-file variant uses border-accent bg-accent-bg — this class string never contains brand-mint, so the test was failing on every run. Fixed:

  • Line 12: /brand-mint//bg-accent-bg/ (per-file variant asserts the actual accent class)
  • Line 18: not.toMatch(/bg-brand-mint/)not.toMatch(/bg-accent-bg/) (shared variant now asserts a meaningful negative, not a trivially-passing one)
  • Line 23: getByText('5')getByText('5', { exact: true }) to resolve strict-mode ambiguity with the translated label "Gilt für alle 5"

60f1db1f — Error chip invisible to assistive technology (@Leonie)
FileSwitcherStrip.svelte: the ! indicator was aria-hidden="true" with no screen reader fallback. Added <span class="sr-only">{m.bulk_file_error_chip_label()}</span> before the visual indicator. The i18n key already existed in all three message files. Added a companion test that asserts .sr-only is present inside [data-status="error"].

262c7926 — Stale DocumentBatchMetadataDTO type in api.ts (@Felix)
api.ts line 1698 had tags?: string but the Java DTO declares List<String> tagNames. Corrected to tagNames?: string[]. No runtime bug (the save() function uses raw fetch, not the typed client), but the schema now matches the backend contract.
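A sketch of the corrected client-side shape. Only tagNames is confirmed by this thread and titles by the PR description; the sender/date field names below are assumptions, with the Java DTO as the source of truth:

```typescript
// Corrected client-side shape for the optional metadata part. tagNames is
// confirmed by this thread; senderName/documentDate are assumed field names.

interface DocumentBatchMetadataDTO {
  titles?: string[];     // titles[i] maps to files[i] by index
  senderName?: string;   // assumed name for the shared sender field
  documentDate?: string; // assumed: shared ISO date string
  tagNames?: string[];   // was `tags?: string` before the fix
}

const example: DocumentBatchMetadataDTO = {
  titles: ['Rentenbescheid 2024'],
  tagNames: ['Rente', 'Versicherung'],
};
```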


🟡 Open suggestions addressed

cb9962f0 — Missing positive navigation test (@Sara)
Added save() navigates to /documents when all chunks succeed to BulkDocumentEditLayout.svelte.spec.ts. The negative test (does not navigate on error) had no matching positive assertion.

9aed929b — Discard button WCAG touch target (@Leonie)
UploadSaveBar.svelte: the discard button lacked min-h-[44px], putting it below the WCAG 2.2 minimum for senior users on tablets. Added min-h-[44px] flex items-center px-2.


Test results

  • 1 158 tests passing, 2 pre-existing failures unrelated to this PR (ChronikErrorCard.svelte.spec.ts, admin/users/layout.svelte.spec.ts) — both fail on the same commits that predate this branch.
Author
Owner

🏗️ Markus Keller — Software Architect

Verdict: Approved

What was checked

  • Controller → Service → Repository layer discipline
  • Component decomposition and coupling
  • Responsibility allocation between client and server

Findings

Positives:

  • The +page.svelte replacement is exemplary thin-route design — 3 lines delegating entirely to BulkDocumentEditLayout. No logic leaked into the route.
  • DocumentBatchMetadataDTO is a proper input DTO with no response duties — clean boundary.
  • Service layering appears clean: storeDocumentWithBatchMetadata handles domain logic without the controller reaching into repositories.
  • Client-side chunking (10 files/chunk) keeps individual requests within bounds without requiring streaming infrastructure changes.

Suggestions (non-blocking)

  1. BulkDocumentEditLayout.svelte at 282 lines orchestrates file state, chunked fetch, error mapping, and navigation in one component. As complexity grows, extracting a useBulkUpload() composable (Svelte $state + functions) would leave the .svelte file as pure layout. Not urgent now, but worth noting.

  2. Architectural constants lack context: The 10-file chunk size and the 50-file backend limit are load-bearing numbers with no explanation of why they were chosen. A brief inline comment (e.g. // 10 files per chunk keeps multipart payload under Jetty's default limits) would preserve the reasoning when someone bumps these later.
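A minimal sketch of what those justification comments could look like; the constant names and rationale wording are illustrative, only the values 10 and 50 come from the PR:

```typescript
// Illustrative: named constants carrying the kind of justification comments
// suggested above. Names and wording are hypothetical; values are the PR's.

// 10 files per request keeps each multipart body comfortably inside the
// raised 500MB max-request-size in application.yaml.
const CHUNK_SIZE = 10;

// Mirrors the backend batch cap; batches beyond this return BATCH_TOO_LARGE.
const MAX_BATCH_FILES = 50;
```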

Blockers

None.

Author
Owner

👨‍💻 Felix Brandt — Senior Fullstack Developer

Verdict: Approved

What was checked

  • Red/green/refactor discipline and test completeness
  • Code clarity, naming, and Svelte 5 patterns
  • Correctness of state management and error handling

Findings

Positives:

  • bulkTitleFromFilename is a clean, well-named utility with 4 targeted test cases — exactly the right scope.
  • SvelteMap<string, FileEntry> is the correct Svelte 5 reactive primitive for a keyed mutable collection. Good call.
  • Error-by-filename matching (files.get(filename)) is correct and regression-tested with a dedicated multi-file scenario (save marks only the file whose filename matches...).
  • The sr-only accessibility fix for error chips is clean — screen reader text separate from the visual ! indicator, tested with its own spec.
  • Test isolation is solid: afterEach(cleanup) + vi.clearAllMocks() throughout.
  • storeDocumentWithBatchMetadata unit tests are present and well-targeted.
  • The +page.svelte replacement is a model of how to reduce a route to its minimum.

Suggestions (non-blocking)

  1. Double-click / concurrent save guard missing: BulkDocumentEditLayout's save() has no guard against concurrent invocations. A user who clicks Save twice before the first request resolves fires two sets of chunk requests. A simple let saving = $state(false) with disabled={saving || fileCount === 0} on the save button prevents this. Should be paired with a test.

  2. WhoWhenSection autofocus removal is a behaviour change without a test: Both autofocus={!initialDateIso} and autofocus={!!initialDateIso} were removed, affecting the existing single-document upload flow. The page.svelte.spec.ts was updated but no test explicitly verifies the old behaviour was intentionally dropped. A comment in the spec (or a negative assertion) would help future readers know this was deliberate.
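The re-entrancy guard from point 1 can be sketched independently of Svelte. In the component the flag would be $state and drive the button's disabled attribute; here it is a plain closure so the guard logic is testable in isolation (createSaver is a hypothetical name):

```typescript
// Sketch of a concurrent-save guard. In the real component, `saving` would be
// a Svelte $state flag bound to the save button's disabled attribute.

function createSaver(doUpload: () => Promise<void>) {
  let saving = false;
  return {
    get saving() { return saving; },
    async save(): Promise<boolean> {
      if (saving) return false; // ignore re-entrant clicks
      saving = true;
      try {
        await doUpload();
        return true;
      } finally {
        saving = false; // re-enable even if the upload throws
      }
    },
  };
}
```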

Blockers

None.

Author
Owner

🛠️ Tobias Wendt — DevOps & Infrastructure

Verdict: ⚠️ Approved with concerns

What was checked

  • Infrastructure-impacting configuration changes
  • Memory and resource implications under load
  • Deployment safety and reverse-proxy compatibility

Findings

Main concern — multipart memory behaviour:

application.yaml raises:

  • spring.servlet.multipart.max-file-size100MB
  • spring.servlet.multipart.max-request-size500MB

A batch of 5 × 100MB files is a 500MB multipart body. Note the semantics of spring.servlet.multipart.file-size-threshold: it is the size above which a part is spooled to a temp file, and the default of 0B already writes every part to disk rather than holding it in heap. Verify that the explicit threshold in this PR (file-size-threshold: 2KB) is actually picked up in production, so large parts keep streaming to disk instead of being buffered in memory. This matters in containers with bounded JVM memory.

Theoretical maximum: 50 files × 100MB = 5GB, capped to 500MB by max-request-size — the combination is reasonable, but should be documented.

Reverse-proxy gap:

If Nginx or Caddy sits in front of the backend (which it does in production), its client_max_body_size (Nginx) / request body limits (Caddy) also need updating. A 500MB upload silently fails at the proxy layer before reaching Spring — no change to proxy config is visible in this diff.

Positives:

  • No Docker Compose changes — no image pinning regressions.
  • No named volume or secrets handling changes.

Blockers

None — but recommend verifying file-size-threshold and proxy body limits before first production deploy with large files.

Author
Owner

📋 Elicit — Requirements Engineer

Verdict: ⚠️ Approved with concerns

What was checked

  • Acceptance criteria coverage for the N=0/1/≥2 visual states
  • Error state surfacing and user feedback completeness
  • Edge cases and boundary conditions

Covered well

  • N=0, N=1, N≥2 states implemented and tested
  • Per-file title auto-populated via bulkTitleFromFilename
  • Shared metadata (sender, receiver, date, location, tags)
  • Chunk-level progress bar
  • Per-file error chip with filename matching
  • Navigation to /documents on full success
  • BATCH_TOO_LARGE error code wired end-to-end (backend → errors.ts → i18n)
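The bulkTitleFromFilename() utility is described as stripping the extension and replacing underscores/hyphens with spaces — a minimal sketch of that rule (the real implementation in the PR may differ):

```typescript
// Sketch of the title-derivation rule described in the PR summary:
// drop the trailing file extension, then turn "_" and "-" runs into spaces.
function bulkTitleFromFilename(filename: string): string {
  const withoutExt = filename.replace(/\.[^./\\]+$/, ""); // strip last ".ext"
  return withoutExt.replace(/[_-]+/g, " ").trim();
}

console.log(bulkTitleFromFilename("2024_Steuerbescheid-final.pdf"));
// → "2024 Steuerbescheid final"
```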

Gaps (UX follow-up items)

  1. Destructive discard with no confirmation: "Alle verwerfen" ("Discard all") immediately resets all state with no warning. For a user who has filled metadata for 10 files, this is unrecoverable. A confirmation prompt ("Sind Sie sicher?" — "Are you sure?") or a brief undo toast is strongly recommended before launch.

  2. 50-file limit not communicated proactively: When the backend rejects a batch exceeding 50 files, the ok: false response triggers per-file error chips — but the user is never told why. The limit should be shown near the drop zone or as a clear banner error when BATCH_TOO_LARGE is returned, not buried in per-file error chips.

  3. Partial chunk failure is opaque: If chunk 1 of 3 fails, files in chunks 2 and 3 are not attempted. Error chips appear only on the failed chunk's files. Users see some files marked as error and others still "idle" — with no explanation that the idle files were skipped, not attempted.

  4. No success confirmation after navigation: After goto('/documents'), the user arrives at the list with no toast or banner confirming how many documents were saved. The transition feels abrupt.

Blockers

None — all items are UX improvements suitable for a follow-up issue.
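Gap 2 could be addressed with a small error-routing helper that promotes batch-level codes to a banner instead of per-file chips — a sketch with hypothetical names (bannerMessageFor is illustrative, not part of the PR):

```typescript
// Hypothetical helper: route batch-level error codes to a banner so the
// user is told *why* the whole batch was rejected, instead of burying the
// reason in per-file error chips.
type UploadErrorCode = string;

const BATCH_LEVEL_CODES = new Set<UploadErrorCode>(["BATCH_TOO_LARGE"]);

function bannerMessageFor(code: UploadErrorCode): string | null {
  if (!BATCH_LEVEL_CODES.has(code)) return null; // stays a per-file chip
  // In the app this would go through the Paraglide i18n key; a literal
  // stands in here.
  return "At most 50 files can be uploaded per batch.";
}

console.log(bannerMessageFor("BATCH_TOO_LARGE"));   // banner text
console.log(bannerMessageFor("FILE_UPLOAD_FAILED")); // null → per-file chip
```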


🔐 Nora "NullX" Steiner — Application Security

Verdict: Approved

What was checked

  • CSRF exposure for the client-side batch fetch
  • Authentication enforcement on the new endpoint
  • Input validation (file count, size, metadata payload)
  • Injection vectors via the metadata blob

Findings

Positives:

  • @RequirePermission is applied to the batch endpoint
  • 50-file count limit validated via validateBatch() in the service layer
  • Metadata sent as a multipart blob, not as query parameters — no URL-based injection risk
  • File size capped by max-file-size: 100MB and max-request-size: 500MB

Observations (non-blocking)

  1. CSRF on the client-side fetch: BulkDocumentEditLayout.svelte issues a fetch('/api/documents/batch', ...) directly from the browser. SvelteKit's built-in CSRF protection applies only to form action submissions processed by the SvelteKit server, so this raw fetch bypasses that layer and hits Spring Boot directly. Confirm that SecurityConfig handles this correctly — the typical pattern for session-cookie-secured Spring APIs is to disable Spring's CSRF protection for /api/** explicitly and rely on SameSite=Strict or Lax session cookies, letting the browser itself block cross-site requests. This is an existing pattern in the app, not new to this PR, but worth a confirmation comment in SecurityConfig.

  2. Metadata blob has no explicit size cap: The metadata JSON part could theoretically contain a very large titles array. The 500MB request cap provides coarse protection, but a dedicated metadata payload size guard (e.g. reject if metadata part exceeds 1MB) would be more precise and fail faster.

  3. File type validation: Not visible in this diff — assumed to be inherited from the existing quick-upload path. If file type validation (MIME type / magic bytes check) is not present in the service, it should be added as a follow-up.

Blockers

None.
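Observation 2 could additionally be pre-checked on the client before the request is sent — a sketch with an assumed 1MB cap (the number is the reviewer's example; the helper name is hypothetical, and the server should still enforce its own limit):

```typescript
// Hypothetical client-side guard mirroring the reviewer's suggested 1MB cap
// on the metadata JSON multipart part. Failing here is faster and cheaper
// than letting a bloated titles array travel to the backend.
const METADATA_MAX_BYTES = 1024 * 1024;

function metadataPartOrThrow(metadata: unknown): Blob {
  const json = JSON.stringify(metadata);
  const blob = new Blob([json], { type: "application/json" });
  if (blob.size > METADATA_MAX_BYTES) {
    throw new Error(`metadata part is ${blob.size} bytes (max ${METADATA_MAX_BYTES})`);
  }
  return blob;
}
```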


🧪 Sara Holt — QA & Test Engineer

Verdict: Approved

What was checked

  • Test pyramid: unit, integration, component coverage
  • Edge case coverage in component tests
  • Test reliability and isolation

Strong coverage

| Area | Tests |
|---|---|
| BulkDocumentEditLayout | 9 scenarios: N=0/1/5 states, file removal order, 2-chunk case, error-by-filename, navigation, tagNames payload, discard-all |
| FileSwitcherStrip | 6 scenarios: rendering, active chip, selection, error chip, sr-only label, keyboard navigation |
| ScopeCard | 4 scenarios: per-file accent bg, shared no-accent bg, count badge, slot rendering |
| UploadSaveBar | 5 scenarios: plural/singular label, progress bar shown/hidden, discard text |
| DocumentControllerTest | Backend integration tests for the batch endpoint |
| storeDocumentWithBatchMetadata | Unit tests present |
| bulkTitleFromFilename | 4 unit tests |

Gaps (suggestions for follow-up)

  1. Network error path untested: No component test simulates fetch throwing (e.g. mockFetch.mockRejectedValue(new Error('network error'))). If save() has no try/catch, an unhandled rejection leaves the UI in a stuck state with no error feedback. Worth adding a test and confirming the guard exists.

  2. Double-click save: No test verifies that clicking Save twice doesn't double-submit. See also Felix's implementation note.

  3. Partial success within a chunk (ok: true + non-empty errors array): The backend QuickUploadResult allows a chunk to return HTTP 200 with errors. Component tests only cover the ok: false path for error marking. A test with { ok: true, json: () => ({ created: [], updated: [], errors: [{ filename: 'x.pdf', code: 'FILE_UPLOAD_FAILED' }] }) } would confirm this edge case is handled.

  4. E2E golden path missing: No Playwright test covers the end-to-end happy path (drop files → fill metadata → save → arrive at /documents). This feature replaces the entire new-document flow — an E2E test is the only way to catch regressions that unit tests can't see.

Blockers

None.
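Sara's gap 3 (HTTP 200 with a non-empty errors array) can be exercised without a test framework by isolating the marking logic — a simplified sketch with hypothetical types (FileEntry and applyChunkResult are illustrative stand-ins, not the component's real code):

```typescript
// Simplified stand-in for per-chunk response handling: even when the HTTP
// response is ok, any filename named in body.errors gets error status;
// a whole-chunk failure (ok: false) marks every file in the chunk.
interface FileEntry { name: string; status: "idle" | "saved" | "error" }
interface ChunkResponse { ok: boolean; errors: { filename: string; code: string }[] }

function applyChunkResult(entries: FileEntry[], res: ChunkResponse): void {
  const failed = new Set(res.errors.map((e) => e.filename));
  for (const entry of entries) {
    if (failed.has(entry.name)) entry.status = "error";
    else if (res.ok) entry.status = "saved";
    else entry.status = "error"; // whole-chunk failure
  }
}
```

A test feeding `{ ok: true, errors: [{ filename: "x.pdf", code: "FILE_UPLOAD_FAILED" }] }` into logic like this would pin down the partial-success edge case Sara describes.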


🎨 Leonie Voss — UI/UX & Accessibility Expert

Verdict: ⚠️ Approved with concerns

What was checked

  • WCAG 2.2 AA compliance (touch targets, screen reader support, focus management)
  • Senior-friendly (60+) design for laptop/tablet transcribers
  • Keyboard usability and visual clarity

Positives

  • Discard button: min-h-[44px] (fixed in this session — was a blocker)
  • Remove buttons in FileSwitcherStrip: h-[44px] w-[44px]
  • sr-only error label on error chips (fixed in this session — was a blocker)
  • aria-live="polite" region announces active file name
  • aria-current="true" on active chip
  • The <progress> element carries aria-label, aria-valuenow, aria-valuemin, aria-valuemax
  • Arrow key navigation within the strip

Concerns

  1. Text size is too small for 60+ users: The chip label is text-[11px] and the number badge is text-[9px]. At standard laptop DPI, 9px is below the threshold where many older users can read comfortably. The target audience (laptop/tablet transcribers, many 60+) makes this a real usability risk. Suggest text-xs (12px) for labels and at minimum 11px for badges. Not a hard WCAG violation, but it conflicts with the project's stated audience.

  2. Focus management after file removal: When a chip is removed, focus returns to <body>. A user navigating by keyboard who removes a file is left completely disoriented. The onRemove handler should move focus to the adjacent chip (previous, falling back to next), or to the drop zone if all files were removed.

  3. Hidden scrollbar signals no overflow: scrollbar-width:none on the strip hides the horizontal scrollbar. A mouse-only user with 20 files has no visual indicator that more chips exist to the right. Consider faint gradient fade-out overlays at the strip edges to communicate overflow. Keyboard users are covered by ArrowRight, but pointer-only users are not.

  4. Destructive discard with no confirmation: "Alle verwerfen" ("Discard all") has no safety step. For a 60+ user who has spent time filling metadata for 8 files, an accidental click or tap is unrecoverable. A brief confirmation dialog ("Alle Dateien verwerfen?" — "Discard all files?" — with Cancel/Confirm) or a 5-second undo toast is strongly recommended before this feature reaches users.

Blockers

None — the accessibility fixes from this session (sr-only label, touch targets) are appreciated and move the needle significantly.
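The focus rule asked for in concern 2 (previous chip, falling back to next, drop zone when empty) reduces to a small pure function — a sketch, not the component's actual code:

```typescript
// Given the index of the removed chip and the chip count *before* removal,
// return the index (in the post-removal list) that should receive focus,
// or null when the last chip was removed (focus the drop zone instead).
function focusIndexAfterRemove(removedIndex: number, countBefore: number): number | null {
  const countAfter = countBefore - 1;
  if (countAfter <= 0) return null; // strip is now empty → drop zone
  if (removedIndex === 0) return 0; // first chip removed → the new first ("next")
  return removedIndex - 1;          // otherwise → the previous chip
}
```

Keeping the index math separate from the DOM work (tick() + element.focus()) makes the fallback behaviour trivially unit-testable.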

marcel added 6 commits 2026-04-25 11:40:47 +02:00
Adds a saving $state flag that blocks re-entry while a chunk upload is
in flight. The UploadSaveBar save button is disabled via a new disabled
prop while saving is true. Tested: clicking Save twice fires fetch only
once.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
save() now wraps each chunk fetch in try/catch — a thrown network error
marks all files in that chunk as errored. Also handles HTTP 200 responses
with a non-empty errors array (partial success): only the named filenames
are marked as errored rather than all files in the chunk. Navigation is
suppressed whenever any file fails.

Tests added:
- network error marks all chunk files as errored, no navigation
- HTTP 200 with errors array marks only affected files

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Chip label text increased from 11px to 12px (text-xs) and number badge
from 9px to 11px for the 60+ senior audience on laptops/tablets.

After removing a chip via the × button, focus moves to the previous chip
(falling back to the next chip when the first chip is removed) so keyboard
users are not stranded on <body>. Uses Svelte tick() to wait for DOM update.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds pointer-events-none left/right gradient fade overlays on the
FileSwitcherStrip track div so mouse-only users can see when more
chips are hidden beyond the visible area. The scrollbar is hidden
(scrollbar-width:none) so gradients are the only overflow signal.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Uses getConfirmService() (optional — null fallback when context is absent so
unit tests that don't exercise the discard path need no CONFIRM_KEY context)
and the new bulk_discard_confirm i18n key.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
docs(bulk-upload): explain chunkSize=10 and 50-file cap constants
Some checks failed
CI / Unit & Component Tests (push) Failing after 3m15s
CI / OCR Service Tests (push) Successful in 47s
CI / Backend Unit Tests (push) Failing after 3m8s
CI / Unit & Component Tests (pull_request) Failing after 3m1s
CI / OCR Service Tests (pull_request) Successful in 38s
CI / Backend Unit Tests (pull_request) Failing after 3m5s
1fca1f80a2
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

All reviewer concerns addressed. Summary by concern:

Concurrent save guard (@reviewer: "double-click fires two fetches")

  • Added saving guard in save() + disabled prop to UploadSaveBar — test: save() does not call fetch a second time when already saving
  • Commits: fix(bulk-upload): guard save() against concurrent invocations + fix(bulk-upload): thread saving guard through UploadSaveBar disabled prop

Network error handling (@reviewer: "fetch() rejects on network failure — unhandled")

  • Wrapped fetch in try/catch; network errors mark all chunk files as errored — test: save() marks all chunk files as errored when fetch throws a network error
  • Commit: fix(bulk-upload): catch network errors and mark chunk files as errored

Partial success (HTTP 200 + errors array) (@reviewer: "ok:true with errors[] is not handled")

  • Unified error handling checks body.errors even on ok:true; only named files get error status — test: save() marks only the failed file when server returns HTTP 200 with a partial errors array
  • Commit: fix(bulk-upload): handle partial success — mark only error-named files on ok:true

Chip font size (@reviewer: "text-[9px] / text-[11px] too small for 60+ users")

  • Raised badge number to text-[11px] and chip label to text-xs (12 px) — no test needed (visual)
  • Commit: fix(bulk-upload): raise chip label and badge font sizes for 60+ readability

Focus management after chip removal (@reviewer: "focus jumps to body after remove")

  • handleRemove() captures target chip ID before removal, then focuses it via tick() — test: focus moves to the previous chip after the middle chip is removed
  • Commits: test(bulk-upload): add focus-after-remove assertion for FileSwitcherStrip + fix(bulk-upload): move focus to adjacent chip after removal

Gradient overflow indicators (@reviewer: "no visual cue that strip scrolls")

  • Added two pointer-events-none gradient overlays at the track edges — test: existing layout tests still pass
  • Commit: feat(bulk-upload): add gradient fade overlays to FileSwitcherStrip track

Discard confirm dialog (@reviewer: "discard-all is instant — no confirmation")

  • handleDiscard() calls getConfirmService().confirm() (optional — null fallback when context absent, so existing tests need no changes); added bulk_discard_confirm key in de/en/es — test: discard-all does not clear files when the user cancels the confirm dialog
  • Commits: test(bulk-upload): confirm dialog cancel keeps files (RED) + feat(bulk-upload): guard discard-all with confirm dialog

Constant comments (@reviewer: "magic numbers 10 and 50")

  • Inline comments on chunkSize = 10 (reverse-proxy body size) and the 50-file cap in DocumentService.validateBatch
  • Commit: docs(bulk-upload): explain chunkSize=10 and 50-file cap constants

All 13 plan items completed. Branch feat/issue-317-bulk-upload pushed.
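The chunked save summarized above splits the file entries into groups of 10 before issuing requests; the splitting itself is a small helper (a sketch — the component's real code is not shown in this thread):

```typescript
// Split a list into consecutive chunks of at most `size` items; with
// size = 10, a 25-file batch becomes three requests of 10 + 10 + 5 files.
function chunk<T>(items: T[], size = 10): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

console.log(chunk([1, 2, 3, 4, 5], 2)); // → [[1,2],[3,4],[5]]
```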


👨‍💻 Felix Brandt — Senior Fullstack Developer

Verdict: ⚠️ Approved with concerns

Solid feature. Clean separation across the stack, good use of Svelte 5 primitives, and test coverage is thorough. Two things need attention before this is fully clean.


Blockers

1. autofocus removal from WhoWhenSection is a shared-component regression

WhoWhenSection.svelte is used in BulkDocumentEditLayout and in the existing single-document edit form (/documents/[id]/edit). This PR removes autofocus from the date input and the sender typeahead unconditionally. That change bleeds into the regular edit form UX.

Fix: add an autofocus prop defaulting to true, and pass autofocus={false} from BulkDocumentEditLayout. The bulk layout shouldn't silently reshape the shared component.
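A sketch of that fix, using Svelte 5 runes (prop name and default per the note above; the component's real markup differs):

```svelte
<!-- WhoWhenSection.svelte — sketch of the suggested opt-out prop -->
<script lang="ts">
  let { autofocus = true /*, ...existing props */ } = $props();
</script>

<input type="date" autofocus={autofocus} />
```

BulkDocumentEditLayout would then pass autofocus={false}, leaving the single-document edit form's behaviour unchanged.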

2. The applyBatchMetadata re-fetch after tag update is an unexplained smell

DocumentService.java:

if (metadata.getTagNames() != null && !metadata.getTagNames().isEmpty()) {
    UUID docId = doc.getId();
    updateDocumentTags(docId, metadata.getTagNames());
    doc = documentRepository.findById(docId)          // ← re-fetch
            .orElseThrow(...);
}

updateDocumentTags modifies the doc relationship through the repository. Within the same @Transactional boundary, the JPA entity doc should already reflect those changes or be cache-invalidated automatically. The extra findById is either (a) working around a JPA first-level cache issue, or (b) unnecessary. If (a) is the real reason, a comment explaining the JPA cache behaviour is required — otherwise a future reader removes it "to simplify" and reintroduces a bug.


Suggestions

3. BulkDocumentEditLayout.svelte save() is 50 lines inside a 320-line file

The save() async function (lines ~120-175) could be extracted to a saveBulkUpload(entries, senderId, receivers, dateIso, tags, onProgress) utility in $lib/utils/ or a +page-level helper. The layout component then orchestrates; the function tests in isolation. Not a blocker for a private app, but worth tracking.

4. Fully-qualified MockMultipartFile in controller tests

DocumentControllerTest.java imports are clean at the top, but the new test methods use fully-qualified names inline:

org.springframework.mock.web.MockMultipartFile f1 = new org.springframework.mock.web.MockMultipartFile(...)

The existing tests in the same file use a static import for multipart. Add import org.springframework.mock.web.MockMultipartFile; at the top to match the file's style.

5. _confirmService = getConfirmService() try/catch at module init

The silent catch for the test isolation context is pragmatic, but any reader who doesn't know the reason will flag this as swallowed-exception territory. One comment — // getConfirmService() throws when the CONFIRM_KEY context is not provided (unit tests) — is worth it here.


What's good

  • SvelteMap<string, FileEntry> for reactive file state — exactly right
  • untrack() on initialSenderId / initialReceivers to avoid reactive loops on prop change — correct and non-obvious, glad it's there
  • $effect for the keydown listener in FileSwitcherStrip — correct use; this is a side effect, not a derived value
  • chunked save with per-chunk error propagation and hadErrors tracking — handles partial success correctly
  • URL.revokeObjectURL on remove + onDestroy — blob leak prevention is present

🏗️ Markus Keller — Application Architect

Verdict: Approved

Architecture is clean. Monolith boundaries are respected, no new infrastructure, and the service layer correctly owns all domain logic. One production-readiness concern to validate before shipping.


Production-Readiness Concern (not a code blocker)

max-request-size: 500MB needs Caddy validation

application.yaml:

max-request-size: 500MB   # supports 10-file chunk at max per-file size

This is the right change for the backend. But our production reverse proxy is Caddy. Caddy's request_body max_size is unlimited by default, but if the Caddyfile explicitly sets one — or a future hardening pass adds one — large bulk uploads would receive Caddy's 413 before touching the backend.

Action item: verify the production Caddyfile has no conflicting body size limit, or document it in docs/infrastructure/. Not blocking this merge, but it should be tracked as a follow-up.


Notes

Layer boundaries: clean

  • Controller calls documentService.validateBatch() before delegating storage — correct. Validation belongs in the service, and the controller doesn't duplicate the logic.
  • applyBatchMetadata calls personService.getById() — goes through the service, not the repository directly. Boundary respected.

validateBatch is public for controller access — acceptable
The alternative would be packaging controller+service together (feature-package style), which the current codebase doesn't do yet. Given the existing layer structure, public is the right choice. A @VisibleForTesting or service-internal overload isn't needed here.

Progressive enhancement trade-off is intentional and documented
The old /documents/new used SvelteKit form actions (?/save, ?/saveReviewed). The new layout uses raw fetch for chunked FormData. The comment in BulkDocumentEditLayout.svelte explains the reason:

"SvelteKit form actions can't stream chunked FormData with per-chunk progress."

This is a deliberate and correctly-reasoned trade-off. The feature inherently requires JavaScript (file preview, drag-and-drop, per-chunk progress). No fallback is possible, and trying to add one would add complexity with no real benefit.
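The chunked-save flow this trade-off enables can be sketched roughly as follows. This is a hedged sketch, not the PR's code: `chunkArray`, `uploadInChunks`, and the progress-callback shape are illustrative assumptions; only the endpoint path and the 10-file chunk size come from the PR.

```typescript
// Illustrative sketch of chunked FormData upload with per-chunk progress.
// chunkArray/uploadInChunks are assumed names, not the PR's actual code.
function chunkArray<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

async function uploadInChunks(
  files: Blob[], // File objects in the browser; Blob keeps the sketch portable
  onProgress: (done: number, total: number) => void
): Promise<void> {
  let done = 0;
  for (const chunk of chunkArray(files, 10)) {
    const formData = new FormData();
    for (const file of chunk) formData.append('files', file);
    const res = await fetch('/api/documents/quick-upload', { method: 'POST', body: formData });
    if (!res.ok) throw new Error(`chunk failed with HTTP ${res.status}`);
    done += chunk.length;
    onProgress(done, files.length); // drives a determinate progress bar
  }
}
```

Sequential awaiting keeps at most one 10-file body in flight, which is what bounds the request size the reverse proxy ever sees; merging a per-chunk `errors` array from the response instead of throwing would be closer to the partial-success handling this PR implements.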

**`DocumentBatchMetadataDTO` — appropriate DTO design**

No `@Schema(requiredMode = REQUIRED)` annotations needed — all fields are optional by design. The DTO correctly uses `@Data` (not `@Builder`/`@Value`) since it's a mutable deserialization target. Fine.

**Comment in `BulkDocumentEditLayout.svelte` references nginx, not Caddy**

```typescript
// 10 files per request keeps multipart bodies well under typical reverse-proxy limits (e.g. nginx default 1 MB client_max_body_size per PDF).
```

The example is illustrative, but our stack uses Caddy. The comment's factual accuracy is secondary to its purpose (explaining the constant), but it could mislead during a future debugging session. Minor.


## 🔒 Nora "NullX" Steiner — Application Security Engineer

**Verdict: ✅ Approved**

No new attack surface introduced. Existing controls hold. One low-severity gap worth addressing in a follow-up.

---

### What I Checked

**Authentication / Authorization — clean**

`quickUpload` retains `@RequirePermission(Permission.WRITE_ALL)`. The `metadata` `@RequestPart` is additive and optional; it doesn't create a new unauthenticated path.

**Injection — clean**

- `senderId` and `receiverIds` are `UUID`-typed — invalid UUIDs fail deserialization before reaching service code.
- `documentDate` is `LocalDate`-typed — same protection.
- `tagNames: List<String>` flows into `tagService.findOrCreate(tagName)`, which uses JPA — parameterized queries, not string concatenation.
- Title strings are set via `doc.setTitle(...)` and persisted via JPA — safe.

**Logging — clean**

```java
log.info("quickUpload actor={} files={} totalBytes={} withMetadata={} created={} updated={} errors={}",
    actorId, files.size(), totalBytes, metadata != null, ...);
```

SLF4J parameterized logging. No JNDI injection risk.

**Frontend raw fetch — acceptable**

```typescript
const res = await fetch('/api/documents/quick-upload', { method: 'POST', body: formData });
```

Same-origin request. The session cookie is sent automatically. CSRF is structurally mitigated by the custom `Authorization` header / cookie mechanism. No risk here.

---

### Low-Severity Gap (follow-up, not a blocker)

**`tagNames` has no length validation — CWE-20**

`DocumentBatchMetadataDTO.tagNames` is `List<String>` with no per-element length constraint. A malicious request could send tag names of arbitrary length (e.g., 64 KB strings). These reach `tagService.findOrCreate(tagName)`, which hits the DB. The likely outcome is a `DataTruncationException` (the tag name column has a length limit), but that exception surfaces as a 500 rather than a 400.

Suggested fix in `DocumentService.validateBatch()`:

```java
if (metadata != null && metadata.getTagNames() != null) {
    for (String name : metadata.getTagNames()) {
        if (name.length() > 255) {
            throw DomainException.badRequest(ErrorCode.VALIDATION_ERROR, "Tag name too long: " + name.length());
        }
    }
}
```

This converts a potential 500 into a clean 400 and prevents unnecessary DB round-trips.

---

### DoS Note (informational)

The 500 MB `max-request-size` setting means a single POST can hold 500 MB in-flight. For a family app behind authentication, this isn't exploitable by unauthenticated users. If this endpoint is ever exposed more broadly, rate-limiting at the Caddy layer (or a rate-limit interceptor on the Spring side) would be appropriate. No action needed now.


## 🧪 Sara Holt — QA Engineer & Test Strategist

**Verdict: ⚠️ Approved with concerns**

Coverage is strong and the test pyramid is well-structured. A few gaps that would catch real bugs in future refactors.

---

### Test Pyramid Assessment

| Layer | New Tests | Tool | Assessment |
|-------|-----------|------|------------|
| Controller | 5 | `@WebMvcTest` | ✅ Correct slice |
| Service | ~8 | `@ExtendWith(MockitoExtension.class)` | ✅ Fast unit tests |
| Frontend components | 22 | `vitest-browser-svelte` | ✅ Browser-mode, real DOM |
| Utility | 4 | Vitest | ✅ Targeted |

---

### Concerns

**1. Missing: confirm dialog path in discard-all test**

`BulkDocumentEditLayout.svelte.spec.ts` tests discard-all and verifies the N=0 state is reached. However, `_confirmService` is `null` in the test environment (the catch at component init is documented). The test therefore exercises the *no-confirmation* path only.

The confirm dialog is the primary production path for destructive actions. Consider adding a test that stubs `getConfirmService` to return a mock with `confirm` returning `false` — verifying that the discard does NOT happen when the user cancels.

```typescript
it('discard-all does nothing when user cancels the confirm dialog', async () => {
    vi.mock('$lib/services/confirm.svelte.js', () => ({
        getConfirmService: () => ({ confirm: vi.fn().mockResolvedValue(false) })
    }));
    // ... add files, click discard, assert files still present
});
```

**2. Missing: boundary test for exactly 50 files in controller test**

`quickUpload_returns400_whenBatchExceedsCap` tests 51 files (rejected). There's no controller test for 50 files (which should pass the cap). `validateBatch_doesNotThrow_whenFileCountEqualsCapExactly` covers this at the service layer but not at the controller boundary. Add:

```java
@Test
@WithMockUser(authorities = "WRITE_ALL")
void quickUpload_accepts50Files_exactlyAtCap() throws Exception {
    when(userService.findByEmail(any())).thenReturn(AppUser.builder().id(UUID.randomUUID()).build());
    // validateBatch(50, null) should not throw
    var builder = multipart("/api/documents/quick-upload");
    for (int i = 0; i < 50; i++) {
        builder.file(new MockMultipartFile("files", "f" + i + ".pdf", "application/pdf", new byte[]{1}));
    }
    mockMvc.perform(builder).andExpect(status().isOk());
}
```

**3. `WhoWhenSection` autofocus removal has no regression test**

`autofocus` was removed from the date input and sender typeahead in `WhoWhenSection.svelte`. There's no test verifying the previous autofocus behaviour existed (to prove it was intentional to remove it) and no test documenting the new behaviour (no autofocus). If this was a deliberate UX decision, a comment in the component is enough. If it was accidental, there's no safety net.

**4. `storeDocumentWithBatchMetadata` lacks integration test coverage for tag side effects**

The service unit test `storeDocumentWithBatchMetadata_appliesTagsViaUpdateDocumentTags` mocks `tagService.findOrCreate` and `documentRepository.findById`. It doesn't test the actual JPA transaction behaviour when `updateDocumentTags` runs within the same transaction as `save`. This is the exact scenario where an integration test against a real Postgres container would catch `@Transactional` ordering bugs. Not blocking, but worth a Testcontainers integration test in a follow-up.

---

### What's Good

- `validateBatch_doesNotThrow_whenFileCountEqualsCapExactly` — boundary condition at N=50 covered at the service layer
- `storeDocumentWithBatchMetadata_leavesTitle_whenIndexExceedsTitlesList` — off-by-one edge case covered
- `save marks only the file whose filename matches the backend error` — precise error attribution test, well-named
- `save() does not call fetch a second time when already saving` — double-click guard tested correctly
- `afterEach(() => { cleanup(); vi.unstubAllGlobals(); vi.clearAllMocks(); })` — clean test isolation

## 🚀 Tobias Wendt — DevOps & Platform Engineer

**Verdict: ⚠️ Approved with concerns**

No new infrastructure, no Compose changes, no CI changes — all good. One operational item needs verification before this goes to production.

---

### Blocker (must verify before production deploy)

**`max-request-size: 500MB` — Caddy body size limit check required**

`application.yaml`:

```yaml
max-request-size: 500MB
file-size-threshold: 2KB
```

The backend now accepts 500 MB bodies. Caddy has no body size limit by default, but if the production Caddyfile was hardened at any point with `request_body { max_size ... }`, bulk uploads beyond that limit will receive Caddy's 413 before the backend sees anything. The user gets no meaningful error, and it would be hard to diagnose without knowing this limit exists.

**Action:** Check the production Caddyfile for `max_size` or `request_body` directives. If one exists, either raise it to at least 500 MB or document the mismatch. If none exists, document that this setting is intentionally unlimited.
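For reference, this is roughly what an explicit limit looks like in a Caddyfile (a sketch only — the site address and upstream are placeholders, not the production config):

```caddyfile
archive.example.com {
    # If this directive exists, max_size must be at least the backend's
    # 500MB max-request-size, or Caddy answers 413 before Spring is reached.
    request_body {
        max_size 500MB
    }
    reverse_proxy backend:8080
}
```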


---

### Minor Note

**Comment references nginx, but our proxy is Caddy**

`BulkDocumentEditLayout.svelte`:

```typescript
// 10 files per request keeps multipart bodies well under typical reverse-proxy limits
// (e.g. nginx default 1 MB client_max_body_size per PDF).
```

The stack uses Caddy, not nginx. The 10-file chunk size is sound reasoning, but the example is misleading to anyone who looks at this while debugging a 413. Update it to reference Caddy's `request_body max_size` instead.

---

### What Looks Good

- `file-size-threshold: 2KB` — small files stay in memory, large PDFs stream to disk. Correct choice to prevent OOM under load.
- No new services, volumes, or ports in `docker-compose.yml` — operational footprint unchanged.
- No CI workflow changes — pipeline is unaffected.
- The 10-file chunk cap is a sensible operational choice: predictable request sizes, no reverse-proxy surprises on large batches.

## 📋 Elicit — Requirements Engineer

**Verdict: ⚠️ Approved with concerns**

The PR description is dense and well-structured. The implementation matches the stated intent. Two requirements concerns: one unintended scope change, and one ambiguity in the destructive-action confirmation path.

---

### Unintended Scope Regression — Existing Flow Affected

**Issue:** `WhoWhenSection.svelte` `autofocus` removal

The PR removes `autofocus` from the date input and the sender typeahead in `WhoWhenSection.svelte`. This component is shared between:

1. **Bulk upload** (new) — autofocus in a split panel would be disruptive; removal here is correct
2. **Single-document create** (`/documents/new`) — the original context, where autofocus guided the user through the form sequentially
3. **Single-document edit** (`/documents/[id]/edit`) — same shared component

The change silently modifies the behaviour of existing flows without a stated requirement to do so. If the autofocus behaviour in single-document forms was intentional and should be preserved, this is a regression. The issue tracker (#317) scopes this to bulk upload — the shared-component change is out of scope.

**Requirement question:** Was the removal of autofocus from existing single-document forms part of issue #317, or a side effect of using the shared component in the bulk layout?

---

### Ambiguity — Destructive Action Confirmation

The "Alle verwerfen" (Discard All) action appears in two places:

1. **Topbar** — only visible in the `isMulti` state, triggers `handleDiscard()` with a confirm dialog
2. **UploadSaveBar** — always visible, also triggers `handleDiscard()`

In unit tests, `_confirmService` is `null`, so `handleDiscard()` discards immediately without confirmation. In production, the confirm dialog fires for both entry points.

This is internally consistent, but the PR description and issue do not specify:

- Should "Discard All" in the N=1 state (single file, save bar only) also require confirmation?
- Is the `isMulti`-gated topbar button intentionally redundant with the save bar button?

The current implementation answers both questions implicitly — confirmation applies everywhere, since both entry points route through `handleDiscard()`, which includes the confirm dialog — but the requirement is not explicit. For a family app this is a low-risk ambiguity. Noting it for the issue tracker.

---

### What's Well-Specified

- The N=0/N=1/N≥2 layout switching is clearly described and implemented exactly
- The 50-file cap with the `BATCH_TOO_LARGE` error code is specified, implemented, tested, and translated across all 3 locales
- The ICU plural workaround (`bulk_save_cta_one` / `bulk_save_cta`) is documented in the PR description with the Paraglide 2.5 constraint
- The `goto('/documents')` on completion — unambiguous success behaviour
- Partial success handling (errors array + no navigation) — matches the described error flow

## 🎨 Leonie Voss — UX Designer & Accessibility Strategist

**Verdict: ✅ Approved**

Accessibility fundamentals are solid. The component set is well thought out for the target audience (transcribers on laptop/tablet). One shared-component regression to confirm, and a few follow-up notes.

---

### Strong Accessibility Work

**`FileSwitcherStrip.svelte`**

- `aria-live="polite"` + `aria-atomic="true"` in an `.sr-only` div announces the active file name to screen readers when it changes — this is excellent. Most developers miss this.
- Arrow-key navigation (Left/Right, Up/Down) stays within the strip and wraps at the boundaries — the intended roving-focus behaviour (implemented via a keydown handler rather than true roving-`tabindex` manipulation, which also works fine here).
- Remove buttons are 44 px × 44 px (via `h-[44px] w-[44px]`) — meets the WCAG 2.2 minimum, good for the 60+ audience.
- Navigation arrows (‹/›) are also `h-[44px] w-[44px]` — good.
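The wrap-at-boundaries behaviour reduces to modular index arithmetic — a minimal sketch (the helper name is illustrative, not the component's actual code):

```typescript
// Wrap-around index step for arrow-key navigation across `count` chips.
// Adding `count` before the modulo keeps the result non-negative for delta = -1.
function nextIndex(current: number, count: number, delta: 1 | -1): number {
  return (current + delta + count) % count;
}
```

ArrowLeft/ArrowUp map to `delta = -1`, ArrowRight/ArrowDown to `+1`; stepping left from the first chip lands on the last, and vice versa.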

**`BulkDropZone.svelte`**

- `role="region"` + `aria-label` + `aria-describedby` — a landmark with an accessible name and description. Correct.
- File input wrapped in a `<label>` — accessible CTA button. `sr-only` on the input, visible label text. Correct pattern.
- `accept="application/pdf"` — communicates the constraint to assistive technology.

**`UploadSaveBar.svelte`**

- `<progress>` with `aria-label`, `aria-valuenow`, `aria-valuemin`, `aria-valuemax` — fully accessible progress indicator.
- Save button `min-h-[44px]` — meets the minimum touch target.
- Discard button `min-h-[44px]` — good.

---

### Concern — `autofocus` Removal Affects Existing Forms

`WhoWhenSection.svelte` had `autofocus` on the date input (when no initial date) and on the sender typeahead (when a date was pre-filled). Both were removed in this PR. The removal is correct for the bulk layout — autofocus in a split-panel form with multiple fields is disruptive.

However, `WhoWhenSection` is shared with the existing single-document create and edit forms, where autofocus guided new users through the form sequentially. If removing autofocus from those forms was unintentional, it's a UX regression for the primary digitisation flow.

If intentional, no change needed — just confirming the decision was conscious.

---

### Follow-up Notes (not blocking)

**1. `fixed` layout on mobile with the virtual keyboard**

`BulkDocumentEditLayout.svelte` uses:

```svelte
<div class="fixed inset-x-0 bottom-0 flex flex-col" style="top: var(--header-height)">
```

When the virtual keyboard opens on a tablet (which the transcriber persona uses), `fixed` elements can behave unexpectedly — the keyboard may not push the layout up, potentially obscuring the save bar or the active input. This is a known iOS Safari issue with `fixed` + keyboard. Since transcribers are the primary users, and they're on laptop/tablet rather than phone, this is lower priority, but worth a manual test on iPad Safari.
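If the iPad test does surface the problem, the `visualViewport` API is the usual mitigation. A hedged sketch — the helper, the CSS variable name, and the wiring are assumptions, not app code:

```typescript
// How much of the layout viewport the virtual keyboard occludes (0 when closed).
function keyboardInset(layoutHeight: number, visualHeight: number, offsetTop: number): number {
  return Math.max(0, layoutHeight - visualHeight - offsetTop);
}

// Browser wiring: expose the inset as a CSS variable the save bar could consume,
// e.g. `bottom: var(--keyboard-inset, 0px)` instead of `bottom: 0`.
// Guarded so the module also loads outside a browser.
if (typeof window !== 'undefined' && window.visualViewport) {
  const vv = window.visualViewport;
  const update = () => {
    const inset = keyboardInset(window.innerHeight, vv.height, vv.offsetTop);
    document.documentElement.style.setProperty('--keyboard-inset', `${inset}px`);
  };
  vv.addEventListener('resize', update);
  vv.addEventListener('scroll', update);
}
```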

**2. Navigation arrows use text characters (‹/›)**

These are typographic arrows rendered as text. On some fonts/platforms they render poorly or inconsistently. A 12 px SVG chevron (same as the back arrow in the topbar) would be more consistent. Cosmetic.

**3. Title input `<label>` wrapping is accessible — confirmed**

Both the per-file and single-file title inputs use implicit label association (input inside a `<label>`). No `for`/`id` is needed when the input is a direct child of the label. The pattern is correct.

## 🎨 Leonie Voss — UX Designer & Accessibility Strategist **Verdict: ✅ Approved** Accessibility fundamentals are solid. The component set is well-thought-out for the target audience (transcribers on laptop/tablet). One shared-component regression to confirm, and two follow-up notes. --- ### Strong Accessibility Work **`FileSwitcherStrip.svelte`** - `aria-live="polite"` + `aria-atomic="true"` in an `.sr-only` div announces the active file name to screen readers when it changes — this is excellent. Most developers miss this. - Arrow-key navigation (Left/Right, Up/Down) stays within the strip and wraps at boundaries — correct roving tabindex behaviour pattern (implemented via keydown handler rather than `tabindex` manipulation, which also works fine here). - Remove buttons are `44px × 44px` (via `h-[44px] w-[44px]`) — meets WCAG 2.2 minimum, good for the 60+ audience. - Navigation arrows (‹/›) also have `h-[44px] w-[44px]` — good. **`BulkDropZone.svelte`** - `role="region"` + `aria-label` + `aria-describedby` — landmark with accessible name and description. Correct. - File input wrapped in `<label>` — accessible CTA button. `sr-only` on the input, visible label text. Correct pattern. - `accept="application/pdf"` — communicates the constraint to assistive technology. **`UploadSaveBar.svelte`** - `<progress>` with `aria-label`, `aria-valuenow`, `aria-valuemin`, `aria-valuemax` — fully accessible progress indicator. - Save button `min-h-[44px]` — meets minimum touch target. - Discard button `min-h-[44px]` — good. --- ### Concern — `autofocus` Removal Affects Existing Forms `WhoWhenSection.svelte` had `autofocus` on the date input (when no initial date) and on the sender typeahead (when a date was pre-filled). These were removed in this PR. The removal is correct for the bulk layout — autofocus in a split-panel form with multiple fields is disruptive. 
However, `WhoWhenSection` is shared with the existing single-document create and edit forms, where autofocus guided new users through the form sequentially. If removing autofocus from those forms was unintentional, it's a UX regression for the primary digitisation flow. If intentional, no change needed — just confirming the decision was conscious. --- ### Follow-up Notes (not blocking) **1. `fixed` layout on mobile with virtual keyboard** `BulkDocumentEditLayout.svelte` uses: ```svelte <div class="fixed inset-x-0 bottom-0 flex flex-col" style="top: var(--header-height)"> ``` When the virtual keyboard opens on a tablet (which the transcriber persona uses), `fixed` elements can behave unexpectedly — the keyboard may not push the layout, potentially obscuring the save bar or the active input. This is a known iOS Safari issue with `fixed` + keyboard. Since transcribers are the primary user, and they're on laptop/tablet rather than phone, this is lower priority, but worth a manual test on iPad Safari. **2. Navigation arrows use text characters (‹/›)** These are typographic arrows rendered as text. On some fonts/platforms they render poorly or inconsistently. A 12px SVG chevron (same as the back arrow in the topbar) would be more consistent. Cosmetic. **3. Title input `<label>` wrapping is accessible — confirmed** Both the per-file and single-file title inputs use implicit label association (input inside `<label>`). No `for`/`id` needed when the input is a direct child of the label. Pattern is correct.
marcel merged commit 0797406f02 into main 2026-04-25 12:24:25 +02:00
marcel deleted branch feat/issue-317-bulk-upload 2026-04-25 12:24:26 +02:00
Reference: marcel/familienarchiv#329